
How-To Tutorials - Web Development

1797 Articles
Upgrading from Magento 1

Packt
03 Nov 2015
4 min read
In Magento 2 Development Cookbook by Bart Delvaux, the overarching goal is to provide you with a wide range of techniques to modify and extend the functionality of your online store. It contains easy-to-understand recipes, starting with the basics and moving on to cover advanced topics. Many recipes work with code examples that can be downloaded from the book's website.

Why Magento 2

- Solve common problems encountered while extending your Magento 2 store to fit your business needs.
- Explore exciting and enhanced features of Magento 2, such as customizing security permissions, intelligent filtered search options, and easy third-party integration, among others.
- Learn to build and maintain a Magento 2 shop via a visual-based page editor, and customize the look and feel using Magento 2 offerings on the go.

What this article covers

This article covers preparing an upgrade from Magento 1.

Preparing an upgrade from Magento 1

The differences between Magento 1 and Magento 2 are big. The code has a whole new structure with a lot of improvements, but there is one big disadvantage: what do you do if you want to upgrade your Magento 1 shop to a Magento 2 shop? Magento created an upgrade tool that migrates the data of a Magento 1 database to the right structure for a Magento 2 database. The custom modules in your Magento 1 shop, however, will not work in Magento 2. Some of your modules may have a Magento 2 version and, depending on the module, the module author may provide a migration tool for the data that belongs to the module.

Getting ready

Before we get started, make sure you have an empty (without sample data) Magento 2 installation of the same version as the migration tool, which is available at https://github.com/magento/data-migration-tool-ce.

How to do it

In your Magento 2 installation (with the same version as the migration tool), run the following commands:

```bash
composer config repositories.data-migration-tool git https://github.com/magento/data-migration-tool-ce
composer require magento/data-migration-tool:dev-master
```

Install Magento 2 with an empty database by running the installer. Make sure you configure it with the right time zone and currencies. When these steps are done, you can test the tool by running the following command:

```bash
php vendor/magento/data-migration-tool/bin/migrate
```

This command will print the usage of the command.

The next step is creating the configuration files. Example configuration files live in the vendor/magento/data-migration-tool/etc/<version> folder. We can create a copy of this folder in which to set our custom configuration values. For a Magento 1.9 installation, we have to run the following cp command:

```bash
cp -R vendor/magento/data-migration-tool/etc/ce-to-ce/1.9.1.0/ vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration
```

Open the vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml.dist file and search for the source/database and destination/database tags.
Change the values of these database settings to match your own, as in the following code:

```xml
<source>
  <database host="localhost" name="magento1" user="root"/>
</source>
<destination>
  <database host="localhost" name="magento2_migration" user="root"/>
</destination>
```

Rename that file to config.xml with the following command:

```bash
mv vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml.dist vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml
```

How it works

By adding a Composer dependency, we installed the data migration tool for Magento 2 in the codebase. This migration tool is a PHP command-line script that handles the migration steps from a Magento 1 shop. In the etc folder of the migration module, there is an example configuration for an empty Magento 1.9 shop. If you want to migrate an existing Magento 1 shop, you have to customize these configuration files so that they match your preferred state. In the next recipe, we will learn how to use the script to start the migration.

Who this book is written for

This book is packed with a wide range of techniques to modify and extend the functionality of your online store. It contains easy-to-understand recipes, starting with the basics and moving on to cover advanced topics. Many recipes work with code examples that can be downloaded from the book's website.

Summary

In this article, we learned how to prepare an upgrade from Magento 1. Read Magento 2 Development Cookbook to gain detailed knowledge of Magento 2 workflows, explore use cases for advanced features, craft well-thought-out orchestrations, troubleshoot unexpected behavior, and extend Magento 2 through customizations.

Other related titles are:

- Magento: Beginner's Guide - Second Edition
- Mastering Magento
- Magento: Beginner's Guide
- Mastering Magento Theme Design

Resources for Article:

Further resources on this subject:
- Creating a Responsive Magento Theme with Bootstrap 3 [article]
- Social Media and Magento [article]
- Optimizing Magento Performance — Using HHVM [article]

Relational Databases with SQLAlchemy

Packt
02 Nov 2015
28 min read
In this article by Matthew Copperwaite, author of the book Learning Flask Framework, we look at how relational databases are the bedrock upon which almost every modern web application is built. Learning to think about your application in terms of tables and relationships is one of the keys to a clean, well-designed project. We will be using SQLAlchemy, a powerful object relational mapper that allows us to abstract away the complexities of multiple database engines, to work with the database directly from within Python.

In this article, we shall:

- Present a brief overview of the benefits of using a relational database
- Introduce SQLAlchemy, the Python SQL toolkit and object relational mapper
- Configure our Flask application to use SQLAlchemy
- Write a model class to represent blog entries
- Learn how to save and retrieve blog entries from the database
- Perform queries: sorting, filtering, and aggregation
- Create schema migrations using Alembic

Why use a relational database?

Our application's database is much more than a simple record of things that we need to save for future retrieval. If all we needed to do was save and retrieve data, we could easily use flat text files. The fact is, though, that we want to be able to perform interesting queries on our data. What's more, we want to do this efficiently and without reinventing the wheel. While non-relational databases (sometimes known as NoSQL databases) are very popular and have their place in the world of the web, relational databases long ago solved the common problems of filtering, sorting, aggregating, and joining tabular data. Relational databases allow us to define sets of data in a structured way that maintains the consistency of our data. Using relational databases also gives us, the developers, the freedom to focus on the parts of our app that matter.

In addition to efficiently performing ad hoc queries, a relational database server will also do the following:

- Ensure that our data conforms to the rules set forth in the schema
- Allow multiple people to access the database concurrently, while at the same time guaranteeing the consistency of the underlying data
- Ensure that data, once saved, is not lost even in the event of an application crash

Relational databases and SQL, the programming language used with relational databases, are topics worthy of an entire book. Because this book is devoted to teaching you how to build apps with Flask, I will show you how to use a tool that has been widely adopted by the Python community for working with databases, namely, SQLAlchemy. SQLAlchemy abstracts away many of the complications of writing SQL queries, but there is no substitute for a deep understanding of SQL and the relational model. For that reason, if you are new to SQL, I would recommend that you check out the colorful book Learn SQL the Hard Way by Zed Shaw, available online for free at http://sql.learncodethehardway.org/.

Introducing SQLAlchemy

SQLAlchemy is an extremely powerful library for working with relational databases in Python. Instead of writing SQL queries by hand, we can use normal Python objects to represent database tables and execute queries. There are a number of benefits to this approach, listed as follows:

- Your application can be developed entirely in Python.
- Subtle differences between database engines are abstracted away. This allows you to, for instance, use a lightweight database such as SQLite for local development and testing, then switch to a database designed for high loads (such as PostgreSQL) in production.
- Database errors are less common because there are now two layers between your application and the database server: the Python interpreter itself (which will catch obvious syntax errors), and SQLAlchemy, which has well-defined APIs and its own layer of error-checking.
- Your database code may become more efficient, thanks to SQLAlchemy's unit-of-work model, which helps reduce unnecessary round-trips to the database. SQLAlchemy also has facilities for efficiently pre-fetching related objects, known as eager loading.
- Object Relational Mapping (ORM) makes your code more maintainable, an aspiration known as don't repeat yourself (DRY). Suppose you add a column to a model. With SQLAlchemy, it will be available whenever you use that model. If, on the other hand, you had hand-written SQL queries strewn throughout your app, you would need to update each query, one at a time, to ensure that you were including the new column.
- SQLAlchemy can help you avoid SQL injection vulnerabilities.
- Excellent library support: there are a multitude of useful libraries that can work directly with your SQLAlchemy models to provide things like maintenance interfaces and RESTful APIs.

I hope you're excited after reading this list. If all the items in this list don't make sense to you right now, don't worry. Now that we have discussed some of the benefits of using SQLAlchemy, let's install it and start coding.

If you'd like to learn more about SQLAlchemy, there is an article devoted entirely to its design in The Architecture of Open-Source Applications, available online for free at http://aosabook.org/en/sqlalchemy.html.

Installing SQLAlchemy

We will use pip to install SQLAlchemy into the blog app's virtualenv. To activate your virtualenv, change directories and source the activate script as follows:

```
$ cd ~/projects/blog
$ source bin/activate
(blog) $ pip install sqlalchemy
Downloading/unpacking sqlalchemy
...
Successfully installed sqlalchemy
Cleaning up...
```

You can check whether your installation succeeded by opening a Python interpreter and checking the SQLAlchemy version; note that your exact version number is likely to differ:

```
$ python
>>> import sqlalchemy
>>> sqlalchemy.__version__
'0.9.0b2'
```

Using SQLAlchemy in our Flask app

SQLAlchemy works very well with Flask on its own, but the author of Flask has released a special Flask extension named Flask-SQLAlchemy that provides helpers for many common tasks and can save us from having to reinvent the wheel later on. Let's use pip to install this extension:

```
(blog) $ pip install flask-sqlalchemy
...
Successfully installed flask-sqlalchemy
```

Flask provides a standard interface for developers who are interested in building extensions. As the framework has grown in popularity, the number of high-quality extensions has increased. If you'd like to take a look at some of the more popular extensions, there is a curated list available on the Flask project website at http://flask.pocoo.org/extensions/.

Choosing a database engine

SQLAlchemy supports a multitude of popular database dialects, including SQLite, MySQL, and PostgreSQL. Depending on the database you would like to use, you may need to install an additional Python package containing a database driver. Listed next are several popular databases supported by SQLAlchemy and the corresponding pip-installable drivers.
Some databases have multiple driver options, so the most popular one is listed first:

| Database | Driver Package(s) |
| --- | --- |
| SQLite | Not needed; part of the Python standard library since version 2.5 |
| MySQL | MySQL-python, PyMySQL (pure Python), OurSQL |
| PostgreSQL | psycopg2 |
| Firebird | fdb |
| Microsoft SQL Server | pymssql, PyODBC |
| Oracle | cx-Oracle |

SQLite comes as standard with Python and does not require a separate server process, so it is perfect for getting up and running quickly. For simplicity in the examples that follow, I will demonstrate how to configure the blog app for use with SQLite. If you have a different database in mind that you would like to use for the blog project, feel free to use pip to install the necessary driver package at this time.

Connecting to the database

Using your favorite text editor, open the config.py module for our blog project (~/projects/blog/app/config.py). We are going to add an SQLAlchemy-specific setting to instruct Flask-SQLAlchemy how to connect to our database. The new line is the last one in the following:

```python
class Configuration(object):
    APPLICATION_DIR = current_directory
    DEBUG = True
    SQLALCHEMY_DATABASE_URI = 'sqlite:///%s/blog.db' % APPLICATION_DIR
```

The SQLALCHEMY_DATABASE_URI is composed of the following parts:

```
dialect+driver://username:password@host:port/database
```

Because SQLite databases are stored in local files, the only information we need to provide is the path to the database file. On the other hand, if you wanted to connect to PostgreSQL running locally, your URI might look something like this:

```
postgresql://postgres:secretpassword@localhost:5432/blog_db
```

If you're having trouble connecting to your database, try consulting the SQLAlchemy documentation on database URIs: http://docs.sqlalchemy.org/en/rel_0_9/core/engines.html.

Now that we've specified how to connect to the database, let's create the object responsible for actually managing our database connections. This object is provided by the Flask-SQLAlchemy extension and is conveniently named SQLAlchemy. Open app.py and make the following additions:

```python
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy

from config import Configuration

app = Flask(__name__)
app.config.from_object(Configuration)
db = SQLAlchemy(app)
```

These changes instruct our Flask app, and in turn SQLAlchemy, how to communicate with our application's database. The next step will be to create a table for storing blog entries, and to do so, we will create our first model.

Creating the Entry model

A model is the data representation of a table of data that we want to store in the database. These models have attributes called columns that represent the individual data items. So, if we were creating a Person model, we might have columns for storing the first and last name, date of birth, home address, hair color, and so on. Since we are interested in creating a model to represent blog entries, we will have columns for things like the title and body content. Note that we don't say a People model or an Entries model; models are singular even though they commonly represent many different objects.

With SQLAlchemy, creating a model is as easy as defining a class and specifying a number of attributes assigned to that class. Let's start with a very basic model for our blog entries.
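To make that point concrete before we build the real thing, the smallest useful model is just a class with a primary key and one data column. This is a hypothetical warm-up sketch (the Note name is ours, not part of the blog project), reusing the db object defined above:

```python
class Note(db.Model):
    # A deliberately tiny model: one auto-incrementing primary key
    # and one free-form text column.
    id = db.Column(db.Integer, primary_key=True)
    text = db.Column(db.Text)
```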
Create a new file named models.py in the blog project's app/ directory and enter the following code:

```python
import datetime, re

from app import db


def slugify(s):
    return re.sub(r'[^\w]+', '-', s).lower()


class Entry(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(100))
    slug = db.Column(db.String(100), unique=True)
    body = db.Column(db.Text)
    created_timestamp = db.Column(db.DateTime, default=datetime.datetime.now)
    modified_timestamp = db.Column(
        db.DateTime,
        default=datetime.datetime.now,
        onupdate=datetime.datetime.now)

    def __init__(self, *args, **kwargs):
        super(Entry, self).__init__(*args, **kwargs)  # Call parent constructor.
        self.generate_slug()

    def generate_slug(self):
        self.slug = ''
        if self.title:
            self.slug = slugify(self.title)

    def __repr__(self):
        return '<Entry: %s>' % self.title
```

There is a lot going on, so let's start with the imports and work our way down. We begin by importing the standard library datetime and re modules. We will be using datetime to get the current date and time, and re to do some string manipulation. The next import statement brings in the db object that we created in app.py. As you recall, the db object is an instance of the SQLAlchemy class, which is a part of the Flask-SQLAlchemy extension. The db object provides access to the classes that we need to construct our Entry model, which is just a few lines ahead.

Before the Entry model, we define a helper function, slugify, which we will use to give our blog entries some nice URLs. The slugify function takes a string like "A post about Flask" and uses a regular expression to turn the human-readable string into something suitable for a URL, returning a-post-about-flask.

Next is the Entry model. Our Entry model is a normal class that extends db.Model. By extending db.Model, our Entry class will inherit a variety of helpers which we'll use to query the database. The attributes of the Entry model are a simple mapping of the names and data that we wish to store in the database, and are listed as follows:

- id: This is the primary key for our database table. This value is set for us automatically by the database when we create a new blog entry, usually an auto-incrementing number for each new entry. While we will not explicitly set this value, a primary key comes in handy when you want to refer one model to another.
- title: The title for a blog entry, stored as a String column with a maximum length of 100.
- slug: The URL-friendly representation of the title, stored as a String column with a maximum length of 100. This column also specifies unique=True, so that no two entries can share the same slug.
- body: The actual content of the post, stored in a Text column. This differs from the String type of the title and slug, as you can store as much text as you like in this field.
- created_timestamp: The time a blog entry was created, stored in a DateTime column. We instruct SQLAlchemy to automatically populate this column with the current time by default when an entry is first saved.
- modified_timestamp: The time a blog entry was last updated. SQLAlchemy will automatically update this column with the current time whenever we save an entry.

For short strings such as titles or names of things, the String column is appropriate, but when the text may be especially long, it is better to use a Text column, as we did for the entry body.

We've overridden the constructor for the class (__init__) so that, when a new model is created, it automatically sets the slug for us based on the title.
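To see the slugify helper in isolation, here is a small, self-contained sketch (the expected output is shown in the comments):

```python
import re

def slugify(s):
    # Collapse each run of non-word characters into a single hyphen,
    # then lowercase the result.
    return re.sub(r'[^\w]+', '-', s).lower()

print(slugify('A post about Flask'))    # a-post-about-flask
print(slugify('Relational Databases'))  # relational-databases
```

Note that, as written, a title ending in punctuation produces a trailing hyphen (slugify('Hello!') returns 'hello-'); the helper is kept deliberately simple here.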
The last piece is the __repr__ method, which is used to generate a helpful representation of instances of our Entry class. The specific value returned by __repr__ is not important, but it allows you to identify the object that the program is working with when debugging.

A final bit of code needs to be added to main.py, the entry point to our application, to ensure that the models are imported. Add the highlighted changes to main.py as follows:

```python
from app import app, db
import models
import views

if __name__ == '__main__':
    app.run()
```

Creating the Entry table

In order to start working with the Entry model, we first need to create a table for it in our database. Luckily, Flask-SQLAlchemy comes with a nice helper for doing just this. Create a new sub-folder named scripts in the blog project's app directory. Then create a file named create_db.py:

```
(blog) $ cd app/
(blog) $ mkdir scripts
(blog) $ touch scripts/create_db.py
```

Add the following code to the create_db.py module. This script will automatically look at all the code that we have written and create a new table in our database for the Entry model, based on our models:

```python
from main import db

if __name__ == '__main__':
    db.create_all()
```

Execute the script from inside the app/ directory. Make sure the virtualenv is active. If everything goes successfully, you should see no output:

```
(blog) $ python create_db.py
(blog) $
```

If you encounter errors while creating the database tables, make sure you are in the app directory, with the virtualenv activated, when you run the script. Next, ensure that there are no typos in your SQLALCHEMY_DATABASE_URI setting.

Working with the Entry model

Let's experiment with our new Entry model by saving a few blog entries. We will be doing this from the Python interactive shell. At this stage, let's install IPython, a sophisticated shell with features like tab completion (which the default Python shell lacks):

```
(blog) $ pip install ipython
```

Now, check that we are in the app directory, start the shell, and create a couple of entries as follows:

```
(blog) $ ipython

In []: from models import *  # First things first, import our Entry model and db object.
In []: db  # What is db?
Out[]: <SQLAlchemy engine='sqlite:////home/charles/projects/blog/app/blog.db'>
```

If you are familiar with the normal Python shell but not IPython, things may look a little different at first. The main thing to be aware of is that In[] refers to the code you type in, and Out[] is the output of the commands you put into the shell.

IPython has a neat feature that allows you to print detailed information about an object. This is done by typing the object's name followed by a question mark (?). Introspecting the Entry model provides a bit of information, including the argument signature and the string representing that object (known as the docstring) of the constructor:

```
In []: Entry?  # What is Entry and how do we create it?
Type:        _BoundDeclarativeMeta
String Form: <class 'models.Entry'>
File:        /home/charles/projects/blog/app/models.py
Docstring:   <no docstring>
Constructor information:
 Definition: Entry(self, *args, **kwargs)
```

We can create Entry objects by passing column values in as keyword arguments. The preceding signature uses **kwargs; this is a shortcut for taking a dict object and using it as the values for defining the object, as shown next:

```
In []: first_entry = Entry(title='First entry', body='This is the body of my first entry.')
```

In order to save our first entry, we need to add it to the database session.
The session is simply an object that represents our actions on the database. Even after being added to the session, the entry will not be saved to the database yet. In order to save the entry to the database, we need to commit our session:

```
In []: db.session.add(first_entry)
In []: first_entry.id is None  # No primary key; the entry has not been saved.
Out[]: True
In []: db.session.commit()
In []: first_entry.id
Out[]: 1
In []: first_entry.created_timestamp
Out[]: datetime.datetime(2014, 1, 25, 9, 49, 53, 1337)
```

As you can see from the preceding code examples, once we commit the session, a unique id is assigned to our first entry and the created_timestamp is set to the current time. Congratulations, you've created your first blog entry! Try adding a few more on your own. You can add multiple entry objects to the same session before committing, so give that a try as well.

At any point while you are experimenting, feel free to delete the blog.db file and re-run the create_db.py script to start over with a fresh database.

Making changes to an existing entry

In order to make changes to an existing Entry, simply make your edits and then commit. Let's retrieve our Entry using the id that was returned to us earlier, make some changes, and commit it. SQLAlchemy will know that it needs to be updated. Here is how you might make edits to the first entry:

```
In []: first_entry = Entry.query.get(1)
In []: first_entry.body = 'This is the first entry, and I have made some edits.'
In []: db.session.commit()
```

And just like that, your changes are saved.

Deleting an entry

Deleting an entry is just as easy as creating one. Instead of calling db.session.add, we call db.session.delete and pass in the Entry instance that we wish to remove:

```
In []: bad_entry = Entry(title='bad entry', body='This is a lousy entry.')
In []: db.session.add(bad_entry)
In []: db.session.commit()  # Save the bad entry to the database.
In []: db.session.delete(bad_entry)
In []: db.session.commit()  # The bad entry is now deleted from the database.
```

Retrieving blog entries

While creating, updating, and deleting are fairly straightforward operations, the real fun starts when we look at ways to retrieve our entries. We'll start with the basics, and then work our way up to more interesting queries. We will use a special attribute on our model class to make queries: Entry.query. This attribute exposes a variety of APIs for working with the collection of entries in the database.

Let's simply retrieve a list of all the entries in the Entry table:

```
In []: entries = Entry.query.all()
In []: entries  # What are our entries?
Out[]: [<Entry u'First entry'>, <Entry u'Second entry'>, <Entry u'Third entry'>, <Entry u'Fourth entry'>]
```

As you can see, in this example the query returns a list of Entry instances that we created. When no explicit ordering is specified, the entries are returned to us in an arbitrary order chosen by the database.
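The opening list of this article promised sorting, filtering, and aggregation. Before moving on to sorting and filtering, here is a brief sketch of simple aggregation with the same query attribute; count() is part of the standard query API, and db.func is Flask-SQLAlchemy's proxy for SQLAlchemy's func namespace (the variable names here are ours):

```python
# Count all of the entries in the table.
total = Entry.query.count()

# Counting also works on a filtered query.
matching = Entry.query.filter(Entry.title == 'First entry').count()

# Other aggregates go through func; for example, the most recent
# creation time across all entries.
newest = db.session.query(db.func.max(Entry.created_timestamp)).scalar()
```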
Let's specify that we want the entries returned to us in alphabetical order by title:

```
In []: Entry.query.order_by(Entry.title.asc()).all()
Out[]: [<Entry u'First entry'>, <Entry u'Fourth entry'>, <Entry u'Second entry'>, <Entry u'Third entry'>]
```

Shown next is how you would list your entries in reverse-chronological order, based on when they were last updated:

```
In []: newest_to_oldest = Entry.query.order_by(Entry.modified_timestamp.desc()).all()
Out[]: [<Entry: Fourth entry>, <Entry: Third entry>, <Entry: Second entry>, <Entry: First entry>]
```

Filtering the list of entries

It is very useful to be able to retrieve the entire collection of blog entries, but what if we want to filter the list? We could always retrieve the entire collection and then filter it in Python using a loop, but that would be very inefficient. Instead, we will rely on the database to do the filtering for us, and simply specify the conditions for which entries should be returned. In the following example, we will specify that we want to filter by entries where the title equals 'First entry':

```
In []: Entry.query.filter(Entry.title == 'First entry').all()
Out[]: [<Entry u'First entry'>]
```

If this seems somewhat magical to you, it's because it really is! SQLAlchemy uses operator overloading to convert expressions like <Model>.<column> == <some value> into an abstracted object called a BinaryExpression. When you are ready to execute your query, these data structures are then translated into SQL. A BinaryExpression is simply an object that represents the logical comparison, and is produced by overriding the standard methods that are typically called on an object when comparing values in Python.

In order to retrieve a single entry, you have two options: .first() and .one(). Their differences and similarities are summarized in the following table:

| Number of matching rows | first() behavior | one() behavior |
| --- | --- | --- |
| 1 | Return the object. | Return the object. |
| 0 | Return None. | Raise sqlalchemy.orm.exc.NoResultFound. |
| 2+ | Return the first object (based on either explicit ordering or the ordering chosen by the database). | Raise sqlalchemy.orm.exc.MultipleResultsFound. |

Let's try the same query as before, but instead of calling .all(), we will call .first() to retrieve a single Entry instance:

```
In []: Entry.query.filter(Entry.title == 'First entry').first()
Out[]: <Entry u'First entry'>
```

Notice how previously .all() returned a list containing the object, whereas .first() returned just the object itself.

Special lookups

In the previous example we tested for equality, but there are many other types of lookups possible. In the following table, I have listed some that you may find useful. A complete list can be found in the SQLAlchemy documentation.

| Example | Meaning |
| --- | --- |
| Entry.title == 'The title' | Entries where the title is "The title", case-sensitive. |
| Entry.title != 'The title' | Entries where the title is not "The title". |
| Entry.created_timestamp < datetime.date(2014, 1, 25) | Entries created before January 25, 2014. For less than or equal, use <=. |
| Entry.created_timestamp > datetime.date(2014, 1, 25) | Entries created after January 25, 2014. For greater than or equal, use >=. |
| Entry.body.contains('Python') | Entries where the body contains the word "Python", case-sensitive. |
| Entry.title.endswith('Python') | Entries where the title ends with the string "Python", case-sensitive. Note that this will also match titles that end with the word "CPython", for example. |
| Entry.title.startswith('Python') | Entries where the title starts with the string "Python", case-sensitive. Note that this will also match titles like "Pythonistas". |
| Entry.body.ilike('%python%') | Entries where the body contains the word "python" anywhere in the text, case-insensitive. The "%" character is a wildcard. |
| Entry.title.in_(['Title one', 'Title two']) | Entries where the title is in the given list, either 'Title one' or 'Title two'. |

Combining expressions

The expressions listed in the preceding table can be combined using bitwise operators to produce arbitrarily complex expressions. Let's say we want to retrieve all blog entries that have the word Python or Flask in the title. To accomplish this, we will create two contains expressions, then combine them using Python's bitwise OR operator, which is a single pipe (|) character, unlike a lot of other languages that use a double pipe (||):

```python
Entry.query.filter(Entry.title.contains('Python') | Entry.title.contains('Flask'))
```

Using bitwise operators, we can come up with some pretty complex expressions. Try to figure out what the following example is asking for:

```python
Entry.query.filter(
    (Entry.title.contains('Python') | Entry.title.contains('Flask')) &
    (Entry.created_timestamp > (datetime.date.today() - datetime.timedelta(days=30)))
)
```

As you probably guessed, this query returns all entries where the title contains either Python or Flask, and which were created within the last 30 days. We are using Python's bitwise OR and AND operators to combine the sub-expressions. For any query you produce, you can view the generated SQL by printing the query as follows:

```
In []: query = Entry.query.filter(
    (Entry.title.contains('Python') | Entry.title.contains('Flask')) &
    (Entry.created_timestamp > (datetime.date.today() - datetime.timedelta(days=30)))
)
In []: print str(query)
SELECT entry.id AS entry_id, ...
FROM entry
WHERE (
    (entry.title LIKE '%%' || :title_1 || '%%')
    OR (entry.title LIKE '%%' || :title_2 || '%%')
) AND entry.created_timestamp > :created_timestamp_1
```

Negation

There is one more piece to discuss, which is negation. If we wanted to get a list of all blog entries which did not contain Python or Flask in the title, how would we do that? SQLAlchemy provides two ways to create these types of expressions: using either Python's unary negation operator (~) or by calling db.not_(). This is how you would construct this query with SQLAlchemy:

Using unary negation:

```
In []: Entry.query.filter(~(Entry.title.contains('Python') | Entry.title.contains('Flask')))
```

Using db.not_():

```
In []: Entry.query.filter(db.not_(Entry.title.contains('Python') | Entry.title.contains('Flask')))
```

Operator precedence

Not all operations are considered equal by the Python interpreter. This is like in math class, where we learned that expressions like 2 + 3 * 4 are equal to 14 and not 20, because the multiplication operation occurs first. In Python, bitwise operators all have a higher precedence than things like equality tests, so this means that, when you are building your query expression, you have to pay attention to the parentheses. Let's look at some example Python expressions and their corresponding results:

Entry.title == 'Python' | Entry.title == 'Flask'
    Wrong! SQLAlchemy throws an error because the first thing to be evaluated is actually 'Python' | Entry.title!

(Entry.title == 'Python') | (Entry.title == 'Flask')
    Right. Returns entries where the title is either "Python" or "Flask".

~Entry.title == 'Python'
    Wrong! SQLAlchemy will turn this into a valid SQL query, but the results will not be meaningful.

~(Entry.title == 'Python')
    Right. Returns entries where the title is not equal to "Python".

If you find yourself struggling with operator precedence, it's a safe bet to put parentheses around any comparison that uses ==, !=, <, <=, >, and >=.

Making changes to the schema

The final topic we will discuss in this article is how to make modifications to an existing model definition. From the project specification, we know we would like to be able to save drafts of our blog entries. Right now we don't have any way to tell whether an entry is a draft or not, so we will need to add a column that lets us store the status of our entry. Unfortunately, while db.create_all() works perfectly for creating tables, it will not automatically modify an existing table; to do this, we need to use migrations.

Adding Flask-Migrate to our project

We will use Flask-Migrate to help us automatically update our database whenever we change the schema. In the blog virtualenv, install Flask-Migrate using pip:

```
(blog) $ pip install flask-migrate
```

The author of SQLAlchemy has a project called Alembic; Flask-Migrate makes use of this and integrates it with Flask directly, making things easier.

Next, we will add a Migrate helper to our app. We will also create a script manager for our app. The script manager allows us to execute special commands within the context of our app, directly from the command line. We will be using the script manager to execute the migrate command. Open app.py and make the following additions:

```python
from flask import Flask
from flask.ext.migrate import Migrate, MigrateCommand
from flask.ext.script import Manager
from flask.ext.sqlalchemy import SQLAlchemy

from config import Configuration

app = Flask(__name__)
app.config.from_object(Configuration)
db = SQLAlchemy(app)
migrate = Migrate(app, db)

manager = Manager(app)
manager.add_command('db', MigrateCommand)
```

In order to use the manager, we will add a new file named manage.py alongside app.py. Add the following code to manage.py:

```python
from app import manager
from main import *

if __name__ == '__main__':
    manager.run()
```

This looks very similar to main.py, the key difference being that, instead of calling app.run(), we are calling manager.run(). Django has a similar, although auto-generated, manage.py file that serves a similar function.

Creating the initial migration

Before we can start changing our schema, we need to create a record of its current state. To do this, run the following commands from inside your blog's app directory. The first command will create a migrations directory inside the app folder, which will track the changes we make to our schema. The second command, db migrate, will create a snapshot of our current schema so that future changes can be compared to it:

```
(blog) $ python manage.py db init
Creating directory /home/charles/projects/blog/app/migrations ... done
...
(blog) $ python manage.py db migrate
INFO  [alembic.migration] Context impl SQLiteImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
Generating /home/charles/projects/blog/app/migrations/versions/535133f91f00_.py ... done
```

Finally, we will run db upgrade to run the migration, which will indicate to the migration system that everything is up to date:

```
(blog) $ python manage.py db upgrade
INFO  [alembic.migration] Context impl SQLiteImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.migration] Running upgrade None -> 535133f91f00, empty message
```

Adding a status column

Now that we have a snapshot of our current schema, we can start making changes.
We will be adding a new column named status, which will store an integer value corresponding to a particular status. Although there are only two statuses at the moment (PUBLIC and DRAFT), using an integer instead of a Boolean gives us the option to easily add more statuses in the future. Open models.py and make the following additions to the Entry model:

```python
class Entry(db.Model):
    STATUS_PUBLIC = 0
    STATUS_DRAFT = 1

    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(100))
    slug = db.Column(db.String(100), unique=True)
    body = db.Column(db.Text)
    status = db.Column(db.SmallInteger, default=STATUS_PUBLIC)
    created_timestamp = db.Column(db.DateTime, default=datetime.datetime.now)
    ...
```

From the command line, we will once again run db migrate to generate the migration script. You can see from the command's output that it found our new column:

```
(blog) $ python manage.py db migrate
INFO  [alembic.migration] Context impl SQLiteImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.autogenerate.compare] Detected added column 'entry.status'
Generating /home/charles/projects/blog/app/migrations/versions/2c8e81936cad_.py ... done
```

Because we have blog entries in the database, we need to make a small modification to the auto-generated migration to ensure the statuses of the existing entries are initialized to the proper value. To do this, open up the migration file (mine is migrations/versions/2c8e81936cad_.py) and find the following line:

```python
op.add_column('entry', sa.Column('status', sa.SmallInteger(), nullable=True))
```

Replacing nullable=True with server_default='0' tells the migration script not to set the column to null by default, but instead to use 0:

```python
op.add_column('entry', sa.Column('status', sa.SmallInteger(), server_default='0'))
```

Finally, run db upgrade to run the migration and create the status column:

```
(blog) $ python manage.py db upgrade
INFO  [alembic.migration] Context impl SQLiteImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.migration] Running upgrade 535133f91f00 -> 2c8e81936cad, empty message
```

Congratulations, your Entry model now has a status field!

Summary

By now, you should be familiar with using SQLAlchemy to work with a relational database. We covered the benefits of using a relational database and an ORM, configured a Flask application to connect to a relational database, and created SQLAlchemy models. All this allowed us to create relationships between our data and perform queries. To top it off, we also used a migration tool to handle future database schema changes.

Next, we will set aside the interactive interpreter and start creating views to display blog entries in the web browser. We will put all our SQLAlchemy knowledge to work by creating interesting lists of blog entries, as well as a simple search feature. We will build a set of templates to make the blogging site visually appealing, and learn how to use the Jinja2 templating language to eliminate repetitive HTML coding.

Resources for Article:

Further resources on this subject:
- Man, Do I Like Templates! [article]
- Snap – The Code Snippet Sharing Application [article]
- Deploying on your own server [article]

Making a simple Web based SSH client using Node.js and Socket.io

Jakub Mandula
28 Oct 2015
7 min read
If you are reading this post, you probably know what SSH stands for. But just for the sake of formality, here we go: SSH stands for Secure Shell. It is a network protocol for secure access to the shell on a remote computer. You can do much more over SSH besides commanding your computer. Here you can find further information: http://en.wikipedia.org/wiki/Secure_Shell.

In this post, we are going to create a very simple web terminal. And when I say simple, I mean it! However much you like colors, it will not support them, because the parsing is just beyond the scope of this post. If you want a good client-side terminal library, use term.js. It is made by the same guy who wrote pty.js, which we will be using. It is able to handle pretty much all key events and COLORS!!!!

Installation

I am going to assume you already have node and npm installed. First, we will install all of the npm packages we will be using:

```
npm install express pty.js socket.io
```

- Express is a super cool web framework for Node. We are going to use it to serve our static files. I know it is a bit overkill, but I like Express.
- pty.js is where the magic will be happening. It forks processes into virtual pseudo terminals and provides bindings for communication.
- Socket.io is what we will use to transmit the data from the web browser to the server and back. It uses modern WebSockets, but provides fallbacks for backward compatibility. Anytime you want to create a real-time application, Socket.io is the way to go.

Planning

First things first, we need to think about what we want the program to do. We want the program to create an instance of a shell on the server (remote machine) and send all of the text to the browser. Back in the browser, we want to capture any user events and send them back to the server shell.

The WebSSH server

This is the code that will power the terminal forwarding. Open a new file named server.js and start by importing all of the libraries:

```javascript
var express = require('express');
var https = require('https');
var http = require('http');
var fs = require('fs');
var pty = require('pty.js');
```

Set up express:

```javascript
// Setup the express app
var app = express();
// Static file serving
app.use("/", express.static("./"));
```

Next, we are going to create the server:

```javascript
// Creating an HTTP server
var server = http.createServer(app).listen(8080);
```

If you want to use HTTPS, which you probably will, you need to generate a key and certificate and import them as shown:

```javascript
var options = {
  key: fs.readFileSync('keys/key.pem'),
  cert: fs.readFileSync('keys/cert.pem')
};
```

Then use the options object to create the actual server. Notice that this time we are using the https package:

```javascript
// Create an HTTPS server
var server = https.createServer(options, app).listen(8080);
```

CAUTION: Even if you use HTTPS, do not use this example program on the Internet. You are not authenticating the client in any way, and are thus providing a free open gate to your computer. Please make sure you only use this on your private network, protected by a firewall!!!

Now bind the socket.io instance to the server:

```javascript
var io = require('socket.io')(server);
```

After this, we can set up the place where the magic happens.
```javascript
// When a new socket connects
io.on('connection', function(socket){
  // Create terminal
  var term = pty.spawn('sh', [], {
    name: 'xterm-color',
    cols: 80,
    rows: 30,
    cwd: process.env.HOME,
    env: process.env
  });
  // Listen on the terminal for output and send it to the client
  term.on('data', function(data){
    socket.emit('output', data);
  });
  // Listen on the client and send any input to the terminal
  socket.on('input', function(data){
    term.write(data);
  });
  // When socket disconnects, destroy the terminal
  socket.on("disconnect", function(){
    term.destroy();
    console.log("bye");
  });
});
```

In this block, all we do is wait for new connections. When we get one, we spawn a new virtual terminal and start to pump the data from the terminal to the socket and vice versa. After the socket disconnects, we make sure to destroy the terminal.

If you have noticed, I am using the simple sh shell. I did this mainly because I don't have a fancy prompt on it. Because we are not adding any parsing logic, my bash prompt would show up like this:

```
]0;piman@mothership: ~ _[01;32m✓ [33mpiman_[0m ↣ _[1;34m[~]_[37m$[0m
```

Eww! But you may use any shell you like. This is all that we need on the server side. Save the file and close it.

Client side

The client side is going to be just a very simple HTML file. Start with some very simple HTML markup:

```html
<!doctype html>
<html>
<head>
  <title>SSH Client</title>
  <script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/socket.io/1.3.5/socket.io.min.js"></script>
  <script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
  <style>
    body {
      margin: 0;
      padding: 0;
    }
    .terminal {
      font-family: monospace;
      color: white;
      background: black;
    }
  </style>
</head>
<body>
  <h1>SSH</h1>
  <div class="terminal"></div>
  <script>
  </script>
</body>
</html>
```

I am loading the client-side libraries jquery and socket.io from cdnjs. All of the client code will be written in the script tag below the terminal div. Surprisingly, the code is very simple:

```javascript
// Connect to the socket.io server
var socket = io.connect('http://localhost:8080');

// Wait for data from the server
socket.on('output', function (data) {
  // Insert some line breaks where they belong
  data = data.replace("\n", "<br>");
  data = data.replace("\r", "<br>");
  // Append the data to our terminal
  $('.terminal').append(data);
});

// Listen for user input and pass it to the server
$(document).on("keypress", function(e){
  var char = String.fromCharCode(e.which);
  socket.emit("input", char);
});
```

Notice that we do not have to explicitly append the text the client types to the terminal, mainly because the server echoes it back anyway.

Now we are done! Run the server and open up the URL in your browser:

```
node server.js
```

You should see a small prompt and be able to start typing commands. You can now explore your machine from the browser! Remember that our web terminal does not support the Tab, Ctrl, Backspace, or Esc characters. Implementing this is your homework.

Conclusion

I hope you found this tutorial useful. You can apply the knowledge in any real-time application where communication with the server is critical. All the code is available here. Please note that if you'd like to use a browser terminal, I strongly recommend term.js. It supports colors and styles and all the basic keys, including Tab, Backspace, and so on. I use it in my PiDashboard project. It is much cleaner and less tedious than the example I have here. I can't wait to see what amazing apps you will invent based on this.
About the Author Jakub Mandula is a student interested in anything to do with technology, computers, mathematics or science.

PrimeFaces Theme Development: Icons

Packt
26 Oct 2015
21 min read
In this article by Andy Bailey and Sudheer Jonna, the authors of the book PrimeFaces Theme Development, we'll cover icons, which add a lot of value to an application, based on the principle that a picture is worth a thousand words. Equally important is the fact that, when well designed, they please the eye and serve as memory joggers for your users. We humans strongly associate symbols with actions. For example, a save button with a disc icon is more evocative. The association becomes even stronger when we use the same icon for the same action in menus and button bars. It is also possible to use icons in place of text labels. When designing the user interface of your application, it is important to keep in mind that the navigational and action elements (such as buttons) should not be so intrusive that the application becomes cluttered with all the things that can be done. The user wants to see the information that they want to see, and use input dialogs to add more. What they don't want is to be distracted by links, lots of link and button text, and glaring visuals.

In this article, we will cover the following topics:

- The standard theme icon set
- Creating a set of icons of our own
- Adding new icons to a theme
- Using custom icons in a commandButton component
- Using custom icons in a menu component
- The FontAwesome icons as an alternative to the ThemeRoller icons

Introducing the standard theme icon set

jQuery UI provides a big set of standard icons that can be applied by just adding icon class names to HTML elements. The full list of icons is available on the official site at http://api.jqueryui.com/theming/icons/, and also in published icon cheat sheets such as http://www.petefreitag.com/cheatsheets/jqueryui-icons/.

The icon class names follow this syntax when added to HTML elements:

```
.ui-icon-{icon type}-{icon sub description}-{direction}
```

For example, the following span element will display an icon of a triangle pointing to the south:

```html
<span class="ui-icon ui-icon-triangle-1-s"></span>
```

Other icons, such as ui-icon-triangle-1-n, ui-icon-triangle-1-e, and ui-icon-triangle-1-w, represent icons of triangles pointing to the north, east, and west respectively. The direction element is optional, and it is available only for a few icons, such as triangles, arrows, and so on. These theme icons are integrated into a number of jQuery UI-based widgets, such as buttons, menus, dialogs, date picker components, and so on.

The aforementioned standard set of icons is available in ThemeRoller as one image sprite, instead of a separate image for each icon. That is, ThemeRoller is designed to use image sprite technology for icons. The different image sprites, which vary in color (based on the widget state), are available in the images folder of each downloaded theme.

An image sprite is a collection of images put into a single image. A web page with many images may take a long time to load and generates multiple server requests. For a high-performance application, this idea reduces the number of server requests and the bandwidth used. It also centralizes the image locations, so that all the icons can be found in one place. The basic image sprite for the PrimeFaces Aristo theme combines all of these icons in a single image.
The basic image sprite for the PrimeFaces Aristo theme looks like this: The image sprite's look and feel will vary based on the screen area of the widget and its components such as the header and content and widget states such as hover, active, highlight, and error styles. Let us now consider a JSF/PF-based example, where we can add a standard set of icons for UI components such as the commandButton and menu bar. First, we will create a new folder in web pages called chapter6. Then, we will create a new JSF template client called standardThemeIcons.xhtml and add a link to it in the chaptersTemplate.xhtml template file. When adding a submenu, use Chapter 6 for the label name and for the menu item, use Standard Icon Set as its value. In the title section, replace the text title with the respective topic of this article, which is Standard Icons: <ui:define name="title">   Standard Icons </ui:define> In the content section, replace the text content with the code for commandButton and menu components. Let's start with the commandButton components. The set of commandButton components uses the standard theme icon set with the help of the icon attribute, as follows: <h:panelGroup style="margin-left:830px">   <h3 style="margin-top: 0">Buttons</h3>   <p:commandButton value="Edit" icon="ui-icon-pencil"     type="button" />   <p:commandButton value="Bookmark" icon="ui-icon-bookmark"     type="button" />   <p:commandButton value="Next" icon="ui-icon-circle-arrow-e"     type="button" />   <p:commandButton value="Previous" icon="ui-icon-circle-arrow-w"     type="button" /> </h:panelGroup> The generated HTML for the first commandButton that is used to display the standard icon will be as follows: <button id="mainForm:j_idt15" name="mainForm:j_idt15" class="ui-   button ui-widget ui-state-default ui-corner-all ui-button-text-   icon-left" type="button" role="button" aria-disabled="false">   <span class="ui-button-icon-left ui-icon ui-c   ui-icon-     pencil"></span>   <span class="ui-button-text ui-c">Edit</span> </button> The PrimeFaces commandButton renderer appends the icon position CSS class based on the icon position (left or right) to the HTML button element, apart from the icon CSS class in one child span element and text CSS class in another child span element. This way, it displays the icon on commandButton based on the icon position property. By default, the position of the icon is left. Now, we will move on to the menu components. A menu component uses the standard theme icon set with the help of the menu item icon attribute. Add the following code snippets of the menu component to your page: <h3>Menu</h3> <p:menu style="margin-left:500px">   <p:submenu label="File">     <p:menuitem value="New" url="#" icon="ui-icon-plus" />     <p:menuitem value="Delete" url="#" icon="ui-icon-close" />     <p:menuitem value="Refresh" url="#" icon="ui-icon-refresh" />     <p:menuitem value="Print" url="#" icon="ui-icon-print" />   </p:submenu>   <p:submenu label="Navigations">     <p:menuitem value="Home" url="http://www.primefaces.org"       icon="ui-icon home" />     <p:menuitem value="Admin" url="#" icon="ui-icon-person" />     <p:menuitem value="Contact Us" url="#" icon="ui-icon-       contact" />   </p:submenu> </p:menu> You may have observed from the preceding code snippets that each icon from ThemeRoller starts with ui-icon for consistency. 
Now, run the application and navigate your way to the newly created page; you should see the standard ThemeRoller icons applied to the buttons and menu items.

For further information, you can use the PrimeFaces showcase (http://www.primefaces.org/showcase/), where you can see the default icons used for components, how standard theme icons are applied with the help of the icon attribute, and so on.

Creating a set of icons of our own

In this section, we are going to discuss how to create our own icons for a PrimeFaces web application. Instead of individual images, you should use image sprites, considering their impact on application performance. Quite often, we want to add custom icons to UI components beyond the regular standard icon set. Generally, in order to create our own custom icons, we need to provide CSS classes with the background-image property referring to an image in the theme's images folder. For example, the following commandButton components use a custom icon:

```xhtml
<p:commandButton value="With Icon" icon="disk"/>
<p:commandButton icon="disk"/>
```

The disk icon is created by adding a .disk CSS class with the background-image property. In order to display the image, you need to provide the correct relative path to the image from the web application, as follows:

```css
.disk {
  background-image: url('disk.png') !important;
}
```

However, as discussed earlier, we are going to use image sprite technology, instead of a separate image for each icon, to optimize web performance. Before creating an image sprite, you need to select all the required images and convert them (PNG, JPG, and so on) to the icon format, with a size almost equal to that of the ThemeRoller icons. In this article, we used the Paint.NET tool to convert images to the ICO format with a size of 16 by 16 pixels.

Paint.NET is a free raster graphics editor for Microsoft Windows, developed on the .NET framework. It is a good replacement for the Microsoft Paint program, with support for layer blending, transparency, and plugins. If the ICO format is not available, you have to add the file type plugin to the Paint.NET installation directory. The conversion is just a two-step process:

1. Save the image (PNG, JPG, and so on) with the Icons (*.ico) option from the Save as type dropdown.
2. Select 16 by 16 dimensions with the supported bit depth (8-bit, 32-bit, and so on).

All the PrimeFaces theme icons are designed to have the same dimensions. There are many online and offline tools available that can be used to create an image sprite. I used Instant Sprite, an open source CSS sprite generator tool, to create the image sprite in this article. You can have a look at the official site for this CSS generator tool by visiting http://instantsprite.com/.

Let's go through the following step-by-step process to create an image sprite using the Instant Sprite tool:

1. First, either select multiple icons from your computer, or drag and drop icons onto the tool page.
2. In the Thumbnails section, drag and drop the images to change their order in the sprite.
3. Change the offset (in pixels), direction (horizontal, vertical, or diagonal), and type (.png or .gif) values in the Options section.
4. In the Sprite section, right-click on the image to save it on your computer. You can also save the image in a new window or as a base64 type.
5. In the Usage section, you will find the generated sprite CSS classes and HTML.
Once the image is created, you can check it in the preview section before finalizing it. Now, let's start creating the image sprite for the button bar and menu components, which are going to be used in later sections. First, download or copy the required individual icons to your computer. Then, select all those files and drag and drop them in a particular order, as follows:

We can also configure a few options, such as an offset of 10 px for icon padding, a horizontal direction to lay the icons out in a row, and finally PNG as the image type:

The image sprite is generated in the Sprite section, as follows:

Right-click on the image to save it to your computer. We have now created a custom image sprite from the set of icons. Once the image sprite has been created, change the sprite name to ui-custom-icons and copy the generated CSS styles for later use. In the generated HTML, note that each div element carries the ui-icon class in addition to its own class, to display the icon with a width of 16 px and a height of 16 px.

Adding the new icons to your theme

In order to apply the custom icons to your web page, we first need to copy the generated image sprite file and then add the generated CSS classes from the previous section. The generated sprite file has to be added to the images folder of the primefaces-moodyblue2 custom theme. Let's name the file ui-custom-icons:

After this, copy the generated CSS rules from the previous section. The first CSS class (ui-icon) references the image sprite through the background URL property and sets the width and height dimensions for each icon. But since we are going to add the image reference in the widget state style classes instead, you need to remove the background-image URL property from the ui-icon class. Hence, the ui-icon class contains only the width and height dimensions:

.ui-icon {
  width: 16px;
  height: 16px;
}

Next, modify the icon-specific CSS class names to the following format, where each icon has its own name:

.ui-icon-{icon name}

The following CSS classes refer to the individual icons with the help of the background-position property. After modification, the positioning classes will look like this:

.ui-icon-edit { background-position: 0 0; }
.ui-icon-bookmark { background-position: -26px 0; }
.ui-icon-next { background-position: -52px 0; }
.ui-icon-previous { background-position: -78px 0; }
.ui-icon-new { background-position: -104px 0; }
.ui-icon-delete { background-position: -130px 0; }
.ui-icon-refresh { background-position: -156px 0; }
.ui-icon-print { background-position: -182px 0; }
.ui-icon-home { background-position: -208px 0; }
.ui-icon-admin { background-position: -234px 0; }
.ui-icon-contactus { background-position: -260px 0; }

Apart from the preceding classes, we have to add the component state CSS classes. Widget states such as hover, focus, highlight, active, and error need to refer to different image sprites in order to reflect the component's state during user interactions. For demonstration purposes, we created only one image sprite and used it for all the state classes; in real-world development, the image would vary based on the widget state.
The following widget state classes all point to the image sprite, one per state:

.ui-icon, .ui-widget-content .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-widget-header .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-state-default .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-state-hover .ui-icon, .ui-state-focus .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-state-active .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-state-highlight .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-state-error .ui-icon, .ui-state-error-text .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}

In the JSF ecosystem, image references in the theme.css file must be converted to an expression that JSF resource loading can understand. Originally, each image URL appears in the following form:

background-image: url("images/ui-custom-icons.png");

After modification, the expression looks like this:

background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");

We need to make sure that the default state classes are commented out in the theme.css file (of the moodyblue2 theme) so that the custom icons are displayed. By default, the custom theme classes (the state and icon classes available under the custom states and images and custom icons positioning sections) are commented out in the source code of the GitHub project. So, we need to uncomment those sections and comment out the default theme classes (the state and icon classes available under the states and images and positioning sections); only the default or the custom style classes should be active in theme.css at any one time. You can also see all of these changes in the moodyblue3 theme: the custom icons appear on the Custom Icons screen simply by switching the current theme to moodyblue3.
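If you want to switch between the two themes to compare them, recall that PrimeFaces selects its theme through the primefaces.THEME context parameter in web.xml. A minimal sketch, assuming the custom theme is packaged under the name moodyblue3 (the theme name, not the jar file name, goes in the parameter value):

<context-param>
  <param-name>primefaces.THEME</param-name>
  <param-value>moodyblue3</param-value>
</context-param>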
Using custom icons in the commandButton components

After applying the new icons to the theme, you are ready to use them on PrimeFaces components. In this section, we will add custom icons to command buttons. Let's add a link named Custom Icons to the chaptersTemplate.xhtml file; the title of this page is also named Custom Icons. The following code snippet shows how custom icons are added to command buttons using the icon attribute:

<h3 style="margin-top: 0">Buttons</h3>
<p:commandButton value="Edit" icon="ui-icon-edit" type="button" />
<p:commandButton value="Bookmark" icon="ui-icon-bookmark" type="button" />
<p:commandButton value="Next" icon="ui-icon-next" type="button" />
<p:commandButton value="Previous" icon="ui-icon-previous" type="button" />

Now, run the application and navigate to the newly created page. You should see the custom icons applied to the command buttons, as shown in the following screenshot:

The commandButton component also supports the iconPos attribute if you wish to display the icon on either the left or the right side; the default value is left.

Using custom icons in a menu component

In this section, we are going to add custom icons to a menu component. The menuitem tag supports the icon attribute for attaching a custom icon. The following code snippet shows how custom icons are added to the menu component:

<h3>Menu</h3>
<p:menu style="margin-left:500px">
  <p:submenu label="File">
    <p:menuitem value="New" url="#" icon="ui-icon-new" />
    <p:menuitem value="Delete" url="#" icon="ui-icon-delete" />
    <p:menuitem value="Refresh" url="#" icon="ui-icon-refresh" />
    <p:menuitem value="Print" url="#" icon="ui-icon-print" />
  </p:submenu>
  <p:submenu label="Navigations">
    <p:menuitem value="Home" url="http://www.primefaces.org" icon="ui-icon-home" />
    <p:menuitem value="Admin" url="#" icon="ui-icon-admin" />
    <p:menuitem value="Contact Us" url="#" icon="ui-icon-contactus" />
  </p:submenu>
</p:menu>

Now, run the application and navigate to the newly created page. You will see the custom icons applied to the menu component, as shown in the following screenshot:

In the same way, you can apply custom icons to any PrimeFaces component that supports the icon attribute.

The FontAwesome icons as an alternative to the ThemeRoller icons

In addition to the default ThemeRoller icon set, the PrimeFaces team provides and supports an alternative icon set: FontAwesome, an iconic font and CSS framework. Originally designed for the Twitter Bootstrap frontend framework, it currently works well with all frameworks. The official site for the FontAwesome toolkit is http://fortawesome.github.io/Font-Awesome/. The features that make FontAwesome a powerful iconic font and CSS toolkit are as follows:

One font, 519 icons: In a single collection, FontAwesome is a pictographic language of web-related actions
No JavaScript required: It has minimal compatibility issues because FontAwesome doesn't require JavaScript
Infinite scalability: SVG (short for Scalable Vector Graphics) icons look awesome at any size
Free to use: It is completely free, including for commercial use
CSS control: It is easy to style the icon color, size, shadow, and so on
Perfect on retina displays: It looks gorgeous on high-resolution displays
It can be easily integrated with all frameworks
Desktop-friendly
Compatible with screen readers

FontAwesome extends Bootstrap by providing various icons based on scalable vector graphics, and these icons can be customized in size, color, drop shadow, and so on with the power of CSS. This feature is available from the PrimeFaces 5.2 release onwards. The full list of icons is available at both the official FontAwesome site (http://fortawesome.github.io/Font-Awesome/icons/) and the PrimeFaces showcase (http://www.primefaces.org/showcase/ui/misc/fa.xhtml). In order to enable this feature, we have to set the primefaces.FONT_AWESOME context parameter in web.xml to true, as follows:

<context-param>
  <param-name>primefaces.FONT_AWESOME</param-name>
  <param-value>true</param-value>
</context-param>

The usage is as simple as using the standard ThemeRoller icons. PrimeFaces components such as buttons and menu items provide an icon attribute that accepts an icon from the FontAwesome icon set. Remember that these icons should be prefixed with fa in a component. The general syntax of a FontAwesome icon is as follows:

fa fa-[name]-[shape]-[o]-[direction]

Here, [name] is the name of the icon, [shape] is the optional shape of the icon's background (either circle or square), [o] is the optional outlined version of the icon, and [direction] is the direction in which certain icons point.
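A few concrete names help decode that pattern. The following are standard FontAwesome 4 icon classes, taken from the icon list linked above, that exercise each optional part:

fa fa-pencil                 name only
fa fa-pencil-square          name plus a square background shape
fa fa-pencil-square-o        outlined version of the same icon
fa fa-arrow-circle-o-right   outlined circle arrow pointing right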
Now, we first create a new navigation link named FontAwesome under chapter6 inside the chaptersTemplate.xhtml template file. Then, we create a JSF template client called fontawesome.xhtml, which demonstrates the FontAwesome feature with the help of buttons and a menu. This page has been added as a menu item on the top-level menu bar. In the content section, replace the text content with the following code snippets. The following set of buttons displays the FontAwesome icons with the help of the icon attribute. You may have noticed the fa-fw style class, which renders the icons at a fixed width; this is useful when variable widths would throw off alignment:

<h3 style="margin-top: 0">Buttons</h3>
<p:commandButton value="Edit" icon="fa fa-fw fa-edit" type="button" />
<p:commandButton value="Bookmark" icon="fa fa-fw fa-bookmark" type="button" />
<p:commandButton value="Next" icon="fa fa-fw fa-arrow-right" type="button" />
<p:commandButton value="Previous" icon="fa fa-fw fa-arrow-left" type="button" />

After this, apply the FontAwesome icons to navigation lists, such as the menu component, to display the icons just to the left of the component's text content, as follows:

<h3>Menu</h3>
<p:menu style="margin-left:500px">
  <p:submenu label="File">
    <p:menuitem value="New" url="#" icon="fa fa-plus" />
    <p:menuitem value="Delete" url="#" icon="fa fa-close" />
    <p:menuitem value="Refresh" url="#" icon="fa fa-refresh" />
    <p:menuitem value="Print" url="#" icon="fa fa-print" />
  </p:submenu>
  <p:submenu label="Navigations">
    <p:menuitem value="Home" url="http://www.primefaces.org" icon="fa fa-home" />
    <p:menuitem value="Admin" url="#" icon="fa fa-user" />
    <p:menuitem value="Contact Us" url="#" icon="fa fa-picture-o" />
  </p:submenu>
</p:menu>

Now, run the application and navigate to the newly created page. You should see the FontAwesome icons applied to the buttons and menu items, as shown in the following screenshot:

Note that FontAwesome's 40 shiny new icons are available only from the PrimeFaces Elite 5.2.2 release and the community PrimeFaces 5.3 release onwards, because PrimeFaces upgraded to FontAwesome 4.3 as of its 5.2.2 release.

Summary

In this article, we explored the standard theme icon set and how to use it on various PrimeFaces components. We also learned how to create our own set of icons using the image sprite technique: we saw how to create image sprites with open source online tools and how to add them to a PrimeFaces theme. Finally, we had a look at the FontAwesome CSS framework, introduced as an alternative to the standard ThemeRoller icons, and as a matter of best practice we applied icons to the commandButton and menu components throughout. Now that you've come to the end of this article, you should be comfortable using web icons on PrimeFaces components in different ways.

Resources for Article:

Further resources on this subject:
Introducing Primefaces [article]
Setting Up Primefaces [article]
Components Of Primefaces Extensions [article]
Guidelines for Creating Responsive Forms

Packt
23 Oct 2015
12 min read
In this article by Chelsea Myers, the author of the book Responsive Web Design Patterns, we cover the guidelines for creating responsive forms. Online forms are already modular. Because of this, they aren't hard to scale down for smaller screens. The little boxes and labels can naturally shift around between different screen sizes, since they are all individual elements. However, form elements are naturally tiny and sit very close together. Small elements that you are supposed to click and fill in, whether on a desktop or a mobile device, pose obstacles for the user. If you developed a form for your website, you more than likely want people to fill it out and submit it. Maybe the form is a survey, a sign-up for a newsletter, or a contact form. Regardless of the type of form, online forms have a purpose: get people to fill them out! Getting people to do this can be difficult at any screen size. But when users are accessing your site through a tiny screen, they face even more challenges. As designers and developers, it is our job to make this process as easy and accessible as possible. Here are some guidelines to follow when creating a responsive form:

Give all inputs breathing room.
Use proper values for the input's type attribute.
Increase the hit states for all your inputs.
Stack radio inputs and checkboxes on small screens.

Together, we will go over each of these guidelines and see how to apply them. (For more resources related to this topic, see here.)

The responsive form pattern

Before we get started, let's look at the markup for the form we will be using. We want to include a sample of the different input options we can have. Our form will be very basic and requires simple information from the users, such as their name, e-mail, age, favorite color, and favorite animal.

HTML:

<form>
  <!-- text input -->
  <label class="form-title" for="name">Name:</label>
  <input type="text" name="name" id="name" />
  <!-- email input -->
  <label class="form-title" for="email">Email:</label>
  <input type="email" name="email" id="email" />
  <!-- radio boxes -->
  <label class="form-title">Favorite Color</label>
  <input type="radio" name="radio" id="red" value="Red" /><label>Red</label>
  <input type="radio" name="radio" id="blue" value="Blue" /><label>Blue</label>
  <input type="radio" name="radio" id="green" value="Green" /><label>Green</label>
  <!-- checkboxes -->
  <label class="form-title" for="checkbox">Favorite Animal</label>
  <input type="checkbox" name="checkbox" id="dog" value="Dog" /><label>Dog</label>
  <input type="checkbox" name="checkbox" id="cat" value="Cat" /><label>Cat</label>
  <input type="checkbox" name="checkbox" id="other" value="Other" /><label>Other</label>
  <!-- drop down selection -->
  <label class="form-title" for="select">Age:</label>
  <select name="select" id="select">
    <option value="age-group-1">1-17</option>
    <option value="age-group-2">18-50</option>
    <option value="age-group-3">&gt;50</option>
  </select>
  <!-- textarea -->
  <label class="form-title" for="textarea">Tell us more:</label>
  <textarea cols="50" rows="8" name="textarea" id="textarea"></textarea>
  <!-- submit button -->
  <input type="submit" value="Submit" />
</form>

With no styles applied, our form looks like the following screenshot:

Several of the form elements sit right next to each other, making the form hard to read and almost impossible to fill out. Everything seems tiny and squished together. We can do better than this. We want our forms to be legible and easy to fill out. Let's go through the guidelines and make this eyesore of a form more approachable.
#1 Give all inputs breathing room

In the preceding screenshot, we can't see where one form element ends and the next begins. They are displaying inline, and therefore on the same line. We don't want this, though. We want to give each form element its own line to live on and no shared space to its right. To do this, we add display: block to all our inputs, selects, and text areas. We also apply display: block to our form labels using the .form-title class (we will go more into why the titles have their own class in the fourth guideline).

CSS:

input[type="text"],
input[type="email"],
textarea,
select {
  display: block;
  margin-bottom: 10px;
}
.form-title {
  display: block;
  font-weight: bold;
}

As mentioned, we are applying display: block to the text and e-mail inputs, as well as to the textarea and select elements. Just having our form elements display on their own lines is not enough; we also give everything a margin-bottom of 10px to create some breathing room between the elements. Next, we apply display: block to all the form titles and make them bold to add more visual separation.

#2 Use proper values for input's type attribute

Technically, if you are collecting a password from a user, you are just asking for text. E-mail addresses, search queries, and even phone numbers are just text too. So, why would we use anything other than <input type="text" …/>? You may not notice the difference between these form elements on your desktop computer, but the change is biggest on mobile devices. To show you, we have two screenshots of what the keyboard looks like on an iPhone while filling out the text input and the e-mail input:

In the left image, we are focused on the text input for entering your name. The keyboard here is the normal one, nothing special. In the right image, we are focused on the e-mail input and can see the difference in the keyboard. As the red arrow points out, the @ key and the . key are now present when typing in the e-mail input. We need both of these to enter a valid e-mail address, so the device brings up a special keyboard that includes them. We are not doing anything special other than giving the input type="email" for this to happen. This works because type="email" is a new HTML5 attribute value. HTML5 will also validate that the text entered is in a correct e-mail format (which used to be done with JavaScript).

Here are some other HTML5 type attribute values from the W3C's third HTML 5.1 Editor's Draft (http://www.w3.org/html/wg/drafts/html/master/semantics.html#attr-input-type-keywords):

color
date
datetime
email
month
number
range
search
tel
time
url
week
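As a quick illustration (these fields are our own additions, not part of the book's example form), you could extend the form with a phone field and swap the age dropdown for a number input; mobile browsers will then surface the dial pad and a numeric keyboard, respectively:

<label class="form-title" for="phone">Phone:</label>
<input type="tel" name="phone" id="phone" />

<label class="form-title" for="age">Age:</label>
<input type="number" name="age" id="age" min="1" max="120" />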
#3 Increase the hit states for all your inputs

It would be really frustrating for users if they could not easily select an option or tap a text input to enter information. Making users struggle isn't going to increase your chances of getting them to complete the form. Form elements are naturally very small and not large enough for our fingers to tap easily. Because of this, we should increase the size of our form inputs. Making form inputs at least 44 x 44 px is an industry standard right now, and it is not a random number either: Apple suggests this size as the minimum in its iOS Human Interface Guidelines, as seen in the following quote:

"Make it easy for people to interact with content and controls by giving each interactive element ample spacing. Give tappable controls a hit target of about 44 x 44 points."

As you can see, this does not apply only to form elements; Apple's suggestion covers all tappable controls. The number may change along with our devices' resolutions in the future, going up or down with the size and precision of future technology, but for now it is a good place to start. We need to make sure that our inputs are big enough to tap with a finger, and you can always test your form inputs on a touchscreen to make sure they are large enough. For our form, we can meet this minimum size by increasing the height and/or padding of our inputs.

CSS:

input[type="text"],
input[type="email"],
textarea,
select {
  display: block;
  margin-bottom: 10px;
  font-size: 1em;
  padding: 5px;
  min-height: 2.75em;
  width: 100%;
  max-width: 300px;
}

The first two styles are from the first guideline. After this, we increase the font-size of the inputs, give them more padding, and set a min-height. Finally, we make the inputs wider by setting width to 100%, while also applying a max-width so the inputs do not get unnecessarily wide.

We want to increase the size of our submit button as well. We definitely don't want our users to miss clicking this:

input[type="submit"] {
  min-height: 3em;
  padding: 0 2.75em;
  font-size: 1em;
  border: none;
  background: mediumseagreen;
  color: white;
}

Here, we also give the submit button a min-height, some padding, and a larger font-size. We strip the browser's native border style from the button with border: none. We also want to make this button very obvious, so we apply a mediumseagreen background color and white text.

If you view the form in the browser so far, or look at the image, you will see that all the form elements are bigger now, except for the radio inputs and checkboxes; those elements are still squished together. To make our radio and checkbox options bigger in our example, we will make the option text clickable too. Doesn't it make sense that if you want to select red as your favorite color, you should be able to click on the word "red" as well, and not just the box next to it?

In the HTML for the radio inputs and the checkboxes, we have markup that looks like this:

<input type="radio" name="radio" id="red" value="Red" /><label>Red</label>
<input type="checkbox" name="checkbox" id="dog" value="Dog" /><label>Dog</label>

To make the option text clickable, all we have to do is set the for attribute on the label to match the id attribute of the input. We will wrap the radio and checkbox inputs inside their labels so that we can easily stack them for guideline #4, and give the labels a class of choice to help style them:

<label class="choice" for="red"><input type="radio" name="radio" id="red" value="Red" />Red</label>
<label class="choice" for="dog"><input type="checkbox" name="checkbox" id="dog" value="Dog" />Dog</label>

Now, the option text and the actual input are both clickable. After doing this, we can apply some more styles to make selecting a radio or checkbox option even easier:

label input {
  margin-left: 10px;
}
.choice {
  margin-right: 15px;
  padding: 5px 0;
}
.choice + .form-title {
  margin-top: 10px;
}

With label input, we give the input and the label text a little more space between each other. Then, using the .choice class, we spread out each option with margin-right: 15px and enlarge the hit states with padding: 5px 0. Finally, with .choice + .form-title, we give any .form-title element that comes after an element with the .choice class more breathing room, which goes back to responsive form guideline #1.
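One caveat worth flagging (our addition, not from the example above): none of this touch-target sizing behaves as expected on phones unless the page includes a viewport meta tag; without it, mobile browsers render the page at a desktop width and scale everything down. If your page or template doesn't already have one, add this inside the head element:

<meta name="viewport" content="width=device-width, initial-scale=1" />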
#4 Stack radio inputs and checkboxes on small screens

There is only one last thing we need to do. On small screens, we want to stack the radio and checkbox inputs; on large screens, we want to keep them inline. To do this, we add display: block to the .choice class and then use a media query to change it back on wider viewports:

.choice {
  display: block;
}
@media screen and (min-width: 600px) {
  .choice {
    display: inline;
  }
}

With each input on its own line on smaller screens, the options are easier to select; on wider screens, we don't need to take up all that vertical space. With this, our form is done. You can see our finished form in the following screenshot:

Much better, wouldn't you say? No longer are all the inputs tiny and mushed together. The form is easy to read, tap, and start entering information into. Filling in forms is not considered a fun thing to do, especially on a tiny screen with big thumbs, but there are ways in which we can make the experience easier and a little more visually pleasing.

Summary

A classic user experience challenge is to design a form that encourages completion. When it comes to facts, figures, and forms, it can be hard to retain the user's attention, but that does not mean it is impossible. Having a responsive website does make styling tables and forms a little more complex, but consider the alternative: non-responsive websites make you pinch and zoom endlessly to fill out a form or view a table. A responsive website gives you the opportunity to make these tasks easier. It takes a little more code, but in the end, your users will greatly benefit from it. With this article, we have wrapped up the guidelines for creating responsive forms.

Resources for Article:

Further resources on this subject:
Securing and Authenticating Web API [article]
Understanding CRM Extendibility Architecture [article]
CSS3 – Selectors and nth Rules [article]
Securing and Authenticating Web API

Packt
21 Oct 2015
9 min read
In this article by Rajesh Gunasundaram, author of ASP.NET Web API Security Essentials, we will cover how to secure a Web API using forms authentication and Windows authentication. You will also get to learn the advantages and disadvantages of using forms and Windows authentication in the Web API. In this article, we will cover the following topics:

The working of forms authentication
Implementing forms authentication in the Web API
Discussing the advantages and disadvantages of using the integrated Windows authentication mechanism
Configuring Windows authentication
Enabling Windows authentication in Katana
Discussing Hawk authentication

(For more resources related to this topic, see here.)

The working of forms authentication

In forms authentication, the user credentials are submitted to the server using an HTML form. This can be used in the ASP.NET Web API only if it is consumed from a web application. Forms authentication is built into ASP.NET and uses the ASP.NET membership provider to manage user accounts. It requires a browser client to pass the user credentials to the server: it sends the credentials in the request and uses HTTP cookies for authentication. Let's list out the process of forms authentication step by step:

The browser tries to access a restricted action that requires an authenticated request.
If the browser sends an unauthenticated request, the server responds with an HTTP status 302 Found and triggers a URL redirection to the login page.
To send an authenticated request, the user enters the username and password and submits the form.
If the credentials are valid, the server responds with an HTTP 302 status code that redirects the browser to the originally requested URI, with the authentication cookie in the response.
Any request from the browser will now include the authentication cookie, and the server will grant access to any restricted resource.

The following image illustrates the workflow of forms authentication:

Fig 1 – Illustrates the workflow of forms authentication

Implementing forms authentication in the Web API

To send the credentials to the server, we need an HTML form to submit them. Let's use the HTML form or view of an ASP.NET MVC application. The steps to implement forms authentication in an ASP.NET MVC application are as follows:

Create a New Project from the Start page in Visual Studio.
Select the Visual C# installed template named Web.
Choose ASP.NET Web Application from the middle panel.
Name the project Chapter06.FormsAuthentication and click OK.

Fig 2 – We have named the ASP.NET Web Application as Chapter06.FormsAuthentication

Select the MVC template in the New ASP.NET Project dialog.
Tick Web API under Add folders and core references and press OK, leaving Authentication set to Individual User Accounts.

Fig 3 – Select MVC template and check Web API in add folders and core references
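As a side note (this snippet is our own illustration, not part of the book's project, which relies on the Individual User Accounts wizard setting): in a classic, non-OWIN ASP.NET application, forms authentication is switched on declaratively in web.config. The loginUrl and timeout values below are typical defaults you would adjust to your own routes:

<system.web>
  <authentication mode="Forms">
    <forms loginUrl="~/Account/Login" timeout="2880" />
  </authentication>
</system.web>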
In the Models folder, add a class named Contact.cs with the following code:

namespace Chapter06.FormsAuthentication.Models
{
    public class Contact
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
        public string Mobile { get; set; }
    }
}

Add a Web API controller named ContactsController with the following code snippet:

namespace Chapter06.FormsAuthentication.Api
{
    public class ContactsController : ApiController
    {
        // Sample in-memory data for demonstration purposes
        IEnumerable<Contact> contacts = new List<Contact>
        {
            new Contact { Id = 1, Name = "Steve", Email = "[email protected]", Mobile = "+1(234)35434" },
            new Contact { Id = 2, Name = "Matt", Email = "[email protected]", Mobile = "+1(234)5654" },
            new Contact { Id = 3, Name = "Mark", Email = "[email protected]", Mobile = "+1(234)56789" }
        };

        [Authorize]
        // GET: api/Contacts
        public IEnumerable<Contact> Get()
        {
            return contacts;
        }
    }
}

As you can see in the preceding code, we decorated the Get() action in ContactsController with the [Authorize] attribute, so this Web API action can only be accessed by an authenticated request. An unauthenticated request to this action will make the browser redirect to the login page, enabling the user to either register or log in. Once the user is logged in, any request that tries to access this action will be allowed, as it is authenticated: the browser automatically sends the session cookie along with the request, and forms authentication uses this cookie to authenticate the request. It is very important to secure the website using SSL, as forms authentication sends unencrypted credentials.

Discussing the advantages and disadvantages of using the integrated Windows authentication mechanism

First, let's see the advantages of Windows authentication. Windows authentication is built into Internet Information Services (IIS). It doesn't send the user credentials along with the request. This authentication mechanism is best suited to intranet applications, where users don't need to enter their credentials explicitly. However, with all these advantages come a few disadvantages. Windows authentication requires Kerberos (which works based on tickets) or NTLM, Microsoft security protocols that must be supported by the client, and the client's PC must be in an Active Directory domain. Windows authentication is not suitable for internet applications, as the client will not necessarily be on the same domain.

Configuring Windows authentication

Let's implement Windows authentication in an ASP.NET MVC application, as follows:

Create a New Project from the Start page in Visual Studio.
Select the Visual C# installed template named Web.
Choose ASP.NET Web Application from the middle panel.
Name the project Chapter06.WindowsAuthentication and click OK.

Fig 4 – We have named the ASP.NET Web Application as Chapter06.WindowsAuthentication

Change the authentication mode to Windows Authentication.

Fig 5 – Select Windows Authentication in Change Authentication window

Select the MVC template in the New ASP.NET Project dialog.
Tick Web API under Add folders and core references and click OK.
Fig 6 – Select MVC template and check Web API in add folders and core references

Under the Models folder, add a class named Contact.cs with the following code:

namespace Chapter06.WindowsAuthentication.Models
{
    public class Contact
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
        public string Mobile { get; set; }
    }
}

Add a Web API controller named ContactsController with the following code:

namespace Chapter06.WindowsAuthentication.Api
{
    public class ContactsController : ApiController
    {
        IEnumerable<Contact> contacts = new List<Contact>
        {
            new Contact { Id = 1, Name = "Steve", Email = "[email protected]", Mobile = "+1(234)35434" },
            new Contact { Id = 2, Name = "Matt", Email = "[email protected]", Mobile = "+1(234)5654" },
            new Contact { Id = 3, Name = "Mark", Email = "[email protected]", Mobile = "+1(234)56789" }
        };

        [Authorize]
        // GET: api/Contacts
        public IEnumerable<Contact> Get()
        {
            return contacts;
        }
    }
}

The Get() action in ContactsController is decorated with the [Authorize] attribute. However, under Windows authentication, any request is considered authenticated if the client is on the same domain, so no explicit login process is required to call the Get() action with an authenticated request. Note that Windows authentication is configured in the Web.config file:

<system.web>
  <authentication mode="Windows" />
</system.web>

Enabling Windows authentication in Katana

The following steps create a console application and enable Windows authentication in Katana:

Create a New Project from the Start page in Visual Studio.
Select the Visual C# installed template named Windows Desktop.
Select Console Application from the middle panel.
Name the project Chapter06.WindowsAuthenticationKatana and click OK.

Fig 7 – We have named the Console Application as Chapter06.WindowsAuthenticationKatana

Install the NuGet package named Microsoft.Owin.SelfHost from the NuGet Package Manager:

Fig 8 – Install NuGet Package named Microsoft.Owin.SelfHost

Add a Startup class with the following code snippet:

using System.Net;
using Owin;

namespace Chapter06.WindowsAuthenticationKatana
{
    class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Switch the self-host HttpListener over to integrated Windows authentication
            var listener = (HttpListener)app.Properties["System.Net.HttpListener"];
            listener.AuthenticationSchemes = AuthenticationSchemes.IntegratedWindowsAuthentication;

            app.Run(context =>
            {
                context.Response.ContentType = "text/plain";
                return context.Response.WriteAsync("Hello Packt Readers!");
            });
        }
    }
}

Add the following code to the Main function in Program.cs (with using Microsoft.Owin.Hosting; at the top of the file):

using (WebApp.Start<Startup>("http://localhost:8001"))
{
    Console.WriteLine("Press any key to quit the Web App.");
    Console.ReadKey();
}

Now run the application and open http://localhost:8001/ in the browser:

Fig 9 – Open the Web App in a browser

If you capture the request using Fiddler, you will notice an Authorization Negotiate entry in the request header. Try calling http://localhost:8001/ in Fiddler and you will get a 401 Unauthorized response with WWW-Authenticate headers, which indicate that the server attaches a Negotiate protocol that consumes either Kerberos or NTLM, as follows:

HTTP/1.1 401 Unauthorized
Cache-Control: private
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/8.0
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
X-Powered-By: ASP.NET
Date: Tue, 01 Sep 2015 19:35:51 IST
Content-Length: 6062
Proxy-Support: Session-Based-Authentication
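If you want to verify the behavior from code rather than a browser, a small client like the following (our sketch, not from the book) sends the current Windows user's credentials with the request; UseDefaultCredentials is what allows the Negotiate handshake to complete:

using System;
using System.Net.Http;

class Client
{
    static void Main()
    {
        // Pass the logged-in Windows identity during the Negotiate handshake
        var handler = new HttpClientHandler { UseDefaultCredentials = true };
        using (var client = new HttpClient(handler))
        {
            string body = client.GetStringAsync("http://localhost:8001/").Result;
            Console.WriteLine(body); // Prints: Hello Packt Readers!
        }
    }
}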
Discussing Hawk authentication

Hawk authentication is a message authentication code (MAC)-based HTTP authentication scheme that facilitates the partial cryptographic verification of HTTP messages. Hawk authentication requires a symmetric key to be shared between the client and the server. Instead of sending the username and password to the server to authenticate the request, these credentials are used to generate a message authentication code, which is passed to the server in the request for authentication. Hawk authentication is mainly implemented in scenarios where you need to pass the username and password over an unsecured layer and no SSL is implemented on the server. In such cases, Hawk authentication protects the username and password and passes a message authentication code instead. For example, if you are building a small product where you control both the server and the client, and implementing SSL is too expensive for such a small project, then Hawk is a good option for securing the communication between your server and client.

Summary

Voila! We just secured our Web API using forms- and Windows-based authentication. In this article, you learned how forms authentication works and how it is implemented in the Web API. You also learned about configuring Windows authentication and got to know the advantages and disadvantages of using it. Then you learned how to enable the Windows authentication mechanism in Katana. Finally, we introduced Hawk authentication and the scenarios in which it is useful.

Resources for Article:

Further resources on this subject:
Working with ASP.NET Web API [article]
Creating an Application using ASP.NET MVC, AngularJS and ServiceStack [article]
Enhancements to ASP.NET [article]
Gamification with Moodle LMS

Packt
19 Oct 2015
11 min read
In this article by Natalie Denmeade, author of the book Gamification with Moodle, we describe how teachers can use gamification design in their course development within the Moodle Learning Management System (LMS) to increase the motivation and engagement of learners. (For more resources related to this topic, see here.)

Gamification is a design process that reframes goals to be more appealing and achievable by using game design principles. The goal of this process is to keep learners engaged and motivated in a way that is not always present in traditional courses. When implemented in elegant solutions, learners may be unaware of the subtle game elements being used. A gamification strategy can be considered successful if learners are more engaged and feel challenged and confident to keep progressing, which has implications for the way teachers consider their course evaluation processes. It is important to note that gamification in education is more about how the person feels at certain points in their learning journey than about the end product, which may or may not look like a game.

Gamification and Moodle

After following the tutorials in this book, teachers will gain the basic skills to get started applying gamification design techniques in their Moodle courses. They can take learners on a journey of risk, choice, surprise, delight, and transformation. Taking an activity and reframing it to be more appealing and achievable sounds like the job description of any teacher or coach, so many teachers are already doing this! Understanding games and play better can help teachers be more effective in using a wider range of game elements to aid retention and completions in their courses. In this book you will find hints and tips on how to apply proven strategies to online course development, including the research into a growth mindset by Carol Dweck in her book Mindset. You will see how the game elements used in Foursquare (badges), Twitter (likes), and LinkedIn (progress bar) can also be applied to Moodle course design. In addition, you will use the core features available in Moodle that were designed to encourage learner participation as they collaborate, tag, share, vote, network, and generate learning content for each other. Finally, you will explore new features and plugins that offer dozens of ways to use game elements in Moodle, such as badges, labels, rubrics, group assignments, custom grading scales, forums, and conditional activities.

A benefit of using Moodle as a gamification LMS is that it was developed on social constructivist principles. As these are learner-centric principles, it is easy to use common Moodle features to apply gamification through the implementation of game components, mechanics, and dynamics. These have been described by Kevin Werbach (in the Coursera MOOC on Gamification) as follows:

Game dynamics are the grammar (the hidden elements): constraints, emotions, narrative, progression, relationships
Game mechanics are the verbs (the action is driven forward by): challenges, chance, competition/cooperation, feedback, resource acquisition, rewards, transactions, turns, win states
Game components are the nouns: achievements, avatars, badges, boss fights, collections, combat, content unlocking, gifting, leaderboards, levels, points, quests, teams, virtual goods

Most of these game elements are not new ideas to teachers. It could be argued that school is already gamified through the use of grades and feedback.
In fact, it would be impossible to find a classroom that is not using some game elements. This book will help you identify which elements will be most effective in your current context. Teachers are encouraged to start with a few and gradually expand their repertoire. As with professional game design, merely using game elements will not ensure that learners are motivated and engaged. The measure of success of a gamification strategy is that learners continue to build resilience and autonomy in their own learning. When implemented well, the potential benefits of using a gamification design process in Moodle are to:

Provide a manageable set of subtasks and tasks by hiding and revealing content
Make assessment criteria visible, predictable, and in plain English using marking guidelines and rubrics
Increase ownership of learning paths through choice and activity restrictions
Build individual and group identity through workplace simulations and role play
Offer freedom to fail and try again without negative repercussions
Increase the enjoyment of both teacher and learners

When teachers follow the step-by-step guide provided in this book, they will create a basic Moodle course that acts as a flexible framework ready for learning content. This approach is ideal for busy teachers who want to respond to changing needs and situations in the classroom. The dynamic approach keeps teachers in control of adding and changing content without involving a technology support team.

Onboarding tips

Using focused examples, the book describes how to use Moodle to implement an activity loop that identifies a desired behaviour and wraps motivations and feedback around that action. For example, a desired action may be for each learner to update their Moodle profile information with their interests and an avatar. Various motivational strategies could be put in place to prompt (or force) learners to complete this task, including:

Ask learners to share their avatars, with a link to their profile, in a forum with ratings. Everyone else is doing it, and they will feel left out if they don't get a like or a comment (creating a social norm). They might even get rated as having the best avatar.
Set the forum type so that learners can't see other avatars until they make a post.
Add a theme (for example, Lego-inspired avatars) so that creating an avatar is a chance to be creative and play. Choosing how they represent themselves in an online space is an opportunity for autonomy.
Set conditional release so learners cannot see the next activity until this activity is marked as complete (for example, post at least 3 comments on other avatars).

The value in this process is that learners have started building connections with new classmates. This activity loop is designed to appeal to diverse motivations and achieve multiple goals:

Encourage learners to create an online persona and choose their level of anonymity
Invite learners to look at each other's profiles and speed up the process of getting to know each other
Introduce learners to the idea of forum posting and rating in a low-risk (non-assessable) way
Take the workload off the teacher to assess each activity directly
Enforce compliance through software options, which saves admin time and creates an expectation of work standards for learners

Feedback options

Games celebrate small and large successes, and so should Moodle courses. There are a number of ways to do this in Moodle, including simply automating feedback with a label that is revealed once a milestone is reached.
These milestones could be an activity completion, a topic completion, or a level reached in the course total. Feedback can also be provided through symbols of achievement, which learners of all ages find highly motivating. Nearly all human cultures use symbols, icons, medals, and badges to indicate status and achievements, such as a black belt in karate, the Victoria Cross and Order of Australia medals, OBEs, sporting trophies, Gold Logies, feathers, and tattoos. In Moodle, symbols of achievement can be awarded through open badges: Moodle offers a simple way to issue badges in line with the Open Badges Infrastructure (OBI) standard, and the learner can take full ownership of a badge by exporting it to their online backpack. Higher education institutes are finding evidence that open badges are a highly effective way to increase motivation for mature learners. Kaplan University found that implementing badges increased student engagement by 17 percent. As well as improving learners' willingness to attempt harder tasks, grades increased by up to 9 percent, and class attendance and discussion board posts increased over those of non-badged counterparts. Using open badges as a motivation strategy enables feedback to be provided regularly along the way, from peers, automated reporting, and the teacher. For advanced Moodlers, the book describes how rubrics can be used for "levelling up" and how the Moodle gradebook can be configured as an exponential point-scoring system to indicate progress.

Social game elements

Implementing social game elements is a powerful way to increase motivation and participation. A gamification experiment with thousands of MOOC participants measured learner participation across three groups: plain, game, and social. Students in the game condition had a 22.5 percent higher test score in the final test compared to students in the plain condition, and students in the social condition showed an even stronger increase of almost 40 percent compared to students in the plain condition (see A Playful Game Changer: Fostering Student Retention in Online Education with Social Gamification, Krause et al., 2014).

Moodle has a number of components that can be used to encourage collaborative learning. The online gaming world has created spaces where players communicate outside the game, in forums, wikis, and YouTube channels, and where experienced players happily write cheat guides to share their knowledge with beginners. In Moodle, we can imitate these collaborative spaces gamers use to teach each other, and make the most of the natural leaders and influencers in the class. Moodle activities can be used to encourage communication between learners and to allow delegation and skill-sharing. For example, the teacher may quickly explain a task, train the most experienced learners in the group to perform it, and then showcase their work to others as an example. Learners could create blog posts, which become an online version of an exercise book; each learner chooses the sharing level, so that classmates only, or the whole world, can view what is shared and leave comments. Delegating instruction by connecting leader-learners with lagger-learners in a particular area allows finish lines to be at different points. Rather than spending the last few weeks marking every learner's individual work, the teacher can focus their attention on the few people who have lagged behind and need support to meet the deadlines.
It's worth taking the time to learn how to configure a Moodle course: it gives you the ability to set up a system that is scalable and adaptable to each learner. Moodle's options can be used to allow learners to create their own paths within the boundaries set by a teacher. So, rather than creating a personalised learning path for every student, set up a suite of tools with which learners create their own learning paths. Learning how to configure Moodle activities will also reduce administration tasks through automatic reports, assessments, and conditional release of activities. Moodle activities automatically generate data on learner participation and competence, helping to identify struggling learners. The inbuilt reports available in the Moodle LMS help teachers get to know their learners faster; they also create evidence for formative assessment, which saves hours of marking time. Freed from repetitive tasks, teachers can spend more time on the creative and rewarding aspects of teaching. Rather than waiting for a game design company to create an awesome educational game for a subject area, get started by using the same techniques in your classroom. This creative process is rewarding for both teachers and learners because it can be constantly adapted to their unique needs.

Summary

Moodle provides a flexible gamification platform because teachers are directly in control of modifying and adding a sequence of activities, without having to go through an administrator. Although it may not look as good as a video game made with an extensive budget, learners will appreciate the effort and personalisation. The gamification framework does require some preparation; however, once implemented, it picks up a momentum of its own and reduces the teacher's workload in the long run. Purchase the book and enjoy a journey into gamification in education with Moodle!

Resources for Article:

Further resources on this subject:
Virtually Everything for Everyone [article]
Moodle for Online Communities [article]
State of Play of BuddyPress Themes [article]
Testing Your Site and Monitoring Your Progress

Packt
15 Oct 2015
27 min read
In this article by Michael David, author of the book WordPress Search Engine Optimization, you will learn about Google Analytics and the Google Webmaster/Search Console. Once you have built your website and started promoting it, you'll want to monitor your progress to ensure that your hard work is yielding high rankings, search engine visibility, and web traffic. In this article, we'll cover a range of tools with which you will monitor the quality of your website, learn how search spiders interact with your site, measure your rankings in search engines for various keywords, and analyze how your visitors behave when they are on your site. With this information, you can gauge your progress and make adjustments to your strategy. (For more resources related to this topic, see here.)

Obviously, you'll want to know where your web traffic is coming from, what search terms are being used to find your website, and where you rank in the search engines for each of these terms. This information will allow you to see what you still need to work on in terms of building links to the pages on your website. There are five main tools you will use to analyze your site and evaluate your traffic and rankings, and in this article we will cover each in turn. They are Google Analytics, Google Search Console (formerly Webmaster Tools), an HTML validator, Bing Webmaster, and Link-Assistant's Rank Tracker. As an alternative to Bing Webmaster, you may also want to employ Majestic SEO to check your backlinks.

Google Analytics

Google Analytics monitors and analyzes your website's traffic. With this tool, you can see how many website visitors you have had, whether they found your site through the search engines or clicked through from another website, how these visitors behaved once they were on your site, and much more. You can even connect your Google AdSense account to one or more of the domains you are monitoring in Google Analytics to get information about which pages and keywords generate the most income. While there are other analytics services available, none match the scope and scale of what Google Analytics offers.

Setting up Google Analytics for your website

To set up Google Analytics for your website, perform the following steps:

To sign up for Google Analytics, visit http://www.google.com/analytics/. In the top-right corner of the page, you'll see a button that says Sign In to Google Analytics. You'll need a Google account before signing in.
Click Sign up on the next page if you haven't already, and on the page that follows, enter your website's URL and time zone. If you have more than one website, just start with one; you can add the other sites later. The account name you use is not important, but since you can add all of your sites to one account, you might want to use something generic like your name or your business name.
Select the time zone country and time zone, and then click on Continue. Enter your name and country, and click on Continue again.
Accept the Google Analytics terms of service. After you have accepted the terms, you will be given a snippet of HTML code that you'll need to insert into the pages of your website. The snippet will look like the following:

<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-xxxxxx-x', 'auto');
ga('send', 'pageview');
</script>

The code must be placed just before the closing </head> tag on each page of your website. There are several ways to install it. Check first whether your WordPress template offers a field for the code in the template admin area; most modern templates offer this feature. If not, you can insert the code manually just before the closing </head> tag, which you will find in the file called header.php within your WordPress template files. There is yet a third way, if you don't want to tinker with your WordPress code: download and install one of the many WordPress analytics plugins. Two sound choices for an analytics plugin would be Google Analyticator or Google Analytics by Yoast; both plugins are available at WordPress.org.
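If you'd rather not edit header.php directly, a small snippet in your child theme's functions.php achieves the same placement by printing the tracking code into the head of every page through the wp_head hook. This is our own sketch, not one of the methods listed above (replace UA-xxxxxx-x with your real property ID):

<?php
// functions.php (child theme): print the Analytics snippet in the <head> of every page
add_action( 'wp_head', function () {
    ?>
    <script>
    (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
    (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
    m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
    })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
    ga('create', 'UA-xxxxxx-x', 'auto');
    ga('send', 'pageview');
    </script>
    <?php
} );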
If you have more than one website, you can add more websites within your analytics account. Personally, I like to keep one website per analytics account just to keep things neat, but some prefer to have all their websites under one analytics account. To add a second or third website to an analytics account, navigate to Admin on the top menu bar, pull down the menu under Account in the left column, and then click Create a New Account. When you are done adding websites, click Home on the top menu bar to navigate to the main analytics page. From here, you can select a web property. This is the screen that greets you when you log in to Google Analytics, the Audience Overview report; it offers an effective quick look at how your website is performing.

Using Google Analytics

Once you have installed Google Analytics on your websites, you'll need to wait a few weeks, or perhaps longer, for Analytics to collect enough data from your websites to be useful. Remember that website analytics, like any type of statistical analysis, reveals more accurate and useful information with larger sets of data. It may not take that long if you have a high-traffic site, but if you have a fairly new site that doesn't get a lot of traffic, it will take some time, so come back in a few weeks to a month to check how your sites are doing.

The Audience Overview report is your default report, displayed when you first log in to Analytics, and gives you a quick overview of each site's traffic. You can see at a glance whether the tracking code is receiving data, how many visits (sessions) your website has had in the past 30 days, the average time visitors stay on your site (session duration), your bounce rate (the percentage of users who come to your site and leave without visiting another page), and the percentage of new sessions (users who haven't visited before) in the past 30 days. The data displayed on the dashboard is just a small taste of what you can learn from Google Analytics. To drill down into more detailed information, navigate to other sections of Analytics using the left menu.
If you have more than one website, you can add more websites within your analytics account. Personally, I like to keep one website per analytics account just to keep things neat, but some people like to have all their websites under one analytics account. To add a second or third website to an analytics account, navigate to Admin on the top menu bar, pull down the menu under Account in the left column, and then click Create a New Account. When you are done adding websites, click Home on the top menu bar to navigate to the main analytics page. From here, you can select a web property. The screen that greets you when you log in to Google Analytics is the Audience Overview report; it offers an effective quick look at how your website is performing.

Using Google Analytics

Once you have installed Google Analytics on your websites, you'll need to wait a few weeks, or perhaps longer, for Analytics to collect enough data from your websites to be useful. Remember that with website analytics, as with any type of statistical analysis, larger sets of data reveal more accurate and useful information. It may not take that long if you have a high-traffic site, but if you have a fairly new site that doesn't get a lot of traffic, it will take some time, so come back in a few weeks to a month to check how your sites are doing. The Audience Overview report is your default report, displayed when you first log in to analytics, and gives you a quick overview of each site's traffic. You can see at a glance whether the tracking code is receiving data, how many visits (sessions) your website has had in the past 30 days, the average time visitors stay on your site (session duration), your bounce rate (the percentage of users who come to your site and leave without visiting another page), and the percentage of new sessions (users who haven't visited before) in the past 30 days. The data displayed on the dashboard is just a small taste of what you can learn from Google Analytics. To drill down to more detailed information, you'll navigate to other sections of analytics using the left menu.

Your main report areas in analytics are Audience (who your visitors are), Acquisition (how your visitors found your site), Behavior (what your visitors did on your site), and Conversions (did they make a purchase or complete a contact form). Within any of the main report areas are dozens of available reports. Google Analytics can offer you tremendous detail on almost any imaginable metric related to your website. Most of your time, however, is best spent with a few key reports.

Understanding key analytics reports

Your key analytics reports are the Audience Overview, Acquisition Overview, and Behavior Overview. You access each of these reports from the left menu; the corresponding overview report is the first link in each list. The Audience Overview report is your default report, discussed earlier. The Acquisition Overview report drills down into how your visitors are finding your site, whether through organic search, pay-per-click programs, social media, or referrals from other websites. These different pathways by which customers find your site are referred to as channels or mediums on other reports, but they mean essentially the same thing. The Acquisition Overview report also shows some very basic conversion data, although the reports in the Conversions section offer much more meaningful conversion data. The following screenshot is an Acquisition Overview report:

Why is the Acquisition Overview report important? It shows us the relative strength of our inbound channels. In the case above, organic search is delivering the highest number of users, which tells us that our organic campaign is running strong. Referrals generated 384 visits, which means we've got good links delivering traffic. Our 369 direct visitors don't come from other websites; a direct visit starts with a fresh browser page where users type our URL directly or follow a bookmark they've saved. That is a welcome figure, because it means we've got strong brand and name recognition, and in the conversions column on the right side of the table, we can see that our direct traffic generated a measurable conversion rate—a positive sign that our brand recognition and reputation are strong.

The Behavior Overview report tells us how users behave once they visit our site. Are they bouncing immediately or viewing several pages? What content are they viewing the most? Here's a sample Behavior Overview report:

Some information is repeated on the Behavior Overview report, such as Pageviews and Bounce Rate. What is important here is the table under the graph. This table shows you the relative popularity of your pages. This data is important because it shows you what content is generating the most user interest. As you can see from the table, the page titled /add-sidebar-wordpress generated over 1,000 pageviews, more than the home page, which is indicated in analytics by a single slash (/). This table shows you your most popular content—these pages are your greatest success. And remember, you can click on any individual page and then see individual metrics for that page. One way to maximize your site earnings is to focus on improving the performance of the pages that are already earning money. Chances are, you earn at least 80 percent of your income from the top 20 percent of the pages on your site. Focus on improving the rankings for those pages in order to get the best return for your efforts.
With any analytics report, the statistics shown by default cover the past month, but you can adjust the time period by clicking on the down arrow next to the dates in the upper right-hand corner.

Setting up automated analytics reports

You can also have Google Analytics e-mail you daily, weekly, or monthly reports. To enable this feature, simply navigate to the report that you'd like to be sent to you. Then, click the Email link just under the report title. The following pop-up will appear:

The Frequency field lets you determine how often the report is sent, or you can simply send the report one time.

Google Webmasters/Search Console

Now we are going to go into a bit more detail and show you how to use Google Webmaster Tools to obtain information that you can use to improve your website.

Understanding your website's search queries

The Search Console now shows you data on what search queries users entered in Google search to find their way to your website. This is valuable: it teaches you which query terms are effectively delivering customers. To see the report, expand Search Traffic on the left navigation menu and select Search Analytics. The following report will display:

Examine the top queries that Search Console shows you are getting traffic for, and add them to your list of keywords to work on if they are not already there. You will probably find it most beneficial to focus on those keywords that are currently ranked between #4 and #10 in Google, to try to get them moved up to one of the top three spots.

You'll also want to check for crawl errors on each of your sites while you work in the Search Console. A crawl error is an error that Google's spider encounters when trying to find pages on your site. A dead link—a link to a page on your site that no longer exists—is a common and perfect example of a crawl error. Crawl errors are detrimental to rankings because they send a poor quality signal to search engines. Remember that Google wants to deliver a great experience to users of its search engine and properties; dead links and lost pages do not deliver a great experience.

To see the crawl error data, expand the Crawl link on the left navigation bar and then click Crawl Errors. The crawl error report will show you any pages that Google attempted to read but could not reach. Not all entries on this report are bad news. If you remove a piece of content that Google crawled in the past, Google will keep trying for months to find that piece of content again, so a removed page will generate a crawl error. Crawl errors also get generated if you have a sitemap file with page entries that no longer exist. Even inbound links from third-party websites to pages that don't exist will generate crawl errors. Other errors you find might be the result of a typo or another mistake that requires going into a specific page on your website to fix. For example, perhaps you made a mistake typing in the URL when linking from one page to another, resulting in a 404 error. To fix that, you need to edit the page that the faulty URL was linked from and correct the URL. Other errors might require editing your template files to fix multiple errors simultaneously. For example, if you see that the Googlebot is getting a 404 (page not found) error every time it attempts to crawl the comments feed for a post, then the template file probably doesn't have the right format for creating those URLs.
Once you correct the template file, all of the crawl errors related to that problem will be fixed. There are other things you can do in Google Webmaster Tools; for example, you can check how many links to your site Google is detecting.

Checking your website's code with an HTML validator

HyperText Markup Language (HTML) is a coding standard with a reasonable degree of complexity. HTML standards develop over time, and valid HTML code displays websites more accurately in a wider range of browsers. Sites that have substantial amounts of HTML coding errors can potentially be punished by search engines. For this reason, you should periodically check your website for HTML coding errors. There are two principal tools that web professionals use to check the quality of their websites' code: the W3C HTML Validator and the CSE HTML Validator.

The W3C HTML Validator (http://validator.w3.org) is the less robust of the two validators, but it is free. Both validators work in the same way: they examine the code on your website and issue a report advising you of any errors. The CSE HTML Validator (http://htmlvalidator.com) is not a free tool, but you can get a free trial of the software that is good for 30 days or 200 validations, whichever comes first. This software catches errors in HTML, XHTML, CSS, PHP, and JavaScript. It also checks your links to ensure that they are all valid, and it includes an accessibility checker, a spell checker, and an SEO checker. With all of these functions, there is a good chance that if your website has any problems, the CSE HTML Validator will be able to find them.

After downloading and installing the demo version of CSE HTML Validator, you will be given the option to go to a page that contains two video demos. It is a good idea to watch these videos, or at least the HTML validation video, before trying to use the program. They are not very long, and watching them will reduce the amount of time it takes you to learn the program.

To validate a file that is on your hard drive, first open the file in the editor, then click the down arrow next to the Validate button on the task bar. You will see several options. Selecting Full will give you not only errors, but also messages containing suggestions. Errors only will show you only actual errors, and Errors and warnings only will tell you if there are things that could be errors but might not be. You can experiment with the different options to see which one you like best. After you select a validation option, a box will appear at the bottom of the screen listing all of the errors, as well as warnings and messages, depending on the option you chose.

You might be surprised at how many errors there are, especially if you are using WordPress to create the code for your site; the code is often not as clean as you might expect it to be. However, not all of the errors you see will be things that you need to worry about or correct. Yes, it is better to have a website with perfect coding, but one of the advantages of using WordPress is that you don't have to know how to code to build a website. If you do not know anything about HTML coding, you may do more harm than good by trying to fix minor errors, such as the omission of a slash at the end of a tag. Most of these errors will not cause problems anyhow. You should look through the errors and fix the ones you know how to fix. If you are completely mystified by what you see here, don't worry about it too much unless you are having a problem with the way your website loads or displays. If the errors are causing problems, you'll either have to learn a bit about coding or hire someone who knows what they're doing to fix your website.
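To give a concrete feel for the kind of error that is worth fixing yourself, here is a small, hypothetical before-and-after pair; an unencoded ampersand in a URL and a missing alt attribute are two of the most common complaints validators raise against WordPress themes:

<!-- Before: two typical validator errors -->
<a href="archive.php?cat=news&page=2">News</a>   <!-- raw & in a URL -->
<img src="logo.png">                             <!-- missing alt text -->

<!-- After: valid markup -->
<a href="archive.php?cat=news&amp;page=2">News</a>
<img src="logo.png" alt="Site logo">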
If you want to be able to check the code of an entire website at once, you'll need to buy the Pro version of CSE HTML Validator; you can then use the batch wizard to check your website. This feature is not available in the Standard or Lite versions of the software. To use the batch wizard, click on Tools, then Batch Wizard. A new window will pop up, allowing you to choose the files you want to check. Click on the button with the large green plus sign, and select the appropriate option to add files or URLs to your list. You can add files individually, add an entire folder, or even add a URL. To check an entire site, you can add the root file for your domain from your hard drive, or you can add the URL for the root domain. Once you have added your target, click on it. Now, click on Target in the main menu bar, then on Properties. Click on the Follow Links tab in the box that pops up, then check the box in front of Follow and validate links. Click on the OK button and then click on the Process button to start validating.

Checking your inbound link count with Bing Webmaster

Bing Webmaster allows you to get information about backlinks, crawl errors, and search traffic to your websites. In order to get the maximum value from this tool, you'll need to authenticate each of your websites to prove that you own them. To get started, go to https://www.bing.com/webmaster/ and sign up with your Microsoft account. As a part of the sign-up process, you'll get an HTML file that you'll install in the root directory of your WordPress installation. This file validates that you are the owner of the website and are entitled to see the data that Bing collects.

One core use for Bing Webmaster is that it presents a highly accurate picture of your inbound link counts. If you recall, Google does not present accurate inbound link counts to users. Thus, Bing Webmaster gives you the most authoritative picture from a search engine of how many backlinks your site enjoys. To see your inbound links, simply log in and navigate to the Dashboard. At the lower right, you'll see the Inbound Links table shown here:

The table shows you the inbound links to your website for each page of content. This helpful feature lets you determine which articles of content are garnering the most interest from other webmasters. High link counts are always good, but you also want to make sure you are getting high-quality links from websites in the same general category as your site.

Bing offers an additional feature: keyword research. Expand the Diagnostics & Tools entry on the navigation bar on the left, and click Keyword Research. The search traffic section will give you valuable information about the search terms you should be targeting for your site, as well as allowing you to see which of the terms you are already targeting are getting traffic. Just as you did with the keywords shown in Google Analytics and Google Webmaster Tools, you want to find keywords that are getting traffic but are not currently ranked in the top one to three positions in the search engine results pages. Send more links to these pages with the keyword phrase you want to target as anchor text, in order to move these pages up in the rankings.
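As a quick reminder of what "keyword phrase as anchor text" means in markup, the anchor text is simply the visible, clickable words of the link; the URL and phrase below are placeholders of our own:

<!-- The anchor text carries the target keyword phrase -->
<a href="http://www.example.com/add-sidebar-wordpress">add a sidebar in WordPress</a>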
Monitoring ranking positions and movement with Rank Tracker

Rank Tracker is a paid tool, and we've included it because it is tremendously valuable and noteworthy. Rank Tracker is a software tool that can monitor your website's rankings for thousands of terms in hundreds of different search engines. Even better, it maintains historical ranking data that helps you gauge your progress as you work. It is a valuable tool that is used by many SEO professionals. There is a free version, although the free version does not allow you to save your work (and thus does not let you keep historical information). To really harness the power of this software, you'll need the paid version. You can download either version from http://www.link-assistant.com/rank-tracker/.

After you install the program, you will be prompted to enter the URL of your website. On the next screen, the program will ask you to input the keywords you wish to track. If you have them all listed in a spreadsheet somewhere, you can just copy the column that has your keywords in it and paste them all into the tool. Rank Tracker will then ask you which search engines you are targeting and will check the rank of each keyword in all of the search engines you select. It only takes a few minutes for Rank Tracker to update hundreds of keywords, so you can find out where you are ranking in very little time. This screenshot shows the main interface of the Rank Tracker software. For monitoring progress on large numbers of keywords on several different search engines, Rank Tracker can be a real time-saver:

Once your rankings have been updated, you can sort them by rank to see which ones will benefit the most from additional link building. Remember that more than half of the people who search for something in Google click on the first result. If you are not in the top three to five results, you will get very little traffic from people searching for your targeted keyword. For this reason, you will usually get the best results by working on the keywords that are ranking between #4 and #10 in the search engine results pages.

If you want more data to play with, you can click on Update KEI to find out the keyword effectiveness index for each keyword. This tool gathers data from Google to tell you how many searches per month and how much competition there is for each keyword, and then calculates the KEI score based on that data. As a general rule, the higher the KEI number, the easier it will be to rank for a given keyword. Keep in mind, however, that if you are already ranking for a keyword, it will usually not be that difficult to move up in the rankings, even if the KEI is low. To make it easier to see which keywords will be easy to rank for, they are color-coded in the Rank Tracker tool: the easiest ones are green, the hardest are red, and yellow and orange fall in between.

In addition to checking your rankings on keywords you are targeting, you can use the Rank Tracker tool to find more keywords to target. If you navigate to Tools and then Get Keyword Suggestions, a window will pop up that will let you choose from a range of different methods of finding keywords. It is recommended that you start with the Google AdWords Keyword Tool. After you choose the method, you'll be asked to enter keywords that are relevant to the content of your website. You will only get about 100 results no matter how many keywords you enter, so it's best to work on just one at a time. On the next page, you will be presented with a list of keywords.
Select the ones you want to add and click on Next. When the tool is done updating the data for the selected keywords, click on the Finish button to add them to your project. You can repeat this process as many times as you want to find new keywords for your website, and you can experiment with the other fifteen tools as well, which should give you more variety. When you find keywords that look promising, put them on a list, and plan on writing posts to target those keywords in the future. Look for keywords that have at least 100 searches per month with a relatively low amount of competition.

Rank Tracker not only allows you to check your current rankings, but also keeps track of your historical data. Every time you update, a new point is added to the progress graph so you can see your progress over time. This allows you to see whether what you are doing is working or not. If you see that your rank is dropping for a certain keyword, you can use that information to figure out whether something you changed had the opposite effect to what you intended. If you are doing SEO for clients, you'll find the reports in Rank Tracker to be extremely useful (if not mandatory). You can create a monthly report for each client that shows how many keywords are ranked #1, as well as how many are in the top 10, 20, or 100. The report also shows the number of keywords that moved up and the number that moved down. You can use these reports to show your clients how much progress you are making and to demonstrate your value to them.

If you want to take advantage of the historical data, you'll have to purchase the paid version of Rank Tracker. The free version does not support saving your projects, so you won't have data from your past rankings to compare against to see whether you are moving up or down in the search engine results. You also won't be able to generate reports that tell you what has changed since the last time you updated your rankings.

Monitoring backlinks with Majestic SEO

If you want to see how many backlinks you have pointing to your site, along with a range of additional data, the king of free backlink tools is the powerful Majestic SEO backlink checker. To use this tool, go to https://majestic.com/ and enter your domain in the box at the top of the page. The tool will generate a report that shows how many URLs are indexed for your domain, how many total backlinks you have, and how many unique domains link to your website. For heavy-duty link reconnaissance, you'll want the paid upgrade. Underneath the site info, you can see the stats for each page on your site. The tool shows the number of backlinks for each page, as well as the number of domains linking to each page. You can only see ten results per page, so you'll have to click through numerous pages to see all of the results if you have a large site.

Majestic SEO offers a few details that you won't get from Google or Bing Webmaster. Majestic SEO calculates and reports the number of separate C class subnets upon which your backlinks appear. As we have learned, links from sites on separate C class subnets are more valuable, because search engines perceive them as truly non-duplicate links. Majestic SEO also reports the number of .edu and .gov domains upon which your links appear. This extra information gives you a clear picture of how your link building efforts are progressing. Majestic offers another feature of note: it crawls the web more deeply, so you'll see higher link counts.
Majestic is particularly useful when doing link cleanup, because it scrapes low-value sites that Google and Bing don't bother indexing. This screenshot highlights some of Majestic SEO's more robust features: it shows you the number of backlinks from .edu and .gov domains as well as the number of separate Class C subnets upon which your inbound links appear:

Majestic SEO offers one more special feature: it records and graphs your backlink acquisition over time. The graph in the screenshot just above shows this feature in action. Majestic SEO is not a search engine, so it will show you a count of inbound links without any regard to the quality of the pages on which your links appear. Put another way, Bing Webmaster will only show you links that appear on indexed pages. Lower-value pages, such as pages with duplicate content or pages in low-value link directories, tend not to appear in search engine indexes. As such, Majestic SEO reports higher link counts than Bing Webmaster or Google.

Summary

In this article, we learned how to monitor your progress through the use of free and paid tools. We learned how to set up and employ Google Analytics to measure and monitor where your website visitors are coming from and how they behave on your site. We learned how to set up and use Google Webmaster Tools to detect crawling errors on your site and learned how Googlebot interacts with your site. We discovered two HTML validation tools that you can use to ensure that your website's code meets current HTML standards. Finally, we learned how to measure and monitor your backlink efforts with Bing Webmaster and Majestic SEO. With the tools and techniques in this article, you can ensure that your optimization efforts are effective and remain on track.

Resources for Article:

Further resources on this subject:
Creating Blog Content in WordPress [Article]
Responsive Web Design with WordPress [Article]
Introduction to a WordPress application's frontend [Article]

CSS3 – Selectors and nth Rules

Packt
14 Oct 2015
9 min read
In this article by Ben Frain, the author of Responsive Web Design with HTML5 and CSS3, Second Edition, we'll look in detail at pseudo-classes and selectors such as :last-child and nth-child, the nth rules, and nth-based selection in responsive web design.

CSS3 structural pseudo-classes

CSS3 gives us more power to select elements based upon where they sit in the structure of the DOM. Let's consider a common design treatment: we're working on the navigation bar for a larger viewport and we want to have all but the last link over on the left. Historically, we would have needed to solve this problem by adding a class name to the last link so that we could select it, like this:

<nav class="nav-Wrapper">
  <a href="/home" class="nav-Link">Home</a>
  <a href="/About" class="nav-Link">About</a>
  <a href="/Films" class="nav-Link">Films</a>
  <a href="/Forum" class="nav-Link">Forum</a>
  <a href="/Contact-Us" class="nav-Link nav-LinkLast">Contact Us</a>
</nav>

This in itself can be problematic. For example, sometimes just getting a content management system to add a class to a final list item can be frustratingly difficult. Thankfully, in those eventualities, it's no longer a concern. We can solve this problem and many more with CSS3 structural pseudo-classes.

The :last-child selector

CSS 2.1 already had a selector applicable to the first item in a list:

div:first-child { /* Styles */ }

However, CSS3 adds a selector that can also match the last:

div:last-child { /* Styles */ }

Let's look at how that selector could fix our prior problem:

@media (min-width: 60rem) {
  .nav-Wrapper {
    display: flex;
  }
  .nav-Link:last-child {
    margin-left: auto;
  }
}

There are also useful selectors for when something is the only item (:only-child) and the only item of a type (:only-of-type).

The nth-child selectors

The nth-child selectors let us solve even more difficult problems. With the same markup as before, let's consider how nth-child selectors allow us to select any link(s) within the list. Firstly, what about selecting every other list item? We could select the odd ones like this:

.nav-Link:nth-child(odd) { /* Styles */ }

Or, if you wanted to select the even ones:

.nav-Link:nth-child(even) { /* Styles */ }

Understanding what nth rules do

For the uninitiated, nth-based selectors can look pretty intimidating. However, once you've mastered the logic and syntax, you'll be amazed at what you can do with them. Let's take a look. CSS3 gives us incredible flexibility with a few nth-based rules:

nth-child(n)
nth-last-child(n)
nth-of-type(n)
nth-last-of-type(n)

We've seen that we can use (odd) or (even) values already in an nth-based expression, but the (n) parameter can be used in a couple of other ways:

As an integer; for example, :nth-child(2) would select the second item
As a numeric expression; for example, :nth-child(3n+1) would start at 1 and then select every third element
The integer-based property is easy enough to understand: just enter the number of the element you want to select. The numeric expression version of the selector is the part that can be a little baffling for mere mortals. If math is easy for you, I apologize for this next section. For everyone else, let's break it down.

Breaking down the math

Let's consider 10 spans on a page:

<span></span>
<span></span>
<span></span>
<span></span>
<span></span>
<span></span>
<span></span>
<span></span>
<span></span>
<span></span>

By default they will be styled like this:

span {
  height: 2rem;
  width: 2rem;
  background-color: blue;
  display: inline-block;
}

As you might imagine, this gives us 10 squares in a line:

OK, let's look at how we can select different ones with nth-based selections. For practicality, when considering the expression within the parentheses, I start from the right. So, for example, if I want to figure out what (2n+3) will select, I start with the right-most number (the three here indicates the third item from the left) and know it will select every second element from that point on. So adding this rule:

span:nth-child(2n+3) {
  color: #f90;
  border-radius: 50%;
}

results in this in the browser:

As you can see, our nth selector targets the third list item and then every subsequent second one after that too (if there were 100 list items, it would continue selecting every second one). How about selecting everything from the second item onwards? Well, although you could write :nth-child(1n+2), you don't actually need the first number 1, as, unless otherwise stated, n is equal to 1. We can therefore just write :nth-child(n+2). Likewise, if we wanted to select every third element, rather than write :nth-child(3n+3), we can just write :nth-child(3n), as every third item would begin at the third item anyway, without needing to explicitly state it. The expression can also use negative numbers; for example, :nth-child(3n-2) starts at -2 and then selects every third item.

You can also change the direction. By default, once the first part of the selection is found, the subsequent ones go down the elements in the DOM (and therefore from left to right in our example). However, you can reverse that with a minus. For example:

span:nth-child(-2n+3) {
  background-color: #f90;
  border-radius: 50%;
}

This example finds the third item again, but then goes in the opposite direction to select every two elements (up the DOM tree and therefore from right to left in our example):

Hopefully, the nth-based expressions are making perfect sense now? The nth-child and nth-last-child selectors differ in that the nth-last-child variant works from the opposite end of the document tree. For example, :nth-last-child(-n+3) starts at 3 from the end and then selects all the items after it. Here's what that rule gives us in the browser:

Finally, let's consider :nth-of-type and :nth-last-of-type. While the previous examples count any children regardless of type (always remember the nth-child selector targets all children at the same DOM level, regardless of classes), :nth-of-type and :nth-last-of-type let you be specific about the type of item you want to select. Consider the following markup:

<span class="span-class"></span>
<span class="span-class"></span>
<span class="span-class"></span>
<span class="span-class"></span>
<span class="span-class"></span>
<div class="span-class"></div>
<div class="span-class"></div>
<div class="span-class"></div>
<div class="span-class"></div>
<div class="span-class"></div>

If we used the selector:

.span-class:nth-of-type(-2n+3) {
  background-color: #f90;
  border-radius: 50%;
}

Even though all the elements have the same span-class, we will only actually be targeting the span elements (as they are the first type selected). Here is what gets selected:

We will see how CSS4 selectors can solve this issue shortly.

CSS3 doesn't count like JavaScript and jQuery!

If you're used to using JavaScript and jQuery, you'll know that they count from 0 upwards (zero index based). For example, if selecting an element in JavaScript or jQuery, an integer value of 1 would actually be the second element. CSS3, however, starts at 1, so that a value of 1 is the first item it matches.
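To see the off-by-one difference concretely, here is a small illustration of our own (not from the book), selecting the same ten spans from the earlier example:

/* CSS is one-indexed: this matches the FIRST span */
span:nth-child(1) { background-color: #f90; }

// JavaScript is zero-indexed: index 1 is the SECOND span
var secondSpan = document.querySelectorAll('span')[1];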
nth-based selection in responsive web designs

Just to close out this little section, I want to illustrate a real-life responsive web design problem and how we can use nth-based selection to solve it. Let's consider how a horizontal scrolling panel might look in a situation where horizontal scrolling isn't possible. So, using the same markup, let's turn the top 10 grossing films of 2014 into a grid. For some viewports the grid will only be two items wide; as the viewport increases, we show three items, and at larger sizes still, we show four. Here is the problem though: regardless of the viewport size, we want to prevent any items on the bottom row from having a border on the bottom. Here is how it looks with four items wide:

See that pesky border below the bottom two items? That's what we need to remove. However, I want a robust solution, so that if there were another item on the bottom row, its border would also be removed. Now, because there are a different number of items on each row at different viewports, we will also need to change the nth-based selection at different viewports. For the sake of brevity, I'll show you the selection that matches four items per row (the largest of the viewports). You can view the code sample to see the amended selection at the different viewports.

@media (min-width: 55rem) {
  .Item {
    width: 25%;
  }
  /* Get me every fourth item and of those, only ones that are in the last four items */
  .Item:nth-child(4n+1):nth-last-child(-n+4),
  /* Now get me every one after that same collection too. */
  .Item:nth-child(4n+1):nth-last-child(-n+4) ~ .Item {
    border-bottom: 0;
  }
}

You'll notice here that we are chaining the nth-based pseudo-class selectors. It's important to understand that the first doesn't filter the selection for the next; rather, the element has to match each of the selections. For our preceding example, the first element has to be the first item of four and also be one of the last four. Nice! Thanks to nth-based selections, we have a defensive set of rules to remove the bottom border regardless of the viewport size or the number of items we are showing.

To learn how to build websites with a responsive and mobile-first methodology, allowing a website to display effortlessly on all devices, take a look at Responsive Web Design with HTML5 and CSS3, Second Edition.

How to build a cross-platform desktop application with Node.js and Electron

Mika Turunen
14 Oct 2015
9 min read
Do you want to make a desktop application, but you have only mastered web development so far? Or maybe you feel overwhelmed by all of the different APIs that different desktop platforms have to offer? Or maybe you want to write a beautiful application in HTML5 and JavaScript and have it working on the desktop? Maybe you want to port an existing web application to the desktop? Well, luckily for us, there are a number of alternatives, and we are going to look into Node.js and Electron to help us get our HTML5 and JavaScript running on the desktop side with no hiccups.

What are the different parts in an Electron application?

Commonly, all of the different components in Electron run either in the main process (backend) or the rendering process (frontend). The main process can communicate with different parts of the operating system if there's a need for that, and the rendering process mainly just focuses on showing the content, pretty much like in any HTML5 application you find on the Internet. The processes communicate with each other through IPC (inter-process communication), which in Node.js terms is just a super simple event emitter and nothing else: you can send events and listen for events. You can get the complete source code from here for this post.

Let's start working on it

You need to have Node.js installed, and you can install it from https://nodejs.org/. Now that you have Node.js installed, you can start focusing on creating the application. First of all, create an empty directory where you will be placing your code.

# Open up your favourite terminal, command-line tool or any other alternative as we'll be running quite a bit of commands
# Create the directory
mkdir /some/location/that/works/in/your/system
# Go into the directory
cd /some/location/that/works/in/your/system
# Now we need to initialize it for our Electron and Node work
npm init

NPM will start asking you questions about the application we are about to make. You can just hit Enter and not answer any of them if you feel like it; we can fill them in manually once we know a bit more about our application. Now we should have a directory structure with the following files in it:

package.json

And that's it, nothing else. We'll start by creating two new files in your favorite text editor or IDE. The files are (leave the files empty):

main.js
index.html

Drop all of the files into the same directory as the package.json for easier handling of everything for now. main.js will be our main process file, which is the connecting layer to the underlying desktop operating system for our Electron application. At this point we need to install Electron as a dependency for our application, which is really easy. Just write:

npm install --save electron-prebuilt

Alternatively, if you cloned/downloaded the associated GitHub repository, you can just go into the directory and write:

npm install

This will install all dependencies from package.json, including the prebuilt Electron. Now we have Electron's prebuilt binaries installed as a direct dependency for our application, and we can run our application on our platform. It's wise to manually update the package.json file that the npm init command generated for us. Open up the package.json file and modify the scripts block to look like this (or if it's missing, create it):

"main": "main.js",
"scripts": {
    "start": "electron ."
},
The whole package.json file should be roughly something like this (taken from the tutorial repo I linked earlier):

{
  "name": "",
  "version": "1.0.0",
  "description": "",
  "main": "main.js",
  "scripts": {
    "start": "electron ."
  },
  "repository": {
  },
  "keywords": [
  ],
  "author": "",
  "license": "MIT",
  "bugs": {
  },
  "homepage": "",
  "dependencies": {
    "electron-prebuilt": "^0.25.3"
  }
}

The main property in the file points to main.js, and the scripts section's start property tells it to run the command "electron .", which essentially tells Electron to digest the current directory as an application. Electron hardwires the property main as the main process for the application. This means that main.js is now our main process, just like we wanted.

Main and rendering process

We need to write the main process JavaScript and the rendering process HTML to get our application to start. Let's start with the main process, main.js. You can also find all of the code below in the tutorial repository here. The code has been peppered with a good amount of comments to give a deeper understanding of what is going on in the code and what different parts do in the context of Electron.

// Loads Electron specific app that is not commonly available for node or io.js
var app = require("app");
// Inter process communication -- Used to communicate from Main process (this)
// to the actual rendering process (index.html) -- see the small aside below
var ipc = require("ipc");
// Loads the Electron specific module for browser handling
var BrowserWindow = require("browser-window");
// Report crashes to our server.
var crashReporter = require("crash-reporter");

// Keep a global reference of the window object; if you don't, the window will
// be closed automatically when the JavaScript object is garbage collected
var mainWindow = null;

// Quit when all windows are closed.
app.on("window-all-closed", function() {
  // OS X specific check
  if (process.platform != "darwin") {
    app.quit();
  }
});

// This event will be called when Electron has finished initialization and is ready for creating browser windows.
app.on("ready", function() {
  crashReporter.start();
  // Create the browser window (where the application's visual parts will be)
  mainWindow = new BrowserWindow({
    width: 800,
    height: 600
  });
  // Building the file path to the index.html
  mainWindow.loadUrl("file://" + __dirname + "/index.html");
  // Emitted when the window is closed.
  // The function just dereferences the mainWindow so garbage collection can
  // pick it up
  mainWindow.on("closed", function() {
    mainWindow = null;
  });
});

You can now start the application, but it'll just show an empty window, since we have nothing to render in the rendering process. Let's fix that by populating our index.html with some content.

<!DOCTYPE html>
<html>
  <head>
    <title>Hello Tutorial!</title>
  </head>
  <body>
    <h2>Tutorial</h2>
    We are using node.js <script>document.write(process.version)</script>
    and Electron <script>document.write(process.versions["electron"])</script>.
  </body>
</html>

Because this is an Electron application, the rendering process has access to the Node.js/io.js runtime we have going: the line document.write(process.version) is actually a call into the Node.js process object. This is one of the great things about Electron: we are essentially bridging the gap between desktop applications and HTML5 applications.
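As a small aside, here is a sketch of the IPC mechanism mentioned at the start of the article, using the ipc module that main.js already requires. Note that this is only an illustration: the channel names are our own invention, and the exact callback signatures varied across early Electron releases (the module was later split into ipcMain and ipcRenderer), so check the documentation for your version.

// In main.js (main process): listen for a greeting from the renderer
// and push a reply back through the window's webContents.
ipc.on("tutorial-ping", function(event, message) {
  console.log("Renderer said: " + message);
  mainWindow.webContents.send("tutorial-pong", "Hello from the main process!");
});

<!-- In index.html (rendering process) -->
<script>
  var ipc = require("ipc");
  // Listen for the main process's reply...
  ipc.on("tutorial-pong", function(answer) {
    console.log(answer);
  });
  // ...and send it a message.
  ipc.send("tutorial-ping", "Hello from the renderer!");
</script>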
Now to run the application:

npm start

There is a huge list of desktop environment integration possibilities with Electron, and you can read more about them in the Electron documentation at http://electron.atom.io/. Obviously this is still far from a complete application, but this should give you an understanding of how to work with Electron, how it behaves, and what you can do with it. You can start using your favorite JavaScript/CSS frontend framework in index.html to build a great-looking GUI for your new desktop application, and you can also use all Node.js-specific NPM modules in the backend along with the desktop environment integration. Maybe we'll look into writing a great-looking GUI for our application with some additional desktop environment integration in another post.

Packaging and distributing Electron applications

Applications can be packaged into distributable, operating-system-specific containers, such as .exe files, that allow them to run on other machines. The packaging process is fairly simple and well documented in Electron's documentation; it is out of scope for this post, but worth a look if you want to package your application. To understand more about the application distribution and packaging process, read Electron's official documentation on it here.

Electron and its current use

Electron is still really fresh and right out of GitHub's knowing hands, but it has already been adopted by quite a few companies, and a number of applications have already been built on top of it.

Companies using Electron:

Slack
Microsoft
Github

Applications built with Electron or using Electron:

Visual Studio Code - Microsoft's Visual Studio Code
Hearthdash - A card tracking application for Hearthstone
Monu - Process monitoring app
Kart - Frontend for RetroArch
Friends - P2P chat powered by the web

Final words on Electron

It's obvious that Electron is still taking its first baby steps, but it's hard to deny the fact that more and more user interfaces will be written in different web technologies with HTML5, and this is one of the great starts for it. It'll be interesting to see how the gap between desktop and web applications develops as time goes on, and people like you and me will be playing a key role in the development of future applications. With the help of technologies like Electron, desktop application development just got that much easier.

For more Node.js content, look no further than our dedicated page!

About the author

Mika Turunen is a software professional hailing from the frozen cold Finland. He spends a good part of his day playing with emerging web and cloud related technologies, but he also has a big knack for games and game development. His hobbies include game collecting, game development and games in general. When he's not playing with technology he is spending time with his two cats and growing his beard.

Templates for Web Pages

Packt
13 Oct 2015
13 min read
In this article, by Kai Nacke, author of the book D Web Development, we will learn that every website has some recurring elements, often called a theme. Templates are an easy way to define these elements only once and then reuse them. A template engine is included in vibe.d with the so-called Diet templates. The template syntax is based on the Jade templates (http://jade-lang.com/), which you might already know about. In this article, you will learn the following:

Why templates are useful
Key concepts of Diet templates: inheritance, include, and blocks
How to use filters and how to create your own filter

(For more resources related to this topic, see here.)

Using templates

Let's take a look at a simple HTML5 page with a header, footer, navigation bar, and some content in the following:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Demo site</title>
    <link rel="stylesheet" type="text/css" href="demo.css" />
  </head>
  <body>
    <header>
      Header
    </header>
    <nav>
      <ul>
        <li><a href="link1">Link 1</a></li>
        <li><a href="link2">Link 2</a></li>
        <li><a href="link3">Link 3</a></li>
      </ul>
    </nav>
    <article>
      <h1>Title</h1>
      <p>Some content here.</p>
    </article>
    <footer>
      Footer
    </footer>
  </body>
</html>

The formatting is done with a CSS file, as shown in the following:

body {
  font-size: 1em;
  color: black;
  background-color: white;
  font-family: Arial;
}
header {
  display: block;
  font-size: 200%;
  font-weight: bolder;
  text-align: center;
}
footer {
  clear: both;
  display: block;
  text-align: center;
}
nav {
  display: block;
  float: left;
  width: 25%;
}
article {
  display: block;
  float: left;
}

Despite being simple, this page has elements that you often find on websites. If you create a website with more than one page, you will use this structure on every page in order to provide a consistent user interface. From the second page on, you would violate the Don't Repeat Yourself (DRY) principle: the header and footer are the elements with fixed content. The content of the navigation bar is also fixed, but not every item is always displayed. Only the real content of the page (in the article block) changes with every page. Templates solve this problem. A common approach is to define a base template with the structure. For each page, you then define a template that inherits from the base template and adds the new content.

Creating your first template

In the following sections, you will create a Diet template from the HTML page using different techniques.

Turning the HTML page into a Diet template

Let's start with a one-to-one translation of the HTML page into a Diet template. The syntax is based on the Jade templates. It looks similar to the following:

doctype html
html
  head
    meta(charset='utf-8')
    title Demo site
    link(rel='stylesheet', type='text/css', href='demo.css')
  body
    header
      | Header
    nav
      ul
        li
          a(href='link1') Link 1
        li
          a(href='link2') Link 2
        li
          a(href='link3') Link 3
    article
      h1 Title
      p Some content here.
    footer
      | Footer

The template resembles the HTML page. Here are the basic syntax rules for a template:

The first word on a line is an HTML tag
Attributes of an HTML tag are written as a comma-separated list surrounded by parentheses
A tag may be followed by plain text that may contain HTML code
Plain text on a new line starts with the pipe symbol
Nesting of elements is done by increasing the indentation

If you want to see the result of this template, save the code as index.dt and put it together with the demo.css CSS file in the views folder. The Jade templates have a special syntax for nested elements.
The list item/anchor pair from the preceding code could be written in one line, as follows:

li: a(href='link1') Link 1

This syntax is currently not supported by vibe.d. Now, you need to create a small application to see the result of the template by following the given steps:

Create a new project template with dub, using the following command:

$ dub init template vibe.d

Save the template as the views/index.dt file. Copy the demo.css CSS file into the public folder. Change the generated source/app.d application to the following:

import vibe.d;

shared static this()
{
    auto router = new URLRouter;
    router.get("/", staticTemplate!"index.dt");
    router.get("*", serveStaticFiles("public/"));

    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    settings.bindAddresses = ["::1", "127.0.0.1"];
    listenHTTP(settings, router);
    logInfo("Please open http://127.0.0.1:8080/ in your browser.");
}

Run dub inside the project folder to start the application and then browse to http://127.0.0.1:8080/ to see the resulting page. The application uses a new URLRouter class. This class is used to map a URL to a web page. With the router.get("/", staticTemplate!"index.dt"); statement, every request for the base URL is answered by rendering the index.dt template. The router.get("*", serveStaticFiles("public/")); statement uses a wildcard to serve all other requests as static files that are stored in the public folder.

Adding inheritance

Up to now, the template is only a one-to-one translation of the HTML page. The next step is to split the file into two, layout.dt and index.dt. The layout.dt file defines the general structure of a page, while index.dt inherits from this file and adds new content. The key to template inheritance is the definition of a block. A block has a name and contains some template code. A child template may replace a block, or append or prepend content to a block. In the following layout.dt file, four blocks are defined: header, navigation, content, and footer. For all the blocks except content, a default text is defined, as follows:

doctype html
html
  head
    meta(charset='utf-8')
    title Demo site
    link(rel='stylesheet', type='text/css', href='demo.css')
  body
    block header
      header Header
    block navigation
      nav
        ul
          li <a href="link1">Link 1</a>
          li <a href="link2">Link 2</a>
          li <a href="link3">Link 3</a>
    block content
    block footer
      footer Footer

The template in the index.dt file inherits this layout and replaces the block content, as shown here:

extends layout

block content
  article
    h1 Title
    p Some content here.

You can put both files into the views folder and run dub again. The rendered page in your browser still looks the same. You can now add more pages and reuse the layout. It is also possible to change the common elements that you defined in the header, footer, and navigation blocks. There is no restriction on the level of inheritance. This allows you to construct very sophisticated template systems.
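The append and prepend variants mentioned above can be sketched as follows. The Jade-style block append syntax is an assumption here, so check the vibe.d documentation for the exact form your version supports; in this sketch, a child template keeps the default footer and adds a line to it:

extends layout

block content
  article
    h1 Title
    p Some content here.

block append footer
  p Generated from a Diet template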
Using include

Inheritance is not the only way to avoid repetition of template code. With the include keyword, you insert the content of another file. This allows you to put reusable template code in separate files. As an example, just put the following navigation in a separate navigation.dt file:

nav
  ul
    li <a href="link1">Link 1</a>
    li <a href="link2">Link 2</a>
    li <a href="link3">Link 3</a>

The index.dt file uses the include keyword to insert the navigation.dt file, as follows:

doctype html
html
  head
    meta(charset='utf-8')
    title Demo site
    link(rel='stylesheet', type='text/css', href='demo.css')
  body
    header Header
    include navigation
    article
      h1 Title
      p Some content here.
    footer Footer

Just as with the inheritance example, you can put both files into the views folder and run dub again. The rendered page in your browser still looks the same. The Jade templates allow you to apply a filter to the included content. This is not yet implemented.

Integrating other languages with blocks and filters

So far, the templates have only used HTML content. However, a web application usually builds on a bunch of languages, most often integrated in a single document, as follows:

CSS styles inside the style element
JavaScript code inside the script element
Content in a simplified markup language such as Markdown

Diet templates have two mechanisms for the integration of other languages. If a tag is followed by a dot, the block is treated as plain text. For example, the following template code:

p.
  Some text
  And some more text

translates into the following:

<p>
  Some text
  And some more text
</p>

The same can also be used for scripts and styles. For example, you can use the following script tag with JavaScript code in it:

script(type='text/javascript').
  console.log('D is awesome')

It translates to the following:

<script type="text/javascript">
console.log('D is awesome')
</script>

An alternative is to use a filter. You specify a filter with a colon followed by the filter name. The script example can be written with a filter, as shown in the following:

:javascript
  console.log('D is awesome')

This is translated to the following:

<script type="text/javascript">
  //<![CDATA[
  console.log('D is awesome')
  //]]>
</script>

The following filters are provided by vibe.d:

javascript for JavaScript code
css for CSS styles
markdown for content written in Markdown syntax
htmlescape to escape HTML symbols

The css filter works in the same way as the javascript filter. The markdown filter accepts text written in the Markdown syntax and translates it into HTML. Markdown is a simplified markup language for web authors. The syntax is documented at http://daringfireball.net/projects/markdown/syntax. Here is our template, this time using the markdown filter for the navigation and the article content:

doctype html
html
  head
    meta(charset='utf-8')
    title Demo site
    link(rel='stylesheet', type='text/css', href='demo.css')
  body
    header Header
    nav
      :markdown
        - [Link 1](link1)
        - [Link 2](link2)
        - [Link 3](link3)
    article
      :markdown
        Title
        =====
        Some content here.
    footer Footer

The rendered HTML page is still the same. The advantage is that you have less to type, which is good if you produce a lot of content. The disadvantage is that you have to remember yet another syntax.
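For completeness, here is what using the css filter might look like; the input is our own example, and the exact wrapping of the generated style element (for instance, any CDATA markers like those added by the javascript filter) may differ between vibe.d versions:

:css
  header { text-transform: uppercase; }

This would be rendered as a style element along these lines:

<style type="text/css">
  header { text-transform: uppercase; }
</style>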
A normal plain text block can contain HTML tags, as follows:

p.
  Click this <a href="link">link</a>

This is rendered as the following:

<p>
  Click this <a href="link">link</a>
</p>

There are situations where you want to treat even the HTML tags as plain text, for example, if you want to explain HTML syntax. In this case, you use the htmlescape filter, as follows:

p
  :htmlescape
    Link syntax: <a href="url target">text to display</a>

This is rendered as the following:

<p>
  Link syntax: &lt;a href="url target"&gt;text to display&lt;/a&gt;
</p>

You can also add your own filters. The registerDietTextFilter() function is provided by vibe.d to register new filters. This function takes the name of the filter and a pointer to the filter function. The filter function is called with the text to filter and the indentation level, and returns the filtered text. For example, you can use this functionality for pretty printing of D code, as follows:

Create a new project with dub, using the following command:

$ dub init filter vibe.d

Create the index.dt template file in the views folder. Use the new dcode filter to format the D code, as shown in the following:

doctype html
head
  title Code filter example
  :css
    .keyword { color: #0000ff; font-weight: bold; }
body
  p You can create your own functions.
  :dcode
    T max(T)(T a, T b)
    {
        if (a > b)
            return a;
        return b;
    }

Implement the filter function in the app.d file in the source folder. The filter function outputs the text inside a <pre> tag. Identified keywords are put inside a <span class="keyword"> element to allow custom formatting. The whole application is as follows:

import vibe.d;

string filterDCode(string text, size_t indent)
{
    import std.regex;
    import std.array;

    auto dst = appender!string;
    filterHTMLEscape(dst, text, HTMLEscapeFlags.escapeQuotes);
    auto regex = regex(r"(^|\s)(if|return)(;|\s)");
    text = replaceAll(dst.data, regex, "$1<span class=\"keyword\">$2</span>$3");
    auto lines = splitLines(text);

    string indent_string = "\n";
    while (indent-- > 0)
        indent_string ~= "\t";

    string ret = indent_string ~ "<pre>";
    foreach (ln; lines)
        ret ~= indent_string ~ ln;
    ret ~= indent_string ~ "</pre>";

    return ret;
}

shared static this()
{
    registerDietTextFilter("dcode", &filterDCode);

    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    settings.bindAddresses = ["::1", "127.0.0.1"];
    listenHTTP(settings, staticTemplate!"index.dt");
    logInfo("Please open http://127.0.0.1:8080/ in your browser.");
}

Compile and run this application to see that the keywords are bold and blue.

Summary

In this article, we saw how to create a Diet template using different techniques, such as translating an HTML page into a Diet template, adding inheritance, using include, and integrating other languages with blocks and filters.

Resources for Article:

Further resources on this subject:
MODx Web Development: Creating Lists [Article]
MODx 2.0: Web Development Basics [Article]
Ruby with MongoDB for Web Development [Article]

Asynchronous Communication between Components

Packt
09 Oct 2015
12 min read
In this article by Andreas Niedermair, the author of the book Mastering ServiceStack, we will look at asynchronous communication between components. Recent releases of .NET have added several new ways to further embrace asynchronous and parallel processing, by introducing the Task Parallel Library (TPL) and async and await.

(For more resources related to this topic, see here.)

The need for asynchronous processing has been there since the early days of programming. Its main concept is to offload the processing to another thread or process, releasing the calling thread from waiting; it has become a standard model since the rise of GUIs. In such interfaces only one thread is responsible for drawing the GUI, and it must not be blocked in order to remain available and to avoid putting the application in a non-responding state. This paradigm is also a core point in distributed systems: at some point, long-running operations are offloaded to a separate component, either to overcome blocking or to avoid resource bottlenecks by using dedicated machines, which also makes the processing more robust against unexpected application pool recycling and other such issues.

A synonym for "fire-and-forget" is "one-way", which is also reflected in the design of the static routes of ServiceStack endpoints, where the default is /{format}/oneway/{service}.

Asynchronism adds a whole new level of complexity to our processing chain, as some callers might depend on a return value. This problem can be overcome by adding a callback or another event to your design. Messaging, or in general a producer-consumer chain, is a fundamental design pattern, which can be applied within the same process or inter-process, on the same machine or across machines, to decouple components. Consider the following architecture:

The client issues a request to the service, which processes the message and returns a response. The server is known and directly bound to the client, which makes an on-the-fly addition of servers practically impossible: you'd need to reconfigure the clients to reflect the collection of servers on every change and implement a distribution logic for requests. Therefore, a new component is introduced, which acts as a broker (without any processing of the message, except delivery) between the client and the service, to decouple the service from the client. This gives us the opportunity to introduce more services for heavy-load scenarios by simply registering a new instance with the broker, as shown in the following figure:

I have left out the clustering (scaling) of brokers and also the routing of messages on purpose at this stage of the introduction. In many cross-process scenarios a database is introduced as a broker, which is constantly polled by services (and clients, if there's a response involved) to check whether there's a message to be processed. Adding a database as a broker and implementing your own logic can be absolutely fine for basic systems, but for more advanced scenarios it lacks some essential features that Message Queues come shipped with:

Scalability: Decoupling is the biggest step towards a robust design, as it introduces the possibility of adding more processing nodes to your data flow.

Resilience: Messages are guaranteed to be delivered and processed, as automatic retrying is available for non-acknowledged (processed) messages. If the retry count is exceeded, failed messages are stored in a Dead Letter Queue (DLQ) to be inspected later and are requeued after fixing the issue that caused the failure.
Message Queues, by contrast, offer the following features out of the box:

Scalability: Decoupling is the biggest step towards a robust design, as it introduces the possibility of adding more processing nodes to your data flow.

Resilience: Messages are guaranteed to be delivered and processed, as automatic retrying is available for messages that have not been acknowledged (processed). If the retry count is exceeded, failed messages are stored in a Dead Letter Queue (DLQ) to be inspected later and are requeued after fixing the issue that caused the failure. In case of a partial failure of your infrastructure, clients can still produce messages, which get delivered and processed as soon as there is even a single consumer back online.

Pushing instead of polling: This is where asynchronism comes into play: clients do not constantly poll for messages; instead, they get pushed by the broker when there's a new message in their subscribed queue. This eliminates the spinning and wait time of a polling loop such as the earlier sketch, whose timer only ticks every 10 seconds.

Guaranteed order: Most Message Queues offer a guaranteed order of processing under defined conditions (mostly FIFO).

Load balancing: With multiple services registered for messages, there is inherent load balancing, so heavy load scenarios can be handled better. In addition to this round-robin routing there are other routing logics, such as smallest-mailbox, tail-chopping, or random routing.

Message persistence: Message Queues can be configured to persist their data to disk and even survive restarts of the host on which they are running. To overcome the downtime of a Message Queue you can even set up a cluster to offload the demand to other brokers while restarting a single node.

Built-in priority: Message Queues usually have separate queues for different messages and even provide a separate in-queue for prioritized messages.

There are many more features, such as time to live, security, and batching modes, which we will not cover as they are outside the scope of ServiceStack.

In the following examples we will refer to two basic DTOs:

public class Hello : ServiceStack.IReturn<HelloResponse>
{
    public string Name { get; set; }
}

public class HelloResponse
{
    public string Result { get; set; }
}

The Hello class is used to send a Name to a consumer, which generates a response message that will be enqueued in the Message Queue as well.

RabbitMQ

RabbitMQ is a mature broker built on top of the Advanced Message Queuing Protocol (AMQP), which makes it possible to solve even more complex scenarios. Messages will survive restarts of the RabbitMQ service, and the additional guarantee of delivery is accomplished by depending on an acknowledgement of the receipt (and processing) of the message, which by default is done by ServiceStack for typical scenarios. The client for this Message Queue is located in the ServiceStack.RabbitMq NuGet package (it uses the official client from the RabbitMQ.Client package under the hood). You can add additional protocols to RabbitMQ, such as Message Queue Telemetry Transport (MQTT) and Streaming Text Oriented Messaging Protocol (STOMP), with plugins to ease interop scenarios.

Due to its complexity, we will focus on an abstracted interaction with the broker. There are many books and articles available for a deeper understanding of RabbitMQ. A quick overview of the covered scenarios is available at https://www.rabbitmq.com/getstarted.html.

The method of publishing a message with RabbitMQ does not differ much from RedisMQ:

using ServiceStack;
using ServiceStack.RabbitMq;

using (var rabbitMqServer = new RabbitMqServer())
{
    using (var messageProducer = rabbitMqServer.CreateMessageProducer())
    {
        var hello = new Hello
        {
            Name = "Demo"
        };
        messageProducer.Publish(hello);
    }
}

This will create a Hello object and publish it to the corresponding queue in RabbitMQ.
To retrieve this message, we need to register a handler, as shown here:

using System;
using ServiceStack;
using ServiceStack.RabbitMq;
using ServiceStack.Text;

var rabbitMqServer = new RabbitMqServer();
rabbitMqServer.RegisterHandler<Hello>(message =>
{
    var hello = message.GetBody();
    var name = hello.Name;
    var result = "Hello {0}".Fmt(name);
    result.Print();
    return null;
});
rabbitMqServer.Start();

"Listening for hello messages".Print();
Console.ReadLine();

rabbitMqServer.Dispose();

This registers a handler for Hello objects and prints a message to the console. In favor of a straightforward example, we are omitting all the parameters with default values in the constructor of RabbitMqServer, which will connect us to the local instance on port 5672. To change this, you can either provide a connectionString parameter (and optional credentials) or use a RabbitMqMessageFactory object to customize the connection.

Setup

Setting up RabbitMQ involves a bit of effort. At first you need to install Erlang from http://www.erlang.org/download.html, which is the runtime for RabbitMQ due to its functional and concurrent nature. Then you can grab the installer from https://www.rabbitmq.com/download.html, which will set up RabbitMQ to run as a service with a default configuration.

Processing chain

Due to its complexity, the processing chain of any mature Message Queue is different from what you know from RedisMQ. Exchanges are introduced in front of queues to route messages to their respective queues according to their routing keys. The default exchange name is mx.servicestack (defined in ServiceStack.Messaging.QueueNames.Exchange) and is used in any Publish call on an IMessageProducer or IMessageQueueClient object. With IMessageQueueClient.Publish you can inject a routing key (the queueName parameter) to customize the routing of a message. Failed messages are published to ServiceStack.Messaging.QueueNames.ExchangeDlq (mx.servicestack.dlq) and routed to queues with the name mq:{type}.dlq. Successful messages are published to ServiceStack.Messaging.QueueNames.ExchangeTopic (mx.servicestack.topic) and routed to the queue mq:{type}.outq. Additionally, there is also a priority queue next to the in-queue, with the name mq:{type}.priority.
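These conventional names can also be obtained programmatically via the generic QueueNames class; the following is a quick sketch, assuming the Hello DTO from earlier and the Print() string extension from ServiceStack.Text:

using ServiceStack.Messaging;
using ServiceStack.Text;

QueueNames<Hello>.In.Print();       // mq:Hello.inq, the default in-queue
QueueNames<Hello>.Priority.Print(); // the priority in-queue
QueueNames<Hello>.Outq.Print();     // mq:Hello.outq, successful messages
QueueNames<Hello>.Dlq.Print();      // mq:Hello.dlq, failed messages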
If you interact with RabbitMQ on a lower level, you can directly publish to queues and leave the routing via an exchange out of the picture. Each queue has features to define whether it is durable, whether it deletes itself after the last consumer has disconnected, and which exchange and routing key are to be used to publish dead messages. More information on the concepts, the different exchange types, queues, and acknowledging messages can be found at https://www.rabbitmq.com/tutorials/amqp-concepts.html.

Replying directly back to the producer

Messages published to a queue are dequeued in FIFO mode; hence, there is no guarantee that responses are delivered to the issuer of the initial message. To force a response to the originator, you can make use of the ReplyTo property of a message:

using System;
using ServiceStack;
using ServiceStack.Messaging;
using ServiceStack.RabbitMq;
using ServiceStack.Text;

var rabbitMqServer = new RabbitMqServer();
var messageQueueClient = rabbitMqServer.CreateMessageQueueClient();
var queueName = messageQueueClient.GetTempQueueName();
var hello = new Hello
{
    Name = "reply to originator"
};
messageQueueClient.Publish(new Message<Hello>(hello)
{
    ReplyTo = queueName
});

var message = messageQueueClient.Get<HelloResponse>(queueName);
var helloResponse = message.GetBody();

This code is more or less identical to the RedisMQ approach, but it does something different under the hood. The messageQueueClient.GetTempQueueName method creates a temporary queue, whose name is generated by ServiceStack.Messaging.QueueNames.GetTempQueueName. This temporary queue does not survive a restart of RabbitMQ and gets deleted as soon as the consumer disconnects. As each queue is a separate Erlang process, you may encounter the process limits of Erlang and the maximum number of file descriptors of your OS.

Broadcasting a message

In many scenarios a broadcast to multiple consumers is required, for example, if you need to attach multiple loggers to a system; this requires a lower-level implementation. The solution to this requirement is to create a fan-out exchange that forwards a message to all bound queues, instead of one connected queue that is consumed exclusively by one consumer:

using ServiceStack;
using ServiceStack.Messaging;
using ServiceStack.RabbitMq;

var fanoutExchangeName = string.Concat(QueueNames.Exchange, ".", ExchangeType.Fanout);
var rabbitMqServer = new RabbitMqServer();
var messageProducer = (RabbitMqProducer)rabbitMqServer.CreateMessageProducer();
var channel = messageProducer.Channel;
channel.ExchangeDeclare(exchange: fanoutExchangeName,
    type: ExchangeType.Fanout,
    durable: true,
    autoDelete: false,
    arguments: null);

With the cast to RabbitMqProducer we get access to lower-level actions; we need it to declare an exchange with the name mx.servicestack.fanout, which is durable and does not get deleted. Now, we need to bind a temporary and exclusive queue to the exchange:

var messageQueueClient = (RabbitMqQueueClient)rabbitMqServer.CreateMessageQueueClient();
var queueName = messageQueueClient.GetTempQueueName();
channel.QueueBind(queue: queueName,
    exchange: fanoutExchangeName,
    routingKey: QueueNames<Hello>.In);

The call to messageQueueClient.GetTempQueueName() creates a temporary queue, which lives as long as there is just one consumer connected. This queue is bound to the fan-out exchange with the routing key mq:Hello.inq. To publish the messages, we need to use the RabbitMqProducer object (messageProducer):

var hello = new Hello
{
    Name = "Broadcast"
};
var message = new Message<Hello>(hello);
messageProducer.Publish(queueName: QueueNames<Hello>.In,
    message: message,
    exchange: fanoutExchangeName);

Even though the first parameter of Publish is named queueName, it is propagated as the routingKey to the underlying PublishMessage method call.
This will publish the message on the newly declared exchange with mq:Hello.inq as the routing key. Now, we need to encapsulate the handling of the message:

var messageHandler = new MessageHandler<Hello>(rabbitMqServer, message =>
{
    var hello = message.GetBody();
    var name = hello.Name;
    name.Print();
    return null;
});

The MessageHandler<T> class is used internally in all the messaging solutions and takes care of retries and replies. Now, we need to connect the message handler to the queue:

using System;
using System.IO;
using System.Threading.Tasks;
using RabbitMQ.Client;
using RabbitMQ.Client.Exceptions;
using ServiceStack.Messaging;
using ServiceStack.RabbitMq;

var consumer = new RabbitMqBasicConsumer(channel);
channel.BasicConsume(queue: queueName,
    noAck: false,
    consumer: consumer);

Task.Run(() =>
{
    while (true)
    {
        BasicGetResult basicGetResult;
        try
        {
            basicGetResult = consumer.Queue.Dequeue();
        }
        catch (EndOfStreamException)
        {
            // this is ok: the queue has been deleted
            return;
        }
        catch (OperationInterruptedException)
        {
            // this is ok: the connection has been closed
            return;
        }
        var message = basicGetResult.ToMessage<Hello>();
        messageHandler.ProcessMessage(messageQueueClient, message);
    }
});

This creates a RabbitMqBasicConsumer object, which is used to consume the temporary queue. To process messages, we try to dequeue from the Queue property in a separate task. This example does not handle disconnects and reconnects from the server and does not integrate with the services (however, both can be achieved).

Integrate RabbitMQ in your service

The integration of RabbitMQ in a ServiceStack service does not differ much from RedisMQ. All you have to do is adapt the Configure method of your host:

using Funq;
using ServiceStack;
using ServiceStack.Messaging;
using ServiceStack.RabbitMq;

public override void Configure(Container container)
{
    container.Register<IMessageService>(arg => new RabbitMqServer());
    container.Register<IMessageFactory>(arg => new RabbitMqMessageFactory());

    var messageService = container.Resolve<IMessageService>();
    messageService.RegisterHandler<Hello>(this.ServiceController.ExecuteMessage);
    messageService.Start();
}

The registration of an IMessageService is needed for the rerouting of the handlers to your service; the registration of an IMessageFactory is relevant if you want to publish a message in your service with PublishMessage.

Summary

In this article, the messaging pattern was introduced, along with the available clients for existing Message Queues.

Resources for Article:

Further resources on this subject:
ServiceStack applications [article]
Web API and Client Integration [article]
Building a Web Application with PHP and MariaDB – Introduction to caching [article]

Using Underscore.js with Collections

Packt
01 Oct 2015
21 min read
In this article by Alex Pop, the author of the book Learning Underscore.js, we will explore Underscore functionality for collections using more in-depth examples. Some of the more advanced concepts related to Underscore functions, such as scope resolution and execution context, will be explained. The topics of the article are as follows:

Key Underscore functions revisited
Searching and filtering

This article assumes that you are familiar with JavaScript fundamentals such as prototypical inheritance and the built-in data types. The source code for the examples from this article is hosted online at https://github.com/popalexandruvasile/underscorejs-examples/tree/master/collections, and you can execute the examples using the Cloud9 IDE at the address https://ide.c9.io/alexpop/underscorejs-examples from the collections folder.

(For more resources related to this topic, see here.)

Key Underscore functions – each, map, and reduce

This flexible approach means that some Underscore functions can operate over collections: an Underscore-specific term for arrays, array-like objects, and objects (where the collection represents the object properties). We will refer to the elements within these collections as collection items. By providing functions that operate over object properties, Underscore expands JavaScript's reflection-like capabilities. Reflection is a programming feature for examining the structure of a computer program, especially during program execution.

JavaScript is a dynamic language without static type system support (as of ES6). This makes it convenient to use a technique named duck typing when working with objects that share similar behaviors. Duck typing is a programming technique used in dynamic languages where objects are identified through their structure, represented by properties and methods, rather than their type (the name of duck typing is derived from the phrase "if it walks like a duck, swims like a duck, and quacks like a duck, then it is a duck"). Underscore itself uses duck typing to assert that an object is an array by checking for a property called length of type Number.

Applying reflection techniques

We will build an example that demonstrates duck typing and reflection techniques through a function that extracts object properties so that they can be persisted to a relational database. Usually a relational database stores objects represented as a data row with column types that map to regular SQL data types. We will use the _.each() function to iterate over object properties and extract those of type boolean, number, string, and Date, as they can easily be mapped to SQL data types, and ignore everything else:

var propertyExtractor = (function() {
  "use strict";
  return {
    extractStorableProperties: function(source) {
      var storableProperties = {};
      if (!source || source.id !== +source.id) {
        return storableProperties;
      }
      _.each(source, function(value, key) {
        var isDate = typeof value === 'object' && value instanceof Date;
        if (isDate || typeof value === 'boolean' || typeof value === 'number' || typeof value === 'string') {
          storableProperties[key] = value;
        }
      });
      return storableProperties;
    }
  };
}());

You can find the example in the propertyExtractor.js file within the each-with-properties-and-context folder from the source code for this article. The first highlighted code snippet checks whether the object passed to the extractStorableProperties() function has a property called id that is a number. The + sign converts the id property to a number, and the non-identity operator !== compares the result of this conversion with the unconverted original value. The non-identity operator returns true only if the types of the compared operands are different, or if they are of the same type and have different values.
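A quick sketch of how this guard behaves with a few hypothetical inputs (plain JavaScript, nothing Underscore-specific):

var a = { id: 2 };
var b = { id: "2" };
var c = {};
console.log(a.id !== +a.id); // false: 2 === +2, so a passes the guard
console.log(b.id !== +b.id); // true: "2" !== 2 (different types)
console.log(c.id !== +c.id); // true: undefined !== NaN, so c is rejected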
This was a duck typing technique used by Underscore up until version 1.7 to assert whether it deals with an array-like instance or an object instance in its collection-related functions. Underscore collection-related functions operate over array-like objects, as they do not strictly check for the built-in Array object. These functions can also work with the arguments object or the HTML DOM NodeList objects.

The last highlighted code snippet is the _.each() function, which operates over object properties using an iteration function that receives the property value as its first argument and the property name as the optional second argument. If a property has a null or undefined value, it will not appear in the returned object. The extractStorableProperties() function will return a new object with all the storable properties. The return value is used in the test specifications to assert that, given a sample object, the function behaves as expected:

describe("Given propertyExtractor", function() {
  describe("when calling extractStorableProperties()", function() {
    var storableProperties;
    beforeEach(function() {
      var source = {
        id: 2,
        name: "Blue lamp",
        description: null,
        ui: undefined,
        price: 10,
        purchaseDate: new Date(2014, 10, 1),
        isInUse: true,
      };
      storableProperties = propertyExtractor.extractStorableProperties(source);
    });
    it("then the property count should be correct", function() {
      expect(Object.keys(storableProperties).length).toEqual(5);
    });
    it("then the 'price' property should be correct", function() {
      expect(storableProperties.price).toEqual(10);
    });
    it("then the 'description' property should not be defined", function() {
      expect(storableProperties.description).toEqual(undefined);
    });
  });
});

Notice how we used the propertyExtractor global instance to access the function under test, and then we used the ES5 function Object.keys to assert that the number of returned properties has the correct size. In a production-ready application, we need to ensure that global object names do not clash, among other best practices. You can find the test specification in the spec/propertyExtractorSpec.js file and execute the tests by browsing the SpecRunner.html file from the example source code folder. There is also an index.html file that will display the results of the example rendered in the browser using the index.js file.

Manipulating the this variable

Many Underscore functions have a similar signature with _.each(list, iteratee, [context]), where the optional context parameter will be used to set the this value for the iteratee function when it is called for each collection item. In JavaScript, the built-in this variable will be different depending on the context where it is used. When the this variable is used in the global scope context, and in a browser environment, it will return the native window object instance. If this is used in a function scope, then the variable will have different values:

If the function is an object method or an object constructor, then this will return the current object instance.
Here is a short example code for this scenario:

var item1 = {
  id: 1,
  name: "Item1",
  getInfo: function() {
    return "Object: " + this.id + "-" + this.name;
  }
};
console.log(item1.getInfo()); // -> "Object: 1-Item1"

If the function does not belong to an object, then this will be undefined in JavaScript strict mode. In non-strict mode, this will return its global scope value.

With a library such as Underscore that favors a functional style, we need to ensure that the functions used as parameters are using the this variable correctly. Let's assume that you have a function that references this (maybe it was used as an object method) and you want to use it with one of the Underscore functions such as _.each. You can still use the function as is and provide the desired this value as the context parameter value when calling _.each. I have rewritten the previous example function to showcase the use of the context parameter:

var propertyExtractor = (function() {
  "use strict";
  return {
    extractStorablePropertiesWithThis: function(source) {
      var storableProperties = {};
      if (!source || source.id !== +source.id) {
        return storableProperties;
      }
      _.each(source, function(value, key) {
        var isDate = typeof value === 'object' && value instanceof Date;
        if (isDate || typeof value === 'boolean' || typeof value === 'number' || typeof value === 'string') {
          this[key] = value;
        }
      }, storableProperties);
      return storableProperties;
    }
  };
}());

The first highlighted snippet shows the use of this, which is typical for an object method. The last highlighted snippet shows the context parameter value that this was set to. The storableProperties value will be passed as this for each iteratee function call. The test specifications for this example are identical to the previous example, and you can find them in the same each-with-properties-and-context folder from the source code for this article. You can use the optional context parameter in many of the Underscore functions where applicable; it is a useful technique when working with functions that rely on a specific this value.

Using map and reduce with object properties

In the previous example, we had some user interface-specific code in the index.js file that was tasked with displaying the results of the propertyExtractor.extractStorableProperties() call in the browser. Let's pull this functionality into another example and imagine that we need a new function that, given an object, will transform its properties into a format suitable for displaying in a browser by returning an array of formatted text for each property. To achieve this, we will use the Underscore _.map() function over object properties, as demonstrated in the next example:

var propertyFormatter = (function() {
  "use strict";
  return {
    extractPropertiesForDisplayAsArray: function(source) {
      if (!source || source.id !== +source.id) {
        return [];
      }
      return _.map(source, function(value, key) {
        var isDate = typeof value === 'object' && value instanceof Date;
        if (isDate || typeof value === 'boolean' || typeof value === 'number' || typeof value === 'string') {
          return "Property: " + key + " of type: " + typeof value + " has value: " + value;
        }
        return "Property: " + key + " cannot be displayed.";
      });
    }
  };
}());

With Underscore, we can write compact and expressive code that manipulates these properties with little effort.
The test specifications for the extractPropertiesForDisplayAsArray() function use Jasmine regular expression matchers to assert the test conditions in the highlighted code snippets from the following example:

describe("Given propertyFormatter", function() {
  describe("when calling extractPropertiesForDisplayAsArray()", function() {
    var propertiesForDisplayAsArray;
    beforeEach(function() {
      var source = {
        id: 2,
        name: "Blue lamp",
        description: null,
        ui: undefined,
        price: 10,
        purchaseDate: new Date(2014, 10, 1),
        isInUse: true,
      };
      propertiesForDisplayAsArray = propertyFormatter.extractPropertiesForDisplayAsArray(source);
    });
    it("then the returned property count should be correct", function() {
      expect(propertiesForDisplayAsArray.length).toEqual(7);
    });
    it("then the 'price' property should be displayed", function() {
      expect(propertiesForDisplayAsArray[4]).toMatch("price.+10");
    });
    it("then the 'description' property should not be displayed", function() {
      expect(propertiesForDisplayAsArray[2]).toMatch("cannot be displayed");
    });
  });
});

The following example shows how _.reduce() is used to manipulate object properties. It will transform the properties of an object into a format suitable for browser display by returning a string value that contains all the properties in a convenient format:

extractPropertiesForDisplayAsString: function(source) {
  if (!source || source.id !== +source.id) {
    return "";
  }
  return _.reduce(source, function(memo, value, key) {
    if (memo && memo !== "") {
      memo += "<br/>";
    }
    var isDate = typeof value === 'object' && value instanceof Date;
    if (isDate || typeof value === 'boolean' || typeof value === 'number' || typeof value === 'string') {
      return memo + "Property: " + key + " of type: " + typeof value + " has value: " + value;
    }
    return memo + "Property: " + key + " cannot be displayed.";
  }, "");
}

The example is almost identical to the previous one, with the exception of the memo accumulator used to build the returned string value. The test specifications for the extractPropertiesForDisplayAsString() function use a regular expression matcher and can be found in the spec/propertyFormatterSpec.js file:

describe("when calling extractPropertiesForDisplayAsString()", function() {
  var propertiesForDisplayAsString;
  beforeEach(function() {
    var source = {
      id: 2,
      name: "Blue lamp",
      description: null,
      ui: undefined,
      price: 10,
      purchaseDate: new Date(2014, 10, 1),
      isInUse: true,
    };
    propertiesForDisplayAsString = propertyFormatter.extractPropertiesForDisplayAsString(source);
  });
  it("then the returned string has expected length", function() {
    expect(propertiesForDisplayAsString.length).toBeGreaterThan(0);
  });
  it("then the 'price' property should be displayed", function() {
    expect(propertiesForDisplayAsString).toMatch("<br/>Property: price of type: number has value: 10<br/>");
  });
});

The examples from this subsection can be found within the map.reduce-with-properties folder from the source code for this article.

Searching and filtering

The _.find(list, predicate, [context]) function is part of Underscore's comprehensive functionality for searching and filtering collections represented by object properties and array-like objects. We will make a distinction between search and filter functions, with the former tasked with finding one item in a collection and the latter tasked with retrieving a subset of the collection (although sometimes you will find the distinction between these functions thin and blurry).
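To make this distinction concrete, here is a minimal sketch using an ad hoc number array:

var isEven = function(n) { return n % 2 === 0; };
_.find([1, 2, 3, 4], isEven);   // -> 2, the first item that matches
_.filter([1, 2, 3, 4], isEven); // -> [2, 4], every item that matches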
We will revisit the find function and the other search- and filtering-related functions using an example with slightly more diverse data that is suitable for database persistence. We will use the problem domain of a bicycle rental shop and build an array of bicycle objects with the following structure:

var getBicycles = function() {
  return [{
    id: 1,
    name: "A fast bike",
    type: "Road Bike",
    quantity: 10,
    rentPrice: 20,
    dateAdded: new Date(2015, 1, 2)
  }, {
    ...
  }, {
    id: 12,
    name: "A clown bike",
    type: "Children Bike",
    quantity: 2,
    rentPrice: 12,
    dateAdded: new Date(2014, 11, 1)
  }];
};

Each bicycle object has an id property, and we will use the propertyFormatter object built in the previous section to display the example results in the browser for your convenience. The code was shortened here for brevity (you can find its full version alongside the other examples from this section within the searching and filtering folders from the source code for this article). All the examples are covered by tests, and these are the recommended starting points if you want to explore them in detail.

Searching

For the first example of this section, we will define a bicycle-related requirement where we need to search for a bicycle of a specific type and with a rental price under a maximum value. Compared to the previous _.find() example, we will start by writing the test specifications first, for functionality that is yet to be implemented. This is a test-driven development approach where we define the acceptance criteria for the function under test first, followed by the actual implementation. Writing the tests first forces us to think about what the code should do, rather than how it should do it, and this helps eliminate waste by writing only the code required to make the tests pass.

Underscore find

The test specifications for our initial requirement are as follows:

describe("Given bicycleFinder", function() {
  describe("when calling findBicycle()", function() {
    var bicycle;
    beforeEach(function() {
      bicycle = bicycleFinder.findBicycle("Urban Bike", 16);
    });
    it("then it should return an object", function() {
      expect(bicycle).toBeDefined();
    });
    it("then the 'type' property should be correct", function() {
      expect(bicycle.type).toEqual("Urban Bike");
    });
    it("then the 'rentPrice' property should be correct", function() {
      expect(bicycle.rentPrice).toEqual(15);
    });
  });
});

The highlighted function call bicycleFinder.findBicycle() should return one bicycle object of the expected type and price, as asserted by the tests. Here is the implementation that satisfies the test specifications:

var bicycleFinder = (function() {
  "use strict";
  var getBicycles = function() {
    return [{
      id: 1,
      name: "A fast bike",
      type: "Road Bike",
      quantity: 10,
      rentPrice: 20,
      dateAdded: new Date(2015, 1, 2)
    }, {
      ...
    }, {
      id: 12,
      name: "A clown bike",
      type: "Children Bike",
      quantity: 2,
      rentPrice: 12,
      dateAdded: new Date(2014, 11, 1)
    }];
  };
  return {
    findBicycle: function(type, maxRentPrice) {
      var bicycles = getBicycles();
      return _.find(bicycles, function(bicycle) {
        return bicycle.type === type && bicycle.rentPrice <= maxRentPrice;
      });
    }
  };
}());

The code returns the first bicycle that satisfies the search criteria, ignoring the rest of the bicycles that might meet the same criteria. You can browse the index.html file from the searching folder within the source code for this article to see the result of calling the bicycleFinder.findBicycle() function displayed in the browser via the propertyFormatter object.
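It is also worth noting what _.find() returns when no item matches; a quick sketch follows (the "Tandem" search criteria are hypothetical ones that match none of the sample bicycles):

var bike = bicycleFinder.findBicycle("Urban Bike", 16);
console.log(bike.rentPrice); // -> 15, the first match from the sample data
var missing = bicycleFinder.findBicycle("Tandem", 1);
console.log(missing); // -> undefined, as no item satisfies the predicate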
Underscore some

There is a function closely related to _.find(), with the signature _.some(list, [predicate], [context]). This function will return true if at least one item of the list collection satisfies the predicate function. The predicate parameter is optional, and if it is not specified, the _.some() function will return true if at least one item of the collection is not null. This makes the function a good candidate for implementing guard clauses. A guard clause is a function that ensures that a variable (usually a parameter) satisfies a specific condition before it is used any further. The next example shows how _.some() is used to perform checks that are typical for a guard clause:

var list1 = [];
var list2 = [null, , undefined, {}];
var object1 = {};
var object2 = {
  property1: null,
  property3: true
};
if (!_.some(list1) && !_.some(object1)) {
  alert("Collections list1 and object1 are not valid when calling _.some() over them.");
}
if (_.some(list2) && _.some(object2)) {
  alert("Collections list2 and object2 have at least one valid item and they are valid when calling _.some() over them.");
}

If you execute this code in a browser, you will see both alerts being displayed. The first alert gets triggered when an empty array or an object without any properties defined is found. The second alert appears when we have an array with at least one element that is not null and not undefined, or when we have an object that has at least one property that evaluates as true.

Going back to our bicycle data, we will define a new requirement to showcase the use of _.some() in this context. We will implement a function that ensures that we can find at least one bicycle of a specific type and with a maximum rent price. The code is very similar to the bicycleFinder.findBicycle() implementation, with the difference that the new function returns true if the specific bicycle is found (rather than the actual object):

hasBicycle: function(type, maxRentPrice) {
  var bicycles = getBicycles();
  return _.some(bicycles, function(bicycle) {
    return bicycle.type === type && bicycle.rentPrice <= maxRentPrice;
  });
}

You can find the test specifications for this function in the spec/bicycleFinderSpec.js file from the searching example folder.
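As a usage sketch, hasBicycle() lends itself to a guard clause before any rental logic runs (the surrounding function is hypothetical):

function rentBicycle(type, maxRentPrice) {
  if (!bicycleFinder.hasBicycle(type, maxRentPrice)) {
    throw new Error("No " + type + " available at or under " + maxRentPrice);
  }
  // ...the actual rental logic would continue here
}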
Underscore findWhere

Another function similar to _.find() has the signature _.findWhere(list, properties). It compares the property key-value pairs of each collection item from list with the property key-value pairs found on the properties object parameter. Usually, the properties parameter is an object literal that contains a subset of the properties of a collection item. The _.findWhere() function is useful when we need to extract a collection item matching an exact value, compared to _.find(), which can extract a collection item that matches a range of values or more complex criteria. To showcase the function, we will implement a requirement that needs to search for a bicycle with a specific id value. This is how the test specifications look:

describe("when calling findBicycleById()", function() {
  var bicycle;
  beforeEach(function() {
    bicycle = bicycleFinder.findBicycleById(6);
  });
  it("then it should return an object", function() {
    expect(bicycle).toBeDefined();
  });
  it("then the 'id' property should be correct", function() {
    expect(bicycle.id).toEqual(6);
  });
});

And the next code snippet from the bicycleFinder.js file contains the actual implementation:

findBicycleById: function(id) {
  var bicycles = getBicycles();
  return _.findWhere(bicycles, { id: id });
}

Underscore contains

In a similar vein to the _.some() function, there is a _.contains(list, value) function that will return true if there is at least one item from the list collection that is equal to the value parameter. The equality check is based on the strict comparison operator ===, where the operands are checked for both type and value equality. We will implement a function that checks whether a bicycle with a specific id value exists in our collection:

hasBicycleWithId: function(id) {
  var bicycles = getBicycles();
  var bicycleIds = _.pluck(bicycles, "id");
  return _.contains(bicycleIds, id);
}

Notice how the _.pluck(list, propertyName) function was used to create an array that stores the id property value of each collection item. In its implementation, _.pluck() is actually using _.map(), acting like a shortcut function for it.
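The shortcut relationship is easy to see: given the bicycles array, the following two calls produce the same array of ids, and _.pluck() simply saves writing the trivial iteratee:

_.pluck(bicycles, "id");
_.map(bicycles, function(bicycle) { return bicycle.id; });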
Filtering

As we mentioned at the beginning of this section, Underscore provides powerful filtering functions, which are usually tasked with working on a subsection of a collection. We will reuse the same example data as before, and we will build some new functions to explore this functionality.

Underscore filter

We will start by defining a new requirement for our data, where we need to build a function that retrieves all bicycles of a specific type and with a maximum rent price. This is how the test specifications look for the yet-to-be-implemented function bicycleFinder.filterBicycles(type, maxRentPrice):

describe("when calling filterBicycles()", function() {
  var bicycles;
  beforeEach(function() {
    bicycles = bicycleFinder.filterBicycles("Urban Bike", 16);
  });
  it("then it should return two objects", function() {
    expect(bicycles).toBeDefined();
    expect(bicycles.length).toEqual(2);
  });
  it("then the 'type' property should be correct", function() {
    expect(bicycles[0].type).toEqual("Urban Bike");
    expect(bicycles[1].type).toEqual("Urban Bike");
  });
  it("then the 'rentPrice' property should be correct", function() {
    expect(bicycles[0].rentPrice).toEqual(15);
    expect(bicycles[1].rentPrice).toEqual(14);
  });
});

The test expectations assume that the function under test, filterBicycles(), returns an array, and they assert against each element of this array. To implement the new function, we will use the _.filter(list, predicate, [context]) function, which returns an array with all the items from the list collection that satisfy the predicate function. Here is our example implementation code:

filterBicycles: function(type, maxRentPrice) {
  var bicycles = getBicycles();
  return _.filter(bicycles, function(bicycle) {
    return bicycle.type === type && bicycle.rentPrice <= maxRentPrice;
  });
}

The usage of the _.filter() function is very similar to the _.find() function, with the only difference being the return type of these functions. You can find this example together with the rest of the examples from this subsection within the filtering folder from the source code for this article.
Underscore where

Underscore defines a shortcut function for _.filter(), which is _.where(list, properties). This function is similar to the _.findWhere() function, and it uses the properties object parameter to compare and retrieve all the items from the list collection with matching properties. To showcase the function, we defined a new requirement for our example data where we need to retrieve all bicycles of a specific type. This is the code that implements the requirement:

filterBicyclesByType: function(type) {
  var bicycles = getBicycles();
  return _.where(bicycles, { type: type });
}

By using _.where(), we are in fact using a more compact and expressive version of _.filter() in scenarios where we need to perform exact value matches.

Underscore reject and partition

Underscore provides a useful function that is the opposite of _.filter() and has a similar signature: _.reject(list, predicate, [context]). Calling the function will return an array of values from the list collection that do not satisfy the predicate function. To show its usage, we will implement a function that retrieves all bicycles with a rental price less than or equal to a given value. Here is the function implementation:

getAllBicyclesForSetRentPrice: function(setRentPrice) {
  var bicycles = getBicycles();
  return _.reject(bicycles, function(bicycle) {
    return bicycle.rentPrice > setRentPrice;
  });
}

Using the _.filter() function alongside the _.reject() function with the same list collection and predicate function allows us to partition the collection into two arrays. One array holds items that do satisfy the predicate function, while the other holds items that do not. Underscore has a more convenient function that achieves the same result, and this is _.partition(list, predicate). It returns an array that has two array elements: the first has the values that would be returned by calling _.filter() using the same input parameters, and the second has the values for calling _.reject().

Underscore every

We mentioned _.some() as being a great function for implementing guard clauses. It is also worth mentioning another closely related function, _.every(list, [predicate], [context]). The function will check every item of the list collection and will return true if every item satisfies the predicate function or if list is null, undefined, or empty. If the predicate function is not specified, the value of each item will be evaluated instead. If we use the same data from the guard clause example for _.some(), we will get the opposite results, as shown in the next example:

var list1 = [];
var list2 = [null, , undefined, {}];
var object1 = {};
var object2 = {
  property1: null,
  property3: true
};
if (_.every(list1) && _.every(object1)) {
  alert("Collections list1 and object1 are valid when calling _.every() over them.");
}
if (!_.every(list2) && !_.every(object2)) {
  alert("Collections list2 and object2 do not have all items valid, so they are not valid when calling _.every() over them.");
}

To ensure a collection is not null, undefined, or empty and that each item is also not null or undefined, we should use both _.some() and _.every() as part of the same check, as shown in the next example:

var list1 = [{}];
var object1 = { property1: {} };
if (_.every(list1) && _.every(object1) && _.some(list1) && _.some(object1)) {
  alert("Collections list1 and object1 are valid when calling both _.some() and _.every() over them.");
}

If the list1 object is an empty array or an empty object literal, calling _.every() on it returns true while calling _.some() returns false, hence the need to use both functions when validating a collection. These code examples demonstrate how you can build your own guard clauses or data validation rules by using simple Underscore functions.

Summary

In this article, we explored many of the collection-specific functions provided by Underscore and demonstrated additional functionality. We then continued with the searching and filtering functions.

Resources for Article:

Further resources on this subject:
Packaged Elegance [article]
Marshalling Data Services with Ext.Direct [article]
Understanding and Developing Node Modules [article]
Deploying on your own server

Packt
30 Sep 2015
16 min read
In this article by Jack Stouffer, the author of the book Mastering Flask, you will learn how to deploy and host your application on the different options available, along with the advantages and disadvantages related to them.

The most common way to deploy any web app is to run it on a server that you have control over. Control in this case means access to the terminal on the server with an administrator account. This type of deployment gives you the most freedom of all the choices, as it allows you to install any program or tool you wish. This is in contrast to other hosting solutions, where the web server and database are chosen for you. This type of deployment also happens to be the least expensive option.

The downside to this freedom is that you take on the responsibility of keeping the server up, backing up user data, keeping the software on the server up to date to avoid security issues, and so on. Entire books have been written on good server management, so if this is not a responsibility that you believe you or your company can handle, it would be best to choose one of the other deployment options.

This section will be based on a Debian Linux-based server, as Linux is far and away the most popular OS for running web servers, and Debian is the most popular Linux distro (a particular combination of software and the Linux kernel released as a package). Any OS with Bash and a program called SSH (which will be introduced in the next section) will work for this article; the only differences will be the command-line programs used to install software on the server.

(For more resources related to this topic, see here.)

Each of these web servers will use a protocol named Web Server Gateway Interface (WSGI), which is a standard designed to allow Python web applications to easily communicate with web servers. We will never work with WSGI directly. However, most of the web server interfaces we will be using have WSGI in their name, which can be confusing if you don't know what the name means.

Pushing code to your server with fabric

To automate the process of setting up and pushing our application code to the server, we will use a Python tool called fabric. Fabric is a command-line program that reads and executes Python scripts on remote servers using a tool called SSH. SSH is a protocol that allows a user of one computer to remotely log in to another computer and execute commands on the command line, provided that the user has an account on the remote machine.

To install fabric, we will use pip:

$ pip install fabric

Fabric commands are collections of command-line programs to be run on the remote machine's shell, in this case, Bash. We are going to make three different commands: one to run our unit tests, one to set up a brand new server to our specifications, and one to have the server update its copy of the application code with git. We will store these commands in a new file at the root of our project directory called fabfile.py.

As it's the easiest to create, let's make the test command first:

from fabric.api import local

def test():
    local('python -m unittest discover')

To run this function from the command line, we can use fabric's command-line interface by passing the name of the command to run:

$ fab test
[localhost] local: python -m unittest discover
.....
---------------------------------------------------------------------
Ran 5 tests in 6.028s

OK

Fabric has three main commands: local, run, and sudo. The local function, as seen in the preceding function, runs commands on the local computer. The run and sudo functions run commands on a remote machine, but sudo runs commands as an administrator. All of these functions notify fabric whether the command ran successfully or not. If a command didn't run successfully, meaning in this case that our tests failed, any other commands in the function will not be run. This is useful for our commands because it allows us to force ourselves not to push any code to the server that does not pass our tests.
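For example, a hypothetical helper like the following would execute on the remote machine defined later in env.hosts (disk_usage is not part of our fabfile; it only illustrates run versus sudo):

from fabric.api import run, sudo

def disk_usage():
    run("df -h")            # runs on the remote host as the logged-in user
    sudo("apt-get update")  # runs on the remote host with root privileges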
Now we need to create the command to set up a new server from scratch. What this command will do is install the software our production environment needs as well as download the code from our centralized git repository. It will also create a new user that will act as the runner of the web server as well as the owner of the code repository.

Do not run your web server or have your code deployed by the root user. This opens your application to a whole host of security vulnerabilities.

This command will differ based on your operating system, and we will be adding to it in the rest of the article based on which server you choose:

from fabric.api import env, local, run, sudo, cd

env.hosts = ['deploy@[your IP]']

def upgrade_libs():
    sudo("apt-get update")
    sudo("apt-get upgrade")

def setup():
    test()
    upgrade_libs()

    # necessary to install many Python libraries
    sudo("apt-get install -y build-essential")
    sudo("apt-get install -y git")
    sudo("apt-get install -y python")
    sudo("apt-get install -y python-pip")
    # necessary to install many Python libraries
    sudo("apt-get install -y python-all-dev")

    run("useradd -d /home/deploy/ deploy")
    run("gpasswd -a deploy sudo")

    # allows Python packages to be installed by the deploy user
    sudo("chown -R deploy /usr/local/")
    sudo("chown -R deploy /usr/lib/python2.7/")

    run("git config --global credential.helper store")

    with cd("/home/deploy/"):
        run("git clone [your repo URL]")

    with cd('/home/deploy/webapp'):
        run("pip install -r requirements.txt")
        run("python manage.py createdb")

There are two new fabric features in this script. One is the env.hosts assignment, which tells fabric the user and IP address of the machine it should be logging in to. Second, there is the cd function used in conjunction with the with keyword, which executes any functions in the context of that directory instead of the home directory of the deploy user. The line that modifies the git configuration is there to tell git to remember your repository's username and password, so you do not have to enter them every time you wish to push code to the server. Also, before the server is set up, we make sure to update the server's software to keep it up to date.

Finally, we have the function to push our new code to the server. In time, this command will also restart the web server and reload any configuration files that come from our code. But this depends on the server you choose, so it is filled out in the subsequent sections:

def deploy():
    test()
    upgrade_libs()
    with cd('/home/deploy/webapp'):
        run("git pull")
        run("pip install -r requirements.txt")

So, if we were to begin working on a new server, all we would need to do to set it up is to run the following commands:

$ fab setup
$ fab deploy

Running your web server with supervisor

Now that we have automated our updating process, we need some program on the server to make sure that our web server, and database if you aren't using SQLite, is running.
To do this, we will use a simple program called supervisor. All that supervisor does is automatically run command-line programs in background processes and allow you to see the status of running programs. Supervisor also monitors all of the processes it's running, and if a process dies, it tries to restart it.

To install supervisor, we need to add it to the setup command in our fabfile.py:

def setup():
    …
    sudo("apt-get install -y supervisor")

To tell supervisor what to do, we need to create a configuration file and then copy it to the /etc/supervisor/conf.d/ directory of our server during the deploy fabric command. Supervisor will load all of the files in this directory when it starts and attempt to run them. In a new file in the root of our project directory named supervisord.conf, add the following:

[program:webapp]
command=
directory=/home/deploy/webapp
user=deploy

[program:rabbitmq]
command=rabbitmq-server
user=deploy

[program:celery]
command=celery worker -A celery_runner
directory=/home/deploy/webapp
user=deploy

This is the bare minimum configuration needed to get a web server up and running, but supervisor has a lot more configuration options. To view all of the customizations, go to the supervisor documentation at http://supervisord.org/.

This configuration tells supervisor to run a command in the context of /home/deploy/webapp under the deploy user. The right-hand side of the command value is empty because it depends on which server you are running and will be filled in for each section.

Now we need to add a sudo call in the deploy command to copy this configuration file to the /etc/supervisor/conf.d/ directory:

def deploy():
    …
    with cd('/home/deploy/webapp'):
        …
        sudo("cp supervisord.conf /etc/supervisor/conf.d/webapp.conf")

    sudo('service supervisor restart')

A lot of projects just create the files on the server and forget about them, but having the configuration file stored in our git repository and copied on every deployment gives several advantages. First, it means that it is easy to revert changes using git if something goes wrong. Second, it means that we don't have to log in to our server in order to make changes to the files.

Do not use the Flask development server in production. Not only does it fail to handle concurrent connections, but it also allows arbitrary Python code to be run on your server.

Gevent

The simplest option to get a web server up and running is to use a Python library called gevent to host your application. Gevent is a Python library that adds an alternative way of doing concurrent programming outside of the Python threading library, called coroutines. Gevent has an interface for running WSGI applications that is both simple and has good performance. A simple gevent server can easily handle hundreds of concurrent users, which is more than 99 percent of websites on the Internet will ever have. The downside to this option is that its simplicity means a lack of configuration options. There is no way, for example, to add rate limiting to the server or to add HTTPS traffic. This deployment option is purely for sites that you don't expect to receive a huge amount of traffic. Remember YAGNI (short for You Aren't Gonna Need It); only upgrade to a different web server if you really need to.

Coroutines are a bit outside of the scope of this book, but a good explanation can be found at https://en.wikipedia.org/wiki/Coroutine.
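As a rough illustration of what gevent's coroutines (greenlets) do, consider this standalone sketch, which is not part of the deployment code:

import gevent

def background_task(n):
    gevent.sleep(1)  # cooperatively yields to other greenlets instead of blocking
    print("task %d finished" % n)

# All three tasks finish after roughly 1 second in total, not 3,
# because each sleep yields control to the others.
gevent.joinall([gevent.spawn(background_task, i) for i in range(3)])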
To install gevent, we will use pip:

$ pip install gevent

In a new file in the root of the project directory named gserver.py, add the following:

from gevent.wsgi import WSGIServer
from webapp import create_app

app = create_app('webapp.config.ProdConfig')

server = WSGIServer(('', 80), app)
server.serve_forever()

To run the server with supervisor, just change the command value to the following:

[program:webapp]
command=python gserver.py
directory=/home/deploy/webapp
user=deploy

Now when you deploy, gevent will be automatically installed for you by running your requirements.txt on every deployment, that is, if you are properly pip freeze-ing after every new dependency is added.

Tornado

Tornado is another very simple way to deploy WSGI apps purely with Python. Tornado is a web server that is designed to handle thousands of simultaneous connections. If your application needs real-time data, Tornado also supports websockets for continuous, long-lived connections to the server.

Do not use Tornado in production on a Windows server. The Windows version of Tornado is not only much slower, but it is considered beta-quality software.

To use Tornado with our application, we will use Tornado's WSGIContainer in order to wrap the application object and make it Tornado-compatible. Then, Tornado will start to listen on port 80 for requests until the process is terminated. In a new file named tserver.py, add the following:

from tornado.wsgi import WSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from webapp import create_app

app = WSGIContainer(create_app("webapp.config.ProdConfig"))
http_server = HTTPServer(app)
http_server.listen(80)
IOLoop.instance().start()

To run Tornado with supervisor, just change the command value to the following:

[program:webapp]
command=python tserver.py
directory=/home/deploy/webapp
user=deploy

Nginx and uWSGI

If you need more performance or customization, the most popular way to deploy a Python web application is to use the web server Nginx as a frontend for the WSGI server uWSGI by using a reverse proxy. A reverse proxy is a program in networks that retrieves contents for a client from a server as if they were returned from the proxy itself.

Nginx and uWSGI are used in this way because we get the power of the Nginx frontend while having the customization of uWSGI.

Nginx is a very powerful web server that became popular by providing the best combination of speed and customization. Nginx is consistently faster than other web servers, such as Apache httpd, and has native support for WSGI applications. The way it achieves this speed is through several good architecture decisions, as well as the early decision not to try to cover a large number of use cases, as Apache does. Having a smaller feature set makes it much easier to maintain and optimize the code. From a programmer's perspective, it is also much easier to configure Nginx, as there is no giant default configuration file (httpd.conf) that needs to be overridden with .htaccess files in each of your project directories. One downside is that Nginx has a much smaller community than Apache, so if you have an obscure problem, you are less likely to find answers online. Also, it's possible that a feature most programmers are used to in Apache isn't supported in Nginx.

uWSGI is a web server that supports several different types of server interfaces, including WSGI.
uWSGI handles serving the application content, as well as things such as load balancing traffic across several different processes and threads. To install uWSGI, we will use pip in the following way:

$ pip install uwsgi

In order to run our application, uWSGI needs a file with an accessible WSGI application. In a new file named wsgi.py in the top level of the project directory, add the following:

from webapp import create_app

app = create_app("webapp.config.ProdConfig")

To test uWSGI, we can run it from the command line with the following:

$ uwsgi --socket 127.0.0.1:8080 --wsgi-file wsgi.py --callable app --processes 4 --threads 2

If you are running this on your server, you should be able to access port 8080 and see your app (if you don't have a firewall, that is).

What this command does is load the app object from the wsgi.py file and make it accessible from localhost on port 8080. It also spawns four different processes with two threads each, which are automatically load balanced by a master process. This number of processes is overkill for the vast, vast majority of websites. To start off, use a single process with two threads and scale up from there.

Instead of adding all of the configuration options on the command line, we can create a text file to hold our configuration, which brings the same benefits for configuration that were listed in the section on supervisor. In a new file in the root of the project directory named uwsgi.ini, add the following:

[uwsgi]
socket = 127.0.0.1:8080
wsgi-file = wsgi.py
callable = app
processes = 4
threads = 2

uWSGI supports hundreds of configuration options, as well as several official and unofficial plugins. To leverage the full power of uWSGI, you can explore the documentation at http://uwsgi-docs.readthedocs.org/.

Let's run the server now from supervisor:

[program:webapp]
command=uwsgi uwsgi.ini
directory=/home/deploy/webapp
user=deploy

We also need to install Nginx during the setup function:

def setup():
    …
    sudo("apt-get install -y nginx")

Because we are installing Nginx from the OS's package manager, the OS will handle running Nginx for us.

At the time of writing, the Nginx version in the official Debian package manager is several years old. To install the most recent version, follow the instructions here: http://wiki.nginx.org/Install.

Next, we need to create an Nginx configuration file and then copy it to the /etc/nginx/sites-available/ directory when we push the code. In a new file in the root of the project directory named nginx.conf, add the following:

server {
    listen 80;
    server_name your_domain_name;

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:8080;
    }

    location /static {
        alias /home/deploy/webapp/webapp/static;
    }
}

What this configuration file does is tell Nginx to listen for incoming requests on port 80 and forward all requests to the WSGI application that is listening on port 8080. Also, it makes an exception for any requests for static files and instead sends those requests directly to the file system. Bypassing uWSGI for static files gives a great performance boost, as Nginx is really good at serving static files quickly.

Finally, in the fabfile.py file:

def deploy():
    …
    with cd('/home/deploy/webapp'):
        …
        sudo("cp nginx.conf "
             "/etc/nginx/sites-available/[your_domain]")
        sudo("ln -sf /etc/nginx/sites-available/[your_domain] "
             "/etc/nginx/sites-enabled/[your_domain]")

    sudo("service nginx restart")

Apache and uWSGI

Using Apache httpd with uWSGI has mostly the same setup.
Apache and uWSGI

Using Apache httpd with uWSGI has mostly the same setup. First off, we need an Apache configuration file, in a new file in the root of our project directory named apache.conf:

<VirtualHost *:80>
    <Location />
        ProxyPass / uwsgi://127.0.0.1:8080/
    </Location>
</VirtualHost>

This file just tells Apache to pass all requests on port 80 to the uWSGI web server listening on port 8080. However, this functionality requires an extra Apache plugin from uWSGI called mod_proxy_uwsgi. We can install this, along with Apache, in the setup command:

def setup():
    …
    sudo("apt-get install -y apache2")
    sudo("apt-get install -y libapache2-mod-proxy-uwsgi")

Finally, in the deploy command, we need to copy our Apache configuration file into Apache's configuration directory:

def deploy():
    …
    with cd('/home/deploy/webapp'):
        …
        sudo("cp apache.conf "
             "/etc/apache2/sites-available/[your_domain]")
        sudo("ln -sf /etc/apache2/sites-available/[your_domain] "
             "/etc/apache2/sites-enabled/[your_domain]")
        sudo("service apache2 restart")

Summary

In this article, you learned that there are many different options for hosting your application, each with its own pros and cons. Deciding on one depends on the amount of time and money you are willing to spend, as well as the total number of users you expect.

Resources for Article:

Further resources on this subject:
Handling sessions and users [article]
Snap – The Code Snippet Sharing Application [article]
Man, Do I Like Templates! [article]


Oracle API Management Implementation 12c

Packt
29 Sep 2015
5 min read
This article by Luis Augusto Weir, the author of the book Oracle API Management 12c Implementation, gives you a gist of what is covered in the book. At present, digital transformation is essential to any business strategy, regardless of the industry an organization belongs to. (For more resources related to this topic, see here.)

Companies that embark on a journey of digital transformation become able to create innovative and disruptive solutions, delivering a much richer, unified, and personalized user experience at a lower cost. These organizations can address customers dynamically and across a wide variety of channels, such as mobile applications, highly responsive websites, and social networks. Ultimately, companies that align their business models with digital innovation acquire a considerable competitive advantage over those that do not.

The main trigger for this transformation is the ability to expose and make available business information and key technological capabilities that are often buried in the organization's enterprise information systems (EIS), or in integration components that are only visible internally. In the digital economy, it is highly desirable to expose those assets in a standardized way through APIs, in a controlled, scalable, and secure environment. The lightweight nature of these APIs, and the ease of finding and using them, greatly facilitates their adoption as the essential mechanism for exposing and consuming various capabilities in a multichannel environment.

API Management is the discipline that governs the API development life cycle, defining the tools and processes needed to build, publish, and operate APIs, including the management of the developer communities around them.

Our recent book, Oracle API Management 12c Implementation (Luis Weir, Andrew Bell, Rolando Carrasco, Arturo Viveros), is a very comprehensive and detailed guide to implementing API Management in an organization. The book explains in great detail the relationship between this discipline and concepts such as SOA Governance and DevOps. The convergence of API Management with SOA and the governance of such services is addressed in particular, to explain and shape the concept of Application Services Governance (ASG). The book also features case studies based on real scenarios, with multiple examples demonstrating the correct definition and implementation of a robust API Management strategy supported by the Oracle solution.

The book begins by describing a number of key concepts about API Management and placing complementary disciplines, such as SOA Governance, DevOps, and Enterprise Architecture (EA), in context, in order to clear up any confusion about how these topics relate. All of these concepts are then put into practice through the case study of a named organization that has previously succeeded in implementing a governed service-oriented architecture and now has the need, and the opportunity, to extend its technology platform by implementing an API Management strategy.
Throughout the narrative of the case study, the book also describes:

The business requirements justifying the adoption of API Management
The potential impact of the proposed solution on the organization
The steps required to design and implement the strategy
The definition and implementation of a maturity assessment (API Readiness) and a gap analysis in terms of people, tools, and technology
The product evaluation and selection exercise, explaining the choice of Oracle as the most appropriate solution
The API Management implementation roadmap

In later chapters, the various steps needed to solve the proposed scenario are addressed one by one, by implementing the following reference architecture for API Management, based on the components of the Oracle solution: API Catalog, API Manager, and API Gateway.

In short, the book will enable the reader to acquire advanced knowledge of the following topics:

API Management: its definition, concepts, and objectives
Differences and similarities between API Management and SOA Governance; where and how these two disciplines converge in the concept of Application Services Governance (ASG), and how to define a framework aimed at ASG
The definition and implementation of a maturity assessment for API Management
Criteria for tool evaluation and selection; why Oracle API Management Suite?
Implementation of Oracle API Catalog (OAC), including OAC harvesting by bootstrapping, ANT scripts, and JDeveloper; the OAC console; user creation and management; the metadata API; API discovery; and how to extend the functionality of OAC through the REX API
Common challenges in API management, and implementation of Oracle API Manager (OAPIM), including the creation, publishing, monitoring, subscription, and life cycle management of APIs through the OAPIM portal
Common scenarios for the adoption and implementation of API Management, and how to address them
Implementation of Oracle API Gateway (OAG), including the creation of policies with different filters, OAuth authentication, integration with LDAP, SOAP/REST API conversion, and testing
Defining the deployment topology for Oracle API Management Suite
Installing and configuring OAC, OAPIM, and OAG 12c

Oracle API Management 12c Implementation is designed for the following audience: enterprise architects, solution architects, technical leads, and SOA and API professionals seeking to thoroughly understand, and successfully implement, the Oracle API Management solution.

Summary

In this article, we took a brief look at Oracle API Management 12c Implementation. More information is provided in the book itself.

Resources for Article:

Further resources on this subject:
Oracle 12c SQL and PL/SQL New Features [article]
Securing Data at Rest in Oracle 11g [article]
Getting Started with Oracle Primavera P6 [article]