
How-To Tutorials - Server-Side Web Development


Deploy Toshi Bitcoin Node with Docker on AWS

Alex Leishman
05 Aug 2015
8 min read
Toshi is an implementation of the Bitcoin protocol, written in Ruby and built by Coinbase in response to their fast growth and the need to build Bitcoin infrastructure at scale. This post covers how to deploy Toshi to an Amazon AWS instance with Redis and PostgreSQL using Docker, and how to query the resulting data to gain insights into the blockchain. To get the most out of this post you will need some basic familiarity with Linux, SQL, and AWS.

Most Bitcoin nodes run "Bitcoin Core", which is written in C++ and serves as the de facto standard implementation of the Bitcoin protocol. Its advantages are that it is fast for light-to-medium use and efficiently stores the transaction history of the network (the blockchain) in LevelDB, a key-value datastore developed at Google. It has wallet management features and an easy-to-use JSON RPC interface for communicating with other applications. However, Bitcoin Core has some shortcomings that make it difficult to use for wallet/address management in at-scale applications. Its database, although efficient, makes certain queries on the blockchain impossible or very difficult. For example, if you wanted to get the balance of an arbitrary bitcoin address, you would have to write a script to parse the blockchain separately to find the answer. Additionally, Bitcoin Core starts to slow down significantly when it has to manage and monitor large numbers of addresses (more than roughly 10^7). For a web app with hundreds of thousands of users, each regularly generating new addresses, Bitcoin Core is not ideal.

Toshi attempts to address the flexibility and scalability issues facing Bitcoin Core by parsing and storing the entire blockchain in an easily queried PostgreSQL database. Here is a list of tables in Toshi's DB: schema.txt. We will see the direct benefit of this structure when we start querying our data to gain insights from the blockchain. Since Toshi is written in Ruby, it has the added advantage of being developer friendly and easy to customize. The main downside of Toshi is that it needs roughly 10x more storage than Bitcoin Core, as storing and indexing the blockchain in a well-indexed relational DB requires significantly more disk space.

First we will create an instance on Amazon AWS. You will need at least 300 GB of storage for the Postgres database. Be sure to auto-assign a public IP and allow incoming TCP connections on port 5000, as this is how we will access the Toshi web interface. Once you get your instance up and running, SSH into the instance using the commands given by Amazon.

First we will set up a user for Toshi:

ubuntu@ip-172-31-62-77:~$ sudo adduser toshi
Adding user `toshi' ...
Adding new group `toshi' (1001) ...
Adding new user `toshi' (1001) with group `toshi' ...
Creating home directory `/home/toshi' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for toshi
Enter the new value, or press ENTER for the default
Full Name []:
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n] Y

Then we will add the new user to the sudo group and switch to that user:

ubuntu@ip-172-31-62-77:~$ sudo adduser toshi sudo
Adding user `toshi' to group `sudo' ...
Adding user toshi to group sudo
Done.
ubuntu@ip-172-31-62-77:~$ su - toshi
toshi@ip-172-31-62-77:~$

Next, we will install Docker and all of its dependencies through an automated script available on the Docker website. This will provision our instance with the necessary software packages.
toshi@ip-172-31-62-77:~$ curl -sSL https://get.docker.com/ubuntu/ | sudo sh
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir .....

Then we will clone the Toshi repo from GitHub and move into the new directory:

toshi@ip-172-31-62-77:~$ git clone https://github.com/coinbase/toshi.git
toshi@ip-172-31-62-77:~$ cd toshi/

Next, build the coinbase/toshi Docker image from the Dockerfile located in the /toshi directory. Don't forget the dot at the end of the command!

toshi@ip-172-31-62-77:~/toshi$ sudo docker build -t=coinbase/toshi .
Sending build context to Docker daemon 13.03 MB
Sending build context to Docker daemon
…
Removing intermediate container c15dd6c961c2
Step 3 : ADD Gemfile /toshi/Gemfile
INFO[0120] Error getting container dbc7c41625c49d99646e32c430b00f5d15ef867b26c7ca68ebda6aedebf3f465 from driver devicemapper: Error mounting '/dev/mapper/docker-202:1-524950-dbc7c41625c49d99646e32c430b00f5d15ef867b26c7ca68ebda6aedebf3f465' on '/var/lib/docker/devicemapper/mnt/dbc7c41625c49d99646e32c430b00f5d15ef867b26c7ca68ebda6aedebf3f465': no such file or directory

Note: you might see an 'Error getting container' message when this runs. If so, don't worry about it at this point.

Next, we will build and run our Redis and Postgres containers:

toshi@ip-172-31-62-77:~/toshi$ sudo docker run --name toshi_db -d postgres
toshi@ip-172-31-62-77:~/toshi$ sudo docker run --name toshi_redis -d redis

This will build and run Docker containers named toshi_db and toshi_redis based on the standard postgres and redis images pulled from Docker Hub. The '-d' flag indicates that the container will run in the background (daemonized). If you see an 'Error response from daemon: Cannot start container' error while running either of these commands, simply run 'sudo docker start toshi_redis' (or 'sudo docker start toshi_db') again.

To ensure that our containers are running properly, run:

toshi@ip-172-31-62-77:~$ sudo docker ps
CONTAINER ID  IMAGE            COMMAND                CREATED        STATUS        PORTS     NAMES
4de43ccc8e80  redis:latest     "/entrypoint.sh redi   7 minutes ago  Up 3 minutes  6379/tcp  toshi_redis
6de0418d4e91  postgres:latest  "/docker-entrypoint.   8 minutes ago  Up 2 minutes  5432/tcp  toshi_db

You should see both containers running, along with their port numbers. When we run our Toshi container we need to tell it where to find the Postgres and Redis containers, so we must find the toshi_db and toshi_redis IP addresses. Remember, we have not run a Toshi container yet; we have only built the image from the Dockerfile. You can think of a container as a running version of an image. To learn more about Docker, see the docs.

toshi@ip-172-31-62-77:~$ sudo docker inspect toshi_db | grep IPAddress
"IPAddress": "172.17.0.3",
toshi@ip-172-31-62-77:~$ sudo docker inspect toshi_redis | grep IPAddress
"IPAddress": "172.17.0.2",

Now we have everything we need to get our Toshi container up and running. To do this, run:

sudo docker run --name toshi_main -d -p 5000:5000 -e REDIS_URL=redis://172.17.0.2:6379 -e DATABASE_URL=postgres://postgres:@172.17.0.3:5432 -e TOSHI_ENV=production coinbase/toshi sh -c 'bundle exec rake db:create db:migrate; foreman start'

Be sure to replace the IP addresses in the above command with your own. This creates a container named 'toshi_main', runs it as a daemon (-d), and sets three environment variables in the container (-e) which are required for Toshi to run. It also maps port 5000 inside the container to port 5000 of our host (-p).
Lastly, it runs a shell script in the container (sh -c) which creates and migrates the database, then starts the Toshi web server. To see that it has started properly, run:

toshi@ip-172-31-62-77:~$ sudo docker ps
CONTAINER ID  IMAGE                  COMMAND               CREATED         STATUS         PORTS                   NAMES
017c14cbf432  coinbase/toshi:latest  "sh -c 'bundle exec   6 seconds ago   Up 5 seconds   0.0.0.0:5000->5000/tcp  toshi_main
4de43ccc8e80  redis:latest           "/entrypoint.sh redi  43 minutes ago  Up 38 minutes  6379/tcp                toshi_redis
6de0418d4e91  postgres:latest        "/docker-entrypoint.  43 minutes ago  Up 38 minutes  5432/tcp                toshi_db

If you have set your AWS security settings properly, you should be able to see the syncing progress of Toshi in your browser. Find your instance's public IP address from the AWS console and then point your browser there using port 5000, for example: http://54.174.195.243:5000/. You can also see the logs of our Toshi container by running:

toshi@ip-172-31-62-77:~$ sudo docker logs -f toshi_main

That's it! We're all up and running. Be prepared to wait a long time for the blockchain to finish syncing. This could take more than a week or two, but you can start playing around with the data right away through the GUI to get a sense of the power you now have.

About the Author

Alex Leishman is a software engineer who is passionate about Bitcoin and other digital currencies. He works at MaiCoin.com where he is helping to build the future of money.
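Once syncing has progressed, the well-indexed schema starts to pay off for the kind of blockchain queries mentioned at the start of the post. Purely as an illustration, here is a hedged sketch: the table and column names (blocks, transactions, blocks_transactions, time, branch) are assumptions based on a typical Toshi schema and may differ in your version, so check schema.txt before running anything like it.

# Open a psql shell inside the Postgres container:
toshi@ip-172-31-62-77:~$ sudo docker exec -it toshi_db psql -U postgres

-- Transactions confirmed per day over recent history (illustrative only):
SELECT date_trunc('day', to_timestamp(b.time)) AS day,
       count(*)                                AS tx_count
FROM   blocks b
JOIN   blocks_transactions bt ON bt.block_id = b.id
JOIN   transactions t         ON t.id = bt.transaction_id
WHERE  b.branch = 0            -- main chain only (assumption)
GROUP  BY 1
ORDER  BY 1 DESC
LIMIT  30;

Queries of this shape are exactly what is impractical against Bitcoin Core's LevelDB store, and they require nothing more than standard SQL here.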

Eloquent… without Laravel!

Packt
23 Jul 2015
9 min read
In this article by, Francesco Malatesta author of the book, Learning Laravel’s Eloquent, we will learn everything about Eloquent, starting from the very basics and going through models, relationships, and other topics. You probably started to like it and think about implementing it in your next project. In fact, creating an application without a single SQL query is tempting. Maybe you also showed it to your boss and convinced him/her to use it in your next production project. However, there is a little problem. Yeah, the next project isn't so new. It already exists, and, despite everything, it doesn't use Laravel! You start to shiver. This is so sad because you passed the last week studying this new ORM, a really cool one, and then moving forward. There is always a solution! You are a developer! Also, the solution is not so hard to find. If you want, you can use Eloquent without Laravel. Actually, Laravel is not a monolithic framework. It is made up of several, separate parts, which are combined together to build something greater. However, nothing prevents you from using only selected packages in another application. (For more resources related to this topic, see here.) So, what are we going to see in this article? First of all, we will explore the structure of the database package and see what is inside it. Then, you will learn how to install the illuminate/database package separately for your project and how to configure it for the first use. Then, you will encounter some examples. First of all, we will look at the Eloquent ORM. You will learn how to define models and use them. Having done this, as a little extra, I will show you how to use the Query Builder (remember that the "illuminate/database" package isn't just Eloquent). Maybe you would also enjoy the Schema Builder class. I will cover it, don't worry! We will cover the following: Exploring the directory structure Installing and configuring the database package Using the ORM Using the Query and Schema Builders Summary Exploring the directory structure As I mentioned before, the key step in order to use Eloquent in your application without Laravel is to use the "illuminate/database" package. So, before we install it, let's examine it a little. You can see the package contents here: https://github.com/illuminate/database. So, this is what you will probably see: Folder Description Capsule The capsule manager is a fundamental component. It instantiates the service container and loads some dependencies. Connectors The database package can communicate with various DB systems. For instance, SQLite, MySQL, or PostgreSQL. Every type of database has its own connector. This is the folder in which you will find them. Console The database package isn't just Eloquent with a bunch of connectors. In this specific folder, you will find everything related to console commands, such as artisan db:seed or artisan migrate. Eloquent Every single Eloquent class is placed here. Migrations Don't confuse this with the Console folder. Every class related to migrations is stored here. When you type artisan migrate in your terminal, you are calling a class that is placed here. Query The Query Builder is placed here. Schema Everything related to the Schema Builder is placed here. In the main folder, you will also find some other files. However, don't worry as you don't need to know what they are. 
If you open the composer.json file, take a look at the following "require" section: "require": { "php": ">=5.4.0", "illuminate/container": "5.1.*", "illuminate/contracts": "5.1.*", "illuminate/support": "5.1.*", "nesbot/carbon": "~1.0" }, As you can see, the database package has some prerequisites that you can't avoid. However, the container is quite small, and it is the same for contracts (just some interfaces) and "illuminate/support". Eloquent uses Carbon (https://github.com/briannesbitt/Carbon) to deal with dates in a smarter way. So, if you are seeing this for the first time and you are confused, don't worry! Everything is all right. Now that you know what you can find in this package, let's see how to install it and configure it for the first time. Installing and configuring the database package Let's start with the setup. First of all, we will install the package using composer as usual. After that, we will configure the capsule manager in order to get started. Installing the package Installing the "illuminate/database" package is really easy. All you have to do is to add "illuminate/database" to the "require" section of your composer.json file, like this: "require": {     "illuminate/database": "5.0.*",   }, Then type composer update in to your terminal, and wait for a few seconds. Another way is to include it with the shortcut in your project folder, obviously from the terminal: composer require illuminate/database No matter which method you chose, you just installed the package. Configuring the package Time to use the capsule manager! In your project, you will use something like this to get started: use Illuminate\Database\Capsule\Manager as Capsule;   $capsule = new Capsule;   $capsule->addConnection([ 'driver'   => 'mysql', 'host'     => 'localhost', 'database' => 'database', 'username' => 'root', 'password' => 'password', 'charset'   => 'utf8', 'collation' => 'utf8_unicode_ci', 'prefix'   => '', ]);     // Set the event dispatcher used by Eloquent models... (optional) use Illuminate\Events\Dispatcher; use Illuminate\Container\Container; $capsule->setEventDispatcher(new Dispatcher(new Container)); The config syntax I used is exactly the same you can find in the config/database.php configuration file. The only difference is that this time you are explicitly using an instance of the capsule manager in order to do everything. In the second part of the code, I am setting up the event dispatcher. You must do this if events are required from your project. However, events are not included by default in this package, so you will have to manually add the "illuminate/events" dependency to your composer.json file. Now, the final step! Add this code to your setup file: // Make this Capsule instance available globally via static methods... (optional) $capsule->setAsGlobal();   // Setup the Eloquent ORM... (optional; unless you've used setEventDispatcher()) $capsule->bootEloquent(); With setAsGlobal() called on the capsule manager, you can set it as a global component in order to use it with static methods. You may like it or not; the choice is yours. The final line starts up Eloquent, so you will need it. However, this is also an optional instruction. In some situations you may need the Query Builder only. Then there is nothing else to do! Your application is now configured with the database package (and Eloquent)! Using the ORM Using the Eloquent ORM in a non-Laravel application is not a big change. All you have to do is to declare your model as you are used to doing. 
Then, you need to call it and use it as you are used to. Here is a perfect example of what I am talking about:

use Illuminate\Database\Eloquent\Model;

class Book extends Model {

    // some attributes here...
    protected $table = 'my_books_table';

    // some scopes here...
    public function scopeNewest()
    {
        // query here...
    }

}

Exactly as you did with Laravel, the package you are using is the same. So, no worries! If you want to use the model you just created, then use the following:

$books = Book::newest()->take(5)->get();

This also applies to relationships, observers, and so on. Everything is the same. To use the database package and ORM exactly as you did in Laravel, remember to set up the project structure in a way that follows the PSR-4 autoloading convention.

Using the Query and Schema Builder

It's not just about the ORM; with the database package, you can also use the Query and the Schema Builders. Let's discover how!

The Query Builder

The Query Builder is also very easy to use. The only difference, this time, is that you are passing through the capsule manager object, like this:

$books = Capsule::table('books')
             ->where('title', '=', "Michael Strogoff")
             ->first();

However, the result is still the same. Also, if you like the DB facade in Laravel, you can use the capsule manager class in the same way:

$book = Capsule::select('select title, pages_count from books where id = ?', array(12));

The Schema Builder

Now, without Laravel, you don't have migrations. However, you can still use the Schema Builder without Laravel, like this:

Capsule::schema()->create('books', function($table){
    $table->increments('id');
    $table->string('title', 30);
    $table->integer('pages_count');
    $table->decimal('price', 5, 2);
    $table->text('description');
    $table->timestamps();
});

Previously, you used to call the create() method of the Schema facade. This time it is a little different: you use the create() method chained to the schema() method of the Capsule class. Obviously, you can use any Schema class method in this way. For instance, you could call something like the following:

Capsule::schema()->table('books', function($table){
    $table->string('title', 50)->change();
    $table->decimal('special_price', 5, 2);
});

And you are good to go! Remember that if you want to unlock some Schema Builder-specific features, you will need to install other dependencies. For example, do you want to rename a column? You will need the doctrine/dbal dependency package.

Summary

I decided to add this article because many people ask me how to use Eloquent without Laravel, mostly because they like the framework but can't migrate an already started project in its entirety. Also, I think that it's cool to know, in a certain sense, what you can find under the hood. It is always just about curiosity. Curiosity opens new paths, and you have a choice to solve a problem in a new and more elegant way. In these few pages, I just scratched the surface. I want to give you some advice: explore the code. The best way to write good code is to read good code.

Resources for Article: Further resources on this subject: Your First Application [article] Building a To-do List with Ajax [article] Laravel 4 - Creating a Simple CRUD Application in Hours [article]
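To see how the pieces above (installing the package, configuring the capsule manager, declaring a model, and using the Query Builder) fit together end to end, here is a minimal, self-contained bootstrap sketch. It is only an illustration under assumptions of my own: the file name bootstrap.php, the MySQL credentials, and the books table with price and created_at columns are placeholders, not code from the article.

<?php
// bootstrap.php -- minimal sketch; assumes `composer require illuminate/database`
// has already been run so that vendor/autoload.php exists.
require __DIR__ . '/vendor/autoload.php';

use Illuminate\Database\Capsule\Manager as Capsule;
use Illuminate\Database\Eloquent\Model;

$capsule = new Capsule;
$capsule->addConnection([
    'driver'    => 'mysql',
    'host'      => 'localhost',
    'database'  => 'database',
    'username'  => 'root',
    'password'  => 'password',
    'charset'   => 'utf8',
    'collation' => 'utf8_unicode_ci',
    'prefix'    => '',
]);
$capsule->setAsGlobal();   // enables static Capsule::table() / Capsule::schema() calls
$capsule->bootEloquent();  // boots the ORM so models can be used

// A model declared exactly as it would be inside Laravel.
class Book extends Model
{
    protected $table = 'books';
}

// Query through the ORM...
$latest = Book::orderBy('created_at', 'desc')->take(5)->get();

// ...or through the Query Builder via the capsule manager.
$cheap = Capsule::table('books')->where('price', '<', 10)->get();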

An Introduction to Mastering JavaScript Promises and Its Implementation in Angular.js

Packt
23 Jul 2015
21 min read
In this article by Muzzamil Hussain, the author of the book Mastering JavaScript Promises, introduces us to promises in JavaScript and its implementation in Angular.js. (For more resources related to this topic, see here.) For many of us who are working with JavaScript, we all know that working with JavaScript means you must have to be a master is asynchronous coding but this skill doesn't come easily. You have to understand callbacks and when you learn it, a sense of realization started to bother you that managing callbacks is not a very easy task, and it's really not an effective way of asynchronous programming. Those of you who already been through this experience, promises is not that new; even if you haven't used it in your recent project, but you would really want to go for it. For those of you who neither use any of callbacks or promises, understanding promises or seeking difference between callbacks and promise would be a hard task. Some of you have used promises in JavaScript during the use of popular and mature JavaScript libraries such as Node.js, jQuery, or WinRT. You are already aware of the advantages of promises and how it's helping out in making your work efficient and code look beautiful. For all these three classes of professionals, gathering information on promises and its implementation in different libraries is quite a task and much of the time you spent is on collecting the right information about how you can attach an error handler in promise, what is a deferred object, and how it can pass it on to different function. Possession of right information in the time you need is the best virtue one could ask for. Keeping all these elements in mind, we have written a book named Mastering JavaScript Promises. This book is all about JavaScript and how promises are implemented in some of the most renowned libraries of the world. This book will provide a foundation for JavaScript, and gradually, it will take you through the fruitful journey of learning promises in JavaScript. The composition of chapters in this book are engineered in such a way that it provides knowledge from the novice level to an advance level. The book covers a wide range of topics with both theoretical and practical content in place. You will learn about evolution of JavaScript, the programming models of different kinds, the asynchronous model, and how JavaScript uses it. The book will take you right into the implementation mode with a whole lot of chapters based on promises implementation of WinRT, Node.js, Angular.js, and jQuery. With easy-to-follow example code and simple language, you will absorb a huge amount information on this topic. Needless to say, books on such topics are in itself an evolutionary process, so your suggestions are more than welcome. Here are few extracts from the book to give you a glimpse of what we have in store for you in this book, but most of the part in this section will focus on Angular.js and how promises are implemented in it. Let's start our journey to this article with programming models. Models Models are basically templates upon which the logics are designed and fabricated within a compiler/interpreter of a programming language so that software engineers can use these logics in writing their software logically. Every programming language we use is designed on a particular programming model. Since software engineers are asked to solve a particular problem or to automate any particular service, they adopt programming languages as per the need. 
There is no set rule that assigns a particular language to create products. Engineers adopt any language based on the need. The asynchronous programming model Within the asynchronous programming model, tasks are interleaved with one another in a single thread of control. This single thread may have multiple embedded threads and each thread may contain several tasks linked up one after another. This model is simpler in comparison to the threaded case, as the programmers always know the priority of the task executing at a given slot of time in memory. Consider a task in which an OS (or an application within OS) uses some sort of a scenario to decide how much time is to be allotted to a task, before giving the same chance to others. The behavior of the OS of taking control from one task and passing it on to another task is called preempting. Promise The beauty of working with JavaScript's asynchronous events is that the program continues its execution, even when it doesn't have any value it needs to work that is in progress. Such scenarios are named as yet known values from unfinished work. This can make working with asynchronous events in JavaScript challenging. Promises are a programming construct that represents a value that is still unknown. Promises in JavaScript enable us to write asynchronous code in a parallel manner to synchronous code. How to implement promises So far, we have learned the concept of promise, its basic ingredients, and some of the basic functions it has to offer in nearly all of its implementations, but how are these implementations using it? Well, it's quite simple. Every implementation, either in the language or in the form of a library, maps the basic concept of promises. It then maps it to a compiler/interpreter or in code. This allows the written code or functions to behave in the paradigm of promise, which ultimately presents its implementations. Promises are now part of the standard package for many languages. The obvious thing is that they have implemented it in their own way as per the need. Implementing promises in Angular.js Promise is all about how async behavior can be applied on a certain part of an application or on the whole. There is a list of many other JavaScript libraries where the concept of promises exists but in Angular.js, it's present in a much more efficient way than any other client-side applications. Promises comes in two flavors in Angular.js, one is $q and the other is Q. What is the difference between them? We will explore it in detail in the following sections. For now, we will look at what promise means to Angular.js. There are many possible ways to implement promises in Angular.js. The most common one is to use the $q parameter, which is inspired by Chris Kowal's Q library. Mainly, Angular.js uses this to provide asynchronous methods' implementations. With Angular.js, the sequence of services is top to bottom starting with $q, which is considered as the top class; within it, many other subclasses are embedded, for example, $q.reject() or $q.resolve(). Everything that is related to promises in Angular.js must follow the $q parameters. Starting with the $q.when() method, it seems like it creates a method immediately rather it only normalizes the value that may or may not create the promise object. The usage of $q.when() is based on the value supplied to it. If the value provided is a promise, $q.when() will do its job and if it's not, a promise value, $q.when() will create it. 
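To make that normalizing behavior concrete, here is a short sketch; the module name, values, and log messages are illustrative assumptions rather than code from the book.

angular.module('whenDemo', []).run(function($q, $log) {
    var plainValue = 42;          // not a promise

    var deferred = $q.defer();    // an existing promise
    deferred.resolve('done');

    // $q.when() wraps the plain value in an already-resolved promise...
    $q.when(plainValue).then(function(value) {
        $log.info('got value: ' + value);            // got value: 42
    });

    // ...and passes an existing promise through untouched.
    $q.when(deferred.promise).then(function(value) {
        $log.info('got promise result: ' + value);   // got promise result: done
    });
});

Either way, the caller can treat the return value of $q.when() uniformly as a promise, which is exactly the point of the normalization.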
The schematics of using promises in Angular.js Since Chris Kowal's Q library is the global provider and inspiration of promises callback returns, Angular.js also uses it for its promise implementations. Many of Angular.js services are by nature promise oriented in return type by default. This includes $interval, $http, and $timeout. However, there is a proper mechanism of using promises in Angular.js. Look at the following code and see how promises maps itself within Angular.js: var promise = AngularjsBackground(); promise.then( function(response) {    // promise process }, function(error) {    // error reporting }, function(progress) {    // send progress    }); All of the mentioned services in Angular.js return a single object of promise. They might be different in taking parameters in, but in return all of them respond back in a single promise object with multiple keys. For example, $http.get returns a single object when you supply four parameters named data, status, header, and config. $http.get('/api/tv/serials/sherlockHolmes ') .success(function(data, status, headers, config) {    $scope.movieContent = data; }); If we employ the promises concept here, the same code will be rewritten as: var promise = $http.get('/api/tv/serials/sherlockHolmes ') promise.then( function(payload) {    $scope.serialContent = payload.data; }); The preceding code is more concise and easier to maintain than the one before this, which makes the usage of Angular.js more adaptable to the engineers using it. Promise as a handle for callback The implementation of promise in Angular.js defines your use of promise as a callback handle. The implementations not only define how to use promise for Angular.js, but also what steps one should take to make the services as "promise-return". This states that you do something asynchronously, and once your said job is completed, you have to trigger the then() service to either conclude your task or to pass it to another then() method: /asynchronous _task.then().then().done(). In simpler form, you can do this to achieve the concept of promise as a handle for call backs: angular.module('TVSerialApp', []) .controller('GetSerialsCtrl',    function($log, $scope, TeleService) {      $scope.getserialListing = function(serial) {        var promise =          TeleService.getserial('SherlockHolmes');        promise.then(          function(payload) {            $scope.listingData = payload.data;          },          function(errorPayload) {            $log.error('failure loading serial', errorPayload);        });      }; }) .factory('TeleService', function($http) {    return {      getserial: function(id) {        return $http.get(''/api/tv/serials/sherlockHolmes' + id);      }    } }); Blindly passing arguments and nested promises Whatever service of promise you use, you must be very sure of what you are passing and how this can affect the overall working of your promise function. Blindly passing arguments can cause confusion for the controller as it has to deal with its own results too while handling other requests. Say we are dealing with the $http.get service and you blindly pass too much of load to it. Since it has to deal with its own results too in parallel, it might get confused, which may result in callback hell. However, if you want to post-process the result instead, you have to deal with an additional parameter called $http.error. In this way, the controller doesn't have to deal with its own result, and calls such as 404 and redirects will be saved. 
You can also redo the preceding scenario by building your own promise and bringing back the result of your choice with the payload that you want with the following code: factory('TVSerialApp', function($http, $log, $q) { return {    getSerial: function(serial) {      var deferred = $q.defer();      $http.get('/api/tv/serials/sherlockHolmes' + serial)        .success(function(data) {          deferred.resolve({            title: data.title,            cost: data.price});        }).error(function(msg, code) {            deferred.reject(msg);            $log.error(msg, code);        });        return deferred.promise;    } } }); By building a custom promise, you have many advents. You can control inputs and output calls, log the error messages, transform the inputs into desired outputs, and share the status by using the deferred.notify(mesg) method. Deferred objects or composed promises Since custom promise in Angular.js can be hard to handle sometimes and can fall into malfunction in the worse case, the promise provides another way to implement itself. It asks you to transform your response within a then method and returns a transformed result to the calling method in an autonomous way. Considering the same code we used in the previous section: this.getSerial = function(serial) {    return $http.get('/api/tv/serials/sherlockHolmes'+ serial)        .then(                function (response) {                    return {                        title: response.data.title,                        cost: response.data.price                      });                  }); }; The output we yield from the preceding method will be a chained, promised, and transformed. You can again reuse the output for another output, chain it to another promise, or simply display the result. The controller can then be transformed into the following lines of code: $scope.getSerial = function(serial) { service.getSerial(serial) .then(function(serialData) {    $scope.serialData = serialData; }); }; This has significantly reduced the lines of code. Also, this helps us in maintaining the service level since the automechanism of failsafe in then() will help it to be transformed into failed promise and will keep the rest of the code intact. Dealing with the nested calls While using internal return values in the success function, promise code can sense that you are missing one most obvious thing: the error controller. The missing error can cause your code to stand still or get into a catastrophe from which it might not recover. If you want to overcome this, simply throw the errors. How? See the following code: this.getserial = function(serial) {    return $http.get('/api/tv/serials/sherlockHolmes' + serial)        .then(            function (response) {                return {                    title: response.data.title,                    cost: response.data.price               });            },            function (httpError) {                // translate the error                throw httpError.status + " : " +                    httpError.data;            }); }; Now, whenever the code enters into an error-like situation, it will return a single string, not a bunch of $http statutes or config details. This can also save your entire code from going into a standstill mode and help you in debugging. Also, if you attached log services, you can pinpoint the location that causes the error. 
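The deferred.notify(mesg) method mentioned above is what feeds the third (progress) callback of then(). Here is a small hedged sketch of that flow, reusing the article's module name but with an invented DownloadService, message text, and timing values:

angular.module('TVSerialApp')
.factory('DownloadService', function($q, $timeout) {
    return {
        start: function() {
            var deferred = $q.defer();
            var percent = 0;

            var tick = function() {
                percent += 25;
                deferred.notify(percent + '% complete');   // -> progress callback
                if (percent >= 100) {
                    deferred.resolve('download finished'); // -> success callback
                } else {
                    $timeout(tick, 500);
                }
            };
            $timeout(tick, 500);

            return deferred.promise;
        }
    };
})
.controller('DownloadCtrl', function($scope, $log, DownloadService) {
    DownloadService.start().then(
        function(result)  { $log.info(result); },   // success
        function(error)   { $log.error(error); },   // failure
        function(message) { $log.info(message); }   // progress updates
    );
});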
Concurrency in Angular.js We all want to achieve maximum output at a single slot of time by asking multiple services to invoke and get results from them. Angular.js provides this functionality via its $q.all service; you can invoke many services at a time and if you want to join all/any of them, you just need then() to get them together in the sequence you want. Let's get the payload of the array first: [ { url: 'myUr1.html' }, { url: 'myUr2.html' }, { url: 'myUr3.html' } ] And now this array will be used by the following code: service('asyncService', function($http, $q) {      return {        getDataFrmUrls: function(urls) {          var deferred = $q.defer();          var collectCalls = [];          angular.forEach(urls, function(url) {            collectCalls.push($http.get(url.url));          });            $q.all(collectCalls)          .then(            function(results) {            deferred.resolve(              JSON.stringify(results))          },          function(errors) {          deferred.reject(errors);          },          function(updates) {            deferred.update(updates);          });          return deferred.promise;        }      }; }); A promise is created by executing $http.get for each URL and is added to an array. The $q.all function takes the input of an array of promises, which will then process all results into a single promise containing an object with each answer. This will get converted in JSON and passed on to the caller function. The result might be like this: [ promiseOneResultPayload, promiseTwoResultPayload, promiseThreeResultPayload ] The combination of success and error The $http returns a promise; you can define its success or error depending on this promise. Many think that these functions are a standard part of promise—but in reality, they are not as they seem to be. Using promise means you are calling then(). It takes two parameters—a callback function for success and a callback function for failure. Imagine this code: $http.get("/api/tv/serials/sherlockHolmes") .success(function(name) {    console.log("The tele serial name is : " + name); }) .error(function(response, status) {    console.log("Request failed " + response + " status code: " +     status); }; This can be rewritten as: $http.get("/api/tv/serials/sherlockHolmes") .success(function(name) {    console.log("The tele serial name is : " + name); }) .error(function(response, status) {    console.log("Request failed " + response + " status code: " +     status); };   $http.get("/api/tv/serials/sherlockHolmes") .then(function(response) {    console.log("The tele serial name is :" + response.data); }, function(result) {    console.log("Request failed : " + result); }; One can use either the success or error function depending on the choice of a situation, but there is a benefit in using $http—it's convenient. The error function provides response and status, and the success function provides the response data. This is not considered as a standard part of a promise. 
Anyone can add their own versions of these functions to promises, as shown in the following code: //my own created promise of success function   promise.success = function(fn) {    promise.then(function(res) {        fn(res.data, res.status, res.headers, config);    });    return promise; };   //my own created promise of error function   promise.error = function(fn) {      promise.then(null, function(res) {        fn(res.data, res.status, res.headers, config);    });    return promise; }; The safe approach So the real matter of discussion is what to use with $http? Success or error? Keep in mind that there is no standard way of writing promise; we have to look at many possibilities. If you change your code so that your promise is not returned from $http, when we load data from a cache, your code will break if you expect success or error to be there. So, the best way is to use then whenever possible. This will not only generalize the overall approach of writing promise, but also reduce the prediction element from your code. Route your promise Angular.js has the best feature to route your promise. This feature is helpful when you are dealing with more than one promise at a time. Here is how you can achieve routing through the following code: $routeProvider .when('/api/', {      templateUrl: 'index.php',      controller: 'IndexController' }) .when('/video/', {      templateUrl: 'movies.php',      controller: 'moviesController' }) As you can observe, we have two routes: the api route takes us to the index page, with IndexController, and the video route takes us to the movie's page. app.controller('moviesController', function($scope, MovieService) {    $scope.name = null;      MovieService.getName().then(function(name) {        $scope.name = name;    }); }); There is a problem, until the MovieService class gets the name from the backend, the name is null. This means if our view binds to the name, first it's empty, then its set. This is where router comes in. Router resolves the problem of setting the name as null. Here's how we can do it: var getName = function(MovieService) {        return MovieService.getName();    };   $routeProvider .when('/api/', {      templateUrl: 'index.php',      controller: 'IndexController' }) .when('/video/', {      templateUrl: 'movies.php',      controller: 'moviesController' }) After adding the resolve, we can revisit our code for a controller: app.controller('MovieController', function($scope, getName) {      $scope.name = name;   }); You can also define multiple resolves for the route of your promises to get the best possible output: $routeProvider .when('/video', {      templateUrl: '/MovieService.php',      controller: 'MovieServiceController',      // adding one resole here      resolve: {          name: getName,          MovieService: getMovieService,          anythingElse: getSomeThing      }      // adding another resole here        resolve: {          name: getName,          MovieService: getMovieService,          someThing: getMoreSomeThing      } }) An introduction to WinRT Our first lookout for the technology is WinRT. What is WinRT? It is the short form for Windows Runtime. This is a platform provided by Microsoft to build applications for Windows 8+ operating system. It supports application development in C++/ICX, C# (C sharp), VB.NET, TypeScript, and JavaScript. Microsoft adopted JavaScript as one of its prime and first-class tools to develop cross-browser apps and for the development on other related devices. 
We are now fully aware of what the pros and cons of using JavaScript are, which has brought us here to implement the use of promise. Summary This article/post is just to give an understanding of what we have in our book for you; focusing on just Angular.js doesn't mean we have only one technology covered in the entire book for implementation of promise, it's just to give you an idea about how the flow of information goes from simple to advanced level, and how easy it is to keep on following the context of chapters. Within this book, we have also learned about Node.js, jQuery, and WinRT so that even readers from different experience levels can read, understand, and learn quickly and become an expert in promises. Resources for Article: Further resources on this subject: Optimizing JavaScript for iOS Hybrid Apps [article] Installing jQuery [article] Cordova Plugins [article]

Role Management

Packt
23 Jul 2015
15 min read
In this article by Gavin Henrick and Karen Holland, author of the book Moodle Administration Essentials, roles play a key part in the ability of the Moodle site. They are able to restrict the access of users to only the data they should have access to, and whether or not they are able to alter it or add to it. In each course, every user will have been assigned a role when they are enrolled, such as teacher, student, or customized role. In this article, we deal with the essential areas of role management that every administrator may have to deal with: Cloning a role Creating a new role Creating a course requester role Overriding a permission in a role in a course Testing a role Manually adding a role to a user in a course Enabling self-enrolment for a course (For more resources related to this topic, see here.) Understanding terminologies There are some key terms used to describe users' abilities in Moodle and how they are defined, which are as follows: Role: A role is a set or collection of permissions on different capabilities. There are default roles like teacher and student, which have predefined sets of permissions. Capability: A capability is a specific behavior in Moodle, such as Start new discussions (mod/forum:startdiscussion), which can have a permission set within a role such as Allow or Not set/Inherit. Permission: Permission is associated with a capability. There are four possible values: allow, prevent, prohibit, or not set. Not set: This means that that there is not a specific setting for this user role, and Moodle will determine if it is allowed, if set in a higher context. Allow: The permission is explicitly granted for the capability. Prevent: The permission is removed for the capability, even if allowed in a higher context. However, it can be overridden at a specific context. Prohibit: The permission is completely denied and cannot be overridden at any lower context. By default, the only configuration option displayed is Allow. To show the full list of options in the role edit page, click on the Show advanced button, just above the Filter option, as shown in the following image: Context: A context is an area of Moodle, such as the whole system, a category, a course, an activity, a block, or a user. A role will have permission for a capability on a specific context. An example of this will be where a student can start a discussion in a specific forum. This is set up by enabling a permission to Allow for the capability Start new discussions for a Student role on that specific forum. Standard roles There are a number of different roles configured in Moodle, by default these are: Site administrator: The site administrator can do everything on the site including creating the site structure, courses, activities, and resources, and managing user accounts. Manager: The manager can access courses and modify them. They usually do not participate in teaching courses. Course creator: The course creator can create courses when assigned rights in a category. Teacher: The teacher can do anything within a course, including adding and removing resources and activities, communicating with students, and grading them. Non-editing teacher: The non-editing teacher can teach and communicate in courses and grade students, but cannot alter or add activities, nor change the course layout or settings. Student: The student can access and participate in courses, but cannot create or edit resources or activities within a course. Guest: The guest can view courses if allowed, but cannot participate. 
Guests have minimal privileges, and usually cannot enter text anywhere. Authenticated user: The role all logged in users get. Authenticated user on the front page role: A logged in user role for the front page only. Managing role permissions Let's learn how to manage permissions for existing roles in Moodle. Cloning a role It is possible to duplicate an existing role in Moodle. The main reasons for doing this will be so that you can have a variation of the existing role, such as a teacher, but with the role having reduced capabilities. For instance, to stop a teacher being able to add or remove students to the course, this process will be achieved by creating a course editing role, which is a clone of the standard editingteacher role with enrolment aspects removed. This is typically done when students are added to courses centrally with a student management system. To duplicate a role, in this case editing teacher: Log in as an administrator level user account. In the Administration block, navigate to Site administration | Users | Permissions | Define roles. Click on the Add a new role button. Select an existing role from the Use role or archetype dropdown. Click on Continue. Enter the short role name in the Short name field. This must be unique. Enter the full role name in the Custom full name field. This is what appears on the user interface in Moodle. Enter an explanation for the role in the Description field. This should explain why the role was created, and what changes from default were planned. Scroll to the bottom of the page. Click on Create this role. This will create a duplicate version of the teacher role with all the same permissions and capabilities. Creating a new role It is also possible to create a new role. The main reason for doing this would be to have a specific role to do a specific task and nothing else, such as a user that can manage users only. This is the alternative to cloning one of the existing roles, and then disabling everything except the one set of capabilities required. To create a new role: Log in as an administrator level user account. In the Administration block, navigate to Site administration | Users | Permissions | Define roles. Click on the Add a new role button. Select No role from the Use role or archetype dropdown. Click on Continue. Enter the short role name in the Short name field. This must be unique. Enter the full role name in the Custom full name field. This is what appears on the user interface in Moodle. Enter an explanation for the role in the Description field. This should explain why the role was created. Select the appropriate Role archetype, in this case, None. The role archetype determines the permissions when a role is reset to default and any new permissions for the role when the site is upgraded. Select Context types where this role may be assigned. Set the permissions as required by searching for the appropriate Capability and clicking on Allow. Scroll to the bottom of the page. Click on Create this role. This will create the new role with the settings as defined. If you want the new role to appear in the course listing, you must enable it by navigating to Administration block | Site administration | Appearance | Courses | Course Contacts. Creating a course requester role There is a core Moodle feature that enables users to request a course to be created. This is not normally used, especially as students and most teachers just have responsibility within their own course context. 
So, it can be useful to create a role just with this ability, so that a faculty or department administrator can request a new course space when needed, without giving the ability to all users. There are a few steps in this process: Remove the capability from other roles. Set up the new role. Assign the role to a user at the correct context. Firstly, we remove the capability from other roles by altering the authenticated user role as shown: In the Administration block, navigate to Site administration | Users | Permissions | Define roles. Click on edit for the Authenticated user role. Enter the request text into Filter. Select the Not set radio button under moodle/course:request to change the Allow permission. Scroll to the bottom of the page. Click on Save changes. Next, we create the new role with the specific capability set to Allow. In the Administration block, navigate to Site administration | Users | Permissions | Define roles. Click on the Add a new role button. Select No role from the Use role or archetype dropdown. Click on Continue. Enter courserequester in the Short name field. Enter Course Requester in the Custom full name field. This is what appears on the user interface in Moodle. Enter the explanation for the role in the Description field. Select system under Context types where this role may be assigned. Change moodle/course:request to Allow. Scroll to the bottom of the page. Click on Create this role. Lastly, you assign the role to a user at system level. This is different from giving a role to a user in a course. In the Administration block, navigate to Site administration | Users | Permissions | Assign system roles. Click on Course Requester. Search for the specific user in the Potential user list. Select the user from the list, using the Search filter if required. Click on the Add button. Any roles you assign from this page will apply to the assigned users throughout the entire system, including the front page and all the courses. Applying a role override for a specific context You can change how a specific role behaves in a certain context by enabling an override, thereby granting or removing, the permission in that context. An example of this is, in general, students cannot rate a forum post in a forum in their course. When ratings are enabled, only the manager, teacher, and non-editing teacher roles are those with permission to rate the posts. So, to enable the students to rate posts, you need to change the permissions for the student role on that specific forum. Browse to the forum where you want to allow students to rate forum posts. This process assumes that the rating has been already enabled in the forum. From the Forum page, go to the Administration block, then to Forum administration, and click on the link to Permissions. Scroll down the page to locate the permission Rate posts. This is the mod/forum:rate capability. By default, you should not see the student role listed to the right of the permission. Click on the plus sign (+) that appears below the roles already listed for the Rate posts permission. Select Student from the Select role menu, and click on the Allow button. Student should now appear in the list next to the Rate posts permission. Participants will now be able to rate each other's posts in this forum. Making the change in this forum does not impact other forums. Testing a role It is possible to use the Switch role to feature to see what the other role behaves like in the different contexts. 
However, the best way to test a role is to create a new user account, and then assign that user the role in the correct context as follows: Create the new user by navigating to Site administration | Users | Accounts | Add a new user. Assign your new user the role in the correct context, such as system roles or in a course as required. Log in with this user in a different browser to check what they can do / see. Having two different roles logged in at the same time, each using a different browser, means that you can test the new role in one browser while still logged in as the administrator in your main browser. This saves so much time when building courses especially. Manually adding a user to a course Depending on what your role is on a course, you can add other users to the course by manually enrolling them to the course. In this example, we are logged in as the administrator, which can add a number of roles, including: Manager Teacher Non-Editing Teacher Student To enrol a user in your course: Go to the Course administration menu in the Administration block. Expand on the User settings. Click on the Enrolled users link. This brings up the enrolled users page that lists all enrolled users—this can be filtered by role and by default shows the enrolled participants only. Click on the Enrol users button. From the Assign roles dropdown, select which role you want to assign to the user. This is limited to the roles which you can assign. Search for the user that you want to add to the course. Click on the Enrol button to enroll the user with the assigned role. Click on Finish enrolling users. The page will now reload with the new enrolments. To see the users that you added with the given role, you may need to change the filter to the specific role type. This is how you manually add someone to a Moodle course. User upload CSV files allow you to include optional enrolment fields, which will enable you to enroll existing or new users. The sample user upload CSV file will enroll each user as a student to both specified courses, identified by their course shortnames: Teaching with Moodle, and Induction. Enabling self-enrolment for a course In addition to manually adding users to a course, you can configure a course so that students can self-enroll onto the course, either with or without an enrolment key or password. There are two dependencies required for this to work: Firstly, the self-enrolment plugin needs to be enabled at the site level. This is found in the Administration block, by navigating to Site Administration | Plugins | Enrolments | Manage enroll plugins. If it is not enabled, you need to click on the eye icon to enable it. It is enabled by default in Moodle. Secondly, you need to enable the self-enrolment method in the course itself, and configure it accordingly. In the course that you want to enable self-enrolment, the following are the essential steps: In the Administration block, navigate to Administration | Course administration | Users | Enrolment methods. Click on the eye icon to turn on the Self enrolment method. Click on the cogwheel icon to access the configuration for the Self enrolment method. You can optionally enter a name for the method into the Custom instance name field; however, this is not required. Typically, you would do this if you are enabling multiple self-enrolment options and want to identify them separately. Enter an enrolment key or password into the Enrolment key field if you want to restrict self-enrolment to those who are issued the password. 
Once the user knows the password, they will be able to enroll. If you are using groups in your course, and configure them with different passwords for each group, it is possible to use the Use group enrolment keys option to use those passwords from the different groups to automatically place the self-enrolling users into those groups when they enroll, using the correct key/password. If you want the self-enrolment to enroll users as students, leave the Default assigned role as Student, or change it to whichever role you intend it to operate for. Some organizations will give one password for the students to enrol with and another for the teachers to enrol with, so that the organization does not need to manage the enrolment centrally. So, having two self-enrolment methods set up, one pointing at student and one at teacher, makes this possible. If you want to control the length of enrolment, you can do this by setting the Enrolment duration column. In this case, you can also issue a warning to the user before it expires by using Notify before enrolment expires and Notification threshold options. If you want to specify the length of the enrolment, you can set this with Start date and End date. You can un-enroll the user if they are inactive for a period of time by setting Unenroll to inactive after a specific number of days. Set Max enrolled users if you want to limit the number of users using this specific password to enroll to the course. This is useful if you are selling a specific number of seats on the course. This self-enrolment method may be restricted to members of a specified cohort only. You can enable this by selecting a cohort from the dropdown for the Only cohort members setting. Leave Send course welcome message ticked if you want to send a message to those who self-enroll. This is recommended. Enter a welcome message in the Custom welcome message field. This will be sent to all users who self-enroll using this method, and can be used to remind them of key information about the course, such as a starting date, or asking them to do something like complete the icebreaker in the course. Click on Save changes. Once enabled and configured, users will now be able to self-enroll and will be added as whatever role you selected. This is dependent on the course itself being visible to the users when browsing the site. Other custom roles Moodle docs has a list of potential custom roles with instructions on how to create them including: Parent Demo teacher Forum moderator Forum poster role Calendar editor Blogger Quiz user with unlimited time Question Creator Question sharer Course requester role Feedback template creator Grading forms publisher Grading forms manager Grade view Gallery owner role For more information, check https://docs.moodle.org/29/en/Creating_custom_roles. Summary In this article, we looked at the core administrator tasks in role management, and the different aspects to consider when deciding which approach to take in either extending or reducing role permissions. Resources for Article: Further resources on this subject: Moodle for Online Communities [article] Configuring your Moodle Course [article] Moodle Plugins [article]
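The user upload section above refers to a sample CSV file without reproducing it. The following is a hedged reconstruction only: the headings follow Moodle's documented upload-users format as commonly used, and the course shortnames (teachingwithmoodle, induction), usernames, passwords, and email addresses are invented placeholders. Check Site administration | Users | Upload users on your own site for the exact columns your version accepts.

username,password,firstname,lastname,email,course1,role1,course2,role2
jbloggs,Changeme-1!,Joe,Bloggs,jbloggs@example.com,teachingwithmoodle,student,induction,student
asmith,Changeme-2!,Anna,Smith,asmith@example.com,teachingwithmoodle,student,induction,student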

Deployment and Maintenance

Packt
20 Jul 2015
21 min read
In this article by Sandro Pasquali, author of Deploying Node.js, we will learn about the following: automating the deployment of applications, including a look at the differences between continuous integration, delivery, and deployment; using Git to track local changes and trigger deployment actions via webhooks when appropriate; using Vagrant to synchronize your local development environment with a deployed production server; and provisioning a server with Ansible. Note that application deployment is a complex topic with many dimensions that are often considered within unique sets of needs. This article is intended as an introduction to some of the technologies and themes you will encounter. Also, note that scaling issues are part and parcel of deployment.

Using GitHub webhooks

At the most basic level, deployment involves automatically validating, preparing, and releasing new code into production environments. One of the simplest ways to set up a deployment strategy is to trigger releases whenever changes are committed to a Git repository through the use of webhooks. Paraphrasing the GitHub documentation, webhooks provide a way for notifications to be delivered to an external web server whenever certain actions occur on a repository. In this section, we'll use GitHub webhooks to create a simple continuous deployment workflow, later adding more realistic checks and balances. We'll build a local development environment that lets developers work with a clone of the production server code, make changes, and see the results of those changes immediately. As this local development build uses the same repository as the production build, the build process for a chosen environment is simple to configure, and multiple production and/or development boxes can be created with no special effort.

The first step is to create a GitHub (www.github.com) account if you don't already have one. Basic accounts are free and easy to set up. Now, let's look at how GitHub webhooks work.

Enabling webhooks

Create a new folder and insert the following package.json file:

{
  "name": "express-webhook",
  "main": "server.js",
  "dependencies": {
    "express": "~4.0.0",
    "body-parser": "^1.12.3"
  }
}

This ensures that Express 4.x is installed and includes the body-parser package, which is used to handle POST data. Next, create a basic server called server.js:

var express = require('express');
var app = express();
var bodyParser = require('body-parser');
var port = process.env.PORT || 8082;

app.use(bodyParser.json());

app.get('/', function(req, res) {
  res.send('Hello World!');
});

app.post('/webhook', function(req, res) {
  // We'll add this next
});

app.listen(port);
console.log('Express server listening on port ' + port);

Enter the folder you've created, and build and run the server with npm install; npm start. Visit localhost:8082/ and you should see "Hello World!" in your browser. Whenever any file changes in a given repository, we want GitHub to push information about the change to /webhook. So, the first step is to create a GitHub repository for the Express server mentioned in the code. Go to your GitHub account and create a new repository with the name 'express-webhook'. Once the repository is created, enter your local repository folder and run the following commands:

git init
git add .
git commit -m "first commit"
git remote add origin git@github.com:<your username>/express-webhook

You should now have a new GitHub repository and a local linked version.
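One small, optional addition that is not part of the original steps: a .gitignore file keeps the installed dependencies out of the repository. Assuming the default npm layout, its entire contents can be a single line:

node_modules/

If you add it before running git add ., the node_modules directory created by npm install will never be committed or pushed.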
The next step is to configure this repository to broadcast the push event on the repository. Navigate to the following URL: https://github.com/<your_username>/express-webhook/settings From here, navigate to Webhooks & Services | Add webhook (you may need to enter your password again). You should now see the following screen: This is where you set up webhooks. Note that the push event is already set as default, and, if asked, you'll want to disable SSL verification for now. GitHub needs a target URL to use POST on change events. If you have your local repository in a location that is already web accessible, enter that now, remembering to append the /webhook route, as in http://www.example.com/webhook. If you are building on a local machine or on another limited network, you'll need to create a secure tunnel that GitHub can use. A free service to do this can be found at http://localtunnel.me/. Follow the instructions on that page, and use the custom URL provided to configure your webhook. Other good forwarding services can be found at https://forwardhq.com/ and https://meetfinch.com/. Now that webhooks are enabled, the next step is to test the system by triggering a push event. Create a new file called readme.md (add whatever you'd like to it), save it, and then run the following commands: git add readme.mdgit commit -m "testing webhooks"git push origin master This will push changes to your GitHub repository. Return to the Webhooks & Services section for the express-webhook repository on GitHub. You should see something like this: This is a good thing! GitHub noticed your push and attempted to deliver information about the changes to the webhook endpoint you set, but the delivery failed as we haven't configured the /webhook route yet—that's to be expected. Inspect the failed delivery payload by clicking on the last attempt—you should see a large JSON file. In that payload, you'll find something like this: "committer": {"name": "Sandro Pasquali","email": "[email protected]","username": "sandro-pasquali"},"added": ["readme.md"],"removed": [],"modified": [] It should now be clear what sort of information GitHub will pass along whenever a push event happens. You can now configure the /webhook route in the demonstration Express server to parse this data and do something with that information, such as sending an e-mail to an administrator. For example, use the following code: app.post('/webhook', function(req, res) {console.log(req.body);}); The next time your webhook fires, the entire JSON payload will be displayed. Let's take this to another level, breaking down the autopilot application to see how webhooks can be used to create a build/deploy system. Implementing a build/deploy system using webhooks To demonstrate how to build a webhook-powered deployment system, we're going to use a starter kit for application development. Go ahead and use fork on the repository at https://github.com/sandro-pasquali/autopilot.git. You now have a copy of the autopilot repository, which includes scaffolding for common Gulp tasks, tests, an Express server, and a deploy system that we're now going to explore. The autopilot application implements special features depending on whether you are running it in production or in development. While autopilot is a little too large and complex to fully document here, we're going to take a look at how major components of the system are designed and implemented so that you can build your own or augment existing systems. 
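Before digging into autopilot, one hardening note on the bare /webhook route we just wrote: when a webhook is configured with a secret, GitHub signs every delivery by sending an HMAC hex digest of the request body in the X-Hub-Signature header, and the receiving server can recompute that digest to reject forged requests. The following sketch only illustrates the idea; it assumes body-parser's verify option is used in place of the plain app.use(bodyParser.json()) line from earlier, and that the secret is the same string you configure for the webhook (on GitHub's settings page or in the createHook call shown later):

var crypto = require('crypto');
var webhookSecret = process.env.WEBHOOK_SECRET || 'my-webhook-secret'; // illustrative name

// Replaces the earlier app.use(bodyParser.json()) line;
// keeps a copy of the raw body so the signature can be recomputed exactly
app.use(bodyParser.json({
  verify: function (req, res, buf) {
    req.rawBody = buf;
  }
}));

function isValidSignature(req) {
  var signature = req.headers['x-hub-signature'];
  if (!signature || !req.rawBody) {
    return false;
  }
  var expected = 'sha1=' + crypto
    .createHmac('sha1', webhookSecret)
    .update(req.rawBody)
    .digest('hex');
  return signature === expected;
}

app.post('/webhook', function (req, res) {
  if (!isValidSignature(req)) {
    return res.status(403).end();
  }
  console.log(req.body);
  res.status(200).end();
});

On newer Node versions, crypto.timingSafeEqual is a better way to compare the two digests than the plain string comparison used here.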
Here's what we will examine: How to create webhooks on GitHub programmatically How to catch and read webhook payloads How to use payload data to clone, test, and integrate changes How to use PM2 to safely manage and restart servers when code changes If you haven't already used fork on the autopilot repository, do that now. Clone the autopilot repository onto a server or someplace else where it is web-accessible. Follow the instructions on how to connect and push to the fork you've created on GitHub, and get familiar with how to pull and push changes, commit changes, and so on. PM2 delivers a basic deploy system that you might consider for your project (https://github.com/Unitech/PM2/blob/master/ADVANCED_README.md#deployment). Install the cloned autopilot repository with npm install; npm start. Once npm has installed dependencies, an interactive CLI application will lead you through the configuration process. Just hit the Enter key for all the questions, which will set defaults for a local development build (we'll build in production later). Once the configuration is complete, a new development server process controlled by PM2 will have been spawned. You'll see it listed in the PM2 manifest under autopilot-dev in the following screenshot: You will make changes in the /source directory of this development build. When you eventually have a production server in place, you will use git push on the local changes to push them to the autopilot repository on GitHub, triggering a webhook. GitHub will use POST on the information about the change to an Express route that we will define on our server, which will trigger the build process. The build runner will pull your changes from GitHub into a temporary directory, install, build, and test the changes, and if all is well, it will replace the relevant files in your deployed repository. At this point, PM2 will restart, and your changes will be immediately available. Schematically, the flow looks like this: To create webhooks on GitHub programmatically, you will need to create an access token. The following diagram explains the steps from A to B to C: We're going to use the Node library at https://github.com/mikedeboer/node-github to access GitHub. We'll use this package to create hooks on Github using the access token you've just created. Once you have an access token, creating a webhook is easy: var GitHubApi = require("github");github.authenticate({type: "oauth",token: <your token>});github.repos.createHook({"user": <your github username>,"repo": <github repo name>,"name": "web","secret": <any secret string>,"active": true,"events": ["push"],"config": {"url": "http://yourserver.com/git-webhook","content_type": "json"}}, function(err, resp) {...}); Autopilot performs this on startup, removing the need for you to manually create a hook. Now, we are listening for changes. As we saw previously, GitHub will deliver a payload indicating what has been added, what has been deleted, and what has changed. The next step for the autopilot system is to integrate these changes. It is important to remember that, when you use webhooks, you do not have control over how often GitHub will send changesets—if more than one person on your team can push, there is no predicting when those pushes will happen. The autopilot system uses Redis to manage a queue of requests, executing them in order. You will need to manage multiple changes in a way. For now, let's look at a straightforward way to build, test, and integrate changes. 
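The queueing idea itself can be sketched independently of autopilot. The snippet below is only an illustration of the pattern, not autopilot's Redis-backed implementation, and the function names are invented for the example: incoming changesets are appended to a list, and a single worker drains the list one build at a time, so overlapping GitHub deliveries never trigger two builds at once.

var buildQueue = [];
var building = false;

// Called from the /webhook route with the parsed payload
function enqueueBuild(changeset) {
  buildQueue.push(changeset);
  processNext();
}

function processNext() {
  if (building || buildQueue.length === 0) {
    return;
  }
  building = true;
  var changeset = buildQueue.shift();

  runBuild(changeset, function (err) {
    if (err) {
      console.log('Build failed, not deploying:', err);
    }
    building = false;
    processNext(); // move on to the next queued changeset, if any
  });
}

// runBuild clones, installs, tests, and integrates a changeset;
// a minimal sketch of those steps appears a little further on.
function runBuild(changeset, callback) {
  callback(null); // placeholder
}

Swapping the in-memory array for a Redis list gives the same behavior across restarts and across multiple webhook receivers, which is the route autopilot takes.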
In your code bundle, visit autopilot/swanson/push.js. This is a process runner on which fork has been used by buildQueue.js in that same folder. The following information is passed to it: The URL of the GitHub repository that we will clone The directory to clone that repository into (<temp directory>/<commit hash>) The changeset The location of the production repository that will be changed Go ahead and read through the code. Using a few shell scripts, we will clone the changed repository and build it using the same commands you're used to—npm install, npm test, and so on. If the application builds without errors, we need only run through the changeset and replace the old files with the changed files. The final step is to restart our production server so that the changes reach our users. Here is where the real power of PM2 comes into play. When the autopilot system is run in production, PM2 creates a cluster of servers (similar to the Node cluster module). This is important as it allows us to restart the production server incrementally. As we restart one server node in the cluster with the newly pushed content, the other clusters continue to serve old content. This is essential to keeping a zero-downtime production running. Hopefully, the autopilot implementation will give you a few ideas on how to improve this process and customize it to your own needs. Synchronizing local and deployed builds One of the most important (and often difficult) parts of the deployment process is ensuring that the environment an application is being developed, built, and tested within perfectly simulates the environment that application will be deployed into. In this section, you'll learn how to emulate, or virtualize, the environment your deployed application will run within using Vagrant. After demonstrating how this setup can simplify your local development process, we'll use Ansible to provision a remote instance on DigitalOcean. Developing locally with Vagrant For a long while, developers would work directly on running servers or cobble together their own version of the production environment locally, often writing ad hoc scripts and tools to smoothen their development process. This is no longer necessary in a world of virtual machines. In this section, we will learn how to use Vagrant to emulate a production environment within your development environment, advantageously giving you a realistic box to work on testing code for production and isolating your development process from your local machine processes. By definition, Vagrant is used to create a virtual box emulating a production environment. So, we need to install Vagrant, a virtual machine, and a machine image. Finally, we'll need to write the configuration and provisioning scripts for our environment. Go to http://www.vagrantup.com/downloads and install the right Vagrant version for your box. Do the same with VirtualBox here at https://www.virtualbox.org/wiki/Downloads. You now need to add a box to run. For this example, we're going to use Centos 7.0, but you can choose whichever you'd prefer. Create a new folder for this project, enter it, and run the following command: vagrant box add chef/centos-7.0 Usefully, the creators of Vagrant, HashiCorp, provide a search service for Vagrant boxes at https://atlas.hashicorp.com/boxes/search. You will be prompted to choose your virtual environment provider—select virtualbox. All relevant files and machines will now be downloaded. Note that these boxes are very large and may take time to download. 
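To close this section, here is roughly what the clone-install-test step looks like when reduced to its bare essentials. This is not autopilot's push.js, just a minimal sketch built on Node's child_process module, with the repository URL and working directory passed in as plain arguments:

var exec = require('child_process').exec;

// Clone the changed repository into a temporary directory,
// install its dependencies, and run its test suite.
function buildChangeset(repoUrl, workDir, callback) {
  var commands = [
    'git clone ' + repoUrl + ' ' + workDir,
    'cd ' + workDir + ' && npm install',
    'cd ' + workDir + ' && npm test'
  ];

  (function runNext(index) {
    if (index === commands.length) {
      // Every step succeeded; the caller can now copy the changed files
      // into the production directory and ask PM2 to restart.
      return callback(null);
    }
    exec(commands[index], function (err, stdout, stderr) {
      if (err) {
        return callback(err); // stop at the first failing step
      }
      runNext(index + 1);
    });
  })(0);
}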
You'll now create a configuration file for Vagrant called Vagrantfile. As with npm, the init command quickly sets up a base file. Additionally, we'll need to inform Vagrant of the box we'll be using: vagrant init chef/centos-7.0 Vagrantfile is written in Ruby and defines the Vagrant environment. Open it up now and scan it. There is a lot of commentary, and it makes a useful read. Note the config.vm.box = "chef/centos-7.0" line, which was inserted during the initialization process. Now you can start Vagrant: vagrant up If everything went as expected, your box has been booted within Virtualbox. To confirm that your box is running, use the following code: vagrant ssh If you see a prompt, you've just set up a virtual machine. You'll see that you are in the typical home directory of a CentOS environment. To destroy your box, run vagrant destroy. This deletes the virtual machine by cleaning up captured resources. However, the next vagrant up command will need to do a lot of work to rebuild. If you simply want to shut down your machine, use vagrant halt. Vagrant is useful as a virtualized, production-like environment for developers to work within. To that end, it must be configured to emulate a production environment. In other words, your box must be provisioned by telling Vagrant how it should be configured and what software should be installed whenever vagrant up is run. One strategy for provisioning is to create a shell script that configures our server directly and point the Vagrant provisioning process to that script. Add the following line to Vagrantfile: config.vm.provision "shell", path: "provision.sh" Now, create that file with the following contents in the folder hosting Vagrantfile: # install nvmcurl https://raw.githubusercontent.com/creationix/nvm/v0.24.1/install.sh | bash# restart your shell with nvm enabledsource ~/.bashrc# install the latest Node.jsnvm install 0.12# ensure server default versionnvm alias default 0.12 Destroy any running Vagrant boxes. Run Vagrant again, and you will notice in the output the execution of the commands in our provisioning shell script. When this has been completed, enter your Vagrant box as the root (Vagrant boxes are automatically assigned the root password "vagrant"): vagrant sshsu You will see that Node v0.12.x is installed: node -v It's standard to allow password-less sudo for the Vagrant user. Run visudo and add the following line to the sudoers configuration file: vagrant ALL=(ALL) NOPASSWD: ALL Typically, when you are developing applications, you'll be modifying files in a project directory. You might bind a directory in your Vagrant box to a local code editor and develop in that way. Vagrant offers a simpler solution. Within your VM, there is a /vagrant folder that maps to the folder that Vagrantfile exists within, and these two folders are automatically synced. So, if you add the server.js file to the right folder on your local machine, that file will also show up in your VM's /vagrant folder. Go ahead and create a new test file either in your local folder or in your VM's /vagrant folder. You'll see that file synchronized to both locations regardless of where it was originally created. Let's clone our express-webhook repository from earlier in this article into our Vagrant box. 
Add the following lines to provision.sh: # install various packages, particularly for gityum groupinstall "Development Tools" -yyum install gettext-devel openssl-devel perl-CPAN perl-devel zlib-devel-yyum install git -y# Move to shared folder, clone and start servercd /vagrantgit clone https://github.com/sandro-pasquali/express-webhookcd express-webhooknpm i; npm start Add the following to Vagrantfile, which will map port 8082 on the Vagrant box (a guest port representing the port our hosted application listens on) to port 8000 on our host machine: config.vm.network "forwarded_port", guest: 8082, host: 8000 Now, we need to restart the Vagrant box (loading this new configuration) and re-provision it: vagrant reloadvagrant provision This will take a while as yum installs various dependencies. When provisioning is complete, you should see this as the last line: ==> default: Express server listening on port 8082 Remembering that we bound the guest port 8082 to the host port 8000, go to your browser and navigate to localhost:8000. You should see "Hello World!" displayed. Also note that in our provisioning script, we cloned to the (shared) /vagrant folder. This means the clone of express-webhook should be visible in the current folder, which will allow you to work on the more easily accessible codebase, knowing it will be automatically synchronized with the version on your Vagrant box. Provisioning with Ansible Configuring your machines by hand, as we've done previously, doesn't scale well. For one, it can be overly difficult to set and manage environment variables. Also, writing your own provisioning scripts is error-prone and no longer necessary given the existence of provisioning tools, such as Ansible. With Ansible, we can define server environments using an organized syntax rather than ad hoc scripts, making it easier to distribute and modify configurations. Let's recreate the provision.sh script developed earlier using Ansible playbooks: Playbooks are Ansible's configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce or a set of steps in a general IT process. Playbooks are expressed in the YAML format (a human-readable data serialization language). To start with, we're going to change Vagrantfile's provisioner to Ansible. First, create the following subdirectories in your Vagrant folder: provisioningcommontasks These will be explained as we proceed through the Ansible setup. Next, create the following configuration file and name it ansible.cfg: [defaults]roles_path = provisioninglog_path = ./ansible.log This indicates that Ansible roles can be found in the /provisioning folder, and that we want to keep a provisioning log in ansible.log. Roles are used to organize tasks and other functions into reusable files. These will be explained shortly. Modify the config.vm.provision definition to the following: config.vm.provision "ansible" do |ansible|ansible.playbook = "provisioning/server.yml"ansible.verbose = "vvvv"end This tells Vagrant to defer to Ansible for provisioning instructions, and that we want the provisioning process to be verbose—we want to get feedback when the provisioning step is running. Also, we can see that the playbook definition, provisioning/server.yml, is expected to exist. 
Create that file now: ---- hosts: allsudo: yesroles:- commonvars:env:user: 'vagrant'nvm:version: '0.24.1'node_version: '0.12'build:repo_path: 'https://github.com/sandro-pasquali'repo_name: 'express-webhook' Playbooks can contain very complex rules. This simple file indicates that we are going to provision all available hosts using a single role called common. In more complex deployments, an inventory of IP addresses could be set under hosts, but, here, we just want to use a general setting for our one server. Additionally, the provisioning step will be provided with certain environment variables following the forms env.user, nvm.node_version, and so on. These variables will come into play when we define the common role, which will be to provision our Vagrant server with the programs necessary to build, clone, and deploy express-webhook. Finally, we assert that Ansible should run as an administrator (sudo) by default—this is necessary for the yum package manager on CentOS. We're now ready to define the common role. With Ansible, folder structures are important and are implied by the playbook. In our case, Ansible expects the role location (./provisioning, as defined in ansible.cfg) to contain the common folder (reflecting the common role given in the playbook), which itself must contain a tasks folder containing a main.yml file. These last two naming conventions are specific and required. The final step is creating the main.yml file in provisioning/common/tasks. First, we replicate the yum package loaders (see the file in your code bundle for the full list): ---- name: Install necessary OS programsyum: name={{ item }} state=installedwith_items:- autoconf- automake...- git Here, we see a few benefits of Ansible. A human-readable description of yum tasks is provided to a looping structure that will install every item in the list. Next, we run the nvm installer, which simply executes the auto-installer for nvm: - name: Install nvmsudo: noshell: "curl https://raw.githubusercontent.com/creationix/nvm/v{{ nvm.version }}/install.sh | bash" Note that, here, we're overriding the playbook's sudo setting. This can be done on a per-task basis, which gives us the freedom to move between different permission levels while provisioning. We are also able to execute shell commands while at the same time interpolating variables: - name: Update .bashrcsudo: nolineinfile: >dest="/home/{{ env.user }}/.bashrc"line="source /home/{{ env.user }}/.nvm/nvm.sh" Ansible provides extremely useful tools for file manipulation, and we will see here a very common one—updating the .bashrc file for a user. The lineinfile directive makes the addition of aliases, among other things, straightforward. The remainder of the commands follow a similar pattern to implement, in a structured way, the provisioning directives we need for our server. All the files you will need are in your code bundle in the vagrant/with_ansible folder. Once you have them installed, run vagrant up to see Ansible in action. One of the strengths of Ansible is the way it handles contexts. When you start your Vagrant build, you will notice that Ansible gathers facts, as shown in the following screenshot: Simply put, Ansible analyzes the context it is working in and only executes what is necessary to execute. If one of your tasks has already been run, the next time you try vagrant provision, that task will not run again. This is not true for shell scripts! 
In this way, editing playbooks and reprovisioning does not consume time redundantly changing what has already been changed. Ansible is a powerful tool that can be used for provisioning and much more complex deployment tasks. One of its great strengths is that it can run remotely—unlike most other tools, Ansible uses SSH to connect to remote servers and run operations. There is no need to install it on your production boxes. You are encouraged to browse the Ansible documentation at http://docs.ansible.com/index.html to learn more. Summary In this article, you learned how to deploy a local build into a production-ready environment and the powerful Git webhook tool was demonstrated as a way of creating a continuous integration environment. Resources for Article: Further resources on this subject: Node.js Fundamentals [Article] API with MongoDB and Node.js [Article] So, what is Node.js? [Article]

How to build a Remote-controlled TV with Node-Webkit

Roberto González
08 Jul 2015
14 min read
Node-webkit is one of the most promising technologies to come out in the last few years. It lets you ship a native desktop app for Windows, Mac, and Linux just using HTML, CSS, and some JavaScript. These are the exact same languages you use to build any web app. You basically get your very own Frameless Webkit to build your app, which is then supercharged with NodeJS, giving you access to some powerful libraries that are not available in a typical browser. As a demo, we are going to build a remote-controlled Youtube app. This involves creating a native app that displays YouTube videos on your computer, as well as a mobile client that will let you search for and select the videos you want to watch straight from your couch. You can download the finished project from https://github.com/Aerolab/youtube-tv. You need to follow the first part of this guide (Getting started) to set up the environment and then run run.sh (on Mac) or run.bat (on Windows) to start the app. Getting started First of all, you need to install Node.JS (a JavaScript platform), which you can download from http://nodejs.org/download/. The installer comes bundled with NPM (Node.JS Package Manager), which lets you install everything you need for this project. Since we are going to be building two apps (a desktop app and a mobile app), it’s better if we get the boring HTML+CSS part out of the way, so we can concentrate on the JavaScript part of the equation. Download the project files from https://github.com/Aerolab/youtube-tv/blob/master/assets/basics.zip and put them in a new folder. You can name the project’s folder youtube-tv  or whatever you want. The folder should look like this: - index.html // This is the starting point for our desktop app - css // Our desktop app styles - js // This is where the magic happens - remote // This is where the magic happens (Part 2) - libraries // FFMPEG libraries, which give you H.264 video support in Node-Webkit - player // Our youtube player - Gruntfile.js // Build scripts - run.bat // run.bat runs the app on Windows - run.sh // sh run.sh runs the app on Mac Now open the Terminal (on Mac or Linux) or a new command prompt (on Windows) right in that folder. Now we’ll install a couple of dependencies we need for this project, so type these commands to install node-gyp and grunt-cli. Each one will take a few seconds to download and install: On Mac or Linux: sudo npm install node-gyp -g sudo npm install grunt-cli -g  On Windows: npm install node-gyp -g npm install grunt-cli -g Leave the Terminal open. We’ll be using it again in a bit. All Node.JS apps start with a package.json file (our manifest), which holds most of the settings for your project, including which dependencies you are using. Go ahead and create your own package.json file (right inside the project folder) with the following contents. Feel free to change anything you like, such as the project name, the icon, or anything else. Check out the documentation at https://github.com/rogerwang/node-webkit/wiki/Manifest-format: { "//": "The // keys in package.json are comments.", "//": "Your project’s name. Go ahead and change it!", "name": "Remote", "//": "A simple description of what the app does.", "description": "An example of node-webkit", "//": "This is the first html the app will load. Just leave this this way", "main": "app://host/index.html", "//": "The version number. 
0.0.1 is a good start :D", "version": "0.0.1", "//": "This is used by Node-Webkit to set up your app.", "window": { "//": "The Window Title for the app", "title": "Remote", "//": "The Icon for the app", "icon": "css/images/icon.png", "//": "Do you want the File/Edit/Whatever toolbar?", "toolbar": false, "//": "Do you want a standard window around your app (a title bar and some borders)?", "frame": true, "//": "Can you resize the window?", "resizable": true}, "webkit": { "plugin": false, "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.36" }, "//": "These are the libraries we’ll be using:", "//": "Express is a web server, which will handle the files for the remote", "//": "Socket.io lets you handle events in real time, which we'll use with the remote as well.", "dependencies": { "express": "^4.9.5", "socket.io": "^1.1.0" }, "//": "And these are just task handlers to make things easier", "devDependencies": { "grunt": "^0.4.5", "grunt-contrib-copy": "^0.6.0", "grunt-node-webkit-builder": "^0.1.21" } } You’ll also find Gruntfile.js, which takes care of downloading all of the node-webkit assets and building the app once we are ready to ship. Feel free to take a look into it, but it’s mostly boilerplate code. Once you’ve set everything up, go back to the Terminal and install everything you need by typing: npm install grunt nodewebkitbuild You may run into some issues when doing this on Mac or Linux. In that case, try using sudo npm install and sudo grunt nodewebkitbuild. npm install installs all of the dependencies you mentioned in package.json, both the regular dependencies and the development ones, like grunt and grunt-nodewebkitbuild, which downloads the Windows and Mac version of node-webkit, setting them up so they can play videos, and building the app. Wait a bit for everything to install properly and we’re ready to get started. Note that if you are using Windows, you might get a scary error related to Visual C++ when running npm install. Just ignore it. Building the desktop app All web apps (or websites for that matter) start with an index.html file. We are going to be creating just that to get our app to run: <!DOCTYPE html><html> <head> <metacharset="utf-8"/> <title>Youtube TV</title> <linkhref='http://fonts.googleapis.com/css?family=Roboto:500,400'rel='stylesheet'type='text/css'/> <linkhref="css/normalize.css"rel="stylesheet"type="text/css"/> <linkhref="css/styles.css"rel="stylesheet"type="text/css"/> </head> <body> <divid="serverInfo"> <h1>Youtube TV</h1> </div> <divid="videoPlayer"> </div> <script src="js/jquery-1.11.1.min.js"></script> <script src="js/youtube.js"></script> <script src="js/app.js"></script> </body> </html> As you may have noticed, we are using three scripts for our app: jQuery (pretty well known at this point), a Youtube video player, and finally app.js, which contains our app's logic. Let’s dive into that! First of all, we need to create the basic elements for our remote control. The easiest way of doing this is to create a basic web server and serve a small web app that can search Youtube, select a video, and have some play/pause controls so we don’t have any good reasons to get up from the couch. Open js/app.js and type the following: // Show the Developer Tools. And yes, Node-Webkit has developer tools built in! 
Uncomment it to open it automatically//require('nw.gui').Window.get().showDevTools(); // Express is a web server, will will allow us to create a small web app with which to control the playervar express = require('express'); var app = express(); var server = require('http').Server(app); var io = require('socket.io')(server); // We'll be opening up our web server on Port 8080 (which doesn't require root privileges)// You can access this server at http://127.0.0.1:8080var serverPort =8080; server.listen(serverPort); // All the static files (css, js, html) for the remote will be served using Express.// These assets are in the /remote folderapp.use('/', express.static('remote')); With those 7 lines of code (not counting comments) we just got a neat web server working on port 8080. If you were paying attention to the code, you may have noticed that we required something called socket.io. This lets us use websockets with minimal effort, which means we can communicate with, from, and to our remote instantly. You can learn more about socket.io at http://socket.io/. Let’s set that up next in app.js: // Socket.io handles the communication between the remote and our app in real time, // so we can instantly send commands from a computer to our remote and backio.on('connection', function (socket) { // When a remote connects to the app, let it know immediately the current status of the video (play/pause)socket.emit('statusChange', Youtube.status); // This is what happens when we receive the watchVideo command (picking a video from the list)socket.on('watchVideo', function (video) { // video contains a bit of info about our video (id, title, thumbnail)// Order our Youtube Player to watch that video Youtube.watchVideo(video); }); // These are playback controls. They receive the “play” and “pause” events from the remotesocket.on('play', function () { Youtube.playVideo(); }); socket.on('pause', function () { Youtube.pauseVideo(); }); }); // Notify all the remotes when the playback status changes (play/pause)// This is done with io.emit, which sends the same message to all the remotesYoutube.onStatusChange =function(status) { io.emit('statusChange', status); }; That’s the desktop part done! In a few dozen lines of code we got a web server running at http://127.0.0.1:8080 that can receive commands from a remote to watch a specific video, as well as handling some basic playback controls (play and pause). We are also notifying the remotes of the status of the player as soon as they connect so they can update their UI with the correct buttons (if it’s playing, show the pause button and vice versa). Now we just need to build the remote. Building the remote control The server is just half of the equation. We also need to add the corresponding logic on the remote control, so it’s able to communicate with our app. 
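Before moving on to the remote, one note on the player side: the app.js code above delegates all playback to a Youtube object defined in js/youtube.js, which ships with the project download. As a rough idea of what such a wrapper can look like, here is a sketch built on the YouTube IFrame Player API (it assumes the IFrame API script has been added to index.html); the details are illustrative and differ from the actual file in the repository:

// js/youtube.js (illustrative sketch, not the repository's implementation)
var Youtube = {
  status: 'stop',
  player: null,
  onStatusChange: function () {},

  watchVideo: function (video) {
    // video.id is the Youtube video id selected on the remote
    if (this.player) {
      this.player.loadVideoById(video.id);
    }
  },
  playVideo: function () {
    if (this.player) { this.player.playVideo(); }
  },
  pauseVideo: function () {
    if (this.player) { this.player.pauseVideo(); }
  },
  setStatus: function (status) {
    this.status = status;
    this.onStatusChange(status);
  }
};

// The IFrame API calls this global once it has loaded
function onYouTubeIframeAPIReady() {
  Youtube.player = new YT.Player('videoPlayer', {
    width: '100%',
    height: '100%',
    events: {
      onStateChange: function (event) {
        if (event.data === YT.PlayerState.PLAYING) {
          Youtube.setStatus('play');
        } else if (event.data === YT.PlayerState.PAUSED) {
          Youtube.setStatus('pause');
        } else if (event.data === YT.PlayerState.ENDED) {
          Youtube.setStatus('stop');
        }
      }
    }
  });
}

Whatever the wrapper looks like internally, the only surface app.js relies on is status, watchVideo, playVideo, pauseVideo, and the onStatusChange callback.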
In remote/index.html, add the following HTML: <!DOCTYPE html><html> <head> <metacharset=“utf-8”/> <title>TV Remote</title> <metaname="viewport"content="width=device-width, initial-scale=1, maximum-scale=1"/> <linkrel="stylesheet"href="/css/normalize.css"/> <linkrel="stylesheet"href="/css/styles.css"/> </head> <body> <divclass="controls"> <divclass="search"> <inputid="searchQuery"type="search"value=""placeholder="Search on Youtube..."/> </div> <divclass="playback"> <buttonclass="play">&gt;</button> <buttonclass="pause">||</button> </div> </div> <divid="results"class="video-list"> </div> <divclass="__templates"style="display:none;"> <articleclass="video"> <figure><imgsrc=""alt=""/></figure> <divclass="info"> <h2></h2> </div> </article> </div> <script src="/socket.io/socket.io.js"></script> <script src="/js/jquery-1.11.1.min.js"></script> <script src="/js/search.js"></script> <script src="/js/remote.js"></script> </body> </html> Again, we have a few libraries: Socket.io is served automatically by our desktop app at /socket.io/socket.io.js, and it manages the communication with the server. jQuery is somehow always there, search.js manages the integration with the Youtube API (you can take a look if you want), and remote.js handles the logic for the remote. The remote itself is pretty simple. It can look for videos on Youtube, and when we click on a video it connects with the app, telling it to play the video with socket.emit. Let’s dive into remote/js/remote.js to make this thing work: // First of all, connect to the server (our desktop app)var socket = io.connect(); // Search youtube when the user stops typing. This gives us an automatic search.var searchTimeout =null; $('#searchQuery').on('keyup', function(event){ clearTimeout(searchTimeout); searchTimeout = setTimeout(function(){ searchYoutube($('#searchQuery').val()); }, 500); }); // When we click on a video, watch it on the App$('#results').on('click', '.video', function(event){ // Send an event to notify the server we want to watch this videosocket.emit('watchVideo', $(this).data()); }); // When the server tells us that the player changed status (play/pause), alter the playback controlssocket.on('statusChange', function(status){ if( status ==='play' ) { $('.playback .pause').show(); $('.playback .play').hide(); } elseif( status ==='pause'|| status ==='stop' ) { $('.playback .pause').hide(); $('.playback .play').show(); } }); // Notify the app when we hit the play button$('.playback .play').on('click', function(event){ socket.emit('play'); }); // Notify the app when we hit the pause button$('.playback .pause').on('click', function(event){ socket.emit('pause'); }); This is very similar to our server, except we are using socket.emit a lot more often to send commands back to our desktop app, telling it which videos to play and handle our basic play/pause controls. The only thing left to do is make the app run. Ready? Go to the terminal again and type: If you are on a Mac: sh run.sh If you are on Windows: run.bat If everything worked properly, you should be both seeing the app and if you open a web browser to http://127.0.0.1:8080 the remote client will open up. Search for a video, pick anything you like, and it’ll play in the app. This also works if you point any other device on the same network to your computer’s IP, which brings me to the next (and last) point. 
Finishing touches There is one small improvement we can make: print out the computer’s IP to make it easier to connect to the app from any other device on the same Wi-Fi network (like a smartphone). On js/app.js add the following code to find out the IP and update our UI so it’s the first thing we see when we open the app: // Find the local IPfunction getLocalIP(callback) { require('dns').lookup( require('os').hostname(), function (err, add, fam) { typeof callback =='function'? callback(add) :null; }); } // To make things easier, find out the machine's ip and communicate itgetLocalIP(function(ip){ $('#serverInfo h1').html('Go to<br/><strong>http://'+ip+':'+serverPort+'</strong><br/>to open the remote'); }); The next time you run the app, the first thing you’ll see is the IP for your computer, so you just need to type that URL in your smartphone to open the remote and control the player from any computer, tablet, or smartphone (as long as they are in the same Wi-Fi network). That's it! You can start expanding on this to improve the app: Why not open the app on a fullscreen by default? Why not get rid of the horrible default frame and create your own? You can actually designate any div as a window handle with CSS (using -webkit-app-region: drag), so you can drag the window by that div and create your own custom title bar. Summary While the app has a lot of interlocking parts, it's a good first project to find out what you can achieve with node-webkit in just a few minutes. I hope you enjoyed this post! About the author Roberto González is the co-founder of Aerolab, “an awesome place where we really push the barriers to create amazing, well-coded designs for the best digital products”. He can be reached at @robertcode.

File Sharing

Packt
08 Jul 2015
14 min read
In this article by Dan Ristic, author of the book Learning WebRTC, we will cover the following topics: Getting a file with File API Setting up our page Getting a reference to a file The real power of a data channel comes when combining it with other powerful technologies from a browser. By opening up the power to send data peer-to-peer and combining it with a File API, we could open up all new possibilities in your browser. This means you could add file sharing functionalities that are available to any user with an Internet connection. The application that we will build will be a simple one with the ability to share files between two peers. The basics of our application will be real-time, meaning that the two users have to be on the page at the same time to share a file. There will be a finite number of steps that both users will go through to transfer an entire file between them: User A will open the page and type a unique ID. User B will open the same page and type the same unique ID. The two users can then connect to each other using RTCPeerConnection. Once the connection is established, one user can select a file to share. The other user will be notified of the file that is being shared, where it will be transferred to their computer over the connection and they will download the file. The main thing we will focus on throughout the article is how to work with the data channel in new and exciting ways. We will be able to take the file data from the browser, break it down into pieces, and send it to the other user using only the RTCPeerConnection API. The interactivity that the API promotes will stand out in this article and can be used in a simple project. Getting a file with the File API One of the first things that we will cover is how to use the File API to get a file from the user's computer. There is a good chance you have interacted with the File API on a web page and have not even realized it yet! The API is usually denoted by the Browse or Choose File text located on an input field in the HTML page and often looks something similar to this: Although the API has been around for quite a while, the one you are probably familiar with is the original specification, dating back as far as 1995. This was the Form-based File Upload in HTML specification that focused on allowing a user to upload a file to a server using an HTML form. Before the days of the file input, application developers had to rely on third-party tools to request files of data from the user. This specification was proposed in order to make a standard way to upload files for a server to download, save, and interact with. The original standard focused entirely on interacting with a file via an HTML form, however, and did not detail any way to interact with a file via JavaScript. This was the origin of the File API. Fast-forward to the groundbreaking days of HTML5 and we now have a fully-fledged File API. The goal of the new specification was to open the doors to file manipulation for web applications, allowing them to interact with files similar to how a native-installed application would. This means providing access to not only a way for the user to upload a file, but also ways to read the file in different formats, manipulate the data of the file, and then ultimately do something with this data. Although there are many great features of the API, we are going to only focus on one small aspect of this API. This is the ability to get binary file data from the user by asking them to upload a file. 
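In isolation, that capability takes only a few lines. Assuming a page with a bare <input type="file" id="picker"> element (the id is made up for this example, separate from the app we are about to build), the following standalone snippet reads whichever file the user picks into an ArrayBuffer using FileReader:

document.querySelector('#picker').addEventListener('change', function () {
  var file = this.files[0];
  if (!file) {
    return;
  }

  var reader = new FileReader();
  reader.onload = function (event) {
    // event.target.result is an ArrayBuffer holding the raw bytes of the file
    console.log('Read ' + event.target.result.byteLength + ' bytes from ' + file.name);
  };
  reader.readAsArrayBuffer(file);
});

Reading the raw bytes like this is the starting point for the sending logic discussed at the end of this article.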
A typical application that works with files, such as Notepad on Windows, will work with file data in pretty much the same way. It asks the user to open a file in which it will read the binary data from the file and display the characters on the screen. The File API gives us access to the same binary data that any other application would use in the browser. This is the great thing about working with the File API: it works in most browsers from a HTML page; similar to the ones we have been building for our WebRTC demos. To start building our application, we will put together another simple web page. This will look similar to the last ones, and should be hosted with a static file server as done in the previous examples. By the end of the article, you will be a professional single page application builder! Now let's take a look at the following HTML code that demonstrates file sharing: <!DOCTYPE html> <html lang="en"> <head>    <meta charset="utf-8" />      <title>Learning WebRTC - Article: File Sharing</title>      <style>      body {        background-color: #404040;        margin-top: 15px;        font-family: sans-serif;        color: white;      }        .thumb {        height: 75px;        border: 1px solid #000;        margin: 10px 5px 0 0;      }        .page {        position: relative;        display: block;        margin: 0 auto;        width: 500px;        height: 500px;      }        #byte_content {        margin: 5px 0;        max-height: 100px;        overflow-y: auto;        overflow-x: hidden;      }        #byte_range {        margin-top: 5px;      }    </style> </head> <body>    <div id="login-page" class="page">      <h2>Login As</h2>      <input type="text" id="username" />      <button id="login">Login</button>    </div>      <div id="share-page" class="page">      <h2>File Sharing</h2>        <input type="text" id="their-username" />      <button id="connect">Connect</button>      <div id="ready">Ready!</div>        <br />      <br />           <input type="file" id="files" name="file" /> Read bytes:      <button id="send">Send</button>    </div>      <script src="client.js"></script> </body> </html> The page should be fairly recognizable at this point. We will use the same page showing and hiding via CSS as done earlier. One of the main differences is the appearance of the file input, which we will utilize to have the user upload a file to the page. I even picked a different background color this time to spice things up. Setting up our page Create a new folder for our file sharing application and add the HTML code shown in the preceding section. You will also need all the steps from our JavaScript file to log in two users, create a WebRTC peer connection, and create a data channel between them. 
Copy the following code into your JavaScript file to get the page set up: var name, connectedUser;   var connection = new WebSocket('ws://localhost:8888');   connection.onopen = function () { console.log("Connected"); };   // Handle all messages through this callback connection.onmessage = function (message) { console.log("Got message", message.data);   var data = JSON.parse(message.data);   switch(data.type) {    case "login":      onLogin(data.success);      break;    case "offer":      onOffer(data.offer, data.name);      break;    case "answer":      onAnswer(data.answer);      break;    case "candidate":      onCandidate(data.candidate);      break;    case "leave":      onLeave();      break;    default:      break; } };   connection.onerror = function (err) { console.log("Got error", err); };   // Alias for sending messages in JSON format function send(message) { if (connectedUser) {    message.name = connectedUser; }   connection.send(JSON.stringify(message)); };   var loginPage = document.querySelector('#login-page'), usernameInput = document.querySelector('#username'), loginButton = document.querySelector('#login'), theirUsernameInput = document.querySelector('#their- username'), connectButton = document.querySelector('#connect'), sharePage = document.querySelector('#share-page'), sendButton = document.querySelector('#send'), readyText = document.querySelector('#ready'), statusText = document.querySelector('#status');   sharePage.style.display = "none"; readyText.style.display = "none";   // Login when the user clicks the button loginButton.addEventListener("click", function (event) { name = usernameInput.value;   if (name.length > 0) {    send({      type: "login",      name: name    }); } });   function onLogin(success) { if (success === false) {    alert("Login unsuccessful, please try a different name."); } else {    loginPage.style.display = "none";    sharePage.style.display = "block";      // Get the plumbing ready for a call    startConnection(); } };   var yourConnection, connectedUser, dataChannel, currentFile, currentFileSize, currentFileMeta;   function startConnection() { if (hasRTCPeerConnection()) {    setupPeerConnection(); } else {    alert("Sorry, your browser does not support WebRTC."); } }   function setupPeerConnection() { var configuration = {    "iceServers": [{ "url": "stun:stun.1.google.com:19302 " }] }; yourConnection = new RTCPeerConnection(configuration, {optional: []});   // Setup ice handling yourConnection.onicecandidate = function (event) {    if (event.candidate) {      send({        type: "candidate",       candidate: event.candidate      });    } };   openDataChannel(); }   function openDataChannel() { var dataChannelOptions = {    ordered: true,    reliable: true,    negotiated: true,    id: "myChannel" }; dataChannel = yourConnection.createDataChannel("myLabel", dataChannelOptions);   dataChannel.onerror = function (error) {    console.log("Data Channel Error:", error); };   dataChannel.onmessage = function (event) {    // File receive code will go here };   dataChannel.onopen = function () {    readyText.style.display = "inline-block"; };   dataChannel.onclose = function () {    readyText.style.display = "none"; }; }   function hasUserMedia() { navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia; return !!navigator.getUserMedia; }   function hasRTCPeerConnection() { window.RTCPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || 
window.mozRTCPeerConnection; window.RTCSessionDescription = window.RTCSessionDescription || window.webkitRTCSessionDescription || window.mozRTCSessionDescription; window.RTCIceCandidate = window.RTCIceCandidate || window.webkitRTCIceCandidate || window.mozRTCIceCandidate; return !!window.RTCPeerConnection; }   function hasFileApi() { return window.File && window.FileReader && window.FileList && window.Blob; }   connectButton.addEventListener("click", function () { var theirUsername = theirUsernameInput.value;   if (theirUsername.length > 0) {    startPeerConnection(theirUsername); } });   function startPeerConnection(user) { connectedUser = user;   // Begin the offer yourConnection.createOffer(function (offer) {    send({      type: "offer",      offer: offer    });    yourConnection.setLocalDescription(offer); }, function (error) {    alert("An error has occurred."); }); };   function onOffer(offer, name) { connectedUser = name; yourConnection.setRemoteDescription(new RTCSessionDescription(offer));   yourConnection.createAnswer(function (answer) {    yourConnection.setLocalDescription(answer);      send({      type: "answer",      answer: answer    }); }, function (error) {    alert("An error has occurred"); }); };   function onAnswer(answer) { yourConnection.setRemoteDescription(new RTCSessionDescription(answer)); };   function onCandidate(candidate) { yourConnection.addIceCandidate(new RTCIceCandidate(candidate)); };   function onLeave() { connectedUser = null; yourConnection.close(); yourConnection.onicecandidate = null; setupPeerConnection(); }; We set up references to our elements on the screen as well as get the peer connection ready to be processed. When the user decides to log in, we send a login message to the server. The server will return with a success message telling the user they are logged in. From here, we allow the user to connect to another WebRTC user who is given their username. This sends offer and response, connecting the two users together through the peer connection. Once the peer connection is created, we connect the users through a data channel so that we can send arbitrary data across. Hopefully, this is pretty straightforward and you are able to get this code up and running in no time. It should all be familiar to you by now. This is the last time we are going to refer to this code, so get comfortable with it before moving on! Getting a reference to a file Now that we have a simple page up and running, we can start working on the file sharing part of the application. The first thing the user needs to do is select a file from their computer's filesystem. This is easily taken care of already by the input element on the page. The browser will allow the user to select a file from their computer and then save a reference to that file in the browser for later use. When the user presses the Send button, we want to get a reference to the file that the user has selected. To do this, you need to add an event listener, as shown in the following code: sendButton.addEventListener("click", function (event) { var files = document.querySelector('#files').files;   if (files.length > 0) {    dataChannelSend({      type: "start",      data: files[0]    });      sendFile(files[0]); } }); You might be surprised at how simple the code is to get this far! This is the amazing thing about working within a browser. Much of the hard work has already been done for you. Here, we will get a reference to our input element and the files that it has selected. 
The input element supports both multiple and single selection of files, but in this example we will only work with one file at a time. We then make sure we have a file to work with, tell the other user that we want to start sending data, and then call our sendFile function, which we will implement later in this article. Now, you might think that the object we get back will be in the form of the entire data inside of our file. What we actually get back from the input element is an object representing metadata about the file itself. Let's take a look at this metadata: { lastModified: 1364868324000, lastModifiedDate: "2013-04-02T02:05:24.000Z", name: "example.gif", size: 1745559, type: "image/gif" } This will give us the information we need to tell the other user that we want to start sending a file with the example.gif name. It will also give a few other important details, such as the type of file we are sending and when it has been modified. The next step is to read the file's data and send it through the data channel. This is no easy task, however, and we will require some special logic to do so. Summary In this article we covered the basics of using the File API and retrieving a file from a user's computer. The article also discusses the page setup for the application using JavaScript and getting a reference to a file. Resources for Article: Further resources on this subject: WebRTC with SIP and IMS [article] Using the WebRTC Data API [article] Applications of WebRTC [article]

Deployment Preparations

Packt
08 Jul 2015
23 min read
In this article by Jurie-Jan Botha, author of the book Grunt Cookbook, we will cover the following recipes: Minifying HTML, Minifying CSS, Optimizing images, Linting JavaScript code, Uglifying JavaScript code, and Setting up RequireJS.

Once our web application is built and its stability ensured, we can start preparing it for deployment to its intended market. This will mainly involve the optimization of the assets that make up the application. Optimization in this context mostly refers to compression of one kind or another, some of which might lead to performance increases too. The focus on compression is primarily due to the fact that the smaller the asset, the faster it can be transferred from where it is hosted to a user's web browser. This leads to a much better user experience, and can sometimes be essential to the functioning of an application.

Minifying HTML

In this recipe, we make use of the contrib-htmlmin (0.3.0) plugin to decrease the size of some HTML documents by minifying them.

Getting ready

In this example, we'll work with a basic project structure.

How to do it...

The following steps take us through creating a sample HTML document and configuring a task that minifies it:

We'll start by installing the package that contains the contrib-htmlmin plugin. Next, we'll create a simple HTML document called index.html in the src directory, which we'd like to minify, and add the following content to it:

<html>
<head>
   <title>Test Page</title>
</head>
<body>
   <!-- This is a comment! -->
   <h1>This is a test page.</h1>
</body>
</html>

Now, we'll add the following htmlmin task to our configuration, which indicates that we'd like to have the white space and comments removed from the src/index.html file, and that we'd like the result to be saved in the dist/index.html file:

htmlmin: {
  dist: {
    src: 'src/index.html',
    dest: 'dist/index.html',
    options: {
      removeComments: true,
      collapseWhitespace: true
    }
  }
}

The removeComments and collapseWhitespace options are used as examples here, as using the default htmlmin task will have no effect. Other minification options can be found at the following URL: https://github.com/kangax/html-minifier#options-quick-reference

We can now run the task using the grunt htmlmin command, which should produce output similar to the following:

Running "htmlmin:dist" (htmlmin) task
Minified dist/index.html 147 B → 92 B

If we now take a look at the dist/index.html file, we will see that all the white space and comments have been removed:

<html><head><title>Test Page</title></head><body><h1>This is a test page.</h1></body></html>

Minifying CSS

In this recipe, we'll make use of the contrib-cssmin (0.10.0) plugin to decrease the size of some CSS documents by minifying them.

Getting ready

In this example, we'll work with a basic project structure.

How to do it...

The following steps take us through creating a sample CSS document and configuring a task that minifies it. We'll start by installing the package that contains the contrib-cssmin plugin. Then, we'll create a simple CSS document called style.css in the src directory, which we'd like to minify, and provide it with the following contents:

body {
  /* Average body style */
  background-color: #ffffff;
  color: #000000; /*!
Black (Special) */ } Now, we'll add the following cssmin task to our configuration, which indicates that we'd like to have the src/style.css file compressed, and have the result saved to the dist/style.min.css file: cssmin: { dist: {    src: 'src/style.css',    dest: 'dist/style.min.css' } } We can now run the task using the grunt cssmin command, which should produce the following output: Running "cssmin:dist" (cssmin) taskFile dist/style.css created: 55 B ? 38 B If we take a look at the dist/style.min.css file that was produced, we will see that it has the compressed contents of the original src/style.css file: body{background-color:#fff;color:#000;/*! Black (Special) */} There's more... The cssmin task provides us with several useful options that can be used in conjunction with its basic compression feature. We'll look at prefixing a banner, removing special comments, and reporting gzipped results. Prefixing a banner In the case that we'd like to automatically include some information about the compressed result in the resulting CSS file, we can do so in a banner. A banner can be prepended to the result by supplying the desired banner content to the banner option, as shown in the following example: cssmin: { dist: {    src: 'src/style.css',    dest: 'dist/style.min.css',    options: {      banner: '/* Minified version of style.css */'    } } } Removing special comments Comments that should not be removed by the minification process are called special comments and can be indicated using the "/*! comment */" markers. By default, the cssmin task will leave all special comments untouched, but we can alter this behavior by making use of the keepSpecialComments option. The keepSpecialComments option can be set to either the *, 1, or 0 value. The * value is the default and indicates that all special comments should be kept, 1 indicates that only the first comment that is found should be kept, and 0 indicates that none of them should be kept. The following configuration will ensure that all comments are removed from our minified result: cssmin: { dist: {    src: 'src/style.css',    dest: 'dist/style.min.css',    options: {      keepSpecialComments: 0    } } } Reporting on gzipped results Reporting is useful to see exactly how well the cssmin task has compressed our CSS files. By default, the size of the targeted file and minified result will be displayed, but if we'd also like to see the gzipped size of the result, we can set the report option to gzip, as shown in the following example: cssmin: { dist: {    src: 'src/main.css',    dest: 'dist/main.css',    options: {      report: 'gzip'    } } } Optimizing images In this recipe, we'll make use of the contrib-imagemin (0.9.4) plugin to decrease the size of images by compressing them as much as possible without compromising on their quality. This plugin also provides a plugin framework of its own, which is discussed at the end of this recipe. Getting ready In this example, we'll work with the basic project structure. How to do it... The following steps take us through configuring a task that will compress an image for our project. We'll start by installing the package that contains the contrib-imagemin plugin. Next, we can ensure that we have an image called image.jpg in the src directory on which we'd like to perform optimizations. 
Now, we'll add the following imagemin task to our configuration and indicate that we'd like to have the src/image.jpg file optimized, and have the result saved to the dist/image.jpg file: imagemin: { dist: {    src: 'src/image.jpg',    dest: 'dist/image.jpg' } } We can then run the task using the grunt imagemin command, which should produce the following output: Running "imagemin:dist" (imagemin) task Minified 1 image (saved 13.36 kB) If we now take a look at the dist/image.jpg file, we will see that its size has decreased without any impact on the quality. There's more... The imagemin task provides us with several options that allow us to tweak its optimization features. We'll look at how to adjust the PNG compression level, disable the progressive JPEG generation, disable the interlaced GIF generation, specify SVGO plugins to be used, and use the imagemin plugin framework. Adjusting the PNG compression level The compression of a PNG image can be increased by running the compression algorithm on it multiple times. By default, the compression algorithm is run 16 times. This number can be changed by providing a number from 0 to 7 to the optimizationLevel option. The 0 value means that the compression is effectively disabled and 7 indicates that the algorithm should run 240 times. In the following configuration we set the compression level to its maximum: imagemin: { dist: {    src: 'src/image.png',    dest: 'dist/image.png',    options: {      optimizationLevel: 7    } } } Disabling the progressive JPEG generation Progressive JPEGs are compressed in multiple passes, which allows a low-quality version of them to quickly become visible and increase in quality as the rest of the image is received. This is especially helpful when displaying images over a slower connection. By default, the imagemin plugin will generate JPEG images in the progressive format, but this behavior can be disabled by setting the progressive option to false, as shown in the following example: imagemin: { dist: {    src: 'src/image.jpg',    dest: 'dist/image.jpg',    options: {      progressive: false    } } } Disabling the interlaced GIF generation An interlaced GIF is the equivalent of a progressive JPEG in that it allows the contained image to be displayed at a lower resolution before it has been fully downloaded, and increases in quality as the rest of the image is received. By default, the imagemin plugin will generate GIF images in the interlaced format, but this behavior can be disabled by setting the interlaced option to false, as shown in the following example: imagemin: { dist: {    src: 'src/image.gif',    dest: 'dist/image.gif',    options: {      interlaced: false    } } } Specifying SVGO plugins to be used When optimizing SVG images, the SVGO library is used by default. This allows us to specify the use of various plugins provided by the SVGO library that each performs a specific function on the targeted files. Refer to the following URL for more detailed instructions on how to use the svgo plugins options and the SVGO library: https://github.com/sindresorhus/grunt-svgmin#available-optionsplugins Most of the plugins in the library are enabled by default, but if we'd like to specifically indicate which of these should be used, we can do so using the svgoPlugins option. Here, we can provide an array of objects, where each contain a property with the name of the plugin to be affected, followed by a true or false value to indicate whether it should be activated. 
The following configuration disables three of the default plugins: imagemin: { dist: {    src: 'src/image.svg',    dest: 'dist/image.svg',    options: {      svgoPlugins: [        {removeViewBox:false},        {removeUselessStrokeAndFill:false},        {removeEmptyAttrs:false}      ]    } } } Using the 'imagemin' plugin framework In order to provide support for the various image optimization projects, the imagemin plugin has a plugin framework of its own that allows developers to easily create an extension that makes use of the tool they require. You can get a list of the available plugin modules for the imagemin plugin's framework at the following URL: https://www.npmjs.com/browse/keyword/imageminplugin The following steps will take us through installing and making use of the mozjpeg plugin to compress an image in our project. These steps start where the main recipe takes off. We'll start by installing the imagemin-mozjpeg package using the npm install imagemin-mozjpeg command, which should produce the following output: [email protected] node_modules/imagemin-mozjpeg With the package installed, we need to import it into our configuration file, so that we can make use of it in our task configuration. We do this by adding the following line at the top of our Gruntfile.js file: var mozjpeg = require('imagemin-mozjpeg'); With the plugin installed and imported, we can now change the configuration of our imagemin task by adding the use option and providing it with the initialized plugin: imagemin: { dist: {    src: 'src/image.jpg',    dest: 'dist/image.jpg',    options: {      use: [mozjpeg()]    } } } Finally, we can test our setup by running the task using the grunt imagemin command. This should produce an output similar to the following: Running "imagemin:dist" (imagemin) task Minified 1 image (saved 9.88 kB) Linting JavaScript code In this recipe, we'll make use of the contrib-jshint (0.11.1) plugin to detect errors and potential problems in our JavaScript code. It is also commonly used to enforce code conventions within a team or project. As can be derived from its name, it's basically a Grunt adaptation for the JSHint tool. Getting ready In this example, we'll work with the basic project structure. How to do it... The following steps take us through creating a sample JavaScript file and configuring a task that will scan and analyze it using the JSHint tool. We'll start by installing the package that contains the contrib-jshint plugin. Next, we'll create a sample JavaScript file called main.js in the src directory, and add the following content in it: sample = 'abc'; console.log(sample); With our sample file ready, we can now add the following jshint task to our configuration. We'll configure this task to target the sample file and also add a basic option that we require for this example: jshint: { main: {    options: {      undef: true    },    src: ['src/main.js'] } } The undef option is a standard JSHint option used specifically for this example and is not required for this plugin to function. Specifying this option indicates that we'd like to have errors raised for variables that are used without being explicitly defined. We can now run the task using the grunt jshint command, which should produce output informing us of the problems found in our sample file: Running "jshint:main" (jshint) task      src/main.js      1 |sample = 'abc';          ^ 'sample' is not defined.      2 |console.log(sample);          ^ 'console' is not defined.      
2 |console.log(sample);                      ^ 'sample' is not defined.   >> 3 errors in 1 file There's more... The jshint task provides us with several options that allow us to change its general behavior, in addition to how it analyzes the targeted code. We'll look at how to specify standard JSHint options, specify globally defined variables, send reported output to a file, and prevent task failure on JSHint errors. Specifying standard JSHint options The contrib-jshint plugin provides a simple way to pass all the standard JSHint options from the task's options object to the underlying JSHint tool. A list of all the options provided by the JSHint tool can be found at the following URL: http://jshint.com/docs/options/ The following example adds the curly option to the task we created in our main recipe to enforce the use of curly braces wherever they are appropriate: jshint: { main: {    options: {      undef: true,      curly: true    },    src: ['src/main.js'] } } Specifying globally defined variables Making use of globally defined variables is quite common when working with JavaScript, which is where the globals option comes in handy. Using this option, we can define a set of global values that we'll use in the targeted code, so that errors aren't raised when JSHint encounters them. In the following example, we indicate that the console variable should be treated as a global, and not raise errors when encountered: jshint: { main: {    options: {      undef: true,      globals: {        console: true      }    },    src: ['src/main.js'] } } Sending reported output to a file If we'd like to store the resulting output from our JSHint analysis, we can do so by specifying a path to a file that should receive it using the reporterOutput option, as shown in the following example: jshint: { main: {    options: {      undef: true,      reporterOutput: 'report.dat'    },    src: ['src/main.js'] } } Preventing task failure on JSHint errors The default behavior for the jshint task is to exit the running Grunt process once a JSHint error is encountered in any of the targeted files. This behavior becomes especially undesirable if you'd like to keep watching files for changes, even when an error has been raised. In the following example, we indicate that we'd like to keep the process running when errors are encountered by giving the force option a true value: jshint: { main: {    options: {      undef: true,      force: true    },    src: ['src/main.js'] } } Uglifying JavaScript Code In this recipe, we'll make use of the contrib-uglify (0.8.0) plugin to compress and mangle some files containing JavaScript code. For the most part, the process of uglifying just removes all the unnecessary characters and shortens variable names in a source code file. This has the potential to dramatically reduce the size of the file, slightly increase performance, and make the inner workings of your publicly available code a little more obscure. Getting ready In this example, we'll work with the basic project structure. How to do it... The following steps take us through creating a sample JavaScript file and configuring a task that will uglify it. We'll start by installing the package that contains the contrib-uglify plugin. 
Then, we can create a sample JavaScript file called main.js in the src directory, which we'd like to uglify, and provide it with the following contents: var main = function () { var one = 'Hello' + ' '; var two = 'World';   var result = one + two;   console.log(result); }; With our sample file ready, we can now add the following uglify task to our configuration, indicating the sample file as the target and providing a destination output file: uglify: { main: {    src: 'src/main.js',    dest: 'dist/main.js' } } We can now run the task using the grunt uglify command, which should produce output similar to the following: Running "uglify:main" (uglify) task >> 1 file created. If we now take a look at the resulting dist/main.js file, we should see that it contains the uglified contents of the original src/main.js file. There's more... The uglify task provides us with several options that allow us to change its general behavior and see how it uglifies the targeted code. We'll look at specifying standard UglifyJS options, generating source maps, and wrapping generated code in an enclosure. Specifying standard UglifyJS options The underlying UglifyJS tool can provide a set of options for each of its separate functional parts. These parts are the mangler, compressor, and beautifier. The contrib-plugin allows passing options to each of these parts using the mangle, compress, and beautify options. The available options for each of the mangler, compressor, and beautifier parts can be found at each of following URLs (listed in the order mentioned): https://github.com/mishoo/UglifyJS2#mangler-options https://github.com/mishoo/UglifyJS2#compressor-options https://github.com/mishoo/UglifyJS2#beautifier-options The following example alters the configuration of the main recipe to provide a single option to each of these parts: uglify: { main: {    src: 'src/main.js',    dest: 'dist/main.js',    options: {      mangle: {        toplevel: true      },      compress: {        evaluate: false      },      beautify: {        semicolons: false      }    } } } Generating source maps As code gets mangled and compressed, it becomes effectively unreadable to humans, and therefore, nearly impossible to debug. For this reason, we are provided with the option of generating a source map when uglifying our code. The following example makes use of the sourceMap option to indicate that we'd like to have a source map generated along with our uglified code: uglify: { main: {    src: 'src/main.js',    dest: 'dist/main.js',    options: {      sourceMap: true    } } } Running the altered task will now, in addition to the dist/main.js file with our uglified source, generate a source map file called main.js.map in the same directory as the uglified file. Wrapping generated code in an enclosure When building your own JavaScript code modules, it's usually a good idea to have them wrapped in a wrapper function to ensure that you don't pollute the global scope with variables that you won't be using outside of the module itself. For this purpose, we can use the wrap option to indicate that we'd like to have the resulting uglified code wrapped in a wrapper function, as shown in the following example: uglify: { main: {    src: 'src/main.js',    dest: 'dist/main.js',    options: {      wrap: true    } } } If we now take a look at the result dist/main.js file, we should see that all the uglified contents of the original file are now contained within a wrapper function. 
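Before moving on to RequireJS, it may help to see how the tasks configured in the preceding recipes plug into a single Gruntfile. The following is a minimal sketch of a typical setup, not an example taken from the book; the grunt-contrib-* package names and the build alias are the usual conventions for these plugins, and the per-task configurations are the ones shown in the recipes above.

// A minimal Gruntfile.js sketch (assumed layout, not from the book) that
// wires the preceding recipes together behind a single "build" alias.
module.exports = function (grunt) {
  grunt.initConfig({
    htmlmin: { /* configuration from the Minifying HTML recipe */ },
    cssmin: { /* configuration from the Minifying CSS recipe */ },
    imagemin: { /* configuration from the Optimizing images recipe */ },
    jshint: { /* configuration from the Linting JavaScript code recipe */ },
    uglify: { /* configuration from the Uglifying JavaScript code recipe */ }
  });

  // Each plugin package must be loaded before its task can be run.
  grunt.loadNpmTasks('grunt-contrib-htmlmin');
  grunt.loadNpmTasks('grunt-contrib-cssmin');
  grunt.loadNpmTasks('grunt-contrib-imagemin');
  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // Running "grunt build" now performs the whole deployment preparation in one command.
  grunt.registerTask('build', ['jshint', 'htmlmin', 'cssmin', 'imagemin', 'uglify']);
};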
Setting up RequireJS In this recipe, we'll make use of the contrib-requirejs (0.4.4) plugin to package the modularized source code of our web application into a single file. For the most part, this plugin just provides a wrapper for the RequireJS tool. RequireJS provides a framework to modularize JavaScript source code and consume those modules in an orderly fashion. It also allows packaging an entire application into one file and importing only the modules that are required while keeping the module structure intact. Getting ready In this example, we'll work with the basic project structure. How to do it... The following steps take us through creating some files for a sample application and setting up a task that bundles them into one file. We'll start by installing the package that contains the contrib-requirejs plugin. First, we'll need a file that will contain our RequireJS configuration. Let's create a file called config.js in the src directory and add the following content in it: require.config({ baseUrl: 'app' }); Secondly, we'll create a sample module that we'd like to use in our application. Let's create a file called sample.js in the src/app directory and add the following content in it: define(function (require) { return function () {    console.log('Sample Module'); } }); Lastly, we'll need a file that will contain the main entry point for our application, and also makes use of our sample module. Let's create a file called main.js in the src/app directory and add the following content in it: require(['sample'], function (sample) { sample(); }); Now that we've got all the necessary files required for our sample application, we can setup a requirejs task that will bundle it all into one file: requirejs: { app: {    options: {      mainConfigFile: 'src/config.js',      name: 'main',      out: 'www/js/app.js'    } } } The mainConfigFile option points out the configuration file that will determine the behavior of RequireJS. The name option indicates the name of the module that contains the application entry point. In the case of this example, our application entry point is contained in the app/main.js file, and app is the base directory of our application in the src/config.js file. This translates the app/main.js filename into the main module name. The out option is used to indicate the file that should receive the result of the bundled application. We can now run the task using the grunt requirejs command, which should produce output similar to the following: Running "requirejs:app" (requirejs) task We should now have a file named app.js in the www/js directory that contains our entire sample application. There's more... The requirejs task provides us with all the underlying options provided by the RequireJS tool. We'll look at how to use these exposed options and generate a source map. Using RequireJS optimizer options The RequireJS optimizer is quite an intricate tool, and therefore, provides a large number of options to tweak its behavior. The contrib-requirejs plugin allows us to easily set any of these options by just specifying them as options of the plugin itself. 
A list of all the available configuration options for the RequireJS build system can be found in the example configuration file at the following URL: https://github.com/jrburke/r.js/blob/master/build/example.build.js The following example indicates that the UglifyJS2 optimizer should be used instead of the default UglifyJS optimizer by using the optimize option: requirejs: { app: {    options: {      mainConfigFile: 'src/config.js',      name: 'main',      out: 'www/js/app.js',      optimize: 'uglify2'    } } } Generating a source map When the source code is bundled into one file, it becomes somewhat harder to debug, as you now have to trawl through miles of code to get to the point you're actually interested in. A source map can help us with this issue by relating the resulting bundled file to the modularized structure it is derived from. Simply put, with a source map, our debugger will display the separate files we had before, even though we're actually using the bundled file. The following example makes use of the generateSourceMaps option to indicate that we'd like to generate a source map along with the resulting file: requirejs: { app: {    options: {      mainConfigFile: 'src/config.js',      name: 'main',      out: 'www/js/app.js',      optimize: 'uglify2',      preserveLicenseComments: false,      generateSourceMaps: true    } } } In order to use the generateSourceMaps option, we have to indicate that UglifyJS2 is to be used for optimization, by setting the optimize option to uglify2, and that license comments should not be preserved, by setting the preserveLicenseComments option to false. Summary This article covered the minification of HTML and CSS, the optimization of images, ensuring the quality of our JavaScript code, compressing it, and packaging it all together into one source file. Resources for Article: Further resources on this subject: Grunt in Action [article] So, what is Node.js? [article] Exploring streams [article]

Man, Do I Like Templates!

Packt
07 Jul 2015
22 min read
In this article by Italo Maia, author of the book Building Web Applications with Flask, we will discuss what Jinja2 is, and how Flask uses Jinja2 to implement the View layer and awe you. Be prepared! (For more resources related to this topic, see here.) What is Jinja2 and how is it coupled with Flask? Jinja2 is a library found at http://jinja.pocoo.org/; you can use it to produce formatted text with bundled logic. Unlike the Python format function, which only allows you to replace markup with variable content, you can have a control structure, such as a for loop, inside a template string and use Jinja2 to parse it. Let's consider this example: from jinja2 import Template x = """ <p>Uncle Scrooge nephews</p> <ul> {% for i in my_list %} <li>{{ i }}</li> {% endfor %} </ul> """ template = Template(x) # output is an unicode string print template.render(my_list=['Huey', 'Dewey', 'Louie']) In the preceding code, we have a very simple example where we create a template string with a for loop control structure ("for tag", for short) that iterates over a list variable called my_list and prints the element inside a "li HTML tag" using curly braces {{ }} notation. Notice that you could call render in the template instance as many times as needed with different key-value arguments, also called the template context. A context variable may have any valid Python variable name—that is, anything in the format given by the regular expression [a-zA-Z_][a-zA-Z0-9_]*. For a full overview on regular expressions (Regex for short) with Python, visit https://docs.python.org/2/library/re.html. Also, take a look at this nice online tool for Regex testing http://pythex.org/. A more elaborate example would make use of an environment class instance, which is a central, configurable, extensible class that may be used to load templates in a more organized way. Do you follow where we are going here? This is the basic principle behind Jinja2 and Flask: it prepares an environment for you, with a few responsive defaults, and gets your wheels in motion. What can you do with Jinja2? Jinja2 is pretty slick. You can use it with template files or strings; you can use it to create formatted text, such as HTML, XML, Markdown, and e-mail content; you can put together templates, reuse templates, and extend templates; you can even use extensions with it. The possibilities are countless, and combined with nice debugging features, auto-escaping, and full unicode support. Auto-escaping is a Jinja2 configuration where everything you print in a template is interpreted as plain text, if not explicitly requested otherwise. Imagine a variable x has its value set to <b>b</b>. If auto-escaping is enabled, {{ x }} in a template would print the string as given. If auto-escaping is off, which is the Jinja2 default (Flask's default is on), the resulting text would be b. Let's understand a few concepts before covering how Jinja2 allows us to do our coding. First, we have the previously mentioned curly braces. 
Double curly braces are a delimiter that allows you to evaluate a variable or function from the provided context and print it into the template: from jinja2 import Template # create the template t = Template("{{ variable }}") # – Built-in Types – t.render(variable='hello you') >> u"hello you" t.render(variable=100) >> u"100" # you can evaluate custom classes instances class A(object): def __str__(self):    return "__str__" def __unicode__(self):    return u"__unicode__" def __repr__(self):    return u"__repr__" # – Custom Objects Evaluation – # __unicode__ has the highest precedence in evaluation # followed by __str__ and __repr__ t.render(variable=A()) >> u"__unicode__" In the preceding example, we see how to use curly braces to evaluate variables in your template. First, we evaluate a string and then an integer. Both result in a unicode string. If we evaluate a class of our own, we must make sure there is a __unicode__ method defined, as it is called during the evaluation. If a __unicode__ method is not defined, the evaluation falls back to __str__ and __repr__, sequentially. This is easy. Furthermore, what if we want to evaluate a function? Well, just call it: from jinja2 import Template # create the template t = Template("{{ fnc() }}") t.render(fnc=lambda: 10) >> u"10" # evaluating a function with argument t = Template("{{ fnc(x) }}") t.render(fnc=lambda v: v, x='20') >> u"20" t = Template("{{ fnc(v=30) }}") t.render(fnc=lambda v: v) >> u"30" To output the result of a function in a template, just call the function as any regular Python function. The function return value will be evaluated normally. If you're familiar with Django, you might notice a slight difference here. In Django, you do not need the parentheses to call a function, or even pass arguments to it. In Flask, the parentheses are always needed if you want the function return evaluated. The following two examples show the difference between Jinja2 and Django function call in a template: {# flask syntax #} {{ some_function() }}   {# django syntax #} {{ some_function }} You can also evaluate Python math operations. Take a look: from jinja2 import Template # no context provided / needed Template("{{ 3 + 3 }}").render() >> u"6" Template("{{ 3 - 3 }}").render() >> u"0" Template("{{ 3 * 3 }}").render() >> u"9" Template("{{ 3 / 3 }}").render() >> u"1" Other math operators will also work. You may use the curly braces delimiter to access and evaluate lists and dictionaries: from jinja2 import Template Template("{{ my_list[0] }}").render(my_list=[1, 2, 3]) >> u'1' Template("{{ my_list['foo'] }}").render(my_list={'foo': 'bar'}) >> u'bar' # and here's some magic Template("{{ my_list.foo }}").render(my_list={'foo': 'bar'}) >> u'bar' To access a list or dictionary value, just use normal plain Python notation. With dictionaries, you can also access a key value using variable access notation, which is pretty neat. Besides the curly braces delimiter, Jinja2 also has the curly braces/percentage delimiter, which uses the notation {% stmt %} and is used to execute statements, which may be a control statement or not. Its usage depends on the statement, where control statements have the following notation: {% stmt %} {% endstmt %} The first tag has the statement name, while the second is the closing tag, which has the name of the statement appended with end in the beginning. You must be aware that a non-control statement may not have a closing tag. 
Let's look at some examples: {% block content %} {% for i in items %} {{ i }} - {{ i.price }} {% endfor %} {% endblock %} The preceding example is a little more complex than what we have seen so far. It uses a for loop control statement inside a block statement (you can nest statements inside one another); the block statement itself is not a control statement, as it does not control execution flow in the template. Inside the for loop, you can see that the i variable is printed together with its associated price (defined elsewhere). A last delimiter you should know is {# comments go here #}. It is a multi-line delimiter used to declare comments. Let's see two examples that have the same result: {# first example #} {# second example #} Both comment delimiters hide the content between {# and #}. As can be seen, this delimiter works for one-line comments and multi-line comments, which makes it very convenient. Control structures We have a nice set of built-in control structures defined by default in Jinja2. Let's begin our study with the if statement. {% if true %}Too easy{% endif %} {% if true == true == True %}True and true are the same{% endif %} {% if false == false == False %}False and false also are the same{% endif %} {% if none == none == None %}There's also a lowercase None{% endif %} {% if 1 >= 1 %}Compare objects like in plain python{% endif %} {% if 1 == 2 %}This won't be printed{% else %}This will{% endif %} {% if "apples" != "oranges" %}All comparison operators work = ]{% endif %} {% if something %}elif is also supported{% elif something_else %}^_^{% endif %} The if control statement is beautiful! It behaves just like a Python if statement. As seen in the preceding code, you can use it to compare objects in a very easy fashion. "else" and "elif" are also fully supported. You may also have noticed that true and false, non-capitalized, were used together with plain Python Booleans, True and False. As a design decision to avoid confusion, all Jinja2 templates have a lowercase alias for True, False, and None. By the way, lowercase syntax is the preferred way to go. If needed, and you should avoid this scenario, you may group comparisons together in order to change precedence evaluation. See the following example: {% if 5 < 10 < 15 %}true{%else%}false{% endif %} {% if (5 < 10) < 15 %}true{%else%}false{% endif %} {% if 5 < (10 < 15) %}true{%else%}false{% endif %} The expected output for the preceding example is true, true, and false. The first two lines are pretty straightforward. In the third line, first, (10<15) is evaluated to True, which is a subclass of int, where True == 1. Then 5 < True is evaluated, which is certainly false. The for statement is pretty important. One can hardly think of a serious Web application that does not have to show a list of some kind at some point. The for statement can iterate over any iterable instance and has a very simple, Python-like syntax: {% for item in my_list %} {{ item }}{# print the evaluated item #} {% endfor %} {# or #} {% for key, value in my_dictionary.items() %} {{ key }}: {{ value }} {% endfor %} In the first statement, we have the opening tag indicating that we will iterate over the items of my_list, and each item will be referenced by the name item. The name item will be available inside the for loop context only. In the second statement, we have an iteration over the key-value tuples that form my_dictionary, which should be a dictionary (if the variable name wasn't suggestive enough). Pretty simple, right? The for loop also has a few tricks in store for you.
When building HTML lists, it's a common requirement to mark each list item in alternating colors in order to improve readability or mark the first or/and last item with some special markup. Those behaviors can be achieved in a Jinja2 for-loop through access to a loop variable available inside the block context. Let's see some examples: {% for i in ['a', 'b', 'c', 'd'] %} {% if loop.first %}This is the first iteration{% endif %} {% if loop.last %}This is the last iteration{% endif %} {{ loop.cycle('red', 'blue') }}{# print red or blue alternating #} {{ loop.index }} - {{ loop.index0 }} {# 1 indexed index – 0 indexed index #} {# reverse 1 indexed index – reverse 0 indexed index #} {{ loop.revindex }} - {{ loop.revindex0 }} {% endfor %} The for loop statement, as in Python, also allow the use of else, but with a slightly different meaning. In Python, when you use else with for, the else block is only executed if it was not reached through a break command like this: for i in [1, 2, 3]: pass else: print "this will be printed" for i in [1, 2, 3]: if i == 3:    break else: print "this will never not be printed" As seen in the preceding code snippet, the else block will only be executed in a for loop if the execution was never broken by a break command. With Jinja2, the else block is executed when the for iterable is empty. For example: {% for i in [] %} {{ i }} {% else %}I'll be printed{% endfor %} {% for i in ['a'] %} {{ i }} {% else %}I won't{% endfor %} As we are talking about loops and breaks, there are two important things to know: the Jinja2 for loop does not support break or continue. Instead, to achieve the expected behavior, you should use loop filtering as follows: {% for i in [1, 2, 3, 4, 5] if i > 2 %} value: {{ i }}; loop.index: {{ loop.index }} {%- endfor %} In the first tag you see a normal for loop together with an if condition. You should consider that condition as a real list filter, as the index itself is only counted per iteration. Run the preceding example and the output will be the following: value:3; index: 1 value:4; index: 2 value:5; index: 3 Look at the last observation in the preceding example—in the second tag, do you see the dash in {%-? It tells the renderer that there should be no empty new lines before the tag at each iteration. Try our previous example without the dash and compare the results to see what changes. We'll now look at three very important statements used to build templates from different files: block, extends, and include. block and extends always work together. The first is used to define "overwritable" blocks in a template, while the second defines a parent template that has blocks, for the current template. Let's see an example: # coding:utf-8 with open('parent.txt', 'w') as file:    file.write(""" {% block template %}parent.txt{% endblock %} =========== I am a powerful psychic and will tell you your past   {#- "past" is the block identifier #} {% block past %} You had pimples by the age of 12. {%- endblock %}   Tremble before my power!!!""".strip())   with open('child.txt', 'w') as file:    file.write(""" {% extends "parent.txt" %}   {# overwriting the block called template from parent.txt #} {% block template %}child.txt{% endblock %}   {#- overwriting the block called past from parent.txt #} {% block past %} You've bought an ebook recently. 
{%- endblock %}""".strip()) with open('other.txt', 'w') as file:    file.write(""" {% extends "child.txt" %} {% block template %}other.txt{% endblock %}""".strip())   from jinja2 import Environment, FileSystemLoader   env = Environment() # tell the environment how to load templates env.loader = FileSystemLoader('.') # look up our template tmpl = env.get_template('parent.txt') # render it to default output print tmpl.render() print "" # loads child.html and its parent tmpl = env.get_template('child.txt') print tmpl.render() # loads other.html and its parent env.get_template('other.txt').render() Do you see the inheritance happening, between child.txt and parent.txt? parent.txt is a simple template with two block statements, called template and past. When you render parent.txt directly, its blocks are printed "as is", because they were not overwritten. In child.txt, we extend the parent.txt template and overwrite all its blocks. By doing that, we can have different information in specific parts of a template without having to rewrite the whole thing. With other.txt, for example, we extend the child.txt template and overwrite only the block-named template. You can overwrite blocks from a direct parent template or from any of its parents. If you were defining an index.txt page, you could have default blocks in it that would be overwritten when needed, saving lots of typing. Explaining the last example, Python-wise, is pretty simple. First, we create a Jinja2 environment (we talked about this earlier) and tell it how to load our templates, then we load the desired template directly. We do not have to bother telling the environment how to find parent templates, nor do we need to preload them. The include statement is probably the easiest statement so far. It allows you to render a template inside another in a very easy fashion. Let's look at an example: with open('base.txt', 'w') as file: file.write(""" {{ myvar }} You wanna hear a dirty joke? {% include 'joke.txt' %} """.strip()) with open('joke.txt', 'w') as file: file.write(""" A boy fell in a mud puddle. {{ myvar }} """.strip())   from jinja2 import Environment, FileSystemLoader   env = Environment() # tell the environment how to load templates env.loader = FileSystemLoader('.') print env.get_template('base.txt').render(myvar='Ha ha!') In the preceding example, we render the joke.txt template inside base.txt. As joke.txt is rendered inside base.txt, it also has full access to the base.txt context, so myvar is printed normally. Finally, we have the set statement. It allows you to define variables for inside the template context. Its use is pretty simple: {% set x = 10 %} {{ x }} {% set x, y, z = 10, 5+5, "home" %} {{ x }} - {{ y }} - {{ z }} In the preceding example, if x was given by a complex calculation or a database query, it would make much more sense to have it cached in a variable, if it is to be reused across the template. As seen in the example, you can also assign a value to multiple variables at once. Macros Macros are the closest to coding you'll get inside Jinja2 templates. The macro definition and usage are similar to plain Python functions, so it is pretty easy. 
Let's try an example: with open('formfield.html', 'w') as file: file.write(''' {% macro input(name, value='', label='') %} {% if label %} <label for='{{ name }}'>{{ label }}</label> {% endif %} <input id='{{ name }}' name='{{ name }}' value='{{ value }}'></input> {% endmacro %}'''.strip()) with open('index.html', 'w') as file: file.write(''' {% from 'formfield.html' import input %} <form method='get' action='.'> {{ input('name', label='Name:') }} <input type='submit' value='Send'></input> </form> '''.strip())   from jinja2 import Environment, FileSystemLoader   env = Environment() env.loader = FileSystemLoader('.') print env.get_template('index.html').render() In the preceding example, we create a macro that accepts a name argument and two optional arguments: value and label. Inside the macro block, we define what should be output. Notice we can use other statements inside a macro, just like a template. In index.html we import the input macro from inside formfield.html, as if formfield was a module and input was a Python function using the import statement. If needed, we could even rename our input macro like this: {% from 'formfield.html' import input as field_input %} You can also import formfield as a module and use it as follows: {% import 'formfield.html' as formfield %} When using macros, there is a special case where you want to allow any named argument to be passed into the macro, as you would in a Python function (for example, **kwargs). With Jinja2 macros, these values are, by default, available in a kwargs dictionary that does not need to be explicitly defined in the macro signature. For example: # coding:utf-8 with open('formfield.html', 'w') as file:    file.write(''' {% macro input(name) -%} <input id='{{ name }}' name='{{ name }}' {% for k,v in kwargs.items() -%}{{ k }}='{{ v }}' {% endfor %}></input> {%- endmacro %} '''.strip())with open('index.html', 'w') as file:    file.write(''' {% from 'formfield.html' import input %} {# use method='post' whenever sending sensitive data over HTTP #} <form method='post' action='.'> {{ input('name', type='text') }} {{ input('passwd', type='password') }} <input type='submit' value='Send'></input> </form> '''.strip())   from jinja2 import Environment, FileSystemLoader   env = Environment() env.loader = FileSystemLoader('.') print env.get_template('index.html').render() As you can see, kwargs is available even though you did not define a kwargs argument in the macro signature. Macros have a few clear advantages over plain templates, that you notice with the include statement: You do not have to worry about variable names in the template using macros You can define the exact required context for a macro block through the macro signature You can define a macro library inside a template and import only what is needed Commonly used macros in a Web application include a macro to render pagination, another to render fields, and another to render forms. You could have others, but these are pretty common use cases. Regarding our previous example, it is good practice to use HTTPS (also known as, Secure HTTP) to send sensitive information, such as passwords, over the Internet. Be careful about that! Extensions Extensions are the way Jinja2 allows you to extend its vocabulary. 
Extensions are not enabled by default, so you can enable an extension only when and if you need, and start using it without much trouble: env = Environment(extensions=['jinja2.ext.do',   'jinja2.ext.with_']) In the preceding code, we have an example where you create an environment with two extensions enabled: do and with. Those are the extensions we will study in this article. As the name suggests, the do extension allows you to "do stuff". Inside a do tag, you're allowed to execute Python expressions with full access to the template context. Flask-Empty, a popular flask boilerplate available at https://github.com/italomaia/flask-empty uses the do extension to update a dictionary in one of its macros, for example. Let's see how we could do the same: {% set x = {1:'home', '2':'boat'} %} {% do x.update({3: 'bar'}) %} {%- for key,value in x.items() %} {{ key }} - {{ value }} {%- endfor %} In the preceding example, we create the x variable with a dictionary, then we update it with {3: 'bar'}. You don't usually need to use the do extension but, when you do, a lot of coding is saved. The with extension is also very simple. You use it whenever you need to create block scoped variables. Imagine you have a value you need cached in a variable for a brief moment; this would be a good use case. Let's see an example: {% with age = user.get_age() %} My age: {{ age }} {% endwith %} My age: {{ age }}{# no value here #} As seen in the example, age exists only inside the with block. Also, variables set inside a with block will only exist inside it. For example: {% with %} {% set count = query.count() %} Current Stock: {{ count }} Diff: {{ prev_count - count }} {% endwith %} {{ count }} {# empty value #} Filters Filters are a marvelous thing about Jinja2! This tool allows you to process a constant or variable before printing it to the template. The goal is to implement the formatting you want, strictly in the template. To use a filter, just call it using the pipe operator like this: {% set name = 'junior' %} {{ name|capitalize }} {# output is Junior #} Its name is passed to the capitalize filter that processes it and returns the capitalized value. To inform arguments to the filter, just call it like a function, like this: {{ ['Adam', 'West']|join(' ') }} {# output is Adam West #} The join filter will join all values from the passed iterable, putting the provided argument between them. Jinja2 has an enormous quantity of available filters by default. That means we can't cover them all here, but we can certainly cover a few. capitalize and lower were seen already. 
Let's look at some further examples: {# prints default value if input is undefined #} {{ x|default('no opinion') }} {# prints default value if input evaluates to false #} {{ none|default('no opinion', true) }} {# prints input as it was provided #} {{ 'some opinion'|default('no opinion') }}   {# you can use a filter inside a control statement #} {# sort by key case-insensitive #} {% for key in {'A':3, 'b':2, 'C':1}|dictsort %}{{ key }}{% endfor %} {# sort by key case-sensitive #} {% for key in {'A':3, 'b':2, 'C':1}|dictsort(true) %}{{ key }}{% endfor %} {# sort by value #} {% for key in {'A':3, 'b':2, 'C':1}|dictsort(false, 'value') %}{{ key }}{% endfor %} {{ [3, 2, 1]|first }} - {{ [3, 2, 1]|last }} {{ [3, 2, 1]|length }} {# prints input length #} {# same as in python #} {{ '%s, =D'|format("I'm John") }} {{ "He has two daughters"|replace('two', 'three') }} {# safe prints the input without escaping it first#} {{ '<input name="stuff" />'|safe }} {{ "there are five words here"|wordcount }} Try the preceding example to see exactly what each filter does. After reading this much about Jinja2, you're probably thinking: "Jinja2 is cool but this is a book about Flask. Show me the Flask stuff!". Ok, ok, I can do that! Of what we have seen so far, almost everything can be used with Flask with no modifications. As Flask manages the Jinja2 environment for you, you don't have to worry about creating file loaders and stuff like that. One thing you should be aware of, though, is that, because you don't instantiate the Jinja2 environment yourself, you can't really pass to the class constructor, the extensions you want to activate. To activate an extension, add it to Flask during the application setup as follows: from flask import Flask app = Flask(__name__) app.jinja_env.add_extension('jinja2.ext.do') # or jinja2.ext.with_ if __name__ == '__main__': app.run() Messing with the template context You can use the render_template method to load a template from the templates folder and then render it as a response. from flask import Flask, render_template app = Flask(__name__)   @app.route("/") def hello():    return render_template("index.html") If you want to add values to the template context, as seen in some of the examples in this article, you would have to add non-positional arguments to render_template: from flask import Flask, render_template app = Flask(__name__)   @app.route("/") def hello():    return render_template("index.html", my_age=28) In the preceding example, my_age would be available in the index.html context, where {{ my_age }} would be translated to 28. my_age could have virtually any value you want to exhibit, actually. Now, what if you want all your views to have a specific value in their context, like a version value—some special code or function; how would you do it? Flask offers you the context_processor decorator to accomplish that. You just have to annotate a function that returns a dictionary and you're ready to go. 
For example: from flask import Flask, render_template app = Flask(__name__)   @app.context_processor def luck_processor(): from random import randint def lucky_number():    return randint(1, 10) return dict(lucky_number=lucky_number)   @app.route("/") def hello(): # lucky_number will be available in the index.html context by default return render_template("index.html") Summary In this article, we saw how to render templates using only Jinja2, how control statements look and how to use them, how to write a comment, how to print variables in a template, how to write and use macros, how to load and use extensions, and how to register context processors. I don't know about you, but this article felt like a lot of information! I strongly advise you to experiment with the examples. Knowing your way around Jinja2 will save you a lot of headaches. Resources for Article: Further resources on this subject: Recommender systems dissected [article] Deployment and Post Deployment [article] Handling sessions and users [article] Introduction to Custom Template Filters and Tags [article]

Groups and Cohorts in Moodle

Packt
06 Jul 2015
20 min read
In this article by William Rice, author of the book, Moodle E-Learning Course Development - Third Edition shows you how to use groups to separate students in a course into teams. You will also learn how to use cohorts to mass enroll students into courses. Groups versus cohorts Groups and cohorts are both collections of students. There are several differences between them. We can sum up these differences in one sentence, that is; cohorts enable administrators to enroll and unenroll students en masse, whereas groups enable teachers to manage students during a class. Think of a cohort as a group of students working together through the same academic curriculum. For example, a group of students all enrolled in the same course. Think of a group as a subset of students enrolled in a course. Groups are used to manage various activities within a course. Cohort is a system-wide or course category-wide set of students. There is a small amount of overlap between what you can do with a cohort and a group. However, the differences are large enough that you would not want to substitute one for the other. Cohorts In this article, we'll look at how to create and use cohorts. You can perform many operations with cohorts in bulk, affecting many students at once. Creating a cohort To create a cohort, perform the following steps: From the main menu, select Site administration | Users | Accounts | Cohorts. On the Cohorts page, click on the Add button. The Add New Cohort page is displayed. Enter a Name for the cohort. This is the name that you will see when you work with the cohort. Enter a Cohort ID for the cohort. If you upload students in bulk to this cohort, you will specify the cohort using this identifier. You can use any characters you want in the Cohort ID; however, keep in mind that the file you upload to the cohort can come from a different computer system. To be safe, consider using only ASCII characters; such as letters, numbers, some special characters, and no spaces in the Cohort ID option. For example, Spring_2012_Freshmen. Enter a Description that will help you and other administrators remember the purpose of the cohort. Click on Save changes. Now that the cohort is created, you can begin adding users to this cohort. Adding students to a cohort Students can be added to a cohort manually by searching and selecting them. They can also be added in bulk by uploading a file to Moodle. Manually adding and removing students to a cohort If you add a student to a cohort, that student is enrolled in all the courses to which the cohort is synchronized. If you remove a student from a cohort, that student will be unenrolled from all the courses to which the cohort is synchronized. We will look at how to synchronize cohorts and course enrollments later. For now, here is how to manually add and remove students from a cohort: From the main menu, select Site administration | Users | Accounts | Cohorts. On the Cohorts page, for the cohort to which you want to add students, click on the people icon: The Cohort Assign page is displayed. The left-hand side panel displays users that are already in the cohort, if any. The right-hand side panel displays users that can be added to the cohort. Use the Search field to search for users in each panel. You can search for text that is in the user name and e-mail address fields. Use the Add and Remove button to move users from one panel to another. Adding students to a cohort in bulk – upload When you upload students to Moodle, you can add them to a cohort. 
After you have all the students in a cohort, you can quickly enroll and unenroll them in courses just by synchronizing the cohort to the course. If you are going to upload students in bulk, consider putting them in a cohort. This makes it easier to manipulate them later. Here is an example of a cohort. Note that there are 1,204 students enrolled in the cohort: These students were uploaded to the cohort under Administration | Site Administration | Users | Upload users: The file that was uploaded contained information about each student in the cohort. In a spreadsheet, this is how the file looks: username,email,firstname,lastname,cohort1 moodler_1,[email protected],Bill,Binky,open-enrollmentmoodlers moodler_2,[email protected],Rose,Krial,open-enrollmentmoodlers moodler_3,[email protected],Jeff,Marco,open-enrollmentmoodlers moodler_4,[email protected],Dave,Gallo,open-enrollmentmoodlers In this example, we have the minimum required information to create new students. These are as follows: The username The e-mail address The first name The last name We also have the cohort ID (the short name of the cohort) in which we want to place a student. During the upload process, you can see a preview of the file that you will upload: Further down on the Upload users preview page, you can choose the Settings option to handle the upload: Usually, when we upload users to Moodle, we will create new users. However, we can also use the upload option to quickly enroll existing users in the cohort. You saw previously (Manually adding and removing students to a cohort) how to search for and then enroll users in a cohort. However, when you want to enroll hundreds of users in the cohort, it's often faster to create a text file and upload it, than to search your existing users. This is because when you create a text file, you can use powerful tools—such as spreadsheets and databases—to quickly create this file. If you want to perform this, you will find options to Update existing users under the Upload type field. In most Moodle systems, a user's profile must include a city and country. When you upload a user to a system, you can specify the city and country in the upload file or omit them from the upload file and assign the city and country to the system while the file is uploaded. This is performed under Default values on the Upload users page: Now that we have examined some of the capabilities and limitations of this process, let's list the steps to upload a cohort to Moodle: Prepare a plain file that has, at minimum, the username, email, firstname, lastname, and cohort1 information. If you were to create this in a spreadsheet, it may look similar to the following screenshot: Under Administration | Site Administration | Users | Upload users, select the text file that you will upload. On this page, choose Settings to describe the text file, such as delimiter (separator) and encoding. Click on the Upload users button. You will see the first few rows of the text file displayed. Also, additional settings become available on this page. In the Settings section, there are settings that affect what happens when you upload information about existing users. You can choose to have the system overwrite information for existing users, ignore information that conflicts with existing users, create passwords, and so on. In the Default values section, you can enter values to be entered into the user profiles. For example, you can select a city, country, and department for all the users. 
Click on the Upload users button to begin the upload. Cohort sync Using the cohort sync enrolment method, you can enroll and un-enroll large collections of students at once. Using cohort sync involves several steps: Creating a cohort. Enrolling students in the cohort. Enabling the cohort sync enrollment method. Adding the cohort sync enrollment method to a course. You saw the first two steps: how to create a cohort and how to enroll students in the cohort. We will cover the last two steps: enabling the cohort sync method and adding the cohort sync to a course. Enabling the cohort sync enrollment method To enable the cohort sync enrollment method, you will need to log in as an administrator. This cannot be done by someone who has only teacher rights: Select Site administration | Plugins | Enrolments | Manage enrol plugins. Click on the Enable icon located next to Cohort sync. Then, click on the Settings button located next to Cohort sync. On the Settings page, choose the default role for people when you enroll them in a course using Cohort sync. You can change this setting for each course. You will also choose the External unenrol action. This is what happens to a student when they are removed from the cohort. If you choose Unenrol user from course, the user and all his/her grades are removed from the course. The user's grades are purged from Moodle. If you were to re-add this user to the cohort, all the user's activity in this course would be blank, as if the user had never been in the course. If you choose Disable course enrolment and remove roles, the user and all his/her grades are hidden. You will not see this user in the course's grade book. However, if you were to re-add this user to the cohort or to the course, this user's course records will be restored. After enabling the cohort sync method, it's time to actually add this method to a course. Adding the cohort sync enrollment method to a course To perform this, you will need to log in as an administrator or a teacher in the course: Log in and enter the course to which you want to add the enrolment method. Select Course administration | Users | Enrolment methods. From the Add method drop-down menu, select Cohort sync. In Custom instance name, enter a name for this enrolment method. This will enable you to recognize this method in a list of cohort syncs. For Active, select Yes. This will enroll the users. Select the Cohort option. Select the role that the members of the cohort will be given. Click on the Save changes button. All the users in the cohort will be given the selected role in the course. Un-enroll a cohort from a course There are two ways to un-enroll a cohort from a course. First, you can go to the course's enrollment methods page and delete the enrollment method. Just click on the X button located next to the cohort sync field that you added to the course. However, this will not just remove users from the course, but also delete all their course records. The second method preserves the student records. Once again, go to the course's enrollment methods page and, next to the Cohort sync method that you added, click on the Settings icon. On the Settings page, select No for Active. This will remove the role that the cohort was given. However, the members of the cohort will still be listed as course participants. So, as the members of the cohort do not have a role in the course, they can no longer access this course. However, their grades and activity reports are preserved.
Differences between cohort sync and enrolling a cohort Cohort sync and enrolling a cohort are two different methods. Each has advantages and limitations. If you follow the preceding instructions, you can synchronize a cohort's membership to a course's enrollment. As people are added to and removed from the cohort, they are enrolled and un-enrolled from the course. When working with a large group of users, this can be a great time saver. However, using cohort sync, you cannot un-enroll or change the role of just one person. Consider a scenario where you have a large group of students who want to enroll in several courses, all at once. You put these students in a cohort, enable the cohort sync enrollment method, and add the cohort sync enrollment method to each of these courses. In a few minutes, you have accomplished your goal. Now, if you want to un-enroll some users from some courses, but not from all courses, you remove them from the cohort. So, these users are removed from all the courses. This is how cohort sync works. Cohort sync is everyone or no one When a person is added to or removed from the cohort, this person is added to or removed from all the courses to which the cohort is synced. If that's what you want, great. If not, An alternative to cohort sync is to enroll a cohort. That is, you can select all the members of a cohort and enroll them in a course, all at once. However, this is a one-way journey. You cannot un-enroll them all at once. You will need to un-enroll them one at a time. If you enroll a cohort all at once, after enrollment, users are independent entities. You can un-enroll them and change their role (for example, from student to teacher) whenever you wish. To enroll a cohort in a course, perform the following steps: Enter the course as an administrator or teacher. Select Administration | Course administration | Users | Enrolled users. Click on the Enrol cohort button. A popup window appears. This window lists the cohorts on the site. Click on Enrol users next to the cohort that you want to enroll. The system displays a confirmation message. Now, click on the OK button. You will be taken back to the Enrolled users page. Note that although you can enroll all users in a cohort (all at once), there is no button to un-enroll them all at once. You will need to remove them one at a time from your course. Managing students with groups A group is a collection of students in a course. Outside of a course, a group has no meaning. Groups are useful when you want to separate students studying the same course. For example, if your organization is using the same course for several different classes or groups, you can use the group feature to separate students so that each group can see only their peers in the course. For example, you can create a new group every month for employees hired that month. Then, you can monitor and mentor them together. After you have run a group of people through a course, you may want to reuse this course for another group. You can use the group feature to separate groups so that the current group doesn't see the work done by the previous group. This will be like a new course for the current group. You may want an activity or resource to be open to just one group of people. You don't want others in the class to be able to use that activity or resource. Course versus activity You can apply the groups setting to an entire course. If you do this, every activity and resource in the course will be segregated into groups. 
You can also apply the groups setting to an individual activity or resource. If you do this, it will override the groups setting for the course. Also, it will segregate just this activity, or resource between groups. The three group modes For a course or activity, there are several ways to apply groups. Here are the three group modes: No groups: There are no groups for a course or activity. If students have been placed in groups, ignore it. Also, give everyone the same access to the course or activity. Separate groups: If students have been placed in groups, allow them to see other students and only the work of other students from their own group. Students and work from other groups are invisible. Visible groups: If students have been placed in groups, allow them to see other students and the work of other students from all groups. However, the work from other groups is read only. You can use the No groups setting on an activity in your course. Here, you want every student who ever took the course to be able to interact with each other. For example, you may use the No groups setting in the news forum so that all students who have ever taken the course can see the latest news. Also, you can use the Separate groups setting in a course. Here, you will run different groups at different times. For each group that runs through the course, it will be like a brand new course. You can use the Visible groups setting in a course. Here, students are part of a large and in-person class; you want them to collaborate in small groups online. Also, be aware that some things will not be affected by the groups setting. For example, no matter what the group setting, students will never see each other's assignment submissions. Creating a group There are three ways to create groups in a course. You can: Manually create and populate each group Automatically create and populate groups based on the characteristics of students Import groups using a text file We'll cover these methods in the following subsections. Manually creating and populating a group Don't be discouraged by the idea of manually populating a group with students. It takes only a few clicks to place a student in a group. To create and populate a group, perform the following steps: Select Course administration | Users | Groups. This takes you to the Groups page. Click on the Create group button. The Create group page is displayed. You must enter a Name for the group. This will be the name that teachers and administrators see when they manage a group. The Group ID number is used to match up this group with a group identifier in another system. If your organization uses a system outside Moodle to manage students and this system categorizes students in groups, you can enter the group ID from the other system in this field. It does not need to be a number. This field is optional. The Group description field is optional. It's good practice to use this to explain the purpose and criteria for belonging to a group. The Enrolment key is a code that you can give to students who self enroll in a course. When the student enrolls, he/she is prompted to enter the enrollment key. On entering this key, the student is enrolled in the course and made a member of the group. If you add a picture to this group, then when members are listed (as in a forum), the member will have the group picture shown next to them. Here is an example of a contributor to a forum on http://www.moodle.org with her group memberships: Click on the Save changes button to save the group. 
On the Groups page, the group appears in the left-hand side column. Select this group. In the right-hand side column, search for and select the students that you want to add to this group: Note the Search fields. These enable you to search for students that meet a specific criteria. You can search the first name, last name, and e-mail address. The other part of the user's profile information is not available in this search box. Automatically creating and populating a group When you automatically create groups, Moodle creates a number of groups that you specify and then takes all the students enrolled in the course and allocates them to these groups. Moodle will put the currently enrolled students in these groups even if they already belong to another group in the course. To automatically create a group, use the following steps: Click on the Auto-create groups button. The Auto-create groups page is displayed. In the Naming scheme field, enter a name for all the groups that will be created. You can enter any characters. If you enter @, it will be converted to sequential letters. If you enter #, it will be converted to sequential numbers. For example, if you enter Group @, Moodle will create Group A, Group B, Group C, and so on. In the Auto-create based on field, you will tell the system to choose either of the following options:     Create a specific number of groups and then fill each group with as many students as needed (Number of groups)     Create as many groups as needed so that each group has a specific number of students (Members per group). In the Group/member count field, you will tell the system to choose either of the following options:     How many groups to create (if you choose the preceding Number of groups option)     How many members to put in each group (if you choose the preceding Members per group option) Under Group members, select who will be put in these groups. You can select everyone with a specific role or everyone in a specific cohort. The setting for Prevent last small group is available if you choose Members per group. It prevents Moodle from creating a group with fewer than the number of students that you specify. For example, if your class has 12 students and you choose to create groups with five members per group, Moodle would normally create two groups of five. Then, it would create another group for the last two members. However, with Prevent last small group selected, it will distribute the remaining two members between the first two groups. Click on the Preview button to preview the results. The preview will not show you the names of the members in groups, but it will show you how many groups and members will be in each group. Importing groups The term importing groups may give you the impression that you will import students into a group. The import groups button does not import students into groups. It imports a text file that you can use to create groups. So, if you need to create a lot of groups at once, you can use this feature to do this. This needs to be done by a site administrator. If you need to import students and put them into groups, use the upload students feature. However, instead of adding students to the cohort, you will add them to a course and group. 
You perform this by specifying the course and group fields in the upload file, as shown in the following code: username,email,firstname,lastname,course1,group1,course2 moodler_1,[email protected],Bill,Binky,history101,odds,science101 moodler_2,[email protected],Rose,Krial,history101,even,science101 moodler_3,[email protected],Jeff,Marco,history101,odds,science101 moodler_4,[email protected],Dave,Gallo,history101,even,science101 In this example, we have the minimum needed information to create new students. These are as follows: The username The e-mail address The first name The last name We have also enrolled all the students in two courses: history101 and science101. In the history101 course, Bill Binky, and Jeff Marco are placed in a group called odds. Rose Krial and Dave Gallo are placed in a group called even. In the science101 course, the students are not placed in any group. Remember that this student upload doesn't happen on the Groups page. It happens under Administration | Site Administration | Users | Upload users. Summary Cohorts and groups give you powerful tools to manage your students. Cohorts are a useful tool to quickly enroll and un-enroll large numbers of students. Groups enable you to separate students who are in the same course and give teachers the ability to quickly see only those students that they are responsible for. Useful Links: What's New in Moodle 2.0 Moodle for Online Communities Understanding Web-based Applications and Other Multimedia Forms
JSON with JSON.Net

Packt
25 Jun 2015
16 min read
In this article by Ray Rischpater, author of the book JavaScript JSON Cookbook, we show you how you can use strong typing in your applications with JSON using C#, Java, and TypeScript. You'll find the following recipes: How to deserialize an object using Json.NET How to handle date and time objects using Json.NET How to deserialize an object using gson for Java How to use TypeScript with Node.js How to annotate simple types using TypeScript How to declare interfaces using TypeScript How to declare classes with interfaces using TypeScript Using json2ts to generate TypeScript interfaces from your JSON (For more resources related to this topic, see here.) While some say that strong types are for weak minds, the truth is that strong typing in programming languages can help you avoid whole classes of errors in which you mistakenly assume that an object of one type is really of a different type. Languages such as C# and Java provide strong types for exactly this reason. Fortunately, the JSON serializers for C# and Java support strong typing, which is especially handy once you've figured out your object representation and simply want to map JSON to instances of classes you've already defined. We use Json.NET for C# and gson for Java to convert from JSON to instances of classes you define in your application. Finally, we take a look at TypeScript, an extension of JavaScript that provides compile-time checking of types, compiling to plain JavaScript for use with Node.js and browsers. We'll look at how to install the TypeScript compiler for Node.js, how to use TypeScript to annotate types and interfaces, and how to use a web page by Timmy Kokke to automatically generate TypeScript interfaces from JSON objects. How to deserialize an object using Json.NET In this recipe, we show you how to use Newtonsoft's Json.NET to deserialize JSON to an object that's an instance of a class. We'll use Json.NET because although this works with the existing .NET JSON serializer, there are other things that I want you to know about Json.NET, which we'll discuss in the next two recipes. Getting ready To begin, you need to be sure you have a reference to Json.NET in your project. The easiest way to do this is to use NuGet; launch NuGet, search for Json.NET, and click on Install, as shown in the following screenshot: You'll also need a reference to the Newonsoft.Json namespace in any file that needs those classes with a using directive at the top of your file: usingNewtonsoft.Json; How to do it… Here's an example that provides the implementation of a simple class, converts a JSON string to an instance of that class, and then converts the instance back into JSON: using System; usingNewtonsoft.Json;   namespaceJSONExample {   public class Record {    public string call;    public double lat;    public double lng; } class Program {    static void Main(string[] args)      {        String json = @"{ 'call': 'kf6gpe-9',        'lat': 21.9749, 'lng': 159.3686 }";          var result = JsonConvert.DeserializeObject<Record>(          json, newJsonSerializerSettings            {        MissingMemberHandling = MissingMemberHandling.Error          });        Console.Write(JsonConvert.SerializeObject(result));          return;        } } } How it works… In order to deserialize the JSON in a type-safe manner, we need to have a class that has the same fields as our JSON. The Record class, defined in the first few lines does this, defining fields for call, lat, and lng. 
The Newtonsoft.Json namespace provides the JsonConvert class with the static methods SerializeObject and DeserializeObject. DeserializeObject is a generic method, taking the type of the object that should be returned as a type argument, and as arguments the JSON to parse and an optional argument indicating options for the JSON parsing. We pass the MissingMemberHandling property as a setting, indicating with the value of the enumeration Error that, in the event that a field is missing, the parser should throw an exception. After parsing the class, we convert it again to JSON and write the resulting JSON to the console.

There's more…
If you skip passing the MissingMemberHandling option or pass Ignore (the default), you can have mismatches between field names in your JSON and your class, which probably isn't what you want for type-safe conversion. You can also pass the NullValueHandling field with a value of Include or Ignore. If Include, fields with null values are included; if Ignore, fields with null values are ignored.

See also
The full documentation for Json.NET is at http://www.newtonsoft.com/json/help/html/Introduction.htm. Type-safe deserialization is also possible with JSON support using the .NET serializer; the syntax is similar. For an example, see the documentation for the JavaScriptSerializer class at https://msdn.microsoft.com/en-us/library/system.web.script.serialization.javascriptserializer(v=vs.110).aspx.

How to handle date and time objects using Json.NET
Dates in JSON are problematic because JavaScript's dates are expressed in milliseconds from the epoch, which are generally unreadable to people. Different JSON parsers handle this differently; Json.NET has a nice IsoDateTimeConverter that formats the date and time in ISO format, making it human-readable for debugging or for parsing on platforms other than JavaScript. You can extend this approach to convert any kind of formatted data in JSON attributes by creating new converter objects and using them to convert from one value type to another.

How to do it…
Simply include a new IsoDateTimeConverter object when you call JsonConvert.SerializeObject, like this:

string json = JsonConvert.SerializeObject(p, new IsoDateTimeConverter());

How it works…
This causes the serializer to invoke the IsoDateTimeConverter instance for any date and time objects, returning ISO strings like this in your JSON:

2015-07-29T08:00:00

There's more…
Note that this can be parsed by Json.NET, but not by JavaScript; in JavaScript, you'll want to use a reviver function like this:

function isoDateReviver(value) {
  if (typeof value === 'string') {
    var a = /^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2}(?:\.\d*)?)(?:([+-])(\d{2}):(\d{2}))?Z?$/.exec(value);
    if (a) {
      var utcMilliseconds = Date.UTC(+a[1], +a[2] - 1, +a[3], +a[4], +a[5], +a[6]);
      return new Date(utcMilliseconds);
    }
  }
  return value;
}

The rather hairy regular expression matches dates in the ISO format, extracting each of the fields. If the regular expression finds a match, it extracts each of the date fields, which are then used by the Date class's UTC method to create a new date. Note that the entire regular expression (everything between the / characters) should be on one line with no whitespace.

See also
For more information on how Json.NET handles dates and times, see the documentation and example at http://www.newtonsoft.com/json/help/html/SerializeDateFormatHandling.htm.
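The same ISO strings are also straightforward to consume on platforms other than JavaScript. As an illustrative aside that is not part of the book's recipes, here is a minimal Python sketch that revives ISO 8601 timestamps while parsing JSON; the payload and the 'when' field name are assumptions made for the example.

import json
from datetime import datetime

# A minimal sketch (Python 3.7+): turning ISO 8601 strings into datetime objects
# while parsing JSON. The payload and the 'when' field are illustrative only.
def iso_date_reviver(obj):
    revived = {}
    for key, value in obj.items():
        if isinstance(value, str):
            try:
                revived[key] = datetime.fromisoformat(value)
                continue
            except ValueError:
                pass
        revived[key] = value
    return revived

payload = '{"call": "kf6gpe-9", "when": "2015-07-29T08:00:00"}'
record = json.loads(payload, object_hook=iso_date_reviver)
print(record['when'].year)   # prints 2015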
How to deserialize an object using gson for Java
Like Json.NET, gson provides a way to specify the destination class to which you're deserializing a JSON object.

Getting ready
You'll need to include the gson JAR file in your application, just as you would for any other external API.

How to do it…
You use the same fromJson method that you use for type-unsafe JSON parsing with gson, except that you pass the class object to gson as the second argument, like this:

// Assuming we have a class Record that looks like this:
/*
class Record {
    private String call;
    private float lat;
    private float lng;
    // public API would access these fields
}
*/

Gson gson = new com.google.gson.Gson();
String json = "{ \"call\": \"kf6gpe-9\", \"lat\": 21.9749, \"lng\": 159.3686 }";
Record result = gson.fromJson(json, Record.class);

How it works…
The fromJson method always takes a Java class. In the example in this recipe, we convert directly to a plain old Java object that our application can use without needing to use the dereferencing and type conversion interface of JsonElement that gson provides.

There's more…
The gson library can also deal with nested types and arrays. You can also hide fields from being serialized or deserialized by declaring them transient, which makes sense because transient fields aren't serialized.

See also
The documentation for gson and its support for deserializing instances of classes is at https://sites.google.com/site/gson/gson-user-guide#TOC-Object-Examples.

How to use TypeScript with Node.js
Using TypeScript with Visual Studio is easy; it's just part of the installation of Visual Studio for any version after Visual Studio 2013 Update 2. Getting the TypeScript compiler for Node.js is almost as easy: it's an npm install away.

How to do it…
On a command line with npm in your path, run the following command:

npm install -g typescript

The npm option -g tells npm to install the TypeScript compiler globally, so it's available to every Node.js application you write. Once you run it, npm downloads and installs the TypeScript compiler binary for your platform.

There's more…
Once you run this command to install the compiler, you'll have the TypeScript compiler tsc available on the command line. Compiling a file with tsc is as easy as writing the source code, saving it in a file that ends with the .ts extension, and running tsc on it. For example, given the following TypeScript saved in the file hello.ts:

function greeter(person: string) {
    return "Hello, " + person;
}

var user: string = "Ray";

console.log(greeter(user));

Running tsc hello.ts at the command line creates the following JavaScript:

function greeter(person) {
    return "Hello, " + person;
}

var user = "Ray";

console.log(greeter(user));

Try it! As we'll see in the next section, the function declaration for greeter contains a single TypeScript annotation; it declares the argument person to be a string. Add the following line to the bottom of hello.ts:

console.log(greeter(2));

Now, run the tsc hello.ts command again; you'll get an error like this one:

C:\Users\rarischp\Documents\node.js\typescript\hello.ts(8,13): error TS2082: Supplied parameters do not match any signature of call target:
        Could not apply type 'string' to argument 1 which is of type 'number'.
C:\Users\rarischp\Documents\node.js\typescript\hello.ts(8,13): error TS2087: Could not select overload for 'call' expression.

This error indicates that I'm attempting to call greeter with a value of the wrong type, passing a number where greeter expects a string.
In the next recipe, we'll look at the kinds of type annotations TypeScript supports for simple types. See also The TypeScript home page, with tutorials and reference documentation, is at http://www.typescriptlang.org/. How to annotate simple types using TypeScript Type annotations with TypeScript are simple decorators appended to the variable or function after a colon. There's support for the same primitive types as in JavaScript, and to declare interfaces and classes, which we will discuss next. How to do it… Here's a simple example of some variable declarations and two function declarations: function greeter(person: string): string { return "Hello, " + person; }   function circumference(radius: number) : number { var pi: number = 3.141592654; return 2 * pi * radius; }   var user: string = "Ray";   console.log(greeter(user)); console.log("You need " + circumference(2) + " meters of fence for your dog."); This example shows how to annotate functions and variables. How it works… Variables—either standalone or as arguments to a function—are decorated using a colon and then the type. For example, the first function, greeter, takes a single argument, person, which must be a string. The second function, circumference, takes a radius, which must be a number, and declares a single variable in its scope, pi, which must be a number and has the value 3.141592654. You declare functions in the normal way as in JavaScript, and then add the type annotation after the function name, again using a colon and the type. So, greeter returns a string, and circumference returns a number. There's more… TypeScript defines the following fundamental type decorators, which map to their underlying JavaScript types: array: This is a composite type. For example, you can write a list of strings as follows: var list:string[] = [ "one", "two", "three"]; boolean: This type decorator can contain the values true and false. number: This type decorator is like JavaScript itself, can be any floating-point number. string: This type decorator is a character string. enum: An enumeration, written with the enum keyword, like this: enumColor { Red = 1, Green, Blue }; var c : Color = Color.Blue; any: This type indicates that the variable may be of any type. void: This type indicates that the value has no type. You'll use void to indicate a function that returns nothing. See also For a list of the TypeScript types, see the TypeScript handbook at http://www.typescriptlang.org/Handbook. How to declare interfaces using TypeScript An interface defines how something behaves, without defining the implementation. In TypeScript, an interface names a complex type by describing the fields it has. This is known as structural subtyping. How to do it… Declaring an interface is a little like declaring a structure or class; you define the fields in the interface, each with its own type, like this: interface Record { call: string; lat: number; lng: number; }   Function printLocation(r: Record) { console.log(r.call + ': ' + r.lat + ', ' + r.lng); }   var myObj = {call: 'kf6gpe-7', lat: 21.9749, lng: 159.3686};   printLocation(myObj); How it works… The interface keyword in TypeScript defines an interface; as I already noted, an interface consists of the fields it declares with their types. In this listing, I defined a plain JavaScript object, myObj and then called the function printLocation, that I previously defined, which takes a Record. 
When calling printLocation with myObj, the TypeScript compiler checks the fields and types each field and only permits a call to printLocation if the object matches the interface. There's more… Beware! TypeScript can only provide compile-type checking. What do you think the following code does? interface Record { call: string; lat: number; lng: number; }   Function printLocation(r: Record) { console.log(r.call + ': ' + r.lat + ', ' + r.lng); }   var myObj = {call: 'kf6gpe-7', lat: 21.9749, lng: 159.3686}; printLocation(myObj);   var json = '{"call":"kf6gpe-7","lat":21.9749}'; var myOtherObj = JSON.parse(json); printLocation(myOtherObj); First, this compiles with tsc just fine. When you run it with node, you'll see the following: kf6gpe-7: 21.9749, 159.3686 kf6gpe-7: 21.9749, undefined What happened? The TypeScript compiler does not add run-time type checking to your code, so you can't impose an interface on a run-time created object that's not a literal. In this example, because the lng field is missing from the JSON, the function can't print it, and prints the value undefined instead. This doesn't mean that you shouldn't use TypeScript with JSON, however. Type annotations serve a purpose for all readers of the code, be they compilers or people. You can use type annotations to indicate your intent as a developer, and readers of the code can better understand the design and limitation of the code you write. See also For more information about interfaces, see the TypeScript documentation at http://www.typescriptlang.org/Handbook#interfaces. How to declare classes with interfaces using TypeScript Interfaces let you specify behavior without specifying implementation; classes let you encapsulate implementation details behind an interface. TypeScript classes can encapsulate fields or methods, just as classes in other languages. How to do it… Here's an example of our Record structure, this time as a class with an interface: class RecordInterface { call: string; lat: number; lng: number;   constructor(c: string, la: number, lo: number) {} printLocation() {}   }   class Record implements RecordInterface { call: string; lat: number; lng: number; constructor(c: string, la: number, lo: number) {    this.call = c;    this.lat = la;    this.lng = lo; }   printLocation() {    console.log(this.call + ': ' + this.lat + ', ' + this.lng); } }   var myObj : Record = new Record('kf6gpe-7', 21.9749, 159.3686);   myObj.printLocation(); How it works… The interface keyword, again, defines an interface just as the previous section shows. The class keyword, which you haven't seen before, implements a class; the optional implements keyword indicates that this class implements the interface RecordInterface. Note that the class implementing the interface must have all of the same fields and methods that the interface prescribes; otherwise, it doesn't meet the requirements of the interface. As a result, our Record class includes fields for call, lat, and lng, with the same types as in the interface, as well as the methods constructor and printLocation. The constructor method is a special method called when you create a new instance of the class using new. Note that with classes, unlike regular objects, the correct way to create them is by using a constructor, rather than just building them up as a collection of fields and values. We do that on the second to the last line of the listing, passing the constructor arguments as function arguments to the class constructor. 
See also There's a lot more you can do with classes, including defining inheritance and creating public and private fields and methods. For more information about classes in TypeScript, see the documentation at http://www.typescriptlang.org/Handbook#classes. Using json2ts to generate TypeScript interfaces from your JSON This last recipe is more of a tip than a recipe; if you've got some JSON you developed using another programming language or by hand, you can easily create a TypeScript interface for objects to contain the JSON by using Timmy Kokke's json2ts website. How to do it… Simply go to http://json2ts.com and paste your JSON in the box that appears, and click on the generate TypeScript button. You'll be rewarded with a second text-box that appears and shows you the definition of the TypeScript interface, which you can save as its own file and include in your TypeScript applications. How it works… The following figure shows a simple example: You can save this typescript as its own file, a definition file, with the suffix .d.ts, and then include the module with your TypeScript using the import keyword, like this: import module = require('module'); Summary In this article we looked at how you can adapt the type-free nature of JSON with the type safety provided by languages such as C#, Java, and TypeScript to reduce programming errors in your application. Resources for Article: Further resources on this subject: Playing with Swift [article] Getting Started with JSON [article] Top two features of GSON [article]
Code Style in Django

Packt
17 Jun 2015
16 min read
In this article written by Sanjeev Jaiswal and Ratan Kumar, authors of the book Learning Django Web Development, this article will cover all the basic topics which you would require to follow, such as coding practices for better Django web development, which IDE to use, version control, and so on. We will learn the following topics in this article: Django coding style Using IDE for Django web development Django project structure This article is based on the important fact that code is read much more often than it is written. Thus, before you actually start building your projects, we suggest that you familiarize yourself with all the standard practices adopted by the Django community for web development. Django coding style Most of Django's important practices are based on Python. Though chances are you already know them, we will still take a break and write all the documented practices so that you know these concepts even before you begin. To mainstream standard practices, Python enhancement proposals are made, and one such widely adopted standard practice for development is PEP8, the style guide for Python code–the best way to style the Python code authored by Guido van Rossum. The documentation says, "PEP8 deals with semantics and conventions associated with Python docstrings." For further reading, please visit http://legacy.python.org/dev/peps/pep-0008/. Understanding indentation in Python When you are writing Python code, indentation plays a very important role. It acts as a block like in other languages, such as C or Perl. But it's always a matter of discussion amongst programmers whether we should use tabs or spaces, and, if space, how many–two or four or eight. Using four spaces for indentation is better than eight, and if there are a few more nested blocks, using eight spaces for each indentation may take up more characters than can be shown in single line. But, again, this is the programmer's choice. The following is what incorrect indentation practices lead to: >>> def a(): ...   print "foo" ...     print "bar" IndentationError: unexpected indent So, which one we should use: tabs or spaces? Choose any one of them, but never mix up tabs and spaces in the same project or else it will be a nightmare for maintenance. The most popular way of indention in Python is with spaces; tabs come in second. If any code you have encountered has a mixture of tabs and spaces, you should convert it to using spaces exclusively. Doing indentation right – do we need four spaces per indentation level? There has been a lot of confusion about it, as of course, Python's syntax is all about indentation. Let's be honest: in most cases, it is. So, what is highly recommended is to use four spaces per indentation level, and if you have been following the two-space method, stop using it. There is nothing wrong with it, but when you deal with multiple third party libraries, you might end up having a spaghetti of different versions, which will ultimately become hard to debug. Now for indentation. When your code is in a continuation line, you should wrap it vertically aligned, or you can go in for a hanging indent. When you are using a hanging indent, the first line should not contain any argument and further indentation should be used to clearly distinguish it as a continuation line. A hanging indent (also known as a negative indent) is a style of indentation in which all lines are indented except for the first line of the paragraph. The preceding paragraph is the example of hanging indent. 
The following example illustrates how you should use a proper indentation method while writing the code:

# Vertical alignment of the arguments groups them and makes them stand clear from the rest.
bar = some_function_name(var_first, var_second,
                         var_third, var_fourth)

# This example shows a hanging indent.
def some_function_name(
        var_first, var_second, var_third,
        var_fourth):
    print(var_first)

We do not encourage the following coding style:

# When vertical alignment is not used, arguments on the first line are forbidden.
foo = some_function_name(var_first, var_second,
    var_third, var_fourth)

# Further indentation is required, as this indentation is not distinguishable
# from the function body.
def some_function_name(
    var_first, var_second, var_third,
    var_fourth):
    print(var_first)

Although extra indentation is not required, if you want to use extra indentation to make the continuation stand out, you can use the following coding style:

# Extra indentation is not necessary.
if (this
    and that):
    do_something()

Ideally, you should limit each line to a maximum of 79 characters. This leaves room for the + or - markers shown when viewing differences in version control, and keeps lines uniform across editors.

The importance of blank lines
The importance of two blank lines and single blank lines is as follows:

Two blank lines: Two blank lines can be used to separate top-level functions and class definitions, which enhances code readability.
Single blank lines: A single blank line can be used within a class: for example, each function inside a class can be separated by a single line, and related functions can be grouped together with a single line. You can also separate logical sections of source code with a single blank line.

Importing a package
Importing a package is a direct implication of code reusability. Therefore, always place imports at the top of your source file, just after any module comments and docstrings, and before the module's globals and constants. Each import should usually be on a separate line. The best way to import packages is as follows:

import os
import sys

It is not advisable to import more than one package on the same line, for example:

import sys, os

You may import packages in the following fashion, although it is optional:

from django.http import Http404, HttpResponse

If your import gets longer, you can use the following method to declare it:

from django.http import (
    Http404, HttpResponse, HttpResponsePermanentRedirect
)

Grouping imported packages
Package imports can be grouped in the following ways:

Standard library imports: Such as sys, os, subprocess, and so on.

import re
import simplejson

Related third party imports: These are usually downloaded from the Python cheese shop, that is, PyPI (using pip install). Here is an example:

from decimal import *

Local application / library-specific imports: This includes the local modules of your project, such as models, views, and so on.

from models import ModelFoo
from models import ModelBar

Naming conventions in Python/Django
Every programming language and framework has its own naming convention. The naming convention in Python/Django is more or less the same as standard Python's, but it is worth mentioning it here.
You will need to follow this while creating a variable name or global variable name and when naming a class, package, modules, and so on. This is the common naming convention that we should follow: Name the variables properly: Never use single characters, for example, 'x' or 'X' as variable names. It might be okay for your normal Python scripts, but when you are building a web application, you must name the variable properly as it determines the readability of the whole project. Naming of packages and modules: Lowercase and short names are recommended for modules. Underscores can be used if their use would improve readability. Python packages should also have short, all-lowercase names, although the use of underscores is discouraged. Since module names are mapped to file names (models.py, urls.py, and so on), it is important that module names be chosen to be fairly short as some file systems are case insensitive and truncate long names. Naming a class: Class names should follow the CamelCase naming convention, and classes for internal use can have a leading underscore in their name. Global variable names: First of all, you should avoid using global variables, but if you need to use them, prevention of global variables from getting exported can be done via __all__, or by defining them with a prefixed underscore (the old, conventional way). Function names and method argument: Names of functions should be in lowercase and separated by an underscore and self as the first argument to instantiate methods. For classes or methods, use CLS or the objects for initialization. Method names and instance variables: Use the function naming rules—lowercase with words separated by underscores as necessary to improve readability. Use one leading underscore only for non-public methods and instance variables. Using IDE for faster development There are many options on the market when it comes to source code editors. Some people prefer full-fledged IDEs, whereas others like simple text editors. The choice is totally yours; pick up whatever feels more comfortable. If you already use a certain program to work with Python source files, I suggest that you stick to it as it will work just fine with Django. Otherwise, I can make a couple of recommendations, such as these: SublimeText: This editor is lightweight and very powerful. It is available for all major platforms, supports syntax highlighting and code completion, and works well with Python. The editor is open source and you can find it at http://www.sublimetext.com/ PyCharm: This, I would say, is most intelligent code editor of all and has advanced features, such as code refactoring and code analysis, which makes development cleaner. Features for Django include template debugging (which is a winner) and also quick documentation, so this look-up is a must for beginners. The community edition is free and you can sample a 30-day trial version before buying the professional edition. Setting up your project with the Sublime text editor Most of the examples that we will show you in this book will be written using Sublime text editor. In this section, we will show how to install and set up the Django project. Download and installation: You can download Sublime from the download tab of the site www.sublimetext.com. Click on the downloaded file option to install. Setting up for Django: Sublime has a very extensive plug-in ecosystem, which means that once you have downloaded the editor, you can install plug-ins for adding more features to it. 
After successful installation, it will look like this: Most important of all is Package Control, which is the manager for installing additional plugins directly from within Sublime. This will be your only manual installation of the package. It will take care of the rest of the package installation ahead. Some of the recommendations for Python development using Sublime are as follows: Sublime Linter: This gives instant feedback about the Python code as you write it. It also has PEP8 support; this plugin will highlight in real time the things we discussed about better coding in the previous section so that you can fix them.   Sublime CodeIntel: This is maintained by the developer of SublimeLint. Sublime CodeIntel have some of advanced functionalities, such as directly go-to definition, intelligent code completion, and import suggestions.   You can also explore other plugins for Sublime to increase your productivity. Setting up the pycharm IDE You can use any of your favorite IDEs for Django project development. We will use pycharm IDE for this book. This IDE is recommended as it will help you at the time of debugging, using breakpoints that will save you a lot of time figuring out what actually went wrong. Here is how to install and set up pycharm IDE for Django: Download and installation: You can check the features and download the pycharm IDE from the following link: http://www.jetbrains.com/pycharm/ Setting up for Django: Setting up pycharm for Django is very easy. You just have to import the project folder and give the manage.py path, as shown in the following figure: The Django project structure The Django project structure has been changed in the 1.6 release version. Django (django-admin.py) also has a startapp command to create an application, so it is high time to tell you the difference between an application and a project in Django. A project is a complete website or application, whereas an application is a small, self-contained Django application. An application is based on the principle that it should do one thing and do it right. To ease out the pain of building a Django project right from scratch, Django gives you an advantage by auto-generating the basic project structure files from which any project can be taken forward for its development and feature addition. Thus, to conclude, we can say that a project is a collection of applications, and an application can be written as a separate entity and can be easily exported to other applications for reusability. To create your first Django project, open a terminal (or Command Prompt for Windows users), type the following command, and hit Enter: $ django-admin.py startproject django_mytweets This command will make a folder named django_mytweets in the current directory and create the initial directory structure inside it. Let's see what kind of files are created. The new structure is as follows: django_mytweets/// django_mytweets/ manage.py This is the content of django_mytweets/: django_mytweets/ __init__.py settings.py urls.py wsgi.py Here is a quick explanation of what these files are: django_mytweets (the outer folder): This folder is the project folder. Contrary to the earlier project structure in which the whole project was kept in a single folder, the new Django project structure somehow hints that every project is an application inside Django. This means that you can import other third party applications on the same level as the Django project. 
This folder also contains the manage.py file, which include all the project management settings. manage.py: This is utility script is used to manage our project. You can think of it as your project's version of django-admin.py. Actually, both django-admin.py and manage.py share the same backend code. Further clarification about the settings will be provided when are going to tweak the changes. Let's have a look at the manage.py file: #!/usr/bin/env python import os import sys if __name__ == "__main__":    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_mytweets.settings")    from django.core.management import   execute_from_command_line    execute_from_command_line(sys.argv) The source code of the manage.py file will be self-explanatory once you read the following code explanation. #!/usr/bin/env python The first line is just the declaration that the following file is a Python file, followed by the import section in which os and sys modules are imported. These modules mainly contain system-related operations. import os import sys The next piece of code checks whether the file is executed by the main function, which is the first function to be executed, and then loads the Django setting module to the current path. As you are already running a virtual environment, this will set the path for all the modules to the path of the current running virtual environment. if __name__ == "__main__":    os.environ.setdefault("DJANGO_SETTINGS_MODULE",     "django_mytweets.settings") django_mytweets/ ( Inner folder) __init__.py Django projects are Python packages, and this file is required to tell Python that this folder is to be treated as a package. A package in Python's terminology is a collection of modules, and they are used to group similar files together and prevent naming conflicts. settings.py: This is the main configuration file for your Django project. In it, you can specify a variety of options, including database settings, site language(s), what Django features need to be enabled, and so on. By default, the database is configured to use SQLite Database, which is advisable to use for testing purposes. Here, we will only see how to enter the database in the settings file; it also contains the basic setting configuration, and with slight modification in the manage.py file, it can be moved to another folder, such as config or conf. To make every other third-party application a part of the project, we need to register it in the settings.py file. INSTALLED_APPS is a variable that contains all the entries about the installed application. As the project grows, it becomes difficult to manage; therefore, there are three logical partitions for the INSTALLED_APPS variable, as follows: DEFAULT_APPS: This parameter contains the default Django installed applications (such as the admin) THIRD_PARTY_APPS: This parameter contains other application like SocialAuth used for social authentication LOCAL_APPS: This parameter contains the applications that are created by you url.py: This is another configuration file. You can think of it as a mapping between URLs and the Django view functions that handle them. This file is one of Django's more powerful features. When we start writing code for our application, we will create new files inside the project's folder. So, the folder also serves as a container for our code. Now that you have a general idea of the structure of a Django project, let's configure our database system. 
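Before moving on, here is a minimal sketch (not part of the files that django-admin.py generates for you) of how the three-way partition of INSTALLED_APPS described above can be expressed in settings.py; the third-party and local application names are illustrative placeholders.

# settings.py (excerpt): a sketch of partitioning INSTALLED_APPS.
# The third-party and local app names are illustrative placeholders.
DEFAULT_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]

THIRD_PARTY_APPS = [
    'social_auth',          # for example, an app used for social authentication
]

LOCAL_APPS = [
    'tweets',               # an application created by you
]

INSTALLED_APPS = DEFAULT_APPS + THIRD_PARTY_APPS + LOCAL_APPS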
Summary We prepared our development environment in this article, created our first project, set up the database, and learned how to launch the Django development server. We learned the best way to write code for our Django project and saw the default Django project structure. Resources for Article: Further resources on this subject: Tinkering Around in Django JavaScript Integration [article] Adding a developer with Django forms [article] So, what is Django? [article]
Digging Deep into Requests

Packt
16 Jun 2015
17 min read
In this article by Rakesh Vidya Chandra and Bala Subrahmanyam Varanasi, authors of the book Python Requests Essentials, we are going to deal with advanced topics in the Requests module. There are many more features in the Requests module that makes the interaction with the web a cakewalk. Let us get to know more about different ways to use Requests module which helps us to understand the ease of using it. (For more resources related to this topic, see here.) In a nutshell, we will cover the following topics: Persisting parameters across requests using Session objects Revealing the structure of request and response Using prepared requests Verifying SSL certificate with Requests Body Content Workflow Using generator for sending chunk encoded requests Getting the request method arguments with event hooks Iterating over streaming API Self-describing the APIs with link headers Transport Adapter Persisting parameters across Requests using Session objects The Requests module contains a session object, which has the capability to persist settings across the requests. Using this session object, we can persist cookies, we can create prepared requests, we can use the keep-alive feature and do many more things. The Session object contains all the methods of Requests API such as GET, POST, PUT, DELETE and so on. Before using all the capabilities of the Session object, let us get to know how to use sessions and persist cookies across requests. Let us use the session method to get the resource. >>> import requests >>> session = requests.Session() >>> response = requests.get("https://google.co.in", cookies={"new-cookie-identifier": "1234abcd"}) In the preceding example, we created a session object with requests and its get method is used to access a web resource. The cookie value which we had set in the previous example will be accessible using response.request.headers. >>> response.request.headers CaseInsensitiveDict({'Cookie': 'new-cookie-identifier=1234abcd', 'Accept-Encoding': 'gzip, deflate, compress', 'Accept': '*/*', 'User-Agent': 'python-requests/2.2.1 CPython/2.7.5+ Linux/3.13.0-43-generic'}) >>> response.request.headers['Cookie'] 'new-cookie-identifier=1234abcd' With session object, we can specify some default values of the properties, which needs to be sent to the server using GET, POST, PUT and so on. We can achieve this by specifying the values to the properties like headers, auth and so on, on a Session object. >>> session.params = {"key1": "value", "key2": "value2"} >>> session.auth = ('username', 'password') >>> session.headers.update({'foo': 'bar'}) In the preceding example, we have set some default values to the properties—params, auth, and headers using the session object. We can override them in the subsequent request, as shown in the following example, if we want to: >>> session.get('http://mysite.com/new/url', headers={'foo': 'new-bar'}) Revealing the structure of request and response A Requests object is the one which is created by the user when he/she tries to interact with a web resource. It will be sent as a prepared request to the server and does contain some parameters which are optional. Let us have an eagle eye view on the parameters: Method: This is the HTTP method to be used to interact with the web service. For example: GET, POST, PUT. URL: The web address to which the request needs to be sent. headers: A dictionary of headers to be sent in the request. files: This can be used while dealing with the multipart upload. 
It's a dictionary of files, with the key as the file name and the value as the file object.
data: This is the body to be attached to the request.
json: The JSON content to be attached as the body of the request. There are two cases that come into the picture here: if json is provided and data is not, the Content-Type header is changed to application/json and json acts as the body of the request. In the second case, if both json and data are provided together, json is silently ignored.
params: A dictionary of URL parameters to append to the URL.
auth: This is used when we need to specify the authentication for the request. It's a tuple containing a username and password.
cookies: A dictionary or a cookie jar of cookies which can be added to the request.
hooks: A dictionary of callback hooks.

A Response object contains the response of the server to an HTTP request. It is generated once Requests gets a response back from the server. It contains all of the information returned by the server and also stores the Request object we created originally. Whenever we make a call to a server using Requests, two major transactions take place:

We construct a Request object, which will be sent out to the server to request a resource
A Response object is generated by the Requests module

Now, let us look at an example of getting a resource from Python's official site:

>>> response = requests.get('https://python.org')

In the preceding line of code, a Request object gets constructed and sent to 'https://python.org'. The Request object thus obtained will be stored in the response.request variable. We can access the headers of the Request object that was sent off to the server in the following way:

>>> response.request.headers
CaseInsensitiveDict({'Accept-Encoding': 'gzip, deflate, compress', 'Accept': '*/*', 'User-Agent': 'python-requests/2.2.1 CPython/2.7.5+ Linux/3.13.0-43-generic'})

The headers returned by the server can be accessed with the response's headers attribute, as shown in the following example:

>>> response.headers
CaseInsensitiveDict({'content-length': '45950', 'via': '1.1 varnish', 'x-cache': 'HIT', 'accept-ranges': 'bytes', 'strict-transport-security': 'max-age=63072000; includeSubDomains', 'vary': 'Cookie', 'server': 'nginx', 'age': '557', 'content-type': 'text/html; charset=utf-8', 'public-key-pins': 'max-age=600; includeSubDomains; ..)

The response object contains different attributes such as _content, status_code, headers, url, history, encoding, reason, cookies, elapsed, and request:

>>> response.status_code
200
>>> response.url
u'https://www.python.org/'
>>> response.elapsed
datetime.timedelta(0, 1, 904954)
>>> response.reason
'OK'

Using prepared Requests
Every request we send to the server turns into a PreparedRequest by default. The request attribute of the Response object, which is received from an API call or a session call, is actually the PreparedRequest that was used. There might be cases in which we ought to send a request which would incur an extra step of adding a different parameter. Parameters can be cookies, files, auth, timeout, and so on. We can handle this extra step efficiently by using the combination of sessions and prepared requests. Let us look at an example:

>>> from requests import Request, Session
>>> header = {}
>>> request = Request('get', 'some_url', headers=header)

We are trying to send a GET request with a header in the previous example. Now, take an instance where we are planning to send the request with the same method, URL, and headers, but we want to add some more parameters to it.
In this condition, we can use the session method to receive complete session level state to access the parameters of the initial sent request. This can be done by using the session object. >>> from requests import Request, Session >>> session = Session() >>> request1 = Request('GET', 'some_url', headers=header) Now, let us prepare a request using the session object to get the values of the session level state: >>> prepare = session.prepare_request(request1) We can send the request object request with more parameters now, as follows: >>> response = session.send(prepare, stream=True, verify=True) 200 Voila! Huge time saving! The prepare method prepares the complete request with the supplied parameters. In the previous example, the prepare_request method was used. There are also some other methods like prepare_auth, prepare_body, prepare_cookies, prepare_headers, prepare_hooks, prepare_method, prepare_url which are used to create individual properties. Verifying an SSL certificate with Requests Requests provides the facility to verify an SSL certificate for HTTPS requests. We can use the verify argument to check whether the host's SSL certificate is verified or not. Let us consider a website which has got no SSL certificate. We shall send a GET request with the argument verify to it. The syntax to send the request is as follows: requests.get('no ssl certificate site', verify=True) As the website doesn't have an SSL certificate, it will result an error similar to the following: requests.exceptions.ConnectionError: ('Connection aborted.', error(111, 'Connection refused')) Let us verify the SSL certificate for a website which is certified. Consider the following example: >>> requests.get('https://python.org', verify=True) <Response [200]> In the preceding example, the result was 200, as the mentioned website is SSL certified one. If we do not want to verify the SSL certificate with a request, then we can put the argument verify=False. By default, the value of verify will turn to True. Body content workflow Take an instance where a continuous stream of data is being downloaded when we make a request. In this situation, the client has to listen to the server continuously until it receives the complete data. Consider the case of accessing the content from the response first and the worry about the body next. In the above two situations, we can use the parameter stream. Let us look at an example: >>> requests.get("https://pypi.python.org/packages/source/F/Flask/Flask-0.10.1.tar.gz", stream=True) If we make a request with the parameter stream=True, the connection remains open and only the headers of the response will be downloaded. This gives us the capability to fetch the content whenever we need by specifying the conditions like the number of bytes of data. The syntax is as follows: if int(request.headers['content_length']) < TOO_LONG: content = r.content By setting the parameter stream=True and by accessing the response as a file-like object that is response.raw, if we use the method iter_content, we can iterate over response.data. This will avoid reading of larger responses at once. The syntax is as follows: iter_content(chunk_size=size in bytes, decode_unicode=False) In the same way, we can iterate through the content using iter_lines method which will iterate over the response data one line at a time. 
In the same way, we can use the iter_lines method, which iterates over the response data one line at a time. The syntax is as follows:
iter_lines(chunk_size=size in bytes, decode_unicode=None, delimiter=None)
The important thing to note while using the stream parameter is that it does not release the connection when set to True, unless all the data is consumed or response.close() is called.

Keep-alive facility
As urllib3 supports the reuse of the same socket connection for multiple requests, we can send many requests over one socket and receive the responses using the keep-alive feature of the Requests library. Within a session, this is automatic; every request made within a session reuses the appropriate connection by default. The connection in use is released only after all the data from the body has been read.

Streaming uploads
A file-like object of massive size can be streamed and uploaded using the Requests library. All we need to do is supply the contents of the stream as the value of the data argument in the request call, as shown in the following lines. The syntax is as follows:
with open('massive-body', 'rb') as file:
    requests.post('http://example.com/some/stream/url',
                  data=file)

Using generator for sending chunk encoded Requests
Chunked transfer encoding is a mechanism for transferring data in an HTTP request. With this mechanism, the data is sent in a series of chunks. Requests supports chunked transfer encoding for both outgoing and incoming requests. In order to send a chunk-encoded request, we need to supply a generator as the body. The usage is shown in the following example:
>>> def generator():
...     yield "Hello "
...     yield "World!"
...
>>> requests.post('http://example.com/some/chunked/url/path',
                  data=generator())

Getting the request method arguments with event hooks
We can alter portions of the request process, and handle events signaled during it, using hooks. For example, there is a hook named response which carries the response generated from a request. Hooks are passed as a dictionary parameter to the request. The syntax is as follows:
hooks = {hook_name: callback_function, … }
The callback_function parameter may or may not return a value. When it returns a value, that value is assumed to replace the data that was passed in. If the callback function doesn't return any value, there is no effect on the data. Here is an example of a callback function (the argument passed to a response hook is the Response object, even though it is named request here):
>>> def print_attributes(request, *args, **kwargs):
...     print(request.url)
...     print(request.status_code)
...     print(request.headers)
If there is an error in the execution of callback_function, you'll receive a warning message in the standard output. Now let us print some of the attributes of the response, using the preceding callback function:
>>> requests.get('https://www.python.org/',
                 hooks=dict(response=print_attributes))
https://www.python.org/
200
CaseInsensitiveDict({'content-type': 'text/html; ...})
<Response [200]>
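To illustrate the replacement behaviour mentioned above, here is a minimal sketch, not from the original text, of a response hook that tags the response before handing it back; because the callback returns a non-None value, that returned object is what the caller receives.
>>> def attach_note(response, *args, **kwargs):
...     # The attribute added here is visible to the caller, because the
...     # returned object replaces the response that was passed in.
...     response.note = 'seen by hook'
...     return response
...
>>> response = requests.get('https://www.python.org/',
...                         hooks={'response': attach_note})
>>> response.note
'seen by hook'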
Iterating over streaming API
A streaming API keeps the request open, which allows us to collect the streamed data in real time. When dealing with a continuous stream of data, we can use iter_lines() in Requests to make sure that none of the messages are missed; it iterates over the response data one line at a time, and it requires the parameter stream to be set to True when sending the request. It is worth keeping in mind that iter_lines() is not reentrant safe: calling it more than once on the same response may result in loss of received data.
Consider the following example taken from http://docs.python-requests.org/en/latest/user/advanced/#streaming-requests:
>>> import json
>>> import requests
>>> r = requests.get('http://httpbin.org/stream/4', stream=True)
>>> for line in r.iter_lines():
...     if line:
...         print(json.loads(line))
In the preceding example, the response contains a stream of data, and with the help of iter_lines() we print it by iterating through every line.

Encodings
As specified in the HTTP protocol (RFC 7230), applications can ask the server to return HTTP responses in an encoded format. Decoding turns the response content into a readable form, which makes it easy to work with. When the HTTP headers do not state the type of encoding, Requests tries to guess the encoding with the help of chardet. If we access the response headers of a request, they contain a content-type key. Let us look at a response header's content type:
>>> re = requests.get('http://google.com')
>>> re.headers['content-type']
'text/html; charset=ISO-8859-1'
In the preceding example, the content type contains 'text/html; charset=ISO-8859-1'. When the charset value is missing and the content type is text, Requests follows RFC 7230 and falls back to a charset of ISO-8859-1. If we are dealing with a different encoding, such as 'utf-8', we can specify it explicitly by setting the Response.encoding property.

HTTP verbs
Requests supports the full range of HTTP verbs, which are described below; a short sketch exercising a few of them follows the list. For most of the verbs, 'url' is the only argument that must be passed.
GET: The GET method requests a representation of the specified resource. Apart from retrieving the data, there is no other effect of using this method. Definition: requests.get(url, **kwargs)
POST: The POST verb is used for the creation of new resources. The submitted data is handled by the server for a specified resource. Definition: requests.post(url, data=None, json=None, **kwargs)
PUT: This method uploads a representation to the specified URI. If the URI does not point to an existing resource, the server can create a new one with the given data; otherwise, it modifies the existing resource. Definition: requests.put(url, data=None, **kwargs)
DELETE: This is pretty easy to understand. It is used to delete the specified resource. Definition: requests.delete(url, **kwargs)
HEAD: This verb is useful for retrieving the meta-information written in the response headers without having to fetch the response body. Definition: requests.head(url, **kwargs)
OPTIONS: OPTIONS is an HTTP method which returns the HTTP methods that the server supports for a specified URL. Definition: requests.options(url, **kwargs)
PATCH: This method is used to apply partial modifications to a resource. Definition: requests.patch(url, data=None, **kwargs)
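As a quick illustration of several of these verbs, the following minimal sketch (not from the original text) exercises the echo endpoints of httpbin.org, which is used here only as a convenient test service; the responses shown are indicative.
>>> import requests
>>> requests.post('http://httpbin.org/post', data={'name': 'requests'})    # create
<Response [200]>
>>> requests.put('http://httpbin.org/put', data={'name': 'requests'})      # replace
<Response [200]>
>>> requests.patch('http://httpbin.org/patch', data={'rating': '5'})       # partial update
<Response [200]>
>>> requests.delete('http://httpbin.org/delete')                           # remove
<Response [200]>
>>> requests.head('http://httpbin.org/get').headers['content-type']        # headers only
'application/json'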
Self-describing the APIs with link headers
Take the case of accessing a resource whose information is spread across several pages. If we need to reach the next page of the resource, we can make use of the link headers. Link headers contain metadata about the requested resource, in this case the information about the next page:
>>> url = "https://api.github.com/search/code?q=addClass+user:mozilla&page=1&per_page=4"
>>> response = requests.head(url=url)
>>> response.headers['link']
'<https://api.github.com/search/code?q=addClass+user%3Amozilla&page=2&per_page=4>; rel="next", <https://api.github.com/search/code?q=addClass+user%3Amozilla&page=250&per_page=4>; rel="last"'
In the preceding example, we specified in the URL that we want to access page number one and that it should contain four records. Requests automatically parses the link header and exposes the information about the next and last pages; when we access the link header, the output shows the page values and the number of records per page.

Transport Adapter
Transport adapters provide the interface that Requests sessions use to connect over HTTP and HTTPS. They can also be used to mimic a web service to fit our needs, and to configure requests according to the HTTP service we opt to use. Requests ships with a Transport Adapter called HTTPAdapter. Consider the following example:
>>> session = requests.Session()
>>> adapter = requests.adapters.HTTPAdapter(max_retries=6)
>>> session.mount("http://google.co.in", adapter)
In this example, we created a session in which every request to a URL starting with http://google.co.in retries up to six times when the connection fails.

Summary
In this article, we learnt about creating sessions and using a session with different criteria. We also looked deeply into HTTP verbs and using proxies. We learnt about streaming requests, dealing with SSL certificate verification, and streaming responses. We also got to know how to use prepared requests, link headers, and chunk-encoded requests.
Resources for Article: Further resources on this subject: Machine Learning [article] Solving problems – closest good restaurant [article] Installing NumPy, SciPy, matplotlib, and IPython [article]

Deploying a Play application on CoreOS and Docker

Packt
11 Jun 2015
8 min read
In this article by Giancarlo Inductivo, author of the book Play Framework Cookbook Second Edition, we will see deploy a Play 2 web application using CoreOS and Docker. CoreOS is a new, lightweight operating system ideal for modern application stacks. Together with Docker, a software container management system, this forms a formidable deployment environment for Play 2 web applications that boasts of simplified deployments, isolation of processes, ease in scalability, and so on. (For more resources related to this topic, see here.) For this recipe, we will utilize the popular cloud IaaS, Digital Ocean. Ensure that you sign up for an account here: https://cloud.digitalocean.com/registrations/new This recipe also requires Docker to be installed in the developer's machine. Refer to the official Docker documentation regarding installation: https://docs.docker.com/installation/ How to do it... Create a new Digital Ocean droplet using CoreOS as the base operating system. Ensure that you use a droplet with at least 1 GB of RAM for the recipe to work. note that Digital Ocean does not have a free tier and are all paid instances: Ensure that you select the appropriate droplet region: Select CoreOS 607.0.0 and specify a SSH key to use. Visit the following link if you need more information regarding SSH key generation:https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys--2: Once the Droplet is created, make a special note of the Droplet's IP address which we will use to log in to the Droplet: Next, create a Docker.com account at https://hub.docker.com/account/signup/ Create a new repository to house the play2-deploy-73 docker image that we will use for deployment: Create a new Play 2 webapp using the activator template, computer-database-scala, and change into the project root:    activator new play2-deploy-73 computer-database-scala && cd play2-deploy-73 Edit conf/application.conf to enable automatic database evolutions:    applyEvolutions.default=true Edit build.sbt to specify Docker settings for the web app:    import NativePackagerKeys._    import com.typesafe.sbt.SbtNativePackager._      name := """play2-deploy-73"""      version := "0.0.1-SNAPSHOT"      scalaVersion := "2.11.4"      maintainer := "<YOUR_DOCKERHUB_USERNAME HERE>"      dockerExposedPorts in Docker := Seq(9000)      dockerRepository := Some("YOUR_DOCKERHUB_USERNAME HERE ")      libraryDependencies ++= Seq(      jdbc,      anorm,      "org.webjars" % "jquery" % "2.1.1",      "org.webjars" % "bootstrap" % "3.3.1"    )          lazy val root = (project in file(".")).enablePlugins(PlayScala) Next, we build the Docker image and publish it to Docker Hub:    $ activator clean docker:stage docker:publish    ..    [info] Step 0 : FROM dockerfile/java    [info] ---> 68987d7b6df0    [info] Step 1 : MAINTAINER ginduc    [info] ---> Using cache    [info] ---> 9f856752af9e    [info] Step 2 : EXPOSE 9000    [info] ---> Using cache    [info] ---> 834eb5a7daec    [info] Step 3 : ADD files /    [info] ---> c3c67f0db512    [info] Removing intermediate container 3b8d9c18545e    [info] Step 4 : WORKDIR /opt/docker    [info] ---> Running in 1b150e98f4db    [info] ---> ae6716cd4643    [info] Removing intermediate container 1b150e98f4db  [info] Step 5 : RUN chown -R daemon .    
[info] ---> Running in 9299421b321e    [info] ---> 8e15664b6012    [info] Removing intermediate container 9299421b321e    [info] Step 6 : USER daemon    [info] ---> Running in ea44f3cc8e11    [info] ---> 5fd0c8a22cc7    [info] Removing intermediate container ea44f3cc8e11    [info] Step 7 : ENTRYPOINT bin/play2-deploy-73    [info] ---> Running in 7905c6e2d155    [info] ---> 47fded583dd7    [info] Removing intermediate container 7905c6e2d155    [info] Step 8 : CMD    [info] ---> Running in b807e6360631    [info] ---> c3e1999cfbfd    [info] Removing intermediate container b807e6360631    [info] Successfully built c3e1999cfbfd    [info] Built image ginduc/play2-deploy-73:0.0.2-SNAPSHOT    [info] The push refers to a repository [ginduc/play2-deploy-73] (len: 1)    [info] Sending image list    [info] Pushing repository ginduc/play2-deploy-73 (1 tags)    [info] Pushing tag for rev [c3e1999cfbfd] on {https://cdn-registry-1.docker.io/v1/repositories/ginduc/play2-deploy-73/tags/0.0.2-SNAPSHOT}    [info] Published image ginduc/play2-deploy-73:0.0.2-SNAPSHOT Once the Docker image has been published, log in to the Digital Ocean droplet using SSH to pull the uploaded docker image. You will need to use the core user for your CoreOS Droplet:    ssh core@<DROPLET_IP_ADDRESS HERE>    core@play2-deploy-73 ~ $ docker pull <YOUR_DOCKERHUB_USERNAME HERE>/play2-deploy-73:0.0.1-SNAPSHOT    Pulling repository ginduc/play2-deploy-73    6045dfea237d: Download complete    511136ea3c5a: Download complete    f3c84ac3a053: Download complete    a1a958a24818: Download complete    709d157e1738: Download complete    d68e2305f8ed: Download complete    b87155bee962: Download complete    2097f889870b: Download complete    5d2fb9a140e9: Download complete    c5bdb4623fac: Download complete    68987d7b6df0: Download complete    9f856752af9e: Download complete    834eb5a7daec: Download complete    fae5f7dab7bb: Download complete    ee5ccc9a9477: Download complete    74b51b6dcfe7: Download complete    41791a2546ab: Download complete    8096c6beaae7: Download complete    Status: Downloaded newer image for <YOUR_DOCKERHUB_USERNAME HERE>/play2-deploy-73:0.0.2-SNAPSHOT We are now ready to run our Docker image using the following docker command:    core@play2-deploy-73 ~ $ docker run -p 9000:9000 <YOUR_DOCKERHUB_USERNAME_HERE>/play2-deploy-73:0.0.1-SNAPSHOT Using a web browser, access the computer-database webapp using the IP address we made note of in an earlier step of this recipe (http://192.241.239.43:9000/computers):   How it works... In this recipe, we deployed a Play 2 web application by packaging it as a Docker image and then installing and running the same Docker image in a Digital Ocean Droplet. Firstly, we will need an account on DigitalOcean.com and Docker.com. Once our accounts are ready and verified, we create a CoreOS-based droplet. CoreOS has Docker installed by default, so all we need to install in the droplet is the Play 2 web app Docker image. The Play 2 web app Docker image is based on the activator template, computer-database-scala, which we named play2-deploy-73. We make two modifications to the boilerplate code. The first modification in conf/application.conf:    applyEvolutions.default=true This setting enables database evolutions by default. The other modification is to be made in build.sbt. 
We import the required packages that contain the Docker-specific settings:    import NativePackagerKeys._    import com.typesafe.sbt.SbtNativePackager._ The next settings are to specify the repository maintainer, the exposed Docker ports, and the Docker repository in Docker.com; in this case, supply your own Docker Hub username as the maintainer and Docker repository values:    maintainer := "<YOUR DOCKERHUB_USERNAME>"      dockerExposedPorts in Docker := Seq(9000)      dockerRepository := Some("<YOUR_DOCKERHUB_USERNAME>") We can now build Docker images using the activator command, which will generate all the necessary files for building a Docker image:    activator clean docker:stage Now, we will use the activator docker command to upload and publish to your specified Docker.com repository:    activator clean docker:publish To install the Docker image in our Digital Ocean Droplet, we first log in to the droplet using the core user:    ssh core@<DROPLET_IP_ADDRESS> We then use the docker command, docker pull, to download the play2-deploy-73 image from Docker.com, specifying the tag:    docker pull <YOUR_DOCKERHUB_USERNAME>/play2-deploy-73:0.0.1-SNAPSHOT Finally, we can run the Docker image using the docker run command, exposing the container port 9000:    docker run -p 9000:9000 <YOUR_DOCKERHUB_USERNAME>/play2-deploy-73:0.0.1-SNAPSHOT There's more... Refer to the following links for more information on Docker and Digital Ocean: https://www.docker.com/whatisdocker/ https://www.digitalocean.com/community/tags/docker Summary In this recipe, we deployed a Play 2 web application by packaging it as a Docker image and then installing and running the same Docker image in a Digital Ocean Droplet. Resources for Article: Further resources on this subject: Less with External Applications and Frameworks [article] SpriteKit Framework and Physics Simulation [article] Speeding Vagrant Development With Docker [article]

edX E-Learning Course Marketing

Packt
05 Jun 2015
9 min read
In this article by Matthew A. Gilbert, the author of edX E-Learning Course Development, we are going to learn various ways of marketing. (For more resources related to this topic, see here.) edX's marketing options If you don't market your course, you might not get any new students to teach. Fortunately, edX provides you with an array of tools for this purpose, as follows: Creative Submission Tool: Submit the assets required for creating a page in your edX course using the Creative Submission Tool. You can also use those very materials in promoting the course. Access the Creative Submission Tool at https://edx.projectrequest.net/index.php/request. Logo and the Media Kit: Although these are intended for members of the media, you can also use the edX Media Kit for your promotional purposes: you can download high-resolution photos, edX logo visual guidelines (in Adobe Illustrator and EPS versions), key facts about edX, and answers to frequently asked questions. You can also contact the press office for additional information. You can find the edX Media Kit online at https://www.edx.org/media-kit. edX Learner Stories: Using stories of students who have succeeded with other edX courses is a compelling way to market the potential of your course. Using Tumblr, edX Learner Stories offers more than a dozen student profiles. You might want to use their stories directly or use them as a template for marketing materials of your own. Read edX Learner Stories at http://edxstories.tumblr.com. Social media marketing Traditional marketing tools and the options available in the edX Marketing Portal are a fitting first step in promoting your course. However, social media gives you a tremendously enhanced toolkit you can use to attract, convert, and transform spectators into students. When marketing your course with social media, you will also simultaneously create a digital footprint for yourself. This in turn helps establish your subject matter expertise far beyond one edX course. What's more, you won't be alone; there exists a large community of edX instructors and students, including those from other MOOC platforms already online. Take, for example, the following screenshot from edX's Twitter account (@edxonline). edX has embraced social media as a means of marketing and to create a practicing virtual community for those creating and taking their courses. Likewise, edX also actively maintains a page on Facebook, as follows: You can also see how active edX's YouTube channel is in the following screenshot. Note that there are both educational and promotional videos. To get you started in social media—if you're not already there—take a look at the list of 12 social media tools, as follows. Not all of these tools might be relevant to your needs, but consider the suggestions to decide how you might best use them, and give them a try: Facebook (https://www.facebook.com): Create a fan page for your edX course; you can re-use content from your course's About page such as your course intro video, course description, course image, and any other relevant materials. Be sure to include a link from the Facebook page for your course to its About page. Look for ways to share other content from your course (or related to your course) in a way that engages members of your fan page. Use your Facebook page to generate interest and answer questions from potential students. You might also consider creating a Facebook group. 
This can be more useful for current students to share knowledge during the class and to network once it's complete. Visit edX on Facebook at https://www.facebook.com/edX. Google+ (https://plus.google.com): Take the same approach as you did with your Facebook fan page. While this is not as engaging as Facebook, you might find that posting content on Google+ increases traffic to your course's About page due to the increased referrals you are likely to experience via Google search results. Add edX to your circles on Google+ at https://plus.google.com/+edXOnline/posts. Instagram (https://instagram.com): Share behind-the-scenes pictures of you and your staff for your course. Show your students what a day in your life is like, making sure to use a unique hashtag for your course. Picture the possibilities with edX on Instagram at https://instagram.com/edxonline/. LinkedIn (https://www.linkedin.com): Share information about your course in relevant LinkedIn groups, and post public updates about it in your personal account. Again, make sure you include a unique hashtag for your course and a link to the About page. Connect with edX on LinkedIn at https://www.linkedin.com/company/edx. Pinterest (https://www.pinterest.com): Share photos as with Instagram, but also consider sharing infographics about your course's subject matter or share infographics or imagers you use in your actual course as well. You might consider creating pin boards for each course, or one per pin board per module in a course. Pin edX onto your Pinterest pin board at https://www.pinterest.com/edxonline/. Slideshare (http://www.slideshare.net): If you want to share your subject matter expertise and thought leadership with a wider audience, Slideshare is a great platform to use. You can easily post your PowerPoint presentations, class documents or scholarly papers, infographics, and videos from your course or another topic. All of these can then be shared across other social media platforms. Review presentations from or about edX courses on Slideshare at http://www.slideshare.net/search/slideshow?searchfrom=header&q=edx. SoundCloud (https://soundcloud.com): With SoundCloud, you can share MP3 files of your course lectures or create podcasts related to your areas of expertise. Your work can be shared on Twitter, Tumblr, Facebook, and Foursquare, expanding your influence and audience exponentially. Listen to some audio content from Harvard University at https://soundcloud.com/harvard. Tumblr (https://www.tumblr.com): Resembling what the child of WordPress and Twitter might be like, Tumblr provides a platform to share behind-the-scenes text, photos, quotes, links, chat, audios, and videos of your edX course and the people who make it possible. Share a "day in the life" or document in real time, an interactive history of each edX course you teach. Read edX's learner stories at http://edxstories.tumblr.com. Twitter (https://twitter.com): Although messages on Twitter are limited to 140 characters, one tweet can have a big impact. For a faculty wanting to promote its edX course, it is an efficient and cost-effective option. Tweet course videos, samples of content, links to other curriculum, or promotional material. Engage with other educators who teach courses and retweet posts from academic institutions. Follow edX on Twitter at https://twitter.com/edxonline. 
You might also consider subscribing to edX's Twitter list of edX instructors at https://twitter.com/edXOnline/lists/edx-professors-teachers, and explore the Twitter accounts of edX courses by subscribing to that list at https://twitter.com/edXOnline/lists/edx-course-handles. Vine (https://vine.co): A short-format video service owned by Twitter, Vine provides you with 6 seconds to share your creativity, either in a continuous stream or smaller segments linked together like stop motion. You might create a vine showing the inner working of the course faculty and staff, or maybe even ask short questions related to the course content and invite people to reply with answers. Watch vines about MOOCs at https://vine.co. WordPress: WordPress gives you two options to manage and share content with students. With WordPress.com (https://wordpress.com), you're given a selection of standardized templates to use on a hosted platform. You have limited control but reasonable flexibility and limited, if any, expenses. With Wordpress.org (https://wordpress.org), you have more control but you need to host it on your own web server, which requires some technical know-how. The choice is yours. Read posts on edX on the MIT Open Matters blog on Wordpress.com at https://mitopencourseware.wordpress.com/category/edx/. YouTube (https://www.youtube.com): YouTube is the heart of your edX course. It's the core of your curriculum and the anchor of engagement for your students. When promoting your course, use existing videos from your curriculum in your social media campaigns, but identify opportunities to record short videos specifically for promoting your course. Watch course videos and promotional content on the edX YouTube channel at https://www.youtube.com/user/EdXOnline. Personal branding basics Additionally, whether the impact of your effort is immediately evident or not, your social media presence powers your personal brand as a professor. Why is that important? Read on to know. With the possible exception of marketing professors, most educators likely tend to think more about creating and teaching their course than promoting it—or themselves. Traditionally, that made sense, but it isn't practical in today's digitally connected world. Social media opens an area of influence where all educators—especially those teaching an edX course—should be participating. Unfortunately, many professors don't know where or how to start with social media. If you're teaching a course on edX, or even edX Edge, you will likely have some kind of marketing support from your university or edX. But if you are just in an organization using edX Code, or simply want to promote yourself and your edX course, you might be on your own. One option to get you started with social media is the Babb Group, a provider of resources and consulting for online professors, business owners, and real-estate investors. Its founder and CEO, Dani Babb (PhD), says this: "Social media helps you show that you are an expert in a given field. It is an important tool today to help you get hired, earn promotions, and increase your visibility." The Babb Group offers five packages focused on different social media platforms: Twitter, LinkedIn, Facebook, Twitter and Facebook, or Twitter with Facebook and LinkedIn. You can view the Babb Group's social media marketing packages at http://www.thebabbgroup.com/social-media-profiles-for-professors.html. 
Connect with Dani Babb on LinkedIn at https://www.linkedin.com/in/drdanibabb or on Twitter at https://twitter.com/danibabb Summary In this article, we tackled traditional marketing tools, identified options available from edX, discussed social media marketing, and explored personal branding basics. Resources for Article: Further resources on this subject: Constructing Common UI Widgets [article] Getting Started with Odoo Development [article] MODx Web Development: Creating Lists [article]