
How-To Tutorials - Server-Side Web Development


Understanding Web-based Applications and Other Multimedia Forms

Packt
20 Nov 2013
5 min read
However, we will not look at blogs, wikis, or social networking sites, which are usually referred to as web-based reference tools. Moodle already has these, so instead we will look at web applications that allow the easy creation, collaboration, and sharing of multimedia elements, such as interactive floor planners, online maps, timelines, and many other applications that are very easy to use and that support different learning styles. I usually describe Moodle as a school operating system and web apps as its social applications, to illustrate what I believe can be a very powerful way of using Moodle and the web for learning. Designing meaningful activities in Moodle gives students the opportunity to express their creativity by using these tools and to reflect on the produced multimedia artifacts with both peers and teacher. However, we have to keep in mind some issues of e-safety, backups, and licensing when using these online tools, which are usually associated with online communities. After all, our students will be using them and will therefore be exposed to some risks.

Creating dynamic charts using Google Drive (Spreadsheets)

Tasks assigned to students in our Moodle course will require them to use a tool like Google Spreadsheets to present their plans to colleagues in a visual way. Google Drive (http://drive.google.com) provides a set of online productivity tools that work on web standards and recreate a typical Office suite. We can create documents, spreadsheets, presentations, drawings, or forms.

To use Google Drive, we need a Google account. After creating our account and logging in to Google Drive, we can organize the files displayed on the right side of the screen, add them to folders, tag them, search them (of course, it's Google!), collaborate on them (imagine a wiki spreadsheet), export them to several formats (including the usual formats for Office documents from Microsoft, OpenOffice, or Adobe PDF), and publish these documents online.

We will start by creating a new spreadsheet to make a budget for a music studio that will be built during the music course, by navigating to CREATE | Spreadsheet.

Insert a chart

As in any spreadsheet application, we can add a title by double-clicking on Untitled spreadsheet, and then we add some equipment and costs to the cells. After populating our table with values and selecting all of them, we should click on the Insert chart button. The Start tab will show up in the Chart Editor pop-up, as shown in the following screenshot. If we click on the Charts tab, we can pick from a list of available charts; let's pick one of the pie charts. In the Customize tab, we can add a title to the chart and change its appearance. When everything is done, we can click on the Insert button, and the chart previewed in the Customize tab will be added to the spreadsheet.

Publish

If we click on the chart, a square will be displayed in the upper-right corner, and if we click on the drop-down arrow, we will see a Publish chart... option, which can be used to publish the chart. When we click on this option, we will be presented with two ways of embedding the chart: the first as an interactive chart and the second as an image. Both change dynamically if we change the values or the chart in Google Drive. We should use the image code to put the chart on a Moodle forum.

Share, comment, and collaborate

Google Drive has options for sharing our spreadsheet and allowing comments and changes by other people.
In the upper-right corner of each opened document, there are two buttons for that: Comments and Share. To add collaborators to our spreadsheet, we have to click on the Share button, add their contacts (for example, e-mail addresses) in the Invite people: field, click on the Share & save button, and hit Done. If a collaborator is working on the same spreadsheet at the same time as we are, we can see it below the Comments and Share buttons, as shown in the following screenshot. If we click on the arrow next to 1 other viewer, we can chat directly with the collaborator as we edit the document collaboratively. Remember that this can be quite useful in distance courses that have collaborative tasks assigned to groups.

Creating a shared folder using Google Drive

We can also use the sharing functionality to share documents with collaborators (there are 15 GB of space for that). On the main Google Drive page, we can create a folder by navigating to Create | Folder; we are then required to give it a name. The folder will be shown in the files and folders explorer in Google Drive. To share it with someone, we need to right-click the folder and choose the Share... option. Then, just like the process of sharing a spreadsheet that we saw previously, we just need to add our collaborators' contacts (for example, e-mail addresses) in the Invite people: field, click on Share & save, and hit Done. The invited people will receive an e-mail inviting them to add the shared folder to their Google Drive (they need a Google account for this), and that's it. Everything we add to this folder is automatically synced with everyone. This includes Google Drive documents, PDFs, and any other files uploaded to the folder, and it is an easy way to share multimedia projects within a group of people working on the same project.


RESTful Web Services – Server-Sent Events (SSE)

Packt
15 Nov 2013
5 min read
Getting started

Generally, the flow of web services is initiated by the client sending a request for a resource to the server. This is the traditional way of consuming web services.

Traditional Flow

Here, the browser or Jersey client initiates the request for data from the server, and the server provides a response along with the data. Every time a client initiates a request for the resource, the server may not have new data to return. This becomes a problem in applications where real-time data needs to be shown: even though there is no new data on the server, the client needs to check for it every time. Nowadays, there is often a requirement for the server to send data without a request from the client. For this to happen, the client and server need to stay connected so that the server can push data to the client; this is why the technique is termed Server-Sent Events. With these events, the connection created initially between the client and server is not released after the request. The server maintains the connection and pushes data to the respective client when required.

Server-Sent Event Flow

In the Server-Sent Event Flow diagram, a browser or a Jersey client initially sends a request to establish a connection with the server using EventSource, while the server is always in listening mode for new connections. When a new connection from any EventSource is received, the server opens a new connection and maintains it in a queue (how connections are maintained depends on the implementation of the business logic). SSE creates a single unidirectional connection, so only one connection is established between the client and the server. After the connection is successfully established, the client is in listening mode for new events from the server. Whenever a new event occurs on the server side, the server broadcasts the event, along with its data, to the specific open HTTP connections. In modern browsers that support HTML5, the onmessage method of EventSource is responsible for handling new events received from the server, whereas in the case of Jersey clients, we have the onEvent method of EventSource, which handles new events from the server.

Implementing Server-Sent Events (SSE)

To use SSE, we need to register SseFeature on both the client and server sides. By doing so, the client/server gets connected to SseFeature, which is used while data traverses the network.

SSE: Internal Working

In the SSE: Internal Working diagram, we assume that the client and server are already connected. When a new event is generated, the server creates an OutboundEvent instance responsible for producing chunked output, which in turn holds the data in a serialized format. OutboundEventWriter is responsible for serializing the data on the server side, and we need to specify the media type of the data in OutboundEvent; there is no restriction to specific media types. On the client side, InboundEvent is responsible for handling the incoming data from the server: it receives the chunked input containing the serialized data, which is then deserialized using InboundEventReader. Using SseBroadcaster, we are able to broadcast events to multiple clients that are connected to the server.
Let's look at an example that shows how to create SSE web services and broadcast events:

    @ApplicationPath("services")
    public class SSEApplication extends ResourceConfig {
        public SSEApplication() {
            super(SSEResource.class, SseFeature.class);
        }
    }

Here, we registered the SseFeature module and the SSEResource root-resource class with the server.

    private static final SseBroadcaster BROADCASTER = new SseBroadcaster();
    ......
    @GET
    @Path("sseEvents")
    @Produces(SseFeature.SERVER_SENT_EVENTS)
    public EventOutput getConnection() {
        final EventOutput eventOutput = new EventOutput();
        BROADCASTER.add(eventOutput);
        return eventOutput;
    }
    ......

In the SSEResource root class, we need a resource method that allows clients to establish the connection and keeps it open. Here, we maintain the connection in the BROADCASTER instance of the SseBroadcaster class. EventOutput manages a specific client connection; SseBroadcaster is simply responsible for accommodating a group of EventOutput instances, that is, the clients' connections.

    ......
    @POST
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    public void post(@FormParam("name") String name) {
        BROADCASTER.broadcast(new OutboundEvent.Builder()
            .data(String.class, name)
            .build());
    }
    ......

When the post method is consumed, we create a new event and broadcast it to the clients held in the BROADCASTER instance. The OutboundEvent builder exposes the data(MediaType, Object) method, which is initialized with a specific media type and the actual data; we can provide any media type to send data. By using the build() method, the data is serialized internally with the OutboundEventWriter class. When broadcast(OutboundEvent) is called, SseBroadcaster internally pushes the data to all registered EventOutput instances, that is, to the clients connected to SseBroadcaster. At times, a client that has been connected gets disconnected after a while. In this case, SseBroadcaster automatically handles the client connection; that is, it determines whether the connection needs to be maintained. When a client connection is closed, the broadcaster detects the EventOutput and frees the connection and the resources held by that EventOutput connection.
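To complement the server side shown above, the following is a minimal sketch of a Jersey 2.x client that consumes these events. The host, port, and the assumption that the application is deployed at the server root are illustrative only; the path simply combines the @ApplicationPath("services") value with the @Path("sseEvents") resource method from the example, so adjust it to your root resource's own @Path and context root.

    import javax.ws.rs.client.Client;
    import javax.ws.rs.client.ClientBuilder;
    import javax.ws.rs.client.WebTarget;

    import org.glassfish.jersey.media.sse.EventListener;
    import org.glassfish.jersey.media.sse.EventSource;
    import org.glassfish.jersey.media.sse.InboundEvent;
    import org.glassfish.jersey.media.sse.SseFeature;

    public class SSEClientSketch {

        public static void main(String[] args) throws Exception {
            // Register SseFeature on the client, just as we did on the server.
            Client client = ClientBuilder.newBuilder()
                    .register(SseFeature.class)
                    .build();

            // Hypothetical URL: adjust host, port, and context root to your deployment.
            WebTarget target = client.target("http://localhost:8080/services/sseEvents");

            EventSource eventSource = EventSource.target(target).build();

            // onEvent is invoked for every event broadcast by the server.
            eventSource.register(new EventListener() {
                @Override
                public void onEvent(InboundEvent inboundEvent) {
                    System.out.println("Received: " + inboundEvent.readData(String.class));
                }
            });

            eventSource.open();   // establishes the long-lived connection
            Thread.sleep(60000);  // keep this demo client alive for one minute
            eventSource.close();
        }
    }

Posting a form parameter named name to the resource's POST method (for example, with curl or a simple HTML form) should then cause every connected client to print the broadcast value.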
Summary

Thus we learned the difference between the traditional web service flow and the SSE web service flow. We also covered how to create SSE web services and implement the Jersey client in order to consume SSE using different programmatic models.

Useful Links:
Setting up the most Popular Journal Articles in your Personalized Community in Liferay Portal
Understanding WebSockets and Server-sent Events in Detail
RESS - The idea and the Controversies


Working with Different Types of Interactive Charts

Packt
30 Oct 2013
7 min read
This article explains how to create and embed 2D and 3D charts; they can be interactive or static, and we will insert them into our Moodle courses. We will mainly work with several spreadsheets in order to include the diverse tools and techniques that are also presented. The main idea is to display data in charts and provide students with the necessary information for their activities. We will also work with a variety of charts and deal with statistics as a baseline topic in this article. We can either develop a chart or work with ready-to-use data, and you can design these types of activities in your Moodle course together with a math teacher.

When thinking of statistics, we generally have in mind a picture of a chart and some percentages representing its data. We can change that paradigm and create a different way to draw and read statistics in our Moodle course. We design charts with drawings, map charts, links to websites, and other interesting items. We can also redesign charts comprising numbers with different assets, because we want not only to enrich but also to strengthen the diversity of the material for our Moodle course, since some students are not keen on numbers and dislike activities with them. So, let's give statistics another chance!

There are different types of graphics to show statistics, so we present a variety of tools available to display different results. No matter what our subject is, we can include these types of graphics in our Moodle course. You can use them to help your students give weight to their arguments and express themselves clearly using key points. We teach students to include graphics, read them, and use them as a tool of communication. We can also work with puzzles related to statistics; that is to say, we can invent a graph and give tips or clues to our students so that they can sort out which percentages belong to the chart. In other words, we can create a listening comprehension activity, a reading comprehension activity, or a math problem. We can simply upload or embed the chart, create an appealing activity, and give clues to our students so that they can think of the items belonging to the chart.

Inserting column charts

In this activity, we work with the website http://populationaction.org/. We work with statistics about different topics that are related to each other. We can explore different countries and use several charts in order to draw conclusions. We can also embed the charts in our Moodle course.

Getting ready

We need to think of a country to work with. We can compare statistics on the population, water, croplands, and forests of different countries in order to draw conclusions about their futures.

How to do it...

We go to the website mentioned earlier and follow some steps in order to get the HTML code to embed it in our Moodle course. In this case, we choose Canada. These are the steps to follow:

1. Enter http://populationaction.org/ in the browser window.
2. Navigate to Publications | Data & Maps.
3. Click on People in the Balance.
4. Click on the down arrow next to the Country or Region Name search block and choose Canada, as shown in the following screenshot.
5. Go to the bottom of the page and click on Share.
6. Copy the HTML code, as shown in the following screenshot.
7. Click on Done.

How it works...

It is time to embed the charts in our Moodle course. Another option is to draw the charts using a spreadsheet.
So, we choose the weekly outline section where we want to add this activity and perform the following steps:

1. Click on Add an activity or resource.
2. Click on Forum | Add.
3. Complete the Forum name block.
4. Click on the down arrow in Forum type and choose Q and A forum.
5. Complete the Description block.
6. Click on the Edit HTML source icon.
7. Paste the HTML code that was copied.
8. Click on Update.
9. Click on the down arrow next to Subscription mode and choose Forced subscription.
10. Click on Save and display.

The activity looks as shown in the following screenshot.

Embedding a line chart

In this recipe, we will present the estimated number of people (in millions) using a particular language over the Internet. To do this, we may include images in our spreadsheet in accordance with the method being used to design the activity: instead of writing the names of the languages, we insert the flags that represent each language. We design the line chart taking into account the statistics published at http://www.internetworldstats.com/stats7.htm.

Getting ready

We carry out the activity using Google Docs. We have to sign in and follow the steps required to design a spreadsheet file. We have several options for working with the document. After you have an account to work with Google Drive, let's see how to make our line chart!

How to do it...

We work with a spreadsheet because we need to make calculations and create a chart. First, we need to create the spreadsheet document, so we perform the following steps:

1. Click on Create | Spreadsheet, as shown in the following screenshot.
2. Write the names of the languages spoken in column A.
3. Write the figures in column B (from the http://www.internetworldstats.com/stats7.htm website).
4. Select the data from A1 up to B11.
5. Click on Insert | Chart.
6. Edit your chart using the Chart Editor, as shown in the following screenshot, and click on Insert.
7. Add the images of the flags corresponding to the languages spoken.
8. Position the cursor over C1 and click on Insert | Image.... Another pop-up window will appear; you have several ways to upload images, as shown in the following screenshot.
9. Click on Choose an image to upload and insert the image from your computer.
10. Click on Select.
11. Repeat the same process for all the languages. Steps 7 to 11 are optional.
12. Click on the chart.
13. Click on the down arrow in Share | Publish chart..., as shown in the following screenshot.
14. Click on the down arrow next to Select a public format and choose Image, as shown in the following screenshot.
15. Copy the HTML code that appears, as shown in the previous screenshot.
16. Click on Done.

How it works...

We have just designed the chart that we want our students to work with. We are going to embed the chart in our Moodle course; another option is to share the spreadsheet and allow students to draw the chart. If you want to design a warm-up activity for students to guess or find out which are the top languages used over the Internet, you could add a chat, a forum, or a question to the course. In this recipe, we are going to create a wiki so that students can work together. So, select the weekly outline section where you want to add the activity and perform the following steps:

1. Click on Add an activity or resource.
2. Click on Wiki | Add.
3. Complete the Wiki name and Description blocks.
4. Click on the Edit HTML source icon and paste the HTML code that we previously copied. Then click on Update.
5. Complete the First page name block.
6. Click on Save and return to course.
The activity looks as shown in the following screenshot:


Creating and Using Composer Packages

Packt
29 Oct 2013
7 min read
Using Bundles

One of the great features of Laravel is the ease with which we can include the class libraries that others have made using bundles. On the Laravel site, there are already many useful bundles, some of which automate certain tasks while others easily integrate with third-party APIs. A recent addition to the PHP world is Composer, which allows us to use libraries (or packages) that aren't specific to Laravel. In this article, we'll get up and running with bundles, and we'll even create our own bundle that others can download. We'll also see how to incorporate Composer into our Laravel installation to open up a wide range of PHP libraries that we can use in our application.

Downloading and installing packages

One of the best features of Laravel is how modular it is. Most of the framework is built using libraries, or packages, that are well tested and widely used in other projects. By using Composer for dependency management, we can easily include other packages and seamlessly integrate them into our Laravel app. For this recipe, we'll be installing two popular packages into our app: Jeffrey Way's Laravel 4 Generators and the Imagine image processing package.

Getting ready

For this recipe, we need a standard installation of Laravel using Composer.

How to do it...

For this recipe, we will follow these steps:

1. Go to https://packagist.org/.
2. In the search box, search for way generator, as shown in the following screenshot.
3. Click on the link for way/generators.
4. View the details at https://packagist.org/packages/way/generators and take note of the require line to get the package's version. For our purposes, we'll use "way/generators": "1.0.*".
5. In our application's root directory, open up the composer.json file and add the package to the require section so it looks like this:

    "require": {
        "laravel/framework": "4.0.*",
        "way/generators": "1.0.*"
    },

6. Go back to http://packagist.org and perform a search for imagine, as shown in the following screenshot.
7. Click on the link to imagine/imagine and copy the require code for dev-master.
8. Go back to our composer.json file and update the require section to include the imagine package. It should now look similar to the following code:

    "require": {
        "laravel/framework": "4.0.*",
        "way/generators": "1.0.*",
        "imagine/imagine": "dev-master"
    },

9. Open the command line and, in the root of our application, run the Composer update as follows:

    php composer.phar update

10. Finally, we'll add the Generator service provider, so open the app/config/app.php file and, in the providers array, add the following line:

    'Way\Generators\GeneratorsServiceProvider'

How it works...

To get our package, we first go to packagist.org and search for the package we want. We could also click on the Browse packages link, which displays a list of the most recent packages as well as the most popular ones. After clicking on the package we want, we are taken to the detail page, which lists various links including the package's repository and home page. We could also click on the package's maintainer link to see other packages they have released. Underneath, we'll see the various versions of the package. If we open a version's detail page, we'll find the code we need to use in our composer.json file. We could either choose to use a strict version number, add a wildcard to the version, or use dev-master, which will install whatever is updated on the package's master branch.
For the Generators package, we'll only use version 1.0, but allow any minor fixes to that version. For the imagine package, we'll use dev-master, so whatever is in the repository's master branch will be downloaded, regardless of version number. We then run update on Composer, and it will automatically download and install all of the packages we chose. Finally, to use Generators in our app, we need to register the service provider in our app's config file.

Using the Generators package to set up an app

Generators is a popular Laravel package that automates quite a bit of file creation. In addition to controllers and models, it can also generate views, migrations, seeds, and more, all through a command-line interface.

Getting ready

For this recipe, we'll be using the Laravel 4 Generators package maintained by Jeffrey Way that was installed in the Downloading and installing packages recipe. We'll also need a properly configured MySQL database.

How to do it...

Follow these steps for this recipe:

1. Open the command line in the root of our app and, using the generator, create a scaffold for our cities as follows:

    php artisan generate:scaffold cities --fields="city:string"

2. In the command line, create a scaffold for our superheroes as follows:

    php artisan generate:scaffold superheroes --fields="name:string, city_id:integer:unsigned"

3. In our project, look in the app/database/seeds directory and find a file named CitiesTableSeeder.php. Open it and add some data to the $cities array as follows:

    <?php

    class CitiesTableSeeder extends Seeder {

        public function run()
        {
            DB::table('cities')->delete();

            $cities = array(
                array(
                    'id'         => 1,
                    'city'       => 'New York',
                    'created_at' => date('Y-m-d g:i:s', time())
                ),
                array(
                    'id'         => 2,
                    'city'       => 'Metropolis',
                    'created_at' => date('Y-m-d g:i:s', time())
                ),
                array(
                    'id'         => 3,
                    'city'       => 'Gotham',
                    'created_at' => date('Y-m-d g:i:s', time())
                )
            );

            DB::table('cities')->insert($cities);
        }
    }

4. In the app/database/seeds directory, open SuperheroesTableSeeder.php and add some data to it:

    <?php

    class SuperheroesTableSeeder extends Seeder {

        public function run()
        {
            DB::table('superheroes')->delete();

            $superheroes = array(
                array(
                    'name'       => 'Spiderman',
                    'city_id'    => 1,
                    'created_at' => date('Y-m-d g:i:s', time())
                ),
                array(
                    'name'       => 'Superman',
                    'city_id'    => 2,
                    'created_at' => date('Y-m-d g:i:s', time())
                ),
                array(
                    'name'       => 'Batman',
                    'city_id'    => 3,
                    'created_at' => date('Y-m-d g:i:s', time())
                ),
                array(
                    'name'       => 'The Thing',
                    'city_id'    => 1,
                    'created_at' => date('Y-m-d g:i:s', time())
                )
            );

            DB::table('superheroes')->insert($superheroes);
        }
    }

5. In the command line, run the migrations and then seed the database as follows:

    php artisan migrate
    php artisan db:seed

6. Open up a web browser and go to http://{your-server}/cities. We will see our data as shown in the following screenshot.
7. Now, navigate to http://{your-server}/superheroes and we will see our data as shown in the following screenshot.

How it works...

We begin by running the scaffold generator for our cities and superheroes tables. Using the --fields tag, we can determine which columns we want in our table and also set options such as the data type. For our cities table, we only need the name of the city. For our superheroes table, we want the name of the hero as well as the ID of the city where they live. When we run the generator, many files are automatically created for us.
For example, with cities, we'll get City.php in our models, CitiesController.php in controllers, and a cities directory in our views with the index, show, create, and edit views. We then get a migration named Create_cities_table.php, a CitiesTableSeeder.php seed file, and CitiesTest.php in our tests directory. We'll also have our DatabaseSeeder.php file and our routes.php file updated to include everything we need. To add some data to our tables, we opened the CitiesTableSeeder.php file and updated our $cities array with arrays that represent each row we want to add. We did the same thing for our SuperheroesTableSeeder.php file. Finally, we run the migrations and seeder and our database will be created and all the data will be inserted. The Generators package has already created the views and controllers we need to manipulate the data, so we can easily go to our browser and see all of our data. We can also create new rows, update existing rows, and delete rows.


Learning to add dependencies

Packt
15 Oct 2013
6 min read
Adding dependencies

We suddenly realize that, for our purposes, we will also need a function in our simplemath library that calculates the greatest common divisor (gcd) of two numbers. It will take some effort to write a well-tested gcd function, so why not look around to see if someone has already implemented it? Go to https://npmjs.org and search for gcd. You will get scores of results.

You may find lots of node modules solving the same problem, and it is often difficult to choose between seemingly identical node modules. In such situations, check out the credentials of the developer(s) maintaining the project. Compare the number of times each module was downloaded by users; you can get this information on the package's page on npmjs at https://npmjs.org/package/<pkg-name>. You can also check out the repository where the project is hosted, or the home page of the project; you will find this information on the npmjs home page of the module. If it isn't available, this probably isn't the module you want to use. If, however, it is available, check out the number of people who have starred the project on GitHub, the number of people who have forked it, and the active developers contributing to the project. Perhaps check out and run the test cases, or dive into the source code. If you are betting heavily on a node module that isn't actively maintained by reputed developers, or that isn't well tested, you might be setting yourself up for a major rework in the future.

While searching for the gcd keyword on the npmjs website, we learn that there is a node module named mathutils (https://npmjs.org/package/mathutils) that provides an implementation of gcd. We don't want to write our own implementation, especially after knowing that someone somewhere in the node community has already solved that problem and published the JavaScript code. Now we want to be able to reuse that code from within our library. This use case is a little contrived, and it is overkill to include an external library for a task as simple as calculating the GCD, which is, as a matter of fact, very few lines of code and popular enough to be found easily; it is used here purely for the purpose of illustration.

We can do so very easily. Again, the npm command line will help us reduce the number of steps:

    $ npm install mathutils --save

We have asked npm to install mathutils, and the --save flag saves it as a dependency in our package.json file. The mathutils library is downloaded into the node_modules folder inside our project. Our new package.json file looks like this:

    {
      "name": "simplemath",
      "version": "0.0.1",
      "description": "A simple math library",
      "main": "index.js",
      "dependencies": {
        "mathutils": "0.0.1"
      },
      "devDependencies": {},
      "scripts": {
        "test": "test"
      },
      "repository": "",
      "keywords": [
        "math",
        "mathematics",
        "simple"
      ],
      "author": "yourname <[email protected]>",
      "license": "BSD"
    }

And thus, mathutils is ready for us to use as we please. Let's proceed to make use of it in our library.

1. Add the test case. Add the following code, which tests the gcd function, to the end of the tests.js file, but before console.info:

    assert.equal( simplemath.gcd(12, 8), 4 );
    console.log("GCD works correctly");

2. Glue the gcd function from mathutils to simplemath in index.js. First, add the following line to load mathutils:

    var mathutils = require("mathutils");
    var simplemath = require("./lib/constants.js");
    simplemath.sum = require("./lib/sum.js");
    simplemath.subtract = require("./lib/subtract.js");
    simplemath.multiply = require("./lib/multiply.js");
    simplemath.divide = require("./lib/divide.js");
    simplemath.gcd = mathutils.gcd; // Assign gcd
    module.exports = simplemath;

We have imported the mathutils library in our index.js and assigned the gcd function from the mathutils library to the simplemath property with the same name. Let's test it out. Since our package.json is aware of the test script, we can delegate the task to npm:

    $ npm test
    …
    All tests passed successfully

Thus we have successfully added a dependency to our project.

The node_modules folder

We do not want to litter our node.js application directory with code from external libraries or packages that we want to use, so npm provides a way of keeping our application code and third-party libraries or node modules in separate directories. That is what the node_modules folder is for: code for any third-party modules goes into this folder. From the node.js documentation (http://nodejs.org/api/modules.html):

If the module identifier passed to require() is not a native module, and does not begin with '/', '../', or './', then node starts at the parent directory of the current module, and adds /node_modules, and attempts to load the module from that location. If it is not found there, then it moves to the parent directory, and so on, until the root of the tree is reached. For example, if the file at '/home/ry/projects/foo.js' called require('bar.js'), then node would look in the following locations, in this order:

    /home/ry/projects/node_modules/bar.js
    /home/ry/node_modules/bar.js
    /home/node_modules/bar.js
    /node_modules/bar.js

This allows programs to localize their dependencies, so that they do not clash.

Whenever we run the npm install command, the packages are stored in the node_modules folder inside the directory in which the command was issued. Each module might have its own set of dependencies, which are then installed inside the node_modules folder of that module. So, in effect, we obtain a dependency tree, with each module having its dependencies installed in its own folder. Imagine two modules on which your code depends using different versions of a third module. Having dependencies installed in their own folders, combined with the fact that require looks into the innermost node_modules folder first, affords a kind of safety that very few platforms are able to provide: each module can have its own version of the same dependency. Thus node.js tactfully avoids the dependency hell that most of its peers haven't been able to escape so far.

Summary

In this article, you learned how to install node.js and npm, modularize your code into different files and folders, create your own node modules and add them to the npm registry, and configure the local npm installation to provide some of your own convenient defaults.

Resources for Article:

Further resources on this subject:
An Overview of the Node Package Manager [Article]
Understanding and Developing Node Modules [Article]
Setting up Node [Article]


Introducing a feature of IntroJs

Packt
07 Oct 2013
5 min read
API

IntroJs includes functions that let the user control and change the execution of the introduction. For example, it is possible to react to an unexpected event that happens during execution, or to change the introduction routine according to user interactions. All the APIs available in IntroJs are explained next; these functions will be extended and developed further in the future. IntroJs includes these API functions:

- start
- goToStep
- exit
- setOption
- setOptions
- oncomplete
- onexit
- onchange
- onbeforechange

introJs.start()

As mentioned before, introJs.start() is the main function of IntroJs; it lets the user start the introduction for the specified elements and get an instance of the introJS class. The introduction will start from the first step in the specified elements. This function has no arguments and returns an instance of the introJS class.

introJs.goToStep(stepNo)

Jump to a specific step of the introduction by using this function. Introductions always start from the first step; however, it is possible to change this behavior by using this function. The goToStep function has one integer argument that accepts the number of the step in the introduction.

    introJs().goToStep(2).start(); //starts introduction from step 2

As the example indicates, the default configuration is first changed by using the goToStep function (from 1 to 2), and then the start() function is called; hence, the introduction will start from the second step. Finally, this function returns the introJS class's instance.

introJs.exit()

The introJs.exit() function lets the user exit and close the running introduction. By default, the introduction ends when the user clicks on the Done button or goes to the last step of the introduction.

    introJs().exit()

As shown, the exit() function doesn't have any arguments and returns an instance of introJS.

introJs.setOption(option, value)

As mentioned before, IntroJs has some default options that can be changed by using the setOption method. This function has two arguments: the first specifies the option name and the second sets its value.

    introJs().setOption("nextLabel", "Go Next");

In the preceding example, nextLabel is set to Go Next. Other options can be changed in the same way using the setOption method.

introJs.setOptions(options)

It is possible to change an option using the setOption method; however, to change more than one option at once, use setOptions instead. The setOptions method accepts different options and values in the JSON format.

    introJs().setOptions({
        skipLabel: "Exit",
        tooltipPosition: "right"
    });

In the preceding example, two options are set at the same time using JSON and the setOptions method.

introJs.oncomplete(providedCallback)

The oncomplete event is raised when the introduction ends. If a function is passed to the oncomplete method, it will be called by the library after the introduction ends.

    introJs().oncomplete(function() {
        alert("end of introduction");
    });

In this example, after the introduction ends, the anonymous function that was passed to the oncomplete method will be called, showing an alert with the end of introduction message.

introJs.onexit(providedCallback)

As mentioned before, the user can exit the running introduction using the Esc key or by clicking on the dark area of the introduction. The onexit event notifies us when the user exits the introduction.
This function accepts one argument and returns the instance of the running introJS.

    introJs().onexit(function() {
        alert("exit of introduction");
    });

In the preceding example, we passed an anonymous function containing an alert() statement to the onexit method. If the user exits the introduction, the anonymous function will be called and an alert with the message exit of introduction will appear.

introJs.onchange(providedCallback)

The onchange event is raised at each step of the introduction. This method is useful for knowing when each step of the introduction is completed.

    introJs().onchange(function(targetElement) {
        alert("new step");
    });

You can define an argument for the anonymous function (targetElement in the preceding example), and when the function is called, you can use that argument to access the current target element that is highlighted in the introduction. In the preceding example, when each of the introduction's steps ends, an alert with the new step message will appear.

introJs.onbeforechange(providedCallback)

Sometimes, you may need to do something before each step of the introduction. Consider that you need to make an Ajax call before the user goes to a step of the introduction; you can do this with the onbeforechange event.

    introJs().onbeforechange(function(targetElement) {
        alert("before new step");
    });

We can also define an argument for the anonymous function (targetElement in the preceding example), and when this function is called, the argument carries information about the currently highlighted element in the introduction. Using that argument, you can know which step of the introduction will be highlighted, the type of the target element, and more. In the preceding example, an alert with the message before new step will appear before each step of the introduction is highlighted.

Summary

In this article, we learned about the API functions, their syntax, and how they are used.

Resources for Article:

Further resources on this subject:
ASP.Net Site Performance: Improving JavaScript Loading [Article]
Trapping Errors by Using Built-In Objects in JavaScript Testing [Article]
Making a Better Form using JavaScript [Article]

Using Events, Interceptors, and Logging Services

Packt
03 Oct 2013
19 min read
Understanding interceptors

Interceptors are defined as part of the EJB 3.1 specification (JSR 318) and are used to intercept Java method invocations and lifecycle events that may occur in Enterprise Java Beans (EJB) or named beans from Contexts and Dependency Injection (CDI). The three main components of interceptors are as follows:

- The target class: This class will be monitored or watched by the interceptor. The target class can hold the interceptor methods for itself.
- The interceptor class: This class groups interceptor methods.
- The interceptor method: This method will be invoked according to the lifecycle events.

As an example, a logging interceptor will be developed and integrated into the Store application. Following the hands-on approach of this article, we will see how to apply the main concepts through the given examples without going into a lot of detail. Check the Web Resources section to find more documentation about interceptors.

Creating a log interceptor

A log interceptor is a common requirement in most Java EE projects, as it is a simple yet very powerful solution because of its decoupled implementation and easy distribution among other projects if necessary. Here's a diagram that illustrates this solution: Log and LogInterceptor are the core of the log interceptor functionality; the former can be thought of as the interface of the interceptor, being the annotation that will decorate the elements of SearchManager that must be logged, and the latter carries the actual implementation of our interceptor. The business rule is to simply call a method of the LogService class, which is responsible for creating the log entry. Here's how to implement the log interceptor mechanism:

1. Create a new Java package named com.packt.store.log in the project Store.

2. Create a new enumeration named LogLevel inside this package. This enumeration will be responsible for matching the level assigned to the annotation to the logging framework:

    package com.packt.store.log;

    public enum LogLevel {
        // As defined at java.util.logging.Level
        SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST;

        public String toString() {
            return super.toString();
        }
    }

   We're going to create all objects of this section (LogLevel, Log, LogService, and LogInterceptor) in the same package, com.packt.store.log. This decision makes it easier to extract the logging functionality from the project and build an independent library in the future, if required.

3. Create a new annotation named Log. This annotation will be used to mark every method that must be logged, and it accepts the log level as a parameter, according to the LogLevel enumeration created in the previous step:

    package com.packt.store.log;

    @Inherited
    @InterceptorBinding
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.METHOD, ElementType.TYPE})
    public @interface Log {
        @Nonbinding
        LogLevel value() default LogLevel.FINEST;
    }

   As this annotation will be attached to an interceptor, we have to add the @InterceptorBinding decoration here. When creating the interceptor, we will add a reference that points back to the Log annotation, creating the necessary relationship between them. Also, we can attach an annotation to virtually any Java element.
   This is dictated by the @Target decoration, where we can set any combination of the ElementType values such as ANNOTATION_TYPE, CONSTRUCTOR, FIELD, LOCAL_VARIABLE, METHOD, PACKAGE, PARAMETER, and TYPE (mapping classes, interfaces, and enums), each representing a specific element. The annotation being created can be attached to methods and classes or interface definitions.

4. Now we must create a new stateless session bean named LogService that is going to execute the actual logging:

    @Stateless
    public class LogService {
        // Receives the class name decorated with @Log
        public void log(final String clazz, final LogLevel level, final String message) {
            // Logger from package java.util.logging
            Logger log = Logger.getLogger(clazz);
            log.log(Level.parse(level.toString()), message);
        }
    }

5. Create a new class, LogInterceptor, to trap calls from classes or methods decorated with @Log and invoke the LogService class we just created. The main method must be marked with @AroundInvoke, and it is mandatory that it receives an InvocationContext instance and returns an Object:

    @Log
    @Interceptor
    public class LogInterceptor implements Serializable {
        private static final long serialVersionUID = 1L;

        @Inject
        LogService logger;

        @AroundInvoke
        public Object logMethod(InvocationContext ic) throws Exception {
            final Method method = ic.getMethod();
            // check if annotation is on class or method
            LogLevel logLevel = method.getAnnotation(Log.class) != null
                ? method.getAnnotation(Log.class).value()
                : method.getDeclaringClass().getAnnotation(Log.class).value();
            // invoke LogService
            logger.log(ic.getClass().getCanonicalName(), logLevel, method.toString());
            return ic.proceed();
        }
    }

   As we defined earlier, the Log annotation can be attached to methods and classes or interfaces through its @Target decoration, so we need to discover which one raised the interceptor in order to retrieve the correct LogLevel value. When trying to get the annotation from the class, as in the method.getDeclaringClass().getAnnotation(Log.class) line, the engine traverses the class hierarchy searching for the annotation, up to the Object class if necessary. This happens because we marked the Log annotation with @Inherited. Remember that this behavior only applies to class inheritance, not interfaces.

   Finally, as we marked the value attribute of the Log annotation as @Nonbinding in step 3, all log levels are handled by the same LogInterceptor. If you removed the @Nonbinding line, the interceptor would have to be further qualified to handle a specific log level, for example @Log(LogLevel.INFO), so you would need to code several interceptors, one for each existing log level.

6. Modify the beans.xml file (under /WEB-INF/) to tell the container that our class must be loaded as an interceptor; currently, the file is empty, so add the following lines:

    <beans xmlns="http://java.sun.com/xml/ns/javaee"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
               http://java.sun.com/xml/ns/javaee/beans_1_0.xsd">
        <interceptors>
            <class>com.packt.store.log.LogInterceptor</class>
        </interceptors>
    </beans>

7. Now decorate a business class or method with @Log in order to test what we've done. For example, apply it to the getTheaters() method in SearchManager from the project Store. Remember that it will be called every time you refresh the query page:

    @Log(LogLevel.INFO)
    public List<Theater> getTheaters() {
        ...
    }

8. Make sure you have no errors in the project and deploy it to the current server by right-clicking on the server name and then clicking on the Publish entry.
9. Access the theaters page, http://localhost:7001/theater/theaters.jsf, refresh it a couple of times, and check the server output. If you started your server from Eclipse, it should be under the Console tab:

    Nov 12, 2012 4:53:13 PM com.packt.store.log.LogService log
    INFO: public java.util.List com.packt.store.search.SearchManager.getTheaters()

Let's take a quick overview of what we've accomplished so far: we created an interceptor and an annotation that perform all common logging operations for any method or class marked with that annotation. All log entries generated through the annotation follow WebLogic's logging services configuration.

Interceptors and Aspect Oriented Programming

There are some equivalent concepts between these topics, but at the same time, each provides some critical functionality that can lead to a completely different overall solution. In a sense, interceptors work like an event mechanism, but in reality they are based on a paradigm called Aspect Oriented Programming (AOP). Although AOP is a huge and complex topic, with several books covering it in great detail, the examples shown in this article give a quick introduction to an important AOP concept: method interception. Consider AOP as a paradigm that makes it easier to apply crosscutting concerns (such as logging or auditing) as services to one or multiple objects. Of course, it's almost impossible to capture the multiple contexts in which AOP can help in a single phrase, but for the context of this article, and for most real-world scenarios, this definition is good enough.

Using asynchronous methods

A basic programming concept called synchronous execution defines the way our code is processed by the computer: line by line, one at a time, in a sequential fashion. So, when the main execution flow of a class calls a method, it must wait until its completion so that the next line can be processed. Of course, there are structures capable of processing different portions of a program in parallel, but from an external viewpoint, the execution happens in a sequential way, and that's how we think about it when writing code.

When you know that a specific portion of your code is going to take a while to complete, and there are other things that could be done instead of just sitting and waiting for it, there are a few strategies that you could resort to in order to optimize the code. For example, starting a thread to run things in parallel, or posting a message to a JMS queue and breaking the flow into independent units, are two possible solutions. If your code is running on an application server, you should know by now that thread spawning is a bad practice: only the server itself should create threads, so this solution doesn't apply to this specific scenario.

Another way to deal with such a requirement when using Java EE 6 is to create one or more asynchronous methods inside a stateless session bean by annotating either the whole class or specific methods with javax.ejb.Asynchronous. If the class is decorated with @Asynchronous, all its methods inherit the behavior. When a method marked as asynchronous is called, the server usually spawns a thread to execute the called method; there are cases where the same thread can be used, for instance, if the calling method happens to end right after issuing the command to run the asynchronous method. Either way, the general idea is that things are explicitly going to be processed in parallel, which is a departure from the synchronous execution paradigm.
To see how it works, let's change LogService to be asynchronous; all we need to do is decorate the class or the method with @Asynchronous:

    @Stateless
    @Asynchronous
    public class LogService {
    …

As the call to its log method is the last step executed by the interceptor, and its processing is really quick, there is no real benefit in doing so. To make things more interesting, let's force a longer execution cycle by inserting a sleep into the log method of LogService:

    public void log(final String clazz, final LogLevel level, final String message) {
        Logger log = Logger.getLogger(clazz);
        log.log(Level.parse(level.toString()), message);
        try {
            Thread.sleep(5000);
            log.log(Level.parse(level.toString()), "reached end of method");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

Using Thread.sleep() when running inside an application server is another classic example of a bad practice, so keep away from it when creating real-world solutions.

Save all files, publish the Store project, and load the query page a couple of times. You will notice that the page is rendered without delay, as usual, and that the reached end of method message is displayed a few seconds later in the Console view. This is a pretty subtle scenario, so you can make it starker by commenting out the @Asynchronous line and deploying the project again; this time, when you refresh the browser, you will have to wait for 5 seconds before the page gets rendered.

Our example didn't need a return value from the asynchronous method, making it pretty simple to implement. If you need to get a value back from such a method, you must declare it using the java.util.concurrent.Future interface:

    @Asynchronous
    public Future<String> doSomething() {
    …
    }

The returned value must then be wrapped as follows:

    return new AsyncResult<String>("ok");

The javax.ejb.AsyncResult class is an implementation of the Future interface that can be used to return asynchronous results. There are other features and considerations around asynchronous methods, such as ways to cancel a request being executed and to check whether the asynchronous processing has finished so that the resulting value can be accessed. For more details, check the Creating Asynchronous methods in EJB 3.1 reference at the end of this article.
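To make the Future-based flavor concrete, here is a minimal sketch of an asynchronous bean and a caller that polls for the result. The class and method names (ReportService, generateReport, ReportClient) are illustrative and not part of the Store application; each class would go in its own source file with the relevant imports.

    import java.util.concurrent.Future;

    import javax.ejb.AsyncResult;
    import javax.ejb.Asynchronous;
    import javax.ejb.EJB;
    import javax.ejb.Stateless;

    // ReportService.java -- the asynchronous bean
    @Stateless
    public class ReportService {

        // Runs in a container-managed thread; the caller receives a Future immediately.
        @Asynchronous
        public Future<String> generateReport() {
            // ... long-running work would go here ...
            return new AsyncResult<String>("ok");
        }
    }

    // ReportClient.java -- a caller that polls the Future
    @Stateless
    public class ReportClient {

        @EJB
        private ReportService reportService;

        public void run() throws Exception {
            Future<String> result = reportService.generateReport();

            // do other useful work while the report is being generated ...

            if (result.isDone()) {
                System.out.println("Report status: " + result.get()); // "ok"
            }
        }
    }

The same Future handle also exposes cancel(true), which asks the container to cancel a request that has not yet been dispatched.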
Understanding WebLogic's logging service

Before we advance to the event system introduced in Java EE 6, let's take a look at the logging services provided by Oracle WebLogic Server. By default, WebLogic Server creates two log files for each managed server:

- access.log: This is a standard HTTP access log, where requests to web resources of a specific server instance are registered with details such as the HTTP return code, the resource path, and the response time, among others.
- <ServerName>.log: This contains the log messages generated by the WebLogic services and the applications deployed to that specific server instance.

These files are generated in a default directory structure that follows the pattern $DOMAIN_NAME/servers/<SERVER_NAME>/logs/. If you are running a WebLogic domain that spans more than one machine, you will find another log file named <DomainName>.log on the machine where the administration server is running. This file aggregates messages from all managed servers of that specific domain, creating a single point of observation for the whole domain. As a best practice, only messages with a higher severity level should be transferred to the domain log, avoiding the overhead of accessing this file. Keep in mind that the messages written to the domain log are also found in the specific log file of the managed server that generated them, so there's no need to redirect everything to the domain log.

Anatomy of a log message

Here's a typical entry of a log file:

    ####<Jul 15, 2013 8:32:54 PM BRT> <Alert> <WebLogicServer> <sandbox-lap> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <> <1373931174624> <BEA-000396> <Server shutdown has been requested by weblogic.>

The description of each field is as follows:

- ####: Fixed; every log message starts with this sequence
- <Jul 15, 2013 8:32:54 PM BRT>: Locale-formatted timestamp
- <Alert>: Message severity
- <WebLogicServer>: WebLogic subsystem; other examples are WorkManager, Security, EJB, and Management
- <sandbox-lap>: Physical machine name
- <AdminServer>: WebLogic Server name
- <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'>: Thread ID
- <weblogic>: User ID
- <>: Transaction ID, or empty if not in a transaction context
- <>: Diagnostic context ID, or empty if not applicable; it is used by the Diagnostics Framework to correlate messages of a specific request
- <1373931174624>: Raw time in milliseconds
- <BEA-000396>: Message ID
- <Server shutdown has been requested by weblogic.>: Description of the event

The Diagnostics Framework provides functionality to monitor, collect, and analyze data from several components of WebLogic Server.

Redirecting standard output to a log file

The logging solution we've just created currently uses the Java SE logging engine: we can see our messages on the console's screen, but they aren't being written to any log file managed by WebLogic Server. This happens because of the default configuration of Java SE, as we can see from the following snippet, taken from the logging.properties file used to run the server:

    # "handlers" specifies a comma separated list of log Handler
    # classes. These handlers will be installed during VM startup.
    # Note that these classes must be on the system classpath.
    # By default we only configure a ConsoleHandler, which will only
    # show messages at the INFO and above levels.
    handlers= java.util.logging.ConsoleHandler

You can find this file at $JAVA_HOME/jre/lib/logging.properties. So, as stated there, the default output destination used by Java SE is the console. There are a few ways to change this behavior:

- If you're using this Java SE installation solely to run WebLogic Server instances, you may go ahead and change this file, adding a specific WebLogic handler to the handlers line as follows:

    handlers= java.util.logging.ConsoleHandler,weblogic.logging.ServerLoggingHandler

- If tampering with Java SE files is not an option (the installation may be shared among other software, for instance), you can duplicate the default logging.properties file into another folder ($DOMAIN_HOME being a suitable candidate), add the new handler, and instruct WebLogic to use this file at startup by adding the following argument to the command line:

    -Djava.util.logging.config.file=$DOMAIN_HOME/logging.properties

- You can use the administration console to redirect the standard output (and error) to the log files. To do so, perform the following steps:

    1. In the left-hand side panel, expand Environment and select Servers.
    2. In the Servers table, click on the name of the server instance you want to configure.
    3. Select Logging and then General.
4. Find the Advanced section, expand it, and tick the Redirect stdout logging enabled checkbox.
5. Click on Save to apply your changes.

If necessary, the console will show a message stating that the server must be restarted to acquire the new configuration. If you get no warnings asking to restart the server, then the configuration is already in use. This means that both WebLogic subsystems and any application deployed to that server are automatically using the new values, which is a very powerful feature for troubleshooting applications without intrusive actions such as modifying the application itself—just change the log level to start capturing more detailed messages!

Notice that there are a lot of other logging parameters that can be configured, and three of them are worth mentioning here:

The Rotation group (found in the inner General tab): The rotation feature instructs WebLogic to create new log files based on the rules set in this group of parameters. It can be set to check for a size limit or to create new files from time to time. By doing so, the server creates smaller files that we can easily handle. We can also limit the number of files retained on the machine to reduce disk usage.

If the partition where the log files are being written reaches 100 percent of utilization, WebLogic Server will start behaving erratically. Always remember to check the disk usage; if possible, set up a monitoring solution such as Nagios to keep track of this and alert you when a critical level is reached.

Minimum severity to log (also in the inner General tab): This entry sets the lowest severity that should be logged by all destinations. This means that even if you set the domain level to debug, the messages will actually be written to the domain log only if this parameter is set to the same or a lower level. It works as a gatekeeper to avoid an overload of messages being sent to the loggers.

HTTP access log enabled (found in the inner HTTP tab): When WebLogic Server is configured in a clustered environment, usually a load-balancing solution is set up to distribute requests between the WebLogic managed servers; the most common options are Oracle HTTP Server (OHS) or Apache Web Server. Both are standard web servers, and as such, they already register the requests sent to WebLogic in their own access logs. If this is the case, disable the WebLogic HTTP access log generation, saving processing power and I/O for more important tasks.

Integrating Log4J with WebLogic's logging services

If you already have an application that uses Log4J and want it to write messages to WebLogic's log files, you must add a new weblogic.logging.log4j.ServerLoggingAppender appender to your log4j.properties configuration file. This class works like a bridge between Log4J and WebLogic's logging framework, allowing the messages captured by the appender to be written to the server log files.

As WebLogic doesn't package a Log4J implementation, you must add its JAR to the domain by copying it to $DOMAIN_HOME/tickets/lib, along with another file, wllog4j.jar, which contains the WebLogic appender. This file can be found inside $MW_HOME/wlserver/server/lib. Restart the server, and it's done!

If you're using a *nix system, you can create a symbolic link instead of copying the files—this is a great way to keep things consistent when a patch changing these specific files must be applied to the server.
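For reference, a Log4J configuration using this appender might look roughly like the following log4j.properties sketch. The appender class name comes from the text above; the logger names, levels, and the additional console appender are placeholder assumptions that you would adapt to your own application:

# Hypothetical log4j.properties sketch; adjust logger names and levels as needed
log4j.rootLogger=INFO, stdout, wlserver

# Regular console output (optional)
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %-5p [%c] %m%n

# Bridge that forwards Log4J events to WebLogic's logging framework
log4j.appender.wlserver=weblogic.logging.log4j.ServerLoggingAppender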
Remember that having a file inside $MW_HOME/wlserver/server/lib doesn't mean that the file is being loaded by the server when it starts up; it is just a central place to hold the libraries. To be loaded by a server, a library must be added to the classpath parameter of that server, or you can add it to the domain-wide lib folder, which guarantees that it will be available to all nodes of the domain on a specific machine. Accessing and reading log files If you have direct access to the server files, you can open and search them using a command-line tool such as tail or less, or even use a graphical viewer such as Notepad. But when you don't have direct access to them, you may use WebLogic's administration console to read their content by following the steps given here: In the left-hand side pane of the administration console, expand Diagnostics and select Log Files. In the Log Files table, select the option button next to the name of the log you want to check and click on View: The types displayed on this screen, which are mentioned at the start of the section, are Domain Log, Server Log, and HTTP Access. The others are resource-specific or linked to the diagnostics framework. Check the Web resources section at the end of this article for further reference. The page displays the latest contents of the log file; the default setting shows up to 500 messages in reverse chronological order. The messages at the top of the window are the most recent messages that the server has generated. Keep in mind that the log viewer does not display messages that have been converted into archived log files.


Creating our first bot, WebBot

Packt
03 Oct 2013
9 min read
(For more resources related to this topic, see here.)

With the knowledge you have gained, we are now ready to develop our first bot, which will be a simple bot that gathers data (documents) based on a list of URLs and datasets (fields and field values) that we will require. First, let's start by creating our bot package directory. Create a directory called WebBot so that the files in our project_directory/lib directory look like the following:

'-- project_directory
    |-- lib
    |   |-- HTTP (our existing HTTP package)
    |   |   '-- (HTTP package files here)
    |   '-- WebBot
    |       |-- bootstrap.php
    |       |-- Document.php
    |       '-- WebBot.php
    |-- (our other files)
    '-- 03_webbot.php

As you can see, we have a very clean and simple directory and file structure that any programmer should be able to easily follow and understand.

The WebBot class

Next, open the WebBot.php file and add the code from the project_directory/lib/WebBot/WebBot.php file.

In our WebBot class, we first use the __construct() method to pass in the array of URLs (or documents) we want to fetch and the array of document fields, which is used to define the datasets and regular expression patterns. Regular expression patterns are used to populate the dataset values (or document field values). If you are unfamiliar with regular expressions, now would be a good time to study them.

Then, in the __construct() method, we verify whether there are URLs to fetch. If there are none, we set an error message stating this problem.

Next, we use the __formatUrl() method to properly format the URLs we fetch data from. This method will also set the correct protocol: either HTTP or HTTPS (Hypertext Transfer Protocol Secure). If the protocol is already set for the URL, for example http://www.[dom].com, we do not set the protocol again. Also, if the class configuration setting conf_force_https is set to true, we force the HTTPS protocol, again unless the protocol is already set for the URL.

We then use the execute() method to fetch data for each URL, set and add the Document objects to the array of documents, and track document statistics. This method also implements fetch delay logic that will delay each fetch by x number of seconds if set in the class configuration setting conf_delay_between_fetches. We also include logic that only allows distinct URL fetches, meaning that if we have already fetched data for a URL we won't fetch it again; this eliminates duplicate URL data fetches.

The Document object is used as a container for the URL data, and we can use the Document object to access the URL data, the data fields, and their corresponding data field values.

In the execute() method, you can see that we have performed an HTTPRequest::get() request using the URL and our default timeout value—which is set with the class configuration setting conf_default_timeout. We then pass the HTTPResponse object that is returned by the HTTPRequest::get() method to the Document object. Then, the Document object uses the data from the HTTPResponse object to build the document data.

Finally, we include the getDocuments() method, which simply returns all the Document objects in an array that we can use for our own purposes as we desire.

The WebBot Document class

Next, we need to create a class called Document that can be used to store document data and field names with their values. To do this we will carry out the following steps:

1. We first pass the data retrieved by our WebBot class to the Document class.
2. Then, we define our document's fields and values using regular expression patterns.
Next, add the code from the project_directory/lib/WebBot/Document.php file.

Our Document class accepts the HTTPResponse object that is set in the WebBot class's execute() method, along with the document fields and the document ID. In the Document __construct() method, we set our class properties: the HTTPResponse object, the fields (and regular expression patterns), the document ID, and the URL that we used to fetch the HTTP response. We then check whether the HTTP response was successful (status code 200), and if it wasn't, we set the error with the status code and message. Lastly, we call the __setFields() method.

The __setFields() method parses out and sets the field values from the HTTP response body. For example, if in our fields we have a title field defined as $fields = ['title' => '<title>(.*)</title>'];, the __setFields() method will add the title field and pull all values found inside <title></title> tags in the HTML response body. So, if there were two title tags in the URL data, the __setFields() method would add the field and its values to the document as follows:

['title'] => [
    0 => 'title x',
    1 => 'title y'
]

If we have the WebBot class configuration variable—conf_include_document_field_raw_values—set to true, the method will also add the raw values (it will include the tags or other strings as defined in the field's regular expression patterns) as a separate element, for example:

['title'] => [
    0 => 'title x',
    1 => 'title y',
    'raw' => [
        0 => '<title>title x</title>',
        1 => '<title>title y</title>'
    ]
]

This is very useful when we want to extract specific data (or field values) from URL data.

To conclude the Document class, we have two more methods as follows:

getFields(): This method simply returns the fields and field values
getHttpResponse(): This method can be used to get the HTTPResponse object that was originally set by the WebBot execute() method

This will allow us to perform logical requests to internal objects if we wish.

The WebBot bootstrap file

Now we will create a bootstrap.php file (at project_directory/lib/WebBot/) to load the HTTP package and our WebBot package classes, and set our WebBot class configuration settings:

<?php
namespace WebBot;
/**
 * Bootstrap file
 *
 * @package WebBot
 */

// load our HTTP package
require_once './lib/HTTP/bootstrap.php';

// load our WebBot package classes
require_once './lib/WebBot/Document.php';
require_once './lib/WebBot/WebBot.php';

// set unlimited execution time
set_time_limit(0);

// set default timeout to 30 seconds
\WebBot\WebBot::$conf_default_timeout = 30;

// set delay between fetches to 1 second
\WebBot\WebBot::$conf_delay_between_fetches = 1;

// do not use HTTPS protocol (we'll use HTTP protocol)
\WebBot\WebBot::$conf_force_https = false;

// do not include document field raw values
\WebBot\WebBot::$conf_include_document_field_raw_values = false;

We use our HTTP package to handle HTTP requests and responses. You have seen in our WebBot class how we use HTTP requests to fetch the data, and then use the HTTPResponse object to store the fetched data, as shown in the previous two sections. That is why we need to include the bootstrap file to load the HTTP package properly. Then, we load our WebBot package files. Because our WebBot class uses the Document class, we load that class file first.

Next, we use the built-in PHP function set_time_limit() to tell the PHP interpreter that we want to allow unlimited execution time for our script. You don't necessarily have to use unlimited execution time.
However, for testing reasons, we will use unlimited execution time for this example.

Finally, we set the WebBot class configuration settings. These settings are used by the WebBot object internally to make our bot work as we desire. We should always make the configuration settings as simple as possible to help other developers understand them. This means we should also include detailed comments in our code to ensure easy usage of the package configuration settings.

We have set up four configuration settings in our WebBot class. These are static and public variables, meaning that we can set them from anywhere after we have included the WebBot class, and once we set them they will remain the same for all WebBot objects unless we change the configuration variables. If you do not understand the PHP keyword static, now would be a good time to research this subject.

The first configuration variable is conf_default_timeout. This variable is used to globally set the default timeout (in seconds) for all WebBot objects we create. The timeout value tells the HTTPRequest class how long it will continue trying to send a request before stopping and deeming it a bad, or timed-out, request. By default, this configuration setting is set to 30 (seconds).

The second configuration variable—conf_delay_between_fetches—is used to set a time delay (in seconds) between fetches (or HTTP requests). This can be very useful when gathering a lot of data from a website or web service. For example, say you had to fetch one million documents from a website. You wouldn't want to unleash your bot on that type of mission without fetch delays, because the massive number of requests could inevitably cause problems for that website. By default, this value is set to 0, or no delay.

The third WebBot class configuration variable—conf_force_https—when set to true, can be used to force the HTTPS protocol. As mentioned earlier, this will not override any protocol that is already set in the URL. If the conf_force_https variable is set to false, the HTTP protocol will be used. By default, this value is set to false.

The fourth and final configuration variable—conf_include_document_field_raw_values—when set to true, will force the Document object to include the raw values gathered from the fields' regular expression patterns. We've discussed this setting in detail in the WebBot Document class section earlier in this article. By default, this value is set to false. A brief usage sketch at the end of this article shows how these settings and the WebBot class fit together.

Summary

In this article you have learned how to get started with building your first bot using HTTP requests and responses.

Resources for Article:

Further resources on this subject: Installing and Configuring Jobs! and Managing Sections, Categories, and Articles using Joomla! [Article] Search Engine Optimization in Joomla! [Article] Adding a Random Background Image to your Joomla! Template [Article]
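To tie the pieces of this article together, here is a minimal, hypothetical usage sketch (for example, in 03_webbot.php). It assumes the constructor signature described above (an array of URLs and an array of field regular expression patterns) and the accessor methods mentioned earlier; the exact signatures in the full code listings may differ slightly:

<?php
// Hypothetical usage sketch; assumes the WebBot constructor takes an array
// of URLs and an array of field regular expression patterns, as described above.
require_once './lib/WebBot/bootstrap.php';

// URLs (documents) to fetch
$urls = [
    'www.example.com',
    'www.example.com/about'
];

// field name => regular expression pattern used to extract its values
$fields = [
    'title'   => '<title>(.*)</title>',
    'heading' => '<h1>(.*)</h1>'
];

$webbot = new \WebBot\WebBot($urls, $fields);
$webbot->execute();

// print the extracted fields and values for each fetched document
foreach ($webbot->getDocuments() as $document) {
    print_r($document->getFields());
}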


Connecting to MongoHQ API with RestKit

Packt
30 Sep 2013
7 min read
(For more resources related to this topic, see here.) Let's take a base URL: NSURL *baseURL = [NSURL URLWithString:@"http://example.com/v1/"]; Now: [NSURL URLWithString:@"foo" relativeToURL:baseURL]; // Will give us http://example.com/v1/foo [NSURL URLWithString:@"foo?bar=baz" relativeToURL:baseURL]; // -> http://example.com/v1/foo?bar=baz [NSURL URLWithString:@"/foo" relativeToURL:baseURL]; // -> http://example.com/foo [NSURL URLWithString:@"foo/" relativeToURL:baseURL]; // -> http://example.com/v1/foo [NSURL URLWithString:@"/foo/" relativeToURL:baseURL]; // -> http://example.com/foo/ [NSURL URLWithString:@"http://example2.com/" relativeToURL:baseURL]; // -> http://example2.com/ Having the knowledge of what an object manager is, let's try to apply it in a real-life example. Before proceeding, it is highly recommend that we check the actual documentation on REST API of MongoHQ. The current one is at the following link: http://support.mongohq.com/mongohq-api/introduction.html As there are no strict rules on REST API, every API is different and does a number of things in its own way. MongoHQ API is not an exception. In addition, it is currently in "beta" stage. Some of the non-standard things one can find in it are as follows: The API key should be provided as a parameter with every request. There is an undocumented way of how to provide it in Headers, which is a more common approach. Sometimes, if you get an error with the status code returned as 200 (OK), which is not according to REST standards, the normal way would be to return something in 4xx, which is stated as a client error. Sometimes, while the output of an error message is a JSON string, the HTTP response Content-type header is set as text/plain. To use the API, one will need a valid API Key. You can easily get one for free following a simple guideline recommended by the MongoHQ team: Sign up for an account at http://MongoHQ.com. Once logged in, click on the My Account drop-down menu at the top-right corner and select Account Settings. Look for the section labeled API Token. From there, take your token. We will put the API key into the MongoHQ-API-Token HTTP header. The following screenshot shows where one can find the API token key: API Token on Account Info page So let's set up our configuration using the following steps: You can use the AppDelegate class for putting the code, while I recommend using a separate MongoHqApi class for such App/API logic separation. First, let's set up our object manager with the following code: - (void)setupObjectManager { NSString *baseUrl = @"https://api.mongohq.com"; AFHTTPClient *httpClient = [[AFHTTPClient alloc] initWithBaseURL:[NSURL URLWithString:baseUrl]]; NSString *apiKey = @"MY_API_KEY"; [httpClient setDefaultHeader:@"MongoHQ-API-Token" value:apiKey]; RKObjectManager *manager = [[RKObjectManager alloc] initWithHTTPClient:httpClient]; [RKMIMETypeSerialization registerClass:[RKNSJSONSerialization class] forMIMEType:@"text/plain"]; [manager.HTTPClient registerHTTPOperationClass:[AFJSONRequestOperation class]]; [manager setAcceptHeaderWithMIMEType:RKMIMETypeJSON]; manager.requestSerializationMIMEType = RKMIMETypeJSON; [RKObjectManager setSharedManager:manager]; } Let's look at the code line by line and set the base URL. 
Remember not to put a slash (/) at the end; otherwise, you might have a problem with response mapping:

NSString *baseUrl = @"https://api.mongohq.com";

Initialize the HTTP client with baseUrl:

AFHTTPClient *httpClient = [[AFHTTPClient alloc] initWithBaseURL:[NSURL URLWithString:baseUrl]];

Set a few properties for our HTTP client, such as the API key in the header:

NSString *apiKey = @"MY_API_KEY";
[httpClient setDefaultHeader:@"MongoHQ-API-Token" value:apiKey];

For a real-world app, one can show an Enter API Key view controller to the user, and use NSUserDefaults or the keychain to store and retrieve the key.

And initialize the RKObjectManager with our HTTP client:

RKObjectManager *manager = [[RKObjectManager alloc] initWithHTTPClient:httpClient];

MongoHQ APIs sometimes return errors in text/plain, so we explicitly register text/plain as a JSON content type to properly parse errors:

[RKMIMETypeSerialization registerClass:[RKNSJSONSerialization class] forMIMEType:@"text/plain"];

Register JSONRequestOperation to parse JSON in requests:

[manager.HTTPClient registerHTTPOperationClass:[AFJSONRequestOperation class]];

State that we are accepting the JSON content type:

[manager setAcceptHeaderWithMIMEType:RKMIMETypeJSON];

Configure the outgoing objects to be serialized into JSON:

manager.requestSerializationMIMEType = RKMIMETypeJSON;

Finally, set the shared instance of the object manager, so that we can easily reuse it later:

[RKObjectManager setSharedManager:manager];

Sending requests with the object manager

Next, we want to query our databases. Let's first see what a database request outputs in JSON. To check this, go to http://api.mongohq.com/databases?_apikey=YOUR_API_KEY in your web browser, replacing YOUR_API_KEY with your own API key. If a JSON-formatter extension (https://github.com/rfletcher/safari-json-formatter) is installed in your Safari browser, you will probably see the output shown in the following screenshot.
JSON response from API As we see, the JSON representation of one database is: [ { "hostname": "sandbox.mongohq.com", "name": "Test", "plan": "Sandbox", "port": 10097, "shared": true } ] Therefore, our possible MDatabase class could look like: @interface MDatabase : NSObject @property (nonatomic, strong) NSString *name; @property (nonatomic, strong) NSString *plan; @property (nonatomic, strong) NSString *hostname; @property (nonatomic, strong) NSNumber *port; @end We can also modify the @implementation section to override the description method, which will help us while debugging the application and printing the object: // in @implementation MDatabase - (NSString *)description { return [NSString stringWithFormat:@"%@ on %@ @ %@:%@", self.name, self.plan, self.hostname, self.port]; } Now let's set up a mapping for it: - (void)setupDatabaseMappings { RKObjectManager *manager = [RKObjectManager sharedManager]; Class itemClass = [MDatabase class]; NSString *itemsPath = @"/databases"; RKObjectMapping *mapping = [RKObjectMapping mappingForClass:itemClass]; [mapping addAttributeMappingsFromArray:@[@"name", @"plan", @"hostname", @"port"]]; NSString *keyPath = nil; NSIndexSet *statusCodes = RKStatusCodeIndexSetForClass(RKStatusCodeClassSuccessful); RKResponseDescriptor *responseDescriptor = [RKResponseDescriptor responseDescriptorWithMapping:mapping method:RKRequestMethodGET pathPattern:itemsPath keyPath:keyPath statusCodes:statusCodes]; [manager addResponseDescriptor:responseDescriptor]; } Let's look at the mapping setup line by line: First, we define a class, which we will use to map to: Class itemClass = [MDatabase class]; And the endpoint we plan to request for getting a list of objects: NSString *itemsPath = @"/databases"; Then we create the RKObjectMapping mapping for our object class: RKObjectMapping *mapping = [RKObjectMapping mappingForClass:itemClass]; If the names of JSON fields and class properties are the same, we will use an addAttributeMappingsFromArray method and provide the array of properties: [mapping addAttributeMappingsFromArray:@[@"name", @"plan", @"hostname", @"port"]]; The root JSON key path in our case is nil. It means that there won't be one. NSString *keyPath = nil; The mapping will be triggered if a response status code is anything in 2xx: NSIndexSet *statusCodes = RKStatusCodeIndexSetForClass(RKStatusCodeClassSuccessful); Putting it all together in response descriptor (for a GET request method): RKResponseDescriptor *responseDescriptor = [RKResponseDescriptor responseDescriptorWithMapping:mapping method:RKRequestMethodGET pathPattern:itemsPath keyPath:keyPath statusCodes:statusCodes]; Add response descriptor to our shared manager: RKObjectManager *manager = [RKObjectManager sharedManager]; [manager addResponseDescriptor:responseDescriptor]; Sometimes, depending on the architectural decision, it's nicer to put the mapping definition as part of a model object, and later call it like [MDatabase mapping], but for the sake of simplicity, we will put the mapping in line with RestKit configuration. 
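As a side note, if you prefer the [MDatabase mapping] approach mentioned above, a hypothetical sketch of that convenience method could look like the following; it simply wraps the same attribute mappings shown earlier in a category on the model class:

#import <RestKit/RestKit.h>
#import "MDatabase.h"

// Hypothetical category that exposes the mapping from the model class itself;
// it reuses the exact attribute mappings configured above.
@interface MDatabase (Mapping)
+ (RKObjectMapping *)mapping;
@end

@implementation MDatabase (Mapping)

+ (RKObjectMapping *)mapping
{
    RKObjectMapping *mapping = [RKObjectMapping mappingForClass:[MDatabase class]];
    [mapping addAttributeMappingsFromArray:@[@"name", @"plan", @"hostname", @"port"]];
    return mapping;
}

@end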
The actual code that loads the database list will look like:

RKObjectManager *manager = [RKObjectManager sharedManager];
[manager getObjectsAtPath:@"/databases"
               parameters:nil
                  success:^(RKObjectRequestOperation *operation, RKMappingResult *mappingResult) {
                      NSLog(@"Loaded databases: %@", [mappingResult array]);
                  }
                  failure:^(RKObjectRequestOperation *operation, NSError *error) {
                      NSLog(@"Error: %@", [error localizedDescription]);
                  }];

As you may have noticed, the method is quite simple to use and it uses block-based APIs for callbacks, which greatly improves code readability compared to using delegates, especially if there is more than one network request in a class.

A possible implementation of a table view that loads and shows the list of databases will look like the following screenshot:

View of loaded Database items

Summary

In this article, we learned how to set up the RestKit library to work with our web service, and we talked about sending requests, getting responses, and how to do object manipulations. We also talked about simplifying the requests by introducing routing. In addition, we discussed how integration with the UI can be done and created forms.

Resources for Article:

Further resources on this subject: Linking OpenCV to an iOS project [Article] Getting Started on UDK with iOS [Article] Unity iOS Essentials: Flyby Background [Article]
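For completeness, here is a hypothetical cell-configuration sketch showing how the loaded objects might feed a table view such as the one shown above; the self.databases property is an assumption (an NSArray holding the MDatabase objects taken from the mapping result), not something defined in the article:

// Hypothetical data source method; assumes self.databases is an NSArray of
// MDatabase objects saved from the mapping result of the request shown above.
- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *cellIdentifier = @"DatabaseCell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellIdentifier];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle
                                      reuseIdentifier:cellIdentifier];
    }

    MDatabase *database = self.databases[indexPath.row];
    cell.textLabel.text = database.name;
    cell.detailTextLabel.text = [NSString stringWithFormat:@"%@ @ %@:%@",
                                 database.plan, database.hostname, database.port];
    return cell;
}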


Developing Your Mobile Learning Strategy

Packt
27 Sep 2013
27 min read
(For more resources related to this topic, see here.)

What is mobile learning?

There have been many attempts at defining mobile learning. Is it learning done on the move, such as on a laptop while we sit in a train? Or is it learning done on a personal mobile device, such as a smartphone or a tablet?

The capabilities of mobile devices

Anyone can develop mobile learning. You don't need to be a gadget geek or have the latest smartphone or tablet. You certainly don't need to know anything about the makes and models of devices on the market. The only thing the learning practitioner really needs is an understanding of the capabilities of the mobile devices that your learners have. This will inform the types of mobile learning interventions that will be best suited to your audience.

The following table shows an overview of what a mobile learner might be able to do with each of the device types. The Device uses column on the left should already be setting off lots of great learning ideas in your head!

Device uses                 | Feature phone | Smartphone | Tablet | Gaming device | Media player
Send texts                  | Yes           | Yes        |        |               |
Make calls                  | Yes           | Yes        |        |               |
Take photos                 | Yes           | Yes        | Yes    | Yes           | Yes
Listen to music             | Yes           | Yes        | Yes    | Yes           | Yes
Social networking           | Yes           | Yes        | Yes    | Yes           | Yes
Take high res photos        |               | Yes        | Yes    | Yes           | Yes
Web searches                |               | Yes        | Yes    | Yes           | Yes
Web browsing                |               | Yes        | Yes    | Yes           | Yes
Watch online videos         |               | Yes        | Yes    | Yes           | Yes
Video calls                 |               | Yes        | Yes    | Yes           | Yes
Edit photos                 |               | Yes        | Yes    | Yes           | Yes
Shoot videos                |               | Yes        | Yes    |               | Yes
Take audio recordings       |               | Yes        | Yes    |               | Yes
Install apps                |               | Yes        | Yes    |               | Yes
Edit documents              |               | Yes        | Yes    |               | Yes
Use maps                    |               | Yes        | Yes    |               | Yes
Send MMS                    |               | Yes        | Yes    |               |
View catch up TV            |               |            | Yes    | Yes           |
Better quality web browsing |               |            | Yes    | Yes           |
Shopping online             |               |            | Yes    |               |
Trip planning               |               |            | Yes    |               |

Bear in mind that screen size will also impact the type of learning activity that can be undertaken. For example:

Feature phone displays are very small, so learning activities for this device type should center on text messaging with a tutor or capturing photos for an assignment.

Smartphones are significantly larger, so there is a much wider range of learning activities available, especially around the creation of material such as photos and video for assignment or portfolio purposes, and a certain amount of web searching and browsing.

Tablets are more akin to the desktop computing environment, although some tasks such as typing are harder and taking photos is a bit clumsier due to the larger size of the device. They are great for short learning tasks, assessments, video watching, and much more.

Warning – it's not about delivering courses

Mobile learning can be many things. What it is not is simply the delivery of e-learning courses, which is traditionally the domain of the desktop computer, on a smaller device. Of course it can be used to deliver educational materials, but what is more important is that it can also be used to foster collaboration, to facilitate communication, to access performance support, and to capture evidence. But if you try to deliver an entire course purely on a mobile, then the likelihood is that no one will use it.

Your mobile learning strategy

Finding a starting point for your mobile learning design is easier said than done. It is often useful when designing any type of online interaction to think through a few typical user types and build up a picture of who they are and what they want to use the system for. This helps you to visualize who you are designing for.
In addition to this, in order to understand how best to utilize mobile devices for learning, you also need to understand how people actually use their mobile devices. For example, learners are highly unlikely to sit at a smartphone and complete a 60 minutes e-learning course or type out an essay. But they are very likely to read an article, do some last minute test preparation or communicate with other learners. Who are your learners? Understanding your users is an important part of designing online experiences. You should take time to understand the types of learners within your own organization and what their mobile usage looks like, as a first step in delivering mobile learning on Moodle. With this in mind, let's look at a handful of typical mobile learners from around the world who could reasonably be expected to be using an educational or workplace learning platform such as Moodle: Maria is an office manager in Madrid, Spain. She doesn't leave home without her smartphone and uses it wherever she is, whether for e-mail, web searching and browsing, reading the news, or social networking. She lives in a country where smartphone penetration has reached almost half of the population, of whom two-third access the internet every day on their mobile. The company she works for has a small learning platform for delivery of work-based learning activities and performance support resources. Fourteen year old Jennifer attends school in Rio de Janeiro, Brazil. Like many of her peers, she carries a smartphone with her and it's a key part of her life. The Brazilian population is one of the most connected in the developing world with nearly half of the population using the Internet, and its mobile phone subscriptions accounting for one-third of the entire subscriptions across Latin America and the Caribbean. Her elementary school uses a learning platform for the delivery of course resources, formative assessments, and submission of student assignments. Nineteen year old Mike works as an apprentice at a large car maker in Sunderland, UK. He spends about one-third of his time in formal education, and his remaining days each week are spent on the production line, getting a thorough grounding in every element of the car manufacturing process. He owns a smartphone and uses it heavily, in a country where nearly half of the population accesses the Internet at least monthly on their smartphone. His employer has a learning platform for delivery of work-based learning and his college also has their own platform where he keeps a training diary and uploads evidence of skills acquisition for later submission and marking. Josh is a twenty year old university student in the United States. In his country, nearly 90 percent of adults now own a mobile phone and half of all adults use their phone to access the Internet, although in his age group this increases to three quarters. Among his student peers across the U.S., 40 percent are already doing test preparation on their mobiles, whether their institution provides the means or not. His university uses a learning platform for delivery of course resources, submission of student assignments, and student collaborative activities. These four particular learners were not chosen at random—there is one important thing that connects them all. 
The four countries they are from represent not just important mobile markets but, according to the statistics page on Moodle.org, also represent the four largest Moodle territories, together making up over a third of all registered Moodle sites in the world. When you combine those Moodle market statistics with the level of mobile internet usage in each country, you can immediately see why support for mobile learning is so important for Moodle sites. How do your learners use their devices? In 2012, Google published the findings of a research survey which investigated how users behave across computer, tablet, smartphone, and TV screens. Their researchers found that users make decisions about what device to use for a given task depending on four elements that together make up the user's context: location, goal, available time, and attitude. Each of these is important to take into account when thinking about what sort of learning interactions your users could engage in when using their mobile devices, and you should be aiming to offer a range of mobile learning interactions that can lend themselves to different contexts, for example, offering tasks ranging in length from 2 to 20 minutes, and tasks suited to different locations, such as home, work, college, or out in the field. The attitude element is an interesting one, and it's important to allow learners to choose tasks that are appropriate to their mood at the time. Google also found that users either move between screens to perform a single task ( sequential screening ) or use multiple screens at the same time ( simultaneous screening ). In the case of simultaneous screening, they are likely to be performing complementary tasks relating to the same activity on each screen. From a learning point of view, you can design for multi-screen tasks. For example, you may find learners use their computer to perform some complex research and then collect evidence in the field using their smartphone—these would be sequential screening tasks. A media studies student could be watching a rolling news channel on the television while taking photos, video, and notes for an assignment on his tablet or smartphone—these would be simultaneous screening tasks. Understanding the different scenarios in which learners can use multiple screens will open up new opportunities for mobile learning. A key statement from the Google research states that "Smartphones are the backbone of our daily media interactions". However, despite occupying such a dominant position in our lives, the smartphone also accounts for the lowest time per user interaction at an average of 17 minutes, as opposed to 30 minutes for tablet, 39 minutes for computer, and 43 minutes for TV. This is an important point to bear in mind when designing mobile learning: as a rule of thumb you can expect a learner to engage with a tablet-based task for half an hour, and a smartphone-based task for just a quarter of an hour. Google helpfully outlines some important multi-screen lessons. 
While these are aimed at identifying consumer behaviour and in particular online shopping habits, we can interpret them for use in mobile learning as follows: Understand how people consume digital media and tailor your learning strategies to each channel Learning goals should be adjusted to account for the inherent differences in each device Learners must be able to save their progress between devices Learners must be able to easily find the learning platform (Moodle) on each device Once in the learning platform, it must be easy for learners to find what they are looking for quickly Smartphones are the backbone of your learners' daily media use, so design your learning to be started on smartphone and continued on a tablet or desktop computer Having an understanding of how modern-day learners use their different screens and devices will have a real impact on your learning design. Mobile usage in your organization In 2011, the world reached a technology watershed when it was estimated that one third of the world's seven billion people were online. The growth in online users is dominated by the developing world and is fuelled by mobile devices. There are now a staggering six billion mobile phone subscriptions globally. Mobile technology has quite simply become ubiquitous. And as Google showed us, people use mobile devices as the backbone of their daily media consumption, and most people already use them for school, college, or work regardless of whether they are allowed to. In this section, we will look at how mobiles are used in some of the key sectors in which Moodle is used: in schools, further and higher education, and in the workplace. Mobile usage in school Moodle is widely used throughout primary and secondary education, and mobile usage among school pupils is widespread. The two are natural bedfellows in this sector. For example, in the UK half of all 12 to 15 year olds own a smartphone while 70 percent of 8 to 15 year olds have a games console such as a Nintendo DS or PlayStation in their bedroom. Mobile device use is quite simply rampant among school children. Many primary schools now have policies which allow children to bring mobile phones into school, recognizing that such devices have a role to play in helping pupils feel safe and secure, particularly on the journey to and from school. However, it is a fairly normal practice among this age group for mobiles to be handed in at the start of the school day and collected at the end of the day. For primary pupils, therefore, the use of mobile devices for education will be largely for homework. In secondary schools, the picture is very different. There is not likely to be a device hand-in policy during school hours and a variety of acceptable use policies will be in use. An acceptable use policy may include a provision for using mobiles in lesson time, with a teacher's agreement, for the purposes of supporting learning. This, of course, opens up valuable learning opportunities. Mobile learning in education has been the subject of a number of initiatives and research studies which are all excellent sources of information. These include: Learning2Go, who were pioneers in mobile learning for schools in the UK, distributing hundreds of Windows Mobile devices to Wolverhampton schools between 2003 and 2007, introducing smartphones in 2008 under the Computers for Pupils initiative and the national MoLeNET scheme. 
Learning Untethered, which was not a formal research project but an exploration that gave Android tablets to a class of fifth graders. It was noted that the overall ''feel'' of the classroom shifted as students took a more active role in discovery, exploration and active learning. The Dudley Handhelds initiative, which provided 300 devices to learners in grade five to ten across six primary schools, one secondary special school, and one mainstream secondary school. These are just a few of the many research studies available, and they are well worth a read to understand how schools have been implementing mobile learning for different age groups. Mobile usage in further and higher education College students are heavy users of mobiles, and there is a roughly half and half split between smartphones and feature phones among the student community. Of the smartphone users, over 80 percent use them for college-related tasks. As we saw from Google's research, smartphones are the backbone of your learners' daily media use for those who have them. So if you don't already provide mobile learning opportunities on your Moodle site, then it is likely that your users are already helping themselves to the vast array of mobile learning sites and apps that have sprung up in recent years to meet the high demand for such services. If you don't provide your students with mobile learning opportunities, you can bet your bottom dollar that someone else is, and it could be of dubious quality or out of date. Despite the ubiquity of the mobile, many schools and colleges continue to ban them, viewing mobiles as a distraction or a means of bullying. They are fighting a rising tide, however. Students are living their lives through their mobile devices, and these devices have become their primary means of communication. A study in late 2012 of nearly 295,000 students found that despite e-mail, IM, and text messaging being the dominant peer-communication tools for students, less than half of 14 to 18 year olds and only a quarter of 11 to 14 year olds used them to communicate with their teachers. Over half of high school students said they would use their smartphone to communicate with their teacher if it was allowed. Unfortunately it rarely is, but this will change. Students want to be able to communicate electronically with their teachers; they want online text articles with classmate collaboration tools; they want to go online on their mobile to get information. Go to where your students are and communicate with them in their native environment, which is via their mobile. Be there for them, engage them, and inspire them. In the years approaching 2010, some higher education institutions started engaging in headline-grabbing "iPad for every student" initiatives. Many institutions adopted a quick-win strategy of making mobile-friendly websites with access to campus information, directories, news and events. It is estimated that in the USA over 90 percent of higher education institutions have mobile-friendly websites. Some of the headline-grabbing initiatives include the following: Seton Hill University was the first to roll out iPads to all full-time students in 2010 and have continued to do so every year since. They are at the forefront of mobile learning in the US University sector and use Moodle as their virtual learning environment (VLE). Abilene Christian University was the first university in the U.S. 
to provide iPhones or iPod Touches to all new full-time students in 2008, and are regarded as one of the most mobile-friendly campuses in the U.S. The University of Western Sydney in Australia will roll out 11,000 iPads to all faculty and newly-enrolled students in 2013, as well as creating their own mobile apps. Coventry University in the UK is creating a smart campus in which the geographical location of students triggers access to content and experiences through their mobile devices. MoLeNET in the UK was one of the world's largest mobile learning implementations, comprising 115 colleges, 29 schools, 50,000 students, and 4,000 staff from 2007 to 2010. This was a research-led initiative although unfortunately the original website has now been taken down. While some of these examples are about providing mobile devices to new students, the Bring Your Own Device (BYOD) trend is strong in further and higher education. We know that mobile devices form the backbone of students' media consumption and in the U.S. alone, 75 percent of students use their phone to access the Internet. Additionally, 40 percent have signed up to online test preparation sites on their mobiles, heavily suggesting that if an institution doesn't provide mobile learning services, students will go and get it elsewhere anyway. Instead of the glamorous offer of iPads for all, some institutions have chosen to invest heavily in their wireless network infrastructure in support of a BYOD approach. This is a very heavy investment and can be far more expensive than a few thousand iPads. Some BYOD implementations include: King's College London in the UK, which supports 6,000 staff and 23,500 students The University of Tennessee at Knoxville in the U.S., which hosts more than 26,000 students and 5,000 faculty and staff members, with nearly 75,000 smartphones, tablets, and laptops The University of South Florida in the U.S., which supports 40,000 users Sau Paolo State University in Brazil, which has 45,000 students and noted that despite providing desktop machines in the computer labs, half of all students opted to use their own devices instead There are many challenges to BYOD which are not within the scope of this article, but there are also many resources on how to implement a BYOD policy that minimizes such risks. Use the Internet to seek these out. Providing campus information websites on mobiles obviously was not the key rationale behind such technology investments. The real interest is in delivering mobile learning, and this remains an area full of experimentation and research. Google Scholar can be used to chart the rise of mobile learning research and it becomes evident how this really takes off in the second half of the decade, when the first major institutions started investing in mobile technology. It indexes scholarly literature, including journal and conference papers, theses and dissertations, academic articles, pre-prints, abstracts, and technical reports. A year-by-year search reveals the rise of mobile learning research from just over 100 articles in 2000 to over 6,000 in 2012. The following chart depicts the rise of mobile learning in academic research: Mobile usage in apprenticeships A typical apprenticeship will include a significant amount of college-based learning towards a qualification, alongside a major component based in the workplace under the supervision of an employer while the apprentice learns a particular trade. 
Due to the movement of the student from college to workplace, and the fact that the apprentice usually has to keep a reflective log and capture evidence of their skills acquisition, mobile devices can play a really useful role in apprenticeships. Traditionally, the age group for apprenticeships is 16 to 24 year olds. This is an age group that has never known a world without mobiles and their mobile devices are integrated into the fabric of their daily lives and media consumption. They use social networks, SMS, and instant messaging rather than e-mail, and are more likely to use the mobile internet than any other age group. Statistics from the U.S. reveal that 75 percent of students use their phone to access the Internet. Reflective logs are an important part of any apprenticeship. There are a number of activities in Moodle that can be used for keeping reflective logs, and these are ideal for mobile learning. Reflective log entries tend to be shorter than traditional assignments and lend themselves well to production on a tablet or even a smartphone. Consumption of reflective logs is perfect for both smartphone and tablet devices, as posts tend to be readable in less than 5 minutes. Many institutions use Moodle coupled with an ePortfolio tool such as Mahara or Onefile to manage apprenticeship programs. There are additional Packt Publishing articles on ePortfolio tools such as Mahara, should you wish to investigate a third-party, open source ePortfolio solution. Mobile usage in the workplace BYOD in the workplace is also becoming increasingly common, and, appears to be an unstoppable trend. It may also be discouraged or banned on security, data protection, or distraction grounds, but it is happening regardless. There is an increasing amount of research available on this topic, and some key findings from various studies reveal the scale of the trend: A survey of 600 IT and business leaders revealed that 90 percent of survey respondents had employees using their own devices at work 65 to 75 percent of companies allow some sort of BYOD usage 80 to 90 percent of employees use a personal mobile device for business use If you are a workplace learning practitioner then you need to sit up and take note of these numbers if you haven't done so already. Even if your organization doesn't officially have a BYOD policy, it is most likely that your employees are already using their own mobile devices for business purposes. It's up to your IT department to manage this safely, and again there are many resources and case studies available online to help with this. But as a learning practitioner, whether it's officially supported or not, it's worth asking yourself whether you should embrace it anyway, and provide learning activities to these users and their devices. Mobile usage in distance learning Online distance learning is principally used in higher education (HE), and many institutions have taken to it either as a new stream of revenue or as a way of building their brand globally. Enrolments have rocketed over recent years; the number of U.S. students enrolled in an online course has increased from one to six million in a decade. Online enrolments have also been the greatest source of new enrolments in HE in that time, outperforming general student enrolment dramatically. Indeed, the year 2011 in the US saw a 10 percent growth rate in distance learning enrolment against 2 percent in the overall HE student population. 
In the 2010 to 2011 academic years, online enrolments accounted for 31 percent of all U.S. HE enrolments. Against this backdrop of phenomenal growth in HE distance learning courses, we also have a new trend of Massive Online Open Courses (MOOCs) which aim to extend enrolment past traditional student populations to the vast numbers of potential students for whom a formal HE program of study may not be an option. The convenience and flexibility of distance learning appeal to certain groups of the population. Distance learners are likely to be older students, with more than 30 years of age being the dominant age group. They are also more likely to be in full-time employment and taking the course to help advance their careers, and are highly likely to be married and juggling home and family commitments with their jobs and coursework. We know that among the 30 to 40 age group mobile device use is very high, particularly among working professionals, who are a major proportion of HE distance learners. However, the MOOC audience is of real interest here as this audience is much more diverse. As many MOOC users find traditional HE programs out of their reach, many of these will be in developing countries, where we already know that users are leapfrogging desktop computing and going straight to mobile devices and wireless connectivity. For these types of courses, mobile support is absolutely crucial. A wide variety of tools exist to support online distance learning, and these are split between synchronous and asynchronous tools, although typically a blend of the two is used. In synchronous learning, all participants are present at the same time. Courses will therefore be organized to a timetable, and will involve tools such as webinars, video conferences, and real-time chat. In asynchronous learning, courses are self-directed and students work to their own schedules, and tools include e-mail, discussion forums, audio recording, video recordings, and printed material. Connecting distance learning from traditional institutions to MOOCs is a recognized need to improve course quality and design, faculty training, course assessment, and student retention. There are known barriers, including motivation, feedback, teacher contact, and student isolation. These are major challenges to the effectiveness of distance learning, and later in this article we will demonstrate how mobile devices can be used to address some of these areas. Case studies The following case studies illustrate two approaches to how an HE institution and a distance learning institution have adopted Moodle to deliver mobile learning. Both institutions were very early movers in making Moodle mobile-friendly, and can be seen as torch bearers for the rest of us. Fortunately, both institutions have also been influential in the approach that Moodle HQ have taken to mobile compatibility, so in using the new mobile features in recent versions of Moodle, we are all able to take advantage of the substantial amount of work that went into these two sites. University of Sussex The University of Sussex is a research-led HE institution on the south coast of England. They use a customized Moodle 1.9 installation called Study Direct, which plays host to 1,500 editing tutors and 15,000 students across 2,100 courses per year, and receives 13,500 unique hits per day. 
The e-learning team at the University of Sussex contains five staff (one manager, two developers, one user support, and one tutor support) whose remit covers a much wider range of learning technologies beyond the VLE. However, the team has achieved a great deal with limited resources. It has been working towards a responsive design for some years and has helped to influence the direction of Moodle with regards to designing for mobile devices and usability, through speaking at UK Moodle and HE conferences and providing passionate inputs into debates on the Moodle forums on the subject of interface design. Further to this, team member Stuart Lamour is one of the three original developers of the Bootstrap theme for Moodle, which is used throughout this article. The Study Direct site shows what is possible in Moodle, given the time and resources for its development and a focus on user-centered design. The approach has been to avoid going down the native application route for mobile access like many institutions have done, and to instead focus on a responsive, browser-based user experience. The login page is simple and clean. One of the nice things that the University of Sussex has done is to think through the user interactions on its site and clearly identify calls to action, typically with a green button, as shown by the sign in button on the login page in the following screenshot: The team has built its own responsive theme for Moodle. While the team has taken a leading role on development of the Moodle 2 Bootstrap theme, the University of Sussex site is still on Moodle 1.9 so this implementation uses its own custom theme. This theme is fully responsive and looks good when viewed on a tablet or a smartphone, reordering screen elements as necessary for each screen resolution. The course page, shown in the following screenshot, is similarly clear and uncluttered. The editing interface has been customized quite heavily to give tutors a clear and easy way to edit their courses without running the risk of messing up the user interface. The team maintains a useful and informative blog explaining what they have done to improve the user experience, and which is well worth a read. Open University The Open University (OU) in the UK runs one the largest Moodle sites in the world. It is currently using Moodle 2 for the OU's main VLE as well as for its OpenLearn and Qualifications online platforms. Its Moodle implementation regularly sees days with well over one million transactions and over 60,000 unique users, and has seen peak times of 5,000 simultaneous online users. The OU's focus on mobile Moodle goes back to about 2010, so it was an early mover in this area. This means that the OU did not have the benefit of all the mobile-friendly features that now come with Moodle, but had to largely create its own mobile interface from scratch. Anthony Forth gave a presentation at the UK Moodle Moot in 2011 on the OU's approach to mobile interface design for Moodle. He identified that at the time the Open University migrated to Moodle 2 in 2011 it had over 13,000 mobile users per month. The OU chose to survey a group of 558 of these users in detail to investigate their needs more closely. It transpired that the most popular uses of Moodle on mobile devices was for forums, news, resources and study planners, while areas such as wikis and blogs were very low down the list of users' priorities. So the OU's mobile design focused on these particular areas as well as looking at usability in general. 
The preceding screenshot shows the OU course page with tabbed access to the popular areas such as Planner, News, Forums, and Resources, and then the main content area providing space for latest news, unread forum posts, and activities taking place this week. The site uses a nice, clean, and easy to understand user interface in which a lot of thought has gone into the needs of the student. Summary In this article, we have provided you with a vision of how mobile learning could be put to use on your own organization's Moodle platform. We gave you an understanding of some of the foundation concepts of mobile learning, some insights into how important mobile learning is becoming, and how it is gaining momentum in different sectors. Your learners are already using mobile devices whether in educational institutions or in the workplace, and they use mobile devices as the backbone of their daily online interactions. They want to also use them for learning. Hopefully, we have started you off on a mobile learning path that will allow you to make this happen. Mobile devices are where the future of Moodle is going to be played out, so it makes complete sense to be designing for mobile access right now. Fortunately, Moodle already provides the means for this to happen and provides tools that allow you to set it up for mobile delivery. Resources for Article : Further resources on this subject: Getting Started with Moodle 2.0 for Business [Article] Managing Student Work using Moodle: Part 2 [Article] Integrating Moodle 2.0 with Mahara and GoogleDocs for Business [Article]

Vaadin and its Context

Packt
25 Sep 2013
24 min read
(For more resources related to this topic, see here.) Developing Java applications and, more specifically, developing Java web applications should be fun. Instead, most projects are a mess of sweat and toil, pressure and delays, costs and cost cutting. Web development has lost its appeal. Yet, among the many frameworks available, there is one in particular that draws our attention because of its ease of use and its original stance. It has been around for the past decade and has begun to grow in importance. The name of this framework is Vaadin. The goal of this article is to see, step-by-step, how to develop web applications with Vaadin. Vaadin is the Finnish word for a female reindeer (as well as a Finnish goddess). This piece of information will do marvels for your social life as you are now one of the few people on Earth who know this (outside Finland). Before diving right into Vaadin, it is important to understand what led to its creation. Readers who already have this information (or who don't care) should go directly to Environment Setup. Rich applications Vaadin is often referred to as a Rich Internet Application (RIA) framework. Before explaining why, we need to first define some terms which will help us describe the framework. In particular, we will have a look at application tiers, the different kinds of clients, and their history. Application tiers Some software runs locally, that is, on the client machine, and some runs remotely, such as on a server machine. Some applications also run on both the client and the server. For example, when requesting an article from a website, we interact with a browser on the client side, but the request itself is processed on a server. Traditionally, all applications can be logically separated into tiers, each having different responsibilities as follows: Presentation: The presentation tier is responsible for displaying the end-user information and interaction. It is the realm of the user interface. Business Logic: The logic tier is responsible for controlling the application logic and functionality. It is also known as the application tier, or the middle tier as it is the glue between the other two surrounding tiers, thus leading to the term middleware. Data: The data tier is responsible for storing and retrieving data. This backend may be a file system. In most cases, it is a database, whether relational, flat, or even an object-oriented one. This categorization not only naturally corresponds to specialized features, but also allows you to physically separate your system into different parts, so that you can change a tier with reduced impact on adjacent tiers and no impact on non-adjacent tiers. Tier migration In the history of computers and computer software, these three tiers have moved back and forth between the server and the client. Mainframes When computers were mainframes, all tiers were handled by the server. Mainframes stored data, processed it, and were also responsible for the layout of the presentation. Clients were dumb terminals, suited only for displaying characters on the screen and accepting the user input. Client server Not many companies could afford the acquisition of a mainframe (and many still cannot). Yet, those same companies could not do without computers at all, because the growing complexity of business processes needed automation. Developments in personal computers led to a decrease in their cost. With the need to share data between them, the network traffic rose. 
This period in history saw the rise of the personal computer, as well as the client-server term, as there was now a true client. The presentation and logic tiers moved locally, while shared databases were remotely accessible, as shown in the following diagram: Thin clients Big companies migrating from mainframes to client-server architectures thought that deploying software on ten client machines on the same site was relatively easy and could be done in a few hours. However, they quickly became aware of the fact that with the number of machines growing in a multi-site business, it could quickly become a nightmare. Enterprises also found that it was not only the development phase that had to be managed like a project, but also the installation phase. When upgrading either the client or the server, you most likely found that the installation time was high, which in turn led to downtime and that led to additional business costs. Around 1991, Sir Tim Berners-Lee invented the Hyper Text Markup Language, better known as HTML. Some time after that, people changed its original use, which was to navigate between documents, to make HTML-based web applications. This solved the deployment problem as the logic tier was run on a single-server node (or a cluster), and each client connected to this server. A deployment could be done in a matter of minutes, at worst overnight, which was a huge improvement. The presentation layer was still hosted on the client, with the browser responsible for displaying the user interface and handling user interaction. This new approach brought new terms, which are as follows: The old client-server architecture was now referred to as fat client. The new architecture was coined as thin client, as shown in the following diagram: Limitations of the thin-client applications approach Unfortunately, this evolution was made for financial reasons and did not take into account some very important drawbacks of the thin client. Poor choice of controls HTML does not support many controls, and what is available is not on par with fat-client technologies. Consider, for example, the list box: in any fat client, choices displayed to the user can be filtered according to what is typed in the control. In legacy HTML, there's no such feature and all lines are displayed in all cases. Even with HTML5, which is supposed to add this feature, it is sadly not implemented in all browsers. This is a usability disaster if you need to display the list of countries (more than 200 entries!). As such, the ergonomics of true thin clients are nothing like those of their fat-client ancestors. Many unrelated technologies Developers of fat-client applications have to learn only two languages: SQL and the technology's language, such as Visual Basic, Java, and so on. Web developers, on the contrary, have to learn an entire stack of technologies, both on the client side and on the server side. On the client side, the following are the requirements: First, of course, is HTML. It is the basis of all web applications, and although some do not consider it a programming language per se, every web developer must learn it so that they can create content to be displayed by browsers. In order to apply some common styling to your application, one will probably have to learn the Cascading Style Sheets (CSS) technology. CSS is available in three main versions, each version being more or less supported by browser version combinations (see Browser compatibility). 
Most of the time, it is nice to have some interactivity on the client side, such as pop-up windows. In this case, we will need a scripting technology such as ECMAScript. ECMAScript is the specification of which JavaScript is an implementation (along with ActionScript). It is standardized by the ECMA organization. See http://www.ecma-international.org/publications/standards/Ecma-262.htm for more information on the subject. Finally, as one will probably need to update the structure of the HTML page, a healthy dose of knowledge of the Document Object Model (DOM) is necessary. As a side note, consider that HTML, CSS, and DOM are W3C specifications while ECMAScript is an ECMA standard. From a Java point-of-view and on the server side, the following are the requirements: As servlets are the most common form of request-response user interactions in Java EE, every web developer worth his salt has to know both the Servlet specification and the Servlet API. Moreover, most web applications tend to enforce the Model-View-Controller paradigm. As such, the Java EE specification enforces the use of servlets for controllers and JavaServer Pages (JSP) for views. As JSPs are intended to be templates, developers who create JSPs have an additional syntax to learn, even though they offer the same features as servlets. JSPs accept scriptlets, that is, Java code snippets, but good coding practices tend to frown upon this, as Java code can contain any feature, including some that should not be part of views—for example, the database access code. Therefore, a completely new technology stack is proposed in order to limit code included in JSPs: the tag libraries. These tag libraries also have a specification and API, and that is another stack to learn. However, these are a few of the standard requirements that you should know in order to develop web applications in Java. Most of the time, in order to boost developer productivity, one has to use frameworks. These frameworks are available in most of the previously cited technologies. Some of them are supported by Oracle, such as Java Server Faces; others are open source, such as Struts. Java EE 6 seems to favor the replacement of JSP and Servlet by Java Server Faces (JSF). Although JSF aims to provide a component-based MVC framework, it is plagued by a relative complexity regarding its component lifecycle. Having to know so much has negative effects, a few of which are as follows: On the technical side, as web developers have to manage so many different technologies, web development is more complex than fat-client development, potentially leading to more bugs On the human resources side, so many different technologies meant that either different profiles or more resources were required; either way, it added to the complexity of human resource management On the project management side, increased complexity caused lengthier projects: developing a web application was potentially taking longer than developing a fat-client application All of these factors tend to make thin-client development cost much more than fat-client development, albeit the deployment cost was close to zero. Browser compatibility The Web has standards, most of them upheld by the World Wide Web Consortium. Browsers more or less implement these standards, depending on the vendor and the version. The Acid test, in version 3, is a test for browser compatibility with web standards. Fortunately, most browsers pass the test with 100 percent success, which was not the case two years ago. 
Some browsers even make the standards evolve, such as Microsoft, which implemented the XMLHttpRequest object in Internet Explorer and thus formed the basis for Ajax. One should be aware of the combination of the platform, browser, and version. As some browsers cannot be installed with different versions on the same platform, testing can quickly become a mess (which can fortunately be mitigated with virtual machines and custom tools like http://browsershots.org). Applications should be developed with browser combinations in mind, and then tested on them, in order to ensure application compatibility. For intranet applications, the number of supported browsers is normally limited. For Internet applications, however, most common combinations must be supported in order to increase availability. If this wasn't enough, then the same browser in the same version may run differently on different operating systems. In all cases, each combination has an exponential impact on the application's complexity, and therefore, on cost. Page flow paradigm Fat-client applications manage windows. Most of the time, there's a main window. Actions are mainly performed in this main window, even if sometimes managed windows or pop-up windows are used. As web applications are browser-based and use HTML over HTTP, things are managed differently. In this case, the presentation unit is not the window but the page. This is a big difference that entails a performance problem: indeed, each time the user clicks on a submit button, the request is sent to the server, processed by it, and the HTML response is sent back to the client. For example, when a client submits a complex registration form, the entire page is recreated on the server side and sent back to the browser even if there is a minor validation error, even though the required changes to the registration form would have been minimal. Beyond the limits Over the last few years, users have been applying some pressure in order to have user interfaces that offer the same richness as good old fat-client applications. IT managers, however, are unwilling to go back to the old deploy-as-a-project routine and its associated costs and complexity. They push towards the same deployment process as thin-client applications. It is no surprise that there are different solutions in order to solve this dilemma. What are rich clients? All the following solutions are globally called rich clients, even if the approach differs. They have something in common though: all of them want to retain the ease of deployment of the thin client and solve some or all of the problems mentioned previously. Rich clients fulfill the fourth quadrant of the following schema, which is like a dream come true, as shown in the following diagram: Some rich client approaches The following solutions are strategies that deserve the rich client label. Ajax Ajax was one of the first successful rich-client solutions. The term means Asynchronous JavaScript and XML. In effect, this browser technology enables sending asynchronous requests, meaning there is no need to reload the full page. Developers can provide client scripts implementing custom callbacks: those are executed when a response is sent from the server. Most of the time, such scripts use data provided in the response payload to dynamically update the relevant parts of the page's DOM. Ajax addresses the richness of controls and the page flow paradigm. Unfortunately: It aggravates browser-compatibility problems as Ajax is not handled in the same way by all browsers. 
It also has problems not related directly to the technologies, which are as follows: Either one learns all the necessary technologies to do Ajax on one's own, that is, JavaScript, Document Object Model, and JSON/XML, to communicate with the server and write all common features such as error handling from scratch. Alternatively, one uses an Ajax framework, and thus, one has to learn another technology stack. Richness through a plugin The oldest way to bring richness to the user's experience is to execute the code on the client side and, more specifically, as a plugin in the browser. Sun—now Oracle—proposed the applet technology, whereas Microsoft proposed ActiveX. The latest technology using this strategy is Flash. All three were failures due to technical problems, including performance lags, security holes, and plain client incompatibility, or just plain rejection by the market. There is an interesting way to revive the applet with the Apache Pivot project, as shown in the following screenshot (http://pivot.apache.org/), but it hasn't made a huge impact yet. A more recent and successful attempt at executing code on the client side through a plugin is through Adobe's Flex. A similar path was taken by Microsoft's Silverlight technology. Flex is a technology where static views are described in XML and dynamic behavior in ActionScript. Both are transformed at compile time into Flash format. Unfortunately, Apple refused to have anything to do with the Flash plugin on iOS platforms. This move, coupled with the growing rise of HTML5, resulted in Adobe donating Flex to the Apache foundation. Also, Microsoft officially renounced plugin technology and shifted Silverlight development to HTML5. Deploying and updating fat-client from the web The most direct way toward rich-client applications is to deploy (and update) a fat-client application from the web. Java Web Start Java Web Start (JWS), available at http://download.oracle.com/javase/1.5.0/docs/guide/javaws/, is a proprietary technology invented by Sun. It uses a deployment descriptor in Java Network Launching Protocol (JNLP) that takes the place of the manifest inside a JAR file and supplements it. For example, it describes the main class to launch, the classpath, and also additional information such as the minimum Java version, icons to display on the user desktop, and so on. This descriptor file is used by the javaws executable, which is bundled in the Java Runtime Environment. It is the javaws executable's responsibility to read the JNLP file and do the right thing according to it. In particular, when launched, javaws will download the updated JAR. The detailed process goes something like the following: The user clicks on a JNLP file. The JNLP file is downloaded on the user machine, and interpreted by the local javaws application. The file references JARs that javaws can download. Once downloaded, JWS reassembles the different parts, creates the classpath, and launches the main class described in the JNLP. JWS correctly tackles all problems posed by the thin-client approach. Yet it never reached critical mass for a number of reasons: First-time installations are time-consuming because typically lots of megabytes need to be transferred over the wire before the users can even start using the app. This is a mere annoyance for intranet applications, but a complete no-go for Internet apps. Some persistent bugs weren't fixed across major versions. Finally, the lack of commercial commitment by Sun was the last straw. 
A good example of a successful JWS application is JDiskReport (http://www.jgoodies.com/download/jdiskreport/jdiskreport.jnlp), a disk space analysis tool by Karsten Lentzsch, which is available on the Web for free. Update sites Updating software through update sites is a path taken by both Integrated Development Environment (IDE) leaders, NetBeans and Eclipse. In short, once the software is initially installed, updates and new features can be downloaded from the application itself. Both IDEs also propose an API to build applications. This approach also handles all problems posed by the thin-client approach. However, like JWS, there's no strong trend to build applications based on these IDEs. This can probably be attributed to both IDEs using the OSGi standard, whose goal is to address some of Java's shortcomings, but at the price of complexity. Google Web Toolkit Google Web Toolkit (GWT) is the framework used by Google to create some of its own applications. Its point of view is unique among the technologies presented here. It lets you develop in Java, and then the GWT compiler transforms your code to JavaScript, which in turn manipulates the DOM tree to update HTML. It's GWT's responsibility to handle browser compatibility. This approach also solves the other problems of the pure thin-client approach. Yet, GWT does not shield developers from all the dirty details. In particular, the developer still has to write part of the code handling server-client communication, and he has to take care of the segregation between Java server code, which will be compiled into bytecode, and Java client code, which will be compiled into JavaScript. Also, note that the compilation process may be slow, even though there are a number of optimization features available during development. Finally, developers need a good understanding of the DOM, as well as the JavaScript/DOM event model. Why Vaadin? Vaadin is a solution that has evolved from a decade-long problem-solving approach, provided by a Finnish company named Vaadin Ltd, formerly IT Mill. With so many solutions available, one could question the use of Vaadin instead of Flex or GWT. Let's first have a look at the state of the market for web application frameworks in Java, then detail what makes Vaadin so unique in this market. State of the market Despite all the cons of the thin-client approach, an important share of applications developed today uses this paradigm, most of the time with a touch of Ajax augmentation. Unfortunately, there is no clear leader for web applications. Some reasons include the following: Most developers know how to develop plain old web applications, with enough Ajax added in order to make them usable by users. GWT, although new and original, is still complex and needs seasoned developers in order to be effective. From a Technical Lead or an IT Manager's point of view, this is a very fragmented market where it is hard to choose a solution that will meet users' requirements, as well as offering guarantees to be maintained in the years to come. Importance of Vaadin Vaadin is a unique framework in the current ecosystem; its differentiating features include the following: There is no need to learn different technology stacks, as the coding is solely in Java. The only thing to know besides Java is Vaadin's own API, which is easy to learn. 
This means: The UI code is fully object-oriented There's no spaghetti JavaScript to maintain It is executed on the server side Furthermore, the IDE's full power is in our hands with refactoring and code completion. No plugin to install on the client's browser, ensuring all users that browse our application will be able to use it as-is. As Vaadin uses GWT under the hood, it supports all browsers that the version of GWT also supports. Therefore, we can develop a Vaadin application without paying attention to the browsers and let GWT handle the differences. Our users will interact with our application in the same way, whether they use an outdated version (such as Firefox 3.5), or a niche browser (like Opera). Moreover, Vaadin uses an abstraction over GWT so that the API is easier to use for developers. Also, note that Vaadin Ltd (the company) is part of the GWT steering committee, which is a good sign for the future. Finally, Vaadin conforms to standards such as HTML and CSS, making the technology future-proof. For example, many applications created with Vaadin run seamlessly on mobile devices although they were not initially designed to do so. Vaadin integration In today's environment, integration features of a framework are very important, as normally every enterprise has rules about which framework is to be used in some context. Vaadin is about the presentation layer and runs on any servlet-container-capable environment. Integrated frameworks There are three possible integration levels, which are as follows: Level 1: out-of-the-box or available through an add-on, no effort required save reading the documentation Level 2: more or less documented Level 3: possible with effort The following are examples of such frameworks and tools with their respective estimated integration effort: Level 1: Java Persistence API (JPA): JPA is the Java EE 5 standard for all things related to persistence. An add-on exists that lets us wire existing components to a JPA backend. Other persistence add-ons are available in the Vaadin directory, such as a container for Hibernate, one of the leading persistence frameworks available in the Java ecosystem. A bunch of widget add-ons, such as tree tables, popup buttons, contextual menus, and many more. Level 2: Spring is a framework based on Inversion of Control (IoC), which is the de facto standard for Dependency Injection. Spring can easily be integrated with Vaadin, and different strategies are available for this. Context Dependency Injection (CDI): CDI is an attempt at making IoC a standard on the Java EE platform. Whatever can be done with Spring can be done with CDI. Any GWT extensions such as Ext-GWT or Smart GWT can easily be integrated into Vaadin, as Vaadin is built upon GWT's own widgets. Level 3: We can use other entirely new frameworks and languages and integrate them with Vaadin, as long as they run on the JVM: Apache iBatis, MongoDB, OSGi, Groovy, Scala, anything you can dream of! Integration platforms Vaadin provides an out-of-the-box integration with an important third-party platform: Liferay is an open source enterprise portal backed by Liferay Inc. Vaadin provides a specialized portlet that enables us to develop Vaadin applications as portlets that can be run on Liferay. Also, there is a widgetset management portlet provided by Vaadin, which deploys nicely into Liferay's Control Panel. 
Using Vaadin in the real world If you embrace Vaadin, then chances are that you will want to go beyond toying with the Vaadin framework and develop real-world applications. Concerns about using a new technology Although it is okay to use the latest technology for a personal or academic project, projects that have business objectives should just run and not be riddled with problems from third-party products. In particular, most managers may be wary when confronted by a new product (or even a new version), and developers should be too. The following are some of the reasons to choose Vaadin: Product is of the highest quality: The Vaadin team has done rigorous testing throughout their automated build process. Currently, it consists of more than 8,000 unit tests. Moreover, in order to guarantee full compatibility between versions, many (many!) tests execute pixel-level regression testing. Support: Commercial: Although completely committed to open source, Vaadin Limited offers commercial support for their product. Check their Pro Account offering. User forums: A Vaadin user forum is available. Anyone registered can post questions and see them answered by a member of the team or of the community. Note that Vaadin registration is free, as well as hassle-free: you will just be sent the newsletter once a month (and you can opt out, of course). Retro-compatibility: API: The server-side API is very stable, version after version, and has survived major client-engine rewrites. Some part of the API has been changed from v6 to v7, but it is still very easy to migrate. Architecture: Vaadin's architecture favors abstraction and is at the root of it all. Full-blown documentation available: Product documentation: Vaadin's site provides three levels of documentation regarding Vaadin: a five-minute tutorial, a one-hour tutorial, and the famed article of Vaadin. API documentation: The Javadocs are available online; there is no need to build the project locally. Course/webinar offerings: Vaadin Ltd currently provides four different courses, which tackle all the skills needed for a developer to be proficient in the framework. Huge community around the product: There is a community gathering, which is ever growing and actively using the product. There are plenty of blogs and articles online on Vaadin. Furthermore, there are already many enterprises using Vaadin for their applications. Available competent resources: There are more and more people learning Vaadin. Moreover, if no developer is available, the framework can be learned in a few days. Integration with existing products/platforms: Vaadin is built to be easily integrated with other products and platforms. The article of Vaadin describes how to integrate with Liferay and Google App Engine. Others already use Vaadin Upon reading this, managers and developers alike should realize Vaadin is mature and is used in real-world applications around the world. If you still have any doubts, then you should check http://vaadin.com/who-is-using-vaadin and be assured that big businesses trusted Vaadin before you, and benefited from its advantages as well. Summary In this article, we saw the migration of application tiers in the software architecture between the client and the server. We saw that each step resolved the problems in the previous architecture: Client-server used the power of personal computers in order to decrease mainframe costs Thin-clients resolved the deployment costs and delays Thin-clients have numerous drawbacks. 
For the user, a lack of usability due to a poor choice of controls, browser compatibility issues, and navigation based on page flow; for the developer, many technologies to know. As we are at a crossroads, there is no clear winner among all the solutions available: some only address a few of the problems, some aggravate them. Vaadin is an original solution that tries to resolve many problems at once: It provides rich controls It uses GWT under the covers, which addresses most browser compatibility issues It has abstractions over the request-response model, so that the model used is application-based and not page-based The developer only needs to know one programming language: Java, and Vaadin generates all HTML, JavaScript, and CSS code for you Now we can go on and create our first Vaadin application! Resources for Article : Further resources on this subject: Vaadin Portlets in Liferay User Interface Development [Article] Creating a Basic Vaadin Project [Article] Vaadin – Using Input Components and Forms [Article]

Making specs more concise (Intermediate)

Packt
13 Sep 2013
6 min read
(For more resources related to this topic, see here.) Making specs more concise (Intermediate) So far, we've written specifications that work in the spirit of unit testing, but we're not yet taking advantage of any of the important features of RSpec to make writing tests more fluid. The specs illustrated so far closely resemble unit testing patterns and have multiple assertions in each spec. How to do it... Refactor our specs in spec/lib/location_spec.rb to make them more concise: require "spec_helper" describe Location do describe "#initialize" do subject { Location.new(:latitude => 38.911268, :longitude => -77.444243) } its (:latitude) { should == 38.911268 } its (:longitude) { should == -77.444243 } end end While running the spec, you see a clean output because we've separated multiple assertions into their own specifications: Location #initialize latitude should == 38.911268 longitude should == -77.444243 Finished in 0.00058 seconds 2 examples, 0 failures The preceding output requires either the .rspec file to contain the --format doc line, or when executing rspec in the command line, the --format doc argument must be passed. The default output format will print dots (.) for passing tests, asterisks (*) for pending tests, E for errors, and F for failures. It is time to add something meatier. As part of our project, we'll want to determine if Location is within a certain mile radius of another point. In spec/lib/location_spec.rb, we'll write some tests, starting with a new block called context. The first spec we want to write is the happy path test. Then, we'll write tests to drive out other states. I am going to re-use our Location instance for multiple examples, so I'll refactor that into another new construct, a let block: require "spec_helper" describe Location do let(:latitude) { 38.911268 } let(:longitude) { -77.444243 } let(:air_space) { Location.new(:latitude => 38.911268,: longitude => -77.444243) } describe "#initialize" do subject { air_space } its (:latitude) { should == latitude } its (:longitude) { should == longitude } end end Because we've just refactored, we'll execute rspec and see the specs pass. Now, let's spec out a Location#near? method by writing the code we wish we had: describe "#near?" do context "when within the specified radius" do subject { air_space.near?(latitude, longitude, 1) } it { should be_true } end end end Running rspec now results in failure because there's no Location#near? method defined. The following is the naive implementation that passes the test (in lib/location.rb): def near?(latitude, longitude, mile_radius) true end Now, we can drive a failure case, which will force a real implementation in spec/lib/location_spec.rb within the describe "#near?" block: context "when outside the specified radius" do subject { air_space.near?(latitude * 10, longitude * 10, 1) } it { should be_false } end Running the specs now results in the expected failure. 
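Before looking at that implementation, it may help to state the formula it is based on (this is standard background added here for orientation, not part of the original recipe). With latitudes $\phi_1, \phi_2$ and longitudes $\lambda_1, \lambda_2$ expressed in radians, and $R$ the Earth's radius, the haversine distance is:

$$a = \sin^2\!\left(\frac{\phi_2-\phi_1}{2}\right) + \cos\phi_1\,\cos\phi_2\,\sin^2\!\left(\frac{\lambda_2-\lambda_1}{2}\right), \qquad d = 2R\,\operatorname{atan2}\!\left(\sqrt{a},\,\sqrt{1-a}\right)$$

The #near? predicate then simply checks whether d is less than or equal to mile_radius, with R taken as approximately 3,959 miles.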
The following is a passing implementation of the haversine formula in lib/location.rb that satisfies both cases: R = 3_959 # Earth's radius in miles, approx def near?(lat, long, mile_radius) to_radians = Proc.new { |d| d * Math::PI / 180 } dist_lat = to_radians.call(lat - self.latitude) dist_long = to_radians.call(long - self.longitude) lat1 = to_radians.call(self.latitude) lat2 = to_radians.call(lat) a = Math.sin(dist_lat/2) * Math.sin(dist_lat/2) + Math.sin(dist_long/2) * Math.sin(dist_long/2) * Math.cos(lat1) * Math.cos(lat2) c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a)) (R * c) <= mile_radius end Refactor both of the previous tests to be more expressive by utilizing predicate matchers: describe "#near?" do context "when within the specified radius" do subject { air_space } it { should be_near(latitude, longitude, 1) } end context "when outside the specified radius" do subject { air_space } it { should_not be_near(latitude * 10, longitude * 10, 1) } end end Now that we have a passing spec for #near?, we can alleviate a problem with our implementation. The #near? method is too complicated. It could be a pain to try and maintain this code in future. Refactor for ease of maintenance while ensuring that the specs still pass: R = 3_959 # Earth's radius in miles, approx def near?(lat, long, mile_radius) loc = Location.new(:latitude => lat,:longitude => long) R * haversine_distance(loc) <= mile_radius end private def to_radians(degrees) degrees * Math::PI / 180 end def haversine_distance(loc) dist_lat = to_radians(loc.latitude - self.latitude) dist_long = to_radians(loc.longitude - self.longitude) lat1 = to_radians(self.latitude) lat2 = to_radians(loc.latitude) a = Math.sin(dist_lat/2) * Math.sin(dist_lat/2) +Math.sin(dist_long/2) * Math.sin(dist_long/2) *Math.cos(lat1) * Math.cos(lat2) 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a)) end Finally, run rspec again and see that the tests continue to pass. A successful refactor! How it works... The subject block takes the return statement of the block—a new instance of Location in the previous example—and binds it to a locally scoped variable named subject. Subsequent it and its blocks can refer to that subject variable. Furthermore, the its blocks implicitly operate on the subject variable to produce more concise tests. Here is an example illustrating how subject is used to produce easier-to-read tests: describe "Example" do subject { { :key1 => "value1", :key2 => "value2" } } it "should have a size of 2" do subject.size.should == 2 end end We can use subject from within the it block and this will refer to the anonymous hash returned by the subject block. In the preceding test, we could have been more concise with an its block: its (:size) { should == 2 } We're not limited to just sending symbols to an its block—we can use strings too: its ('size') { should == 2 } When there is an attribute of subject you want to assert but the value cannot easily be turned into a valid Ruby symbol, you'll need to use a string. This string is not evaluated as Ruby code; it's only evaluated against the subject under test as a method of that class. Hashes, in particular, allow you to define an anonymous array with the key value to assert the value for that key: its ([:key1]) { should == "value1" } There's more... In the previous code examples, another block known as the context block was presented. The context block is a grouping mechanism for associating tests. For example, you may have a conditional branch in your code that changes the outputs of a method. 
Here, you may use two context blocks, one for a value and the second for another value. In our example, we're separating the happy path (when a given point is within the specified mile radius) from the alternative (when a given point is outside the specified mile radius). context is a useful construct that allows you to declare let and other blocks within it, and those blocks apply only for the scope of the containing context. Summary This article demonstrated to us the idiomatic RSpec code that makes good use of the RSpec Domain Specific Language (DSL). Resources for Article : Further resources on this subject: Quick start - your first Sinatra application [Article] Behavior-driven Development with Selenium WebDriver [Article] External Tools and the Puppet Ecosystem [Article]

Features of RaphaelJS

Packt
12 Sep 2013
16 min read
(For more resources related to this topic, see here.) Creating a Raphael element Creating a Raphael element is very easy. To make it better, there are predefined methods to create basic geometrical shapes. Basic shape There are three basic shapes in RaphaelJS, namely circle, ellipse, and rectangle. Rectangle We can create a rectangle using the rect() method. This method takes four required parameters and a fifth optional parameter, border-radius. The border-radius parameter will make the rectangle rounded (rounded corners) by the number of pixels specified. The syntax for this method is: paper.rect(X,Y,Width,Height,border-radius(optional)); A normal rectangle can be created using the following code snippet: // creating a raphael paper in 'paperDiv' var paper = Raphael ("paperDiv", 650,400); // creating a rectangle with the rect() method. The four required parameters are X,Y,Width & Height var rect = paper.rect(35,25,170,100).attr({ "fill":"#17A9C6", //filling with background color "stroke":"#2A6570", // border color of the rectangle "stroke-width":2 // the width of the border }); The output for the preceding code snippet is shown in the following screenshot: Plain rectangle Rounded rectangle The following code will create a basic rectangle with rounded corners: // creating a raphael paper in 'paperDiv' var paper = Raphael ("paperDiv", 650,400); //The fifth parameter will make the rectangle rounded by the number of pixels specified – A rectangle with rounded corners var rect = paper.rect(35,25,170,100,20).attr({ "fill":"#17A9C6",//background color of the rectangle "stroke":"#2A6570",//border color of the rectangle "stroke-width":2 // width of the border }); //in the preceding code 20(highlighted) is the border-radius of the rectangle. The output for the preceding code snippet is a rectangle with rounded corners, as shown in the following screenshot: Rectangle with rounded corners We can create other basic shapes in the same way. Let's create an ellipse with our magic wand. Ellipse An ellipse is created using the ellipse() method and it takes four required parameters, namely x,y, horizontal radius, and vertical radius. The horizontal radius will be the width of the ellipse divided by two and the vertical radius will be the height of the ellipse divided by two. The syntax for creating an ellipse is: paper.ellipse(X,Y,rX,rY); //rX is the horizontal radius & rY is the vertical radius of the ellipse Let's consider the following example for creating an ellipse: // creating a raphael paperin 'paperDiv' var paper = Raphael ("paperDiv", 650,400); //The ellipse() method takes four required parameters: X,Y, horizontal radius & vertical Radius var ellipse = paper.ellipse(195,125,170,100).attr({ "fill":"#17A9C6", // background color of the ellipse "stroke":"#2A6570", // ellipse's border color "stroke-width":2 // border width }); The preceding code will create an ellipse of width 170 x 2 and height 100 x 2. An ellipse created using the ellipse() method is shown in the following screenshot: An Ellipse Complex shapes It's pretty easy to create basic shapes, but what about complex shapes such as stars, octagons, or any other shape which isn't a circle, rectangle, or an ellipse. It's time for the next step of Raphael wizardry. Complex shapes are created using the path() method which has only one parameter called pathString. Though the path string may look like a long genetic sequence with alphanumeric characters, it's actually very simple to read, understand, and draw with. 
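One quick aside before we move on to paths: the circle, the third basic shape mentioned at the start of this section, follows exactly the same pattern as rect() and ellipse(). The following is a small illustrative sketch (the coordinates, radius, and colors are arbitrary choices, and it assumes a div with the ID paperDiv as in the earlier examples):

// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 650, 400);
// the circle() method takes three required parameters: X, Y, and the radius
var circle = paper.circle(120, 120, 60).attr({
    "fill": "#17A9C6",    // background color of the circle
    "stroke": "#2A6570",  // border color of the circle
    "stroke-width": 2     // width of the border
});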
Before we get into path drawing, it's essential that we know how it's interpreted and the simple logic behind those complex shapes. Imagine that you are drawing on a piece of paper with a pencil. To draw something, you will place the pencil at a point on the paper and begin to draw a line or a curve, and then move the pencil to another point on the paper and start drawing a line or curve again. After several such cycles, you will have a masterpiece—at least, you will call it a masterpiece. Raphael uses a similar method to draw and it does so with a path string. A typical path string may look like this: M0,0L26,0L13,18L0,0. Let's zoom into this path string a bit. The first letter says M followed by 0,0. That's right genius, you've guessed it correctly. It says move to 0,0 position, the next letter L is line to 26,0. RaphaelJS will move to 0,0 and from there draw a line to 26,0. This is how the path string is understood by RaphaelJS and paths are drawn using these simple notations. Here is a comprehensive list of commands, with the meaning and attributes of each:
M: move to (x, y)
Z: close path (none)
L: line to (x, y)
H: horizontal line to x
V: vertical line to y
C: curve to (x1, y1, x2, y2, x, y)
S: smooth curve to (x2, y2, x, y)
Q: quadratic Bézier curve to (x1, y1, x, y)
T: smooth quadratic Bézier curve to (x, y)
A: elliptical arc (rx, ry, x-axis-rotation, large-arc-flag, sweep-flag, x, y)
R: Catmull-Rom curve to* x1, y1 (x y)
The uppercase commands are absolute (M20, 20); they are calculated from the 0,0 position of the drawing area (paper). The lowercase commands are relative (m20, 20); they are calculated from the last point where the pen left off. There are so many commands, which might feel like too much to take in—don't worry; there is no need to remember every command and its format. Because we'll be using vector graphics editors to extract paths, it's essential that you understand the meaning of each and every command so that when someone asks you "hey genius, what does this mean?", you shouldn't be standing there clueless pretending to have not heard it. The syntax for the path() method is as follows: paper.path("pathString"); Let's consider the following example: // creating a raphael paper in 'paperDiv' var paper = Raphael ("paperDiv", 350,200); // Creating a shape using the path() method and a path string var tri = paper.path("M0,0L26,0L13,18L0,0").attr({ "fill":"#17A9C6", // filling the background color "stroke":"#2A6570", // the color of the border "stroke-width":2 // the size of the border }); All these commands ("M0,0L26,0L13,18L0,0") use uppercase letters. They are therefore absolute values. The output for the previous example is shown in the following screenshot: A triangle shape drawn using the path string Extracting and using paths from an editor Well, a triangle may be an easy shape to put into a path string. How about a complex shape such as a star? It's not that easy to guess and manually find the points. It's also impossible to create a fairly complex shape like a simple flower or a 2D logo. Here in this section, we'll see a simple but effective method of drawing complex shapes with minimal fuss and sharp accuracy. Vector graphics editors The vector graphics editors are meant for creating complex shapes with ease and they have some powerful tools at their disposal to help us draw. For this example, we'll create a star shape using an open source editor called Inkscape, and then extract those paths and use Raphael to get out the shape! 
It is as simple as it sounds, and it can be done in four simple steps. Step 1 – Creating the shape in the vector editor Let's create some star shapes in Inkscape using the built-in shapes tool. Star shapes created using the built-in shapes tool Step 2 – Saving the shape as SVG The paths used by SVG and RaphaelJS are similar. The trick is to use the paths generated by the vector graphics editor in RaphaelJS. For this purpose, the shape must be saved as an SVG file. Saving the shape as an SVG file Step 3 – Copying the SVG path string The next step is to copy the path from SVG and paste it into Raphael's path() method. SVG is a markup language, and therefore it's nested in tags. The SVG path can be found in the <path> and </path> tags. After locating the path tag, look for the d attribute. This will contain a long path sequence. You've now hit the bullseye. The path string is highlighted Step 4 – Using the copied path as a Raphael path string After copying the path string from SVG, paste it into Raphael's path() method. var newpath=paper.path("copied path string from SVG").attr({ "fill":"#5DDEF4", "stroke":"#2A6570", "stroke-width":2 }); That's it! We have created a complex shape in RaphaelJS with absolute simplicity. Using this technique, we can only extract the path, not the styles. So the background color, shadow, or any other style in the SVG won't apply. We need to add our own styles to the path objects using the attr() method. A screenshot depicting the complex shapes created in RaphaelJS using the path string copied from an SVG file is shown here: Complex shapes created in RaphaelJS using path string Creating text Text can be created using the text() method. Raphael gives us a way to add a battery of styles to the text object, right from changing colors to animating physical properties like position and size. The text() method takes three required parameters, namely, x,y, and the text string. The syntax for the text() method is as follows: paper.text(X,Y,"Raphael JS Text"); // the text method with X,Y coordinates and the text string Let's consider the following example: // creating a raphael paper in 'paperDiv' var paper = Raphael ("paperDiv", 650,400); // creating text var text = paper.text(40,55,"Raphael Text").attr({ "fill":"#17A9C6", // font-color "font-size":75, // font size in pixels //text-anchor indicates the starting position of the text relative to the X, Y position.It can be "start", "middle" or "end" default is "middle" "text-anchor":"start", "font-family":"century gothic" // font family of the text }); I am pretty sure that the text-anchor property is a bit heavy to munch. Well, there is a saying that a picture is worth a thousand words. The following diagram clearly explains the text-anchor property and its usage. A brief explanation of text-anchor property A screenshot of the text rendered using the text() method is as follows: Rendering text using the text() method Manipulating the style of the element The attr() method not only adds styles to an element, but it also modifies an existing style of an element. The following example explains the attr() method: rect.attr('fill','#ddd'); // This will update the background color of the rectangle to gray Transforming an element RaphaelJS not only creates elements, but it also allows the manipulating or transforming of any element and its properties dynamically. Manipulating a shape By the end of this section, you would know how to transform a shape. 
There might be many scenarios wherein you might need to modify a shape dynamically. For example, when the user mouse-overs a circle, you might want to scale up that circle just to give a visual feedback to the user. Shapes can be manipulated in RaphaelJS using the transform() method. Transformation is done through the transform() method, and it is similar to the path() method where we add the path string to the method. transform() works in the same way, but instead of the path string, it's the transformation string. There is only a moderate difference between a transformation string and a path string. There are four commands in the transformation string: T Translate S Scale R Rotate in degrees M Matrix The fourth command, M, is of little importance and let's keep it out of the way, to avoid confusion. The transformation string might look similar to a path string. In reality, they are different, not entirely but significantly, sharing little in common. The M in a path string means move to , whereas the same in a transformation string means Matrix . The path string is not to be confused with a transformation string. As with the path string, the uppercase letters are for absolute transformations and the lowercase for relative transformation. If the transformation string reads r90T100,0, then the element will rotate 90 degrees and move 100 px in the x axis (left). If the same reads r90t100,0, then the element will rotate 90 degrees and since the translation is relative, it will actually move vertically down 100px, as the rotation has tilted its axis. I am sure the previous point will confuse most, so let me break it up. Imagine a rectangle with a head and now this head is at the right side of the rectangle. For the time being, let's forget about absolute and relative transformation; our objective is to: Rotate the rectangle by 90 degrees. Move the rectangle 100px on the x axis (that is, 100px to the right). It's critical to understand that the elements' original values don't change when we translate it, meaning its x and y values will remain the same, no matter how we rotate or move the element. Now our first requirement is to rotate the rectangle by 90 degrees. The code for that would be rect.transform("r90") where r stands for rotation—fantastic, the rectangle is rotated by 90 degrees. Now pay attention to the next important step. We also need the rectangle to move 100px in the x axis and so we update our previous code to rect.transform("r90t100,0"), where t stands for translation. What happens next is interesting—the translation is done through a lowercase t, which means it's relative. One thing about relative translations is that they take into account any previous transformation applied to the element, whereas absolute translations simply reset any previous transformations before applying their own. Remember the head of the rectangle on the right side? Well, the rectangle's x axis falls on the right side. So when we say, move 100px on the x axis, it is supposed to move 100px towards its right side, that is, in the direction where its head is pointing. Since we have rotated the rectangle by 90 degrees, its head is no longer on the right side but is facing the bottom. So when we apply the relative translation, the rectangle will still move 100px to its x axis, but the x axis is now pointing down because of the rotation. That's why the rectangle will move 100px down when you expect it to move to the right. 
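To make the relative case concrete, here is a small sketch of the example just described (the rectangle's position and size are arbitrary; the point of interest is the lowercase t in the transformation string):

// a sketch of relative translation: rotate first, then translate along the element's own axis
var paper = Raphael("paperDiv", 650, 400);
var rect = paper.rect(50, 50, 120, 60).attr({
    "fill": "#17A9C6",
    "stroke": "#2A6570",
    "stroke-width": 2
});
// rotate 90 degrees, then move 100px along the element's x axis;
// because the axis has been rotated, the rectangle visually moves 100px down
rect.transform("r90t100,0");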
What happens when we apply absolute translation is something that is entirely different from the previous one. When we again update our code for absolute translation to rect.transform("r90T100,0"), the axis of the rectangle is not taken into consideration. However, the axis of the paper is used, as absolute transformations don't take previous transformations into account, and they simply reset them before applying their own. Therefore, the rectangle will move 100px to the right after rotating 90 degrees, as intended. Absolute transformations will ignore all the previous transformations on that element, but relative transformations won't. Getting a grip on this simple logic will save you a lot of frustration in the future while developing as well as while debugging. The following is a screenshot depicting relative translation: Using relative translation The following is a screenshot depicting absolute translation: Using absolute translation Notice the gap on top of the rotated rectangle; it's moved 100px on the one with relative translation and there is no such gap on top of the rectangle with absolute translation. By default, the transform method will append to any transformation already applied to the element. To reset all transformations, use element.transform(""). Adding an empty string to the transform method will reset all the previous transformations on that element. It's also important to note that the element's original x,y position will not change when translated. The element will merely assume a temporary position but its original position will remain unchanged. Therefore after translation, if we call for the element's position programmatically, we will get the original x,y, not the translated one, just so we don't jump from our seats and call RaphaelJS dull! The following is an example of scaling and rotating a triangle: //creating a Triangle using the path string var tri = paper.path("M0,0L104,0L52,72L0,0").attr({ "fill":"#17A9C6", "stroke":"#2A6570", "stroke-width":2 }); //transforming the triangle. tri.animate({ "transform":"r90t100,0,s1.5" },1000); //the transformation string should be read as rotating the element by 90 degrees, translating it to 100px in the X-axis and scaling up by 1.5 times The following screenshot depicts the output of the preceding code: Scaling and rotating a triangle The triangle is transformed using relative translation (t). Now you know the reason why the triangle has moved down rather than moving to its right. Animating a shape What good is a magic wand if it can't animate inanimate objects! RaphaelJS can animate as smooth as butter almost any property from color, opacity, width, height, and so on with little fuss. Animation is done through the animate() method. This method takes two required parameters, namely final values and milliseconds, and two optional parameters, easing and callback. The syntax for the animate() method is as follows: Element.animate({ Animation properties in key value pairs },time,easing,callback_function); Easing is that special effect with which the animation is done, for example, if the easing is bounce, the animation will appear like a bouncing ball. The following are the several easing options available in RaphaelJS: linear < or easeIn or ease-in > or easeOut or ease-out <> or easeInOut or ease-in-out backIn or back-in backOut or back-out elastic bounce Callbacks are functions that will execute when the animation is complete, allowing us to perform some tasks after the animation. 
Let's consider the example of animating the width and height of a rectangle: // creating a raphael paper in 'paperDiv' var paper = Raphael ("paperDiv", 650,400); rect.animate({ "width":200, // final width "height":200 // final height },300,"bounce",function(){ // something to do when the animation is complete – this callback function is optional // Print 'Animation complete' when the animation is complete $("#animation_status").html("Animation complete") }); The following screenshot shows a rectangle before animation: Rectangle before animation A screenshot demonstrating the use of a callback function when the animation is complete is as follows. The text Animation complete will appear in the browser after completing the animation. Use of a callback function The following code animates the background color and opacity of a rectangle: rect.animate({ "fill":"#ddd", // final color "fill-opacity":0.7 },300,"easeIn",function(){ // something to do when the animation is complete – this callback function is optional // Alerts done when the animation is complete alert("done"); }); Here the rectangle is animated from blue to gray and with an opacity from 1 to 0.7 over a duration of 300 milliseconds. Opacity in RaphaelJS is the same as in CSS, where 1 is opaque and 0 is transparent.
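To tie attr(), transform(), and animate() together, the following sketch implements the mouse-over scenario mentioned earlier in this section: a circle that scales up on hover and back down on mouse-out. The sizes, timings, and easing are arbitrary choices for illustration:

// a sketch combining a hover interaction with animated transforms
var paper = Raphael("paperDiv", 650, 400);
var circle = paper.circle(150, 150, 40).attr({
    "fill": "#17A9C6",
    "stroke": "#2A6570",
    "stroke-width": 2
});
circle.hover(function () {
    // scale up to 1.5 times over 300 milliseconds with a bounce easing
    this.animate({ "transform": "s1.5" }, 300, "bounce");
}, function () {
    // scale back to the original size on mouse-out
    this.animate({ "transform": "s1" }, 300, "bounce");
});

Inside the hover handlers, this refers to the Raphael element itself, so the same pattern works for any of the shapes created earlier.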

Video conversion into the required HTML5 Video playback

Packt
12 Sep 2013
5 min read
(For more resources related to this topic, see here.) If you have issues with Playback support and probably thinking that you would play any video in Windows Media Player, it is not so as Windows Media Player doesn't support all formats. This article will show you how to fix this and get them playing. Transcoding audio files (must know) We start this section by getting ready the files we are going to use later on—it is likely you may well have some music tracks already, but not in the right format. We will fix that in this task by using a shareware program called Switch Audio File Converter, which is available from http://www.nch.com.au/switch for approximately USD40. Getting ready For this task, you need download a copy of the Switch Sound Converter application—it is available from http://www.nch.com.au/switch/index.html. You may like to note that a license is required for encoding AMR files or using MP3 files in certain instances—these can be purchased at the same time as purchasing the main license. How to do it... The first thing to do is install the software, so let's go ahead and run switchsetup.exe—note that for the purposes of this demo, you should not select any of the additional related programs when requested. Double-click the application to open it, then click on Add File and browse to, and then select the file you want to convert: Click on Output Format and change it to .ogg—it will automatically download the required converter as soon as you click on Convert. The file is saved by default into your Music folder underneath your profile. How it works... Switch Sound File Converter has been designed to make the conversion process as simple as possible—this includes downloading any extra components that are required for the purposes of encoding or decoding audio files. You can alter the encoding settings, although you should find that for general use this may not be necessary. There's more... There are lots of converters available that you can try—I picked this one as it is quick and easy to use, and doesn't have a large footprint (unlike some others). If you prefer, you can also use online services to accomplish the same task—two examples include Fre:ac (http://www.freac.org) or Online-Convert.com (http://www.online-convert.com). Note though that some sites will take note of details such as your IP address or what it is you are converting as well as store copies for a period of time. Installing playback support: codecs (Must know) Now that we have converted our audio files ready for playback—it's time to ensure that we can actually play them back in our PCs as well as in our browsers. Most of the latest browsers will play at least one of the formats we've created in the previous task but it is likely that you won't be able to play them outside of the browser. Let's take a look at how we can fix this by updating the codecs installed in your PC. For those of you not familiar with codecs, they are designed to help encode assets when the audio file is created and decode them as part of playback. Software and hardware makers will decide the makeup of each codec based on which containers and technologies they should support; a number of factors such as file size, quality, and bandwidth all play a part in their decisions. Let's take a look at how we can update our PCs to allow for proper playback of HTML5 video. Getting ready There are lots of individuals or companies who have produced different codecs, with differing results. 
Installing playback support: codecs (must know)

Now that we have converted our audio files ready for playback, it's time to ensure that we can actually play them back on our PCs as well as in our browsers. Most of the latest browsers will play at least one of the formats we created in the previous task, but it is likely that you won't be able to play them outside of the browser. Let's take a look at how we can fix this by updating the codecs installed on your PC.

For those of you not familiar with codecs, they encode assets when an audio file is created and decode them again during playback. Software and hardware makers decide the makeup of each codec based on which containers and technologies it should support; factors such as file size, quality, and bandwidth all play a part in their decisions. Let's take a look at how we can update our PCs to allow for proper playback of HTML5 video.

Getting ready

Lots of individuals and companies have produced codecs, with differing results. We will take a look at one package that works very well on Windows: the K-Lite Codec Pack. You need to download a copy of the pack, which is available from http://fileforum.betanews.com/detail/KLite-Codec-Pack-Basic/1094057842/1; use the blue Download link on the right side of the page. This will download the Basic version, which is more than sufficient for our needs at this stage.

How to do it...

1. Download, then run K-Lite_Codec_Pack_860_Basic.exe and click on Next.
2. On the Installation Mode screen, select the Simple option.
3. On the File Associations page, select Windows Media Player.
4. On the File associations for Windows Media Player screen, click on Select all audio.
5. On the Thumbnails screen, click on Next.
6. On the Speaker configuration screen, click on Next, then Install. The software will confirm when the codecs have been installed.

How it works...

In order to play back HTML5-format audio in Windows Media Player, you need to ensure the correct support is in place; Windows Media Player doesn't understand the encoding formats used for HTML5 audio by default. We can overcome this by installing additional codecs that tell Windows how to encode or decode a particular file format; K-Lite's package aims to take the pain out of this process.

There's more...

The package we've looked at in this task is only available for Windows; if you are a Mac user, you will need to use an alternative. There are lots of options available online. One is X Lossless Decoder, available from http://www.macupdate.com/app/mac/23430/x-lossless-decoder, which includes support for both .ogg and .mp4 formats.

Summary

We've taken a look at recipes that show you how to transcode files into an HTML5-friendly format and how to install playback support. This is just the start of what you can achieve; there is a whole world out there to explore.

Resources for Article:

Further resources on this subject:
Basic use of Local Storage [Article]
Customize your LinkedIn profile headline [Article]
Blocking versus Non blocking scripts [Article]

Rapid Development

Packt
04 Sep 2013
7 min read
(For more resources related to this topic, see here.)

Concept of reusability

The concept of reusability has its roots in the production process. Typically, most of us go about creating e-learning using a process similar to what is shown in the following screenshot. It works well for large teams and for the one-man band alike, except that in the latter case you become the specialist for every stage of production. That's a heavy load. It's hard to be good at all things, and it demands that you constantly stretch and improve your skills and find ways to increase the efficiency of what you do.

Reusability in Storyline is about leveraging the formatting, look and feel, and interactions you create so that you can repurpose your work and speed up production. Not every project will be an original one-off (in fact, most won't), so the concept is to approach development with a plan to repurpose 80 percent of the media, quizzes, interactions, and designs you create. As you do this, you begin to establish processes, templates, and libraries that can be used to rapidly assemble base courses. With a little tweaking and some minor customization, you'll have a new, original course in no time. Your client doesn't need to know that 80 percent was built from reusable elements and only 20 percent was created as original, unique components, but you'll know the difference in terms of time and effort.

Leveraging existing assets

So how can you leverage existing assets with Storyline? The first things to look at are the courses you've built with other authoring programs, such as PowerPoint, Quizmaker, Engage, Captivate, Flash, and Camtasia. If these courses contain design themes, elements, or interactions you might want to use again, focus your efforts on importing what you can and adjusting it within Storyline to create a version of the asset that can be reused in future Storyline courses. If reworking the asset is too complex, or if you don't expect to reuse it in multiple courses, then using Storyline's web object feature to embed the interaction without reworking it in any way may be the better approach. In both cases, you'll save time by reusing content you've already put a lot of time into developing.

Importing external content

Here are the steps to bring external content into Storyline:

1. From the Articulate Startup screen, or by choosing the Insert tab and then New Slide within a project, select the Import option.
2. There are options to import PowerPoint, Quizmaker, and Storyline files. All of these will display the slides within the file to be imported, and you can pick and choose which slides to import into a new scene or the current scene in Storyline. The Engage option displays the entire interaction, which can be imported into a single slide in the current or a new scene.
3. Click on Import to complete the process.

Considerations when importing

Keep the following points in mind when importing:

- PowerPoint and Quizmaker files can be imported directly into Storyline. Once imported, you can edit the content like you would any other Storyline slide. Master slides come along with the import, making it simple to reuse previous designs. Note that 64-bit PowerPoint is not supported, and you must have an installed, activated version of Quizmaker for the import to work.
- The PowerPoint to Storyline conversion is not one-to-one. You can expect some alignment issues with slide objects because PowerPoint uses points and Storyline uses pixels.
  There are 2.66 pixels for each point, which is why you'll need to tweak the imported slides just a bit. The same applies to Quizmaker, though the reason is slightly different: Quizmaker slides are 686 x 424 in size, whereas Storyline is 720 x 540 by default.
- Engage files can be imported into Storyline and are completely functional, but they cannot be edited within Storyline. Though the option to import Engage appears on the Import screen, what Storyline is really doing is creating a web object to contain the Engage interaction. Once it is imported into a new scene, clicking on the Engage interaction displays an Options menu where you can make minor adjustments to the behavior of the interaction, as well as Preview it or Edit it in Engage. You can also resize and position the interaction just as you would any web object. Remember that although web objects work in iPad and HTML5 outputs, Engage content is Flash, so it will not play back on an iPad or in an HTML5 browser. Like Quizmaker, you'll need an installed, activated version of Engage for the import to work.
- Flash, Captivate, and Camtasia files cannot be imported into Storyline and cannot be edited within Storyline. You can, however, use web objects to embed these projects into Storyline, or use the Insert Flash option (a minimal web object wrapper page is sketched at the end of this article). In both cases, the embedded elements appear seamless to the learner while retaining full functionality.

Build once, and reuse many times

Quizzing is at the heart of many e-learning courses, and often the quiz questions need to be randomized or even reused in different sections of a single course (for example, the same questions in a pre-test and a post-test). The concept of building once and reusing many times works well with several aspects of Storyline. We'll start with quizzing and a feature called Question Banks.

Question Banks

A Question Bank offers a way to pool, reuse, and randomize questions within a project. Slides in a question bank are housed within the project file but are not visible until they are placed into the story. Question Banks can include groups of quiz slides and regular slides (for example, you might include a regular slide to provide instructions for the quiz or a post-quiz summary). When you want to include questions from a Question Bank, you just need to insert a new Quizzing slide and then choose Draw from Bank. You can then select one or more questions to include and randomize them if desired.

Follow along…

In this exercise we will remove three questions from a scene and move them into a question bank. This will allow you to draw one or more of those questions at any point in the project where the quiz questions are needed:

1. From the Home tab, choose Question Banks, and then Create Question bank. Title this Identity Theft Questions.
2. Notice that a new tab has opened in Normal View. The Question Bank appears in this tab.
3. Click on the Import link and navigate to question slides 2, 3, and 4. From the Import drop-down menu at the top, select Move questions into question bank.
4. Click on the Story View tab and notice that the three slides containing the quiz questions are no longer in the story. Click back on the Identity Theft tab and notice that they are now located here. The questions will not become part of the story until the next step, when you draw them from the bank.
5. In Story View, click once on slide 1 to select it, and then from the Home tab, choose Question Banks and New Draw from Question Bank.
6. From the Question Bank drop-down menu, select Identity Theft Questions.
   All questions will be selected by default and will be randomized after being placed into the story. This means that the learner will need to answer three questions before continuing on to the next slide in the story.
7. Click on Insert. The Question Bank draw is inserted as slide 2.
8. To see how this works, Preview the scene.
9. Save the file as Exercise 11 – Identity Theft Quiz.

There are multiple ways to get back to the questions in a question bank: you can select the tab the questions are located in (in this case, Identity Theft), view the question bank slide in Normal View, or choose Question Banks from the Home tab and navigate to the name of the question bank you'd like to edit.
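Earlier, we noted that Flash, Captivate, and Camtasia projects are best brought in through a web object rather than imported. A web object can point at a URL or at a local folder whose entry page is named index.html, so a tiny wrapper page is usually all you need. The following is a minimal sketch and not Storyline-specific code; the file name demo.htm is a hypothetical placeholder standing in for whatever your authoring tool publishes, so adjust it to match your own output.

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Embedded interaction</title>
  <style>
    /* Let the embedded content fill whatever size the web object is given */
    html, body { margin: 0; padding: 0; height: 100%; }
    iframe { display: block; width: 100%; height: 100%; border: 0; }
  </style>
</head>
<body>
  <!-- demo.htm is a placeholder for the page your Captivate or Camtasia
       publish step produces; keep its support files in the same folder -->
  <iframe src="demo.htm" allowfullscreen></iframe>
</body>
</html>

Save this as index.html in the same folder as the published output and point the web object at that folder; Storyline should then bundle the whole folder into your published course.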