
How-To Tutorials - Server-Side Web Development

Yii 1.1: Using Zii Components

Packt
05 Sep 2011
10 min read
Introduction

Yii has a useful library called Zii. It's bundled with the framework and includes classes aimed at making the developer's life easier. Its most handy components are the grids and lists, which let you display data in both the admin and user-facing parts of a website quickly and efficiently. In this article you'll learn how to use and adjust these components to fit your needs. You'll also learn about data providers. They are part of the core framework, not Zii, but since they are used extensively with grids and lists, we'll review them here. We'll use the Sakila sample database, version 0.8, available from the official MySQL website: http://dev.mysql.com/doc/sakila/en/sakila.html.

Using data providers

Data providers encapsulate common data model operations such as sorting, pagination, and querying. They are used extensively with grids and lists. Because both the widgets and the providers are standardized, you can display the same data using different widgets, and you can feed the same widget from different providers; switching either one is relatively transparent. Currently CActiveDataProvider, CArrayDataProvider, and CSqlDataProvider are implemented, getting their data from ActiveRecord models, arrays, and SQL queries respectively. Let's try all three providers to fill a grid with data.

Getting ready

Create a new application using yiic webapp as described in the official guide. Download the Sakila database from http://dev.mysql.com/doc/sakila/en/sakila.html and execute the downloaded SQL files: first the schema, then the data. Configure the DB connection in protected/config/main.php. Use Gii to create a model for the film table.

How to do it...

Let's start with a view for the grid controller.
Create protected/views/grid/index.php:

<?php $this->widget('zii.widgets.grid.CGridView', array(
    'dataProvider' => $dataProvider,
)); ?>

Then create protected/controllers/GridController.php:

<?php
class GridController extends Controller
{
    public function actionAR()
    {
        $dataProvider = new CActiveDataProvider('Film', array(
            'pagination'=>array(
                'pageSize'=>10,
            ),
            'sort'=>array(
                'defaultOrder'=>array('title'=>false),
            ),
        ));
        $this->render('index', array(
            'dataProvider' => $dataProvider,
        ));
    }

    public function actionArray()
    {
        $yiiDevelopers = array(
            array('name'=>'Qiang Xue', 'id'=>'2', 'forumName'=>'qiang', 'memberSince'=>'Jan 2008', 'location'=>'Washington DC, USA', 'duty'=>'founder and project lead', 'active'=>true),
            array('name'=>'Wei Zhuo', 'id'=>'3', 'forumName'=>'wei', 'memberSince'=>'Jan 2008', 'location'=>'Sydney, Australia', 'duty'=>'project site maintenance and development', 'active'=>true),
            array('name'=>'Sebastián Thierer', 'id'=>'54', 'forumName'=>'sebas', 'memberSince'=>'Sep 2009', 'location'=>'Argentina', 'duty'=>'component development', 'active'=>true),
            array('name'=>'Alexander Makarov', 'id'=>'415', 'forumName'=>'samdark', 'memberSince'=>'Mar 2010', 'location'=>'Russia', 'duty'=>'core framework development', 'active'=>true),
            array('name'=>'Maurizio Domba', 'id'=>'2650', 'forumName'=>'mdomba', 'memberSince'=>'Aug 2010', 'location'=>'Croatia', 'duty'=>'core framework development', 'active'=>true),
            array('name'=>'Y!!', 'id'=>'1644', 'forumName'=>'Y!!', 'memberSince'=>'Aug 2010', 'location'=>'Germany', 'duty'=>'core framework development', 'active'=>true),
            array('name'=>'Jeffrey Winesett', 'id'=>'15', 'forumName'=>'jefftulsa', 'memberSince'=>'Sep 2010', 'location'=>'Austin, TX, USA', 'duty'=>'documentation and marketing', 'active'=>true),
            array('name'=>'Jonah Turnquist', 'id'=>'127', 'forumName'=>'jonah', 'memberSince'=>'Sep 2009 - Aug 2010', 'location'=>'California, US', 'duty'=>'component development', 'active'=>false),
            array(
                'name'=>'István Beregszászi', 'id'=>'1286', 'forumName'=>'pestaa', 'memberSince'=>'Sep 2009 - Mar 2010', 'location'=>'Hungary', 'duty'=>'core framework development', 'active'=>false,
            ),
        );
        $dataProvider = new CArrayDataProvider($yiiDevelopers, array(
            'sort'=>array(
                'attributes'=>array('name', 'id', 'active'),
                'defaultOrder'=>array('active' => true, 'name' => false),
            ),
            'pagination'=>array(
                'pageSize'=>10,
            ),
        ));
        $this->render('index', array(
            'dataProvider' => $dataProvider,
        ));
    }

    public function actionSQL()
    {
        $count=Yii::app()->db->createCommand('SELECT COUNT(*) FROM film')->queryScalar();
        $sql='SELECT * FROM film';
        $dataProvider=new CSqlDataProvider($sql, array(
            'keyField'=>'film_id',
            'totalItemCount'=>$count,
            'sort'=>array(
                'attributes'=>array('title'),
                'defaultOrder'=>array('title' => false),
            ),
            'pagination'=>array(
                'pageSize'=>10,
            ),
        ));
        $this->render('index', array(
            'dataProvider' => $dataProvider,
        ));
    }
}

Now run the grid/aR, grid/array, and grid/sql actions and try using the grids.

How it works...

The view is pretty simple and stays the same for all data providers: we call the grid widget and pass a data provider instance to it. Let's review the actions one by one, starting with actionAR:

$dataProvider = new CActiveDataProvider('Film', array(
    'pagination'=>array(
        'pageSize'=>10,
    ),
    'sort'=>array(
        'defaultOrder'=>array('title'=>false),
    ),
));

CActiveDataProvider works with ActiveRecord models. The model class is passed as the first argument of the constructor. The second argument is an array that initializes the provider's public properties. In the code above, we set pagination to 10 items per page and default sorting by title. Note that instead of a string we use an array where keys are column names and values are true or false: true means the order is DESC, while false means ASC. Defining the default order this way allows Yii to render a triangle in the column header showing the sorting direction.
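Outside of PHP, the mechanics of a data provider can be sketched in a few lines of JavaScript. This is a conceptual illustration only, not Yii code: the class name and API below are invented for the example, but the behavior (sort the full data set, then hand one page of it to a widget, with true meaning DESC as in Yii's defaultOrder) mirrors what the providers above do.

```javascript
// Conceptual sketch of a data provider: sort, then paginate.
class ArrayDataProvider {
  constructor(data, { sortKey, desc = false, pageSize = 10 } = {}) {
    this.data = [...data].sort((a, b) => {
      const cmp = a[sortKey] < b[sortKey] ? -1 : a[sortKey] > b[sortKey] ? 1 : 0;
      return desc ? -cmp : cmp;   // true = DESC, false = ASC
    });
    this.pageSize = pageSize;
  }
  page(n) {                        // 0-based page of sorted rows
    return this.data.slice(n * this.pageSize, (n + 1) * this.pageSize);
  }
  get totalItemCount() { return this.data.length; }
}

const films = [{ title: 'Zorro' }, { title: 'Alien' }, { title: 'Brazil' }];
const provider = new ArrayDataProvider(films, { sortKey: 'title', pageSize: 2 });
console.log(provider.page(0).map(f => f.title)); // [ 'Alien', 'Brazil' ]
```

A grid widget only needs this small surface (a page of rows plus a total count), which is why the same grid can sit on top of an array, an ActiveRecord query, or raw SQL.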
In actionArray, we use CArrayDataProvider, which can consume any array:

$dataProvider = new CArrayDataProvider($yiiDevelopers, array(
    'sort'=>array(
        'attributes'=>array('name', 'id', 'active'),
        'defaultOrder'=>array('active' => true, 'name' => false),
    ),
    'pagination'=>array(
        'pageSize'=>10,
    ),
));

The first argument accepts an array of rows, each an associative array whose keys are column names and whose values are the corresponding cell values. The second argument accepts an array with the same options as in the CActiveDataProvider case. In actionSQL, we use CSqlDataProvider, which consumes an SQL query and modifies it automatically to allow pagination. The first argument accepts a string with the SQL, and the second an array of data provider parameters. This time we need to supply totalItemCount with the total count of records manually; for this purpose we execute an extra SQL query. We also need to define keyField, since the primary key of this table is not id but film_id. To sum up, all data providers accept the following properties:

pagination: a CPagination object, or an array of initial values for a new CPagination instance.
sort: a CSort object, or an array of initial values for a new CSort instance.
totalItemCount: needs to be set manually only if the provider, such as CSqlDataProvider, does not implement the calculateTotalItemCount method.

There's more...

You can use data providers without any special widgets.
Replace the protected/views/grid/index.php content with the following:

<?php foreach($dataProvider->data as $film): ?>
    <?php echo $film->title; ?>
<?php endforeach; ?>
<?php $this->widget('CLinkPager', array(
    'pages'=>$dataProvider->pagination,
)); ?>

Further reading

To learn more about data providers, refer to the following API pages:

http://www.yiiframework.com/doc/api/CDataProvider
http://www.yiiframework.com/doc/api/CActiveDataProvider
http://www.yiiframework.com/doc/api/CArrayDataProvider
http://www.yiiframework.com/doc/api/CSqlDataProvider
http://www.yiiframework.com/doc/api/CSort
http://www.yiiframework.com/doc/api/CPagination

Using grids

Zii grids are very useful for quickly creating efficient application admin pages, or any pages on which you need to manage data. Let's use Gii to generate a grid, see how it works, and see how we can customize it.

Getting ready

Carry out the following steps: Create a new application using yiic webapp as described in the official guide. Download the Sakila database from http://dev.mysql.com/doc/sakila/en/sakila.html and execute the downloaded SQL files: first the schema, then the data. Configure the DB connection in protected/config/main.php. Use Gii to create models for the customer, address, and city tables.

How to do it...

Open Gii, select Crud Generator, and enter Customer into the Model Class field. Press Preview and then Generate. Gii will generate a controller in protected/controllers/CustomerController.php and a group of views under protected/views/customer/. Run the customer controller and follow the Manage Customer link. After logging in, you should see the generated grid.

How it works...
Let's start with the admin action of the customer controller:

public function actionAdmin()
{
    $model=new Customer('search');
    $model->unsetAttributes();  // clear any default values
    if(isset($_GET['Customer']))
        $model->attributes=$_GET['Customer'];
    $this->render('admin',array(
        'model'=>$model,
    ));
}

The Customer model is created with the search scenario, all attribute values are cleared, and the model is then filled with data from $_GET. On the first request $_GET is empty, but when you change the page, or filter by the first name attribute using the input field below the column name, GET parameters such as the following are passed to the same action via an AJAX request:

Customer[address_id]=
Customer[customer_id]=
Customer[email]=
Customer[first_name]=alex
Customer[last_name]=
Customer[store_id]=
Customer_page=2
ajax=customer-grid

Since the scenario is search, the corresponding validation rules from Customer::rules are applied. For the search scenario, Gii generates a safe rule that allows mass assignment of all fields:

array('customer_id, store_id, first_name, last_name, email, address_id, active, create_date, last_update', 'safe', 'on'=>'search'),

Then the model is passed to the view protected/views/customer/admin.php. It renders the advanced search form and then passes the model to the grid widget:

<?php $this->widget('zii.widgets.grid.CGridView', array(
    'id'=>'customer-grid',
    'dataProvider'=>$model->search(),
    'filter'=>$model,
    'columns'=>array(
        'customer_id',
        'store_id',
        'first_name',
        'last_name',
        'email',
        'address_id',
        /*
        'active',
        'create_date',
        'last_update',
        */
        array(
            'class'=>'CButtonColumn',
        ),
    ),
)); ?>

The columns used in the grid are passed to columns. When just a name is passed, the corresponding field from the data provider is used. We can also use a custom column by specifying its class.
In this case we are using CButtonColumn, which renders view, update, and delete buttons. These are linked to the actions of the same names and pass the row ID to them, so each action operates on the model representing that specific database row. The filter property accepts a model filled with data; if it's set, the grid displays text fields at the top of each column that the user can fill in to filter the grid. The dataProvider property takes a data provider instance. In our case it is returned by the model's search method:

public function search()
{
    // Warning: Please modify the following code to remove attributes that
    // should not be searched.
    $criteria=new CDbCriteria;
    $criteria->compare('customer_id',$this->customer_id);
    $criteria->compare('store_id',$this->store_id);
    $criteria->compare('first_name',$this->first_name,true);
    $criteria->compare('last_name',$this->last_name,true);
    $criteria->compare('email',$this->email,true);
    $criteria->compare('address_id',$this->address_id);
    $criteria->compare('active',$this->active);
    $criteria->compare('create_date',$this->create_date,true);
    $criteria->compare('last_update',$this->last_update,true);
    return new CActiveDataProvider(get_class($this), array(
        'criteria'=>$criteria,
    ));
}

This method is called after the model has been filled with $_GET data from the filtering fields, so we can use the field values to form the criteria for the data provider. In this case, all numeric values are compared exactly, while string values are compared using a partial match (the third argument of compare).
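The filtering rule that search() implements can be sketched in plain JavaScript. This is an illustration of the idea, not Yii's CDbCriteria: empty filter values impose no constraint, numeric columns are compared exactly, and string columns use a case-insensitive partial (substring) match.

```javascript
// Sketch of compare()-style filtering: exact vs. partial match.
function matches(row, filters, partialFields) {
  return Object.entries(filters).every(([field, value]) => {
    if (value === '' || value == null) return true;       // empty filter: no constraint
    if (partialFields.includes(field)) {
      return String(row[field]).toLowerCase().includes(String(value).toLowerCase());
    }
    return String(row[field]) === String(value);          // exact match
  });
}

const customers = [
  { customer_id: 1, first_name: 'Alexander', store_id: 1 },
  { customer_id: 2, first_name: 'Maria',     store_id: 2 },
];
const hits = customers.filter(c =>
  matches(c, { first_name: 'alex', store_id: '' }, ['first_name']));
console.log(hits.map(c => c.customer_id)); // [ 1 ]
```

In the real grid, this logic runs in SQL (compare() builds LIKE conditions for the partial-match columns), but the effect on each row is the same.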


An Overview of the Node Package Manager

Packt
11 Aug 2011
5 min read
Node Web Development

npm package format

An npm package is a directory structure with a package.json file describing the package. This is exactly what we just referred to as a Complex Module, except that npm recognizes many more package.json tags than Node does. The starting point for npm's package.json is the CommonJS Packages/1.0 specification. The documentation for npm's package.json implementation is accessed with the following command:

$ npm help json

A basic package.json file is as follows:

{ name: "packageName",
  version: "1.0",
  main: "mainModuleName",
  modules: {
    "mod1": "lib/mod1",
    "mod2": "lib/mod2"
  }
}

The file is in JSON format which, as a JavaScript programmer, you should already have seen a few hundred times. The most important tags are name and version. The name will appear in URLs and command names, so choose one that's safe for both. If you want to publish a package in the public npm repository, it's helpful to check whether a particular name is already being used, at http://search.npmjs.org or with the following command:

$ npm search packageName

The main tag is treated the same as in complex modules. It references the module that will be returned when invoking require('packageName'). Packages can contain many modules within themselves, and those can be listed in the modules list. Packages can be bundled as tar-gzip tarballs, especially to send them over the Internet. A package can declare dependencies on other packages, so that npm can automatically install other modules required by the module being installed. Dependencies are declared as follows:

"dependencies": {
  "foo": "1.0.0 - 2.9999.9999",
  "bar": ">=1.0.2 <2.1.2"
}

The description and keywords fields help people find the package when searching an npm repository (http://search.npmjs.org).
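To make the shape of these files concrete, here is a small JavaScript sketch that checks a package.json-like object for the fields discussed above. The field names (name, version, main, dependencies) are real npm/CommonJS tags; the validation rules themselves are simplified illustrations for this example, not npm's own checks.

```javascript
// Sketch: a minimal sanity check of the package.json fields discussed above.
function checkPackage(pkg) {
  const problems = [];
  if (!pkg.name || /[^a-z0-9._-]/i.test(pkg.name)) {
    problems.push('name must be URL- and command-safe');
  }
  if (!pkg.version) problems.push('version is required');
  for (const [dep, range] of Object.entries(pkg.dependencies || {})) {
    if (typeof range !== 'string' || range.length === 0) {
      problems.push(`dependency ${dep} needs a version range`);
    }
  }
  return problems;
}

const pkg = {
  name: 'dogwalker',                 // hypothetical package name
  version: '1.0.0',
  main: 'lib/dogwalker',
  dependencies: { foo: '1.0.0 - 2.9999.9999', bar: '>=1.0.2 <2.1.2' },
};
console.log(checkPackage(pkg)); // []
```

A name containing spaces or slashes would fail the first test, which is exactly why the text advises picking a name that is safe for both URLs and command lines.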
Ownership of a package can be documented in the homepage, author, or contributors fields:

"description": "My wonderful package walks dogs",
"homepage": "http://npm.dogs.org/dogwalker/",
"author": [email protected]

Some npm packages provide executable programs meant to be in the user's PATH. These are declared using the bin tag, a map of command names to the scripts which implement those commands. The command scripts are installed into the directory containing the node executable, under the names given:

bin: {
  'nodeload.js': './nodeload.js',
  'nl.js': './nl.js'
},

The directories tag documents the package directory structure. The lib directory is automatically scanned for modules to load. There are other directory tags for binaries, manuals, and documentation:

directories: {
  lib: './lib',
  bin: './bin'
},

The scripts tag lists script commands run at various events in the lifecycle of the package. These events include install, activate, uninstall, update, and more. For more information about script commands, use the following command:

$ npm help scripts

This was only a taste of the npm package format; see the documentation (npm help json) for more.

Finding npm packages

By default, npm modules are retrieved over the Internet from the public package registry maintained at http://npmjs.org. If you know the module name, it can be installed simply by typing the following:

$ npm install moduleName

But what if you don't know the module name? How do you discover the interesting modules? The website http://npmjs.org publishes an index of the modules in that registry, and the http://search.npmjs.org site lets you search that index. npm also has a command-line search function to consult the same index:

$ npm search mp3
mediatags  Tools extracting for media meta-data tags  =coolaj86 util m4a aac mp3 id3 jpeg exiv xmp
node3p     An Amazon MP3 downloader for NodeJS.
=ncb000gt

Of course, upon finding a module, it's installed as follows:

$ npm install mediatags

After installing a module, one may want to see the documentation, which would be on the module's website. The homepage tag in package.json lists that URL. The easiest way to look at the package.json file is with the npm view command, as follows:

$ npm view zombie
...
{ name: 'zombie',
  description: 'Insanely fast, full-stack, headless browser testing using Node.js',
  ...
  version: '0.9.4',
  homepage: 'http://zombie.labnotes.org/',
  ...
npm ok

You can use npm view to extract any tag from package.json, like the following, which lets you view just the homepage tag:

$ npm view zombie homepage
http://zombie.labnotes.org/

Using the npm commands

The main npm command has a long list of sub-commands for specific package management operations. These cover every aspect of the lifecycle of publishing packages (as a package author) and downloading, using, or removing packages (as an npm consumer).

Getting help with npm

Perhaps the most important thing is to learn where to turn to get help. The main help is delivered along with the npm command itself. For most of the commands you can access the help text by typing the following:

$ npm help <command>

The npm website (http://npmjs.org/) has a FAQ that is also delivered with the npm software. Perhaps the most important question (and answer) is:

Why does npm hate me?
npm is not capable of hatred. It loves everyone, even you.


Understanding and Developing Node Modules

Packt
11 Aug 2011
5 min read
Node Web Development: a practical introduction to Node, the exciting new server-side JavaScript web development stack.

What's a module?

Modules are the basic building block for constructing Node applications. We have already seen modules in action; every JavaScript file we use in Node is itself a module. It's time to see what they are and how they work. The following code, which pulls in the fs module, gives us access to its functions:

var fs = require('fs');

The require function searches for modules and loads the module definition into the Node runtime, making its functions available. The fs object (in this case) contains the code (and data) exported by the fs module. Let's look at a brief example before we start diving into the details. Ponder over this module, simple.js:

var count = 0;
exports.next = function() { return count++; }

This defines an exported function and a local variable. Now let's use it:

var s = require('./simple');
s.next();

The object returned from require('./simple') is the same object, exports, we assigned a function to inside simple.js. Each call to s.next calls the next function in simple.js, which returns (and increments) the value of the count variable, explaining why s.next returns progressively bigger numbers. The rule is that anything (functions, objects) assigned as a field of exports is exported from the module, while objects inside the module not assigned to exports are not visible to any code outside the module. This is an example of encapsulation. Now that we've got a taste of modules, let's take a deeper look.

Node modules

Node's module implementation is strongly inspired by, but not identical to, the CommonJS module specification. The differences between them might only matter if you need to share code between Node and other CommonJS systems. A quick scan of the Modules/1.1.1 spec indicates that the differences are minor, and for our purposes it's enough to just get on with the task of learning to use Node without dwelling too long on the differences.
How does Node resolve require('module')? In Node, modules are stored in files, one module per file. There are several ways to specify module names, and several ways to organize the deployment of modules in the file system. It's quite flexible, especially when used with npm, the de facto standard package manager for Node.

Module identifiers and path names

Generally speaking, the module name is a path name with the file extension removed. That is, when we write require('./simple'), Node knows to add .js to the name and load simple.js. Modules whose file names end in .js are of course expected to be written in JavaScript. Node also supports binary native-code libraries as Node modules; in this case the file name extension is .node. It's outside our scope to discuss implementing a native-code Node module, but this gives you enough knowledge to recognize them when you come across them. Some Node modules are not files in the file system at all, but are baked into the Node executable. These are the Core modules, the ones documented on nodejs.org. Their original existence is as files in the Node source tree, but the build process compiles them into the binary Node executable.

There are three types of module identifiers: relative, absolute, and top-level. Relative module identifiers begin with "./" or "../", and absolute identifiers begin with "/". These follow POSIX file system semantics, with path names being relative to the file being executed; absolute module identifiers are, obviously, relative to the root of the file system. Top-level module identifiers do not begin with ".", "..", or "/" and are simply the module name. These modules are stored in one of several directories designated by Node to hold them, such as a node_modules directory, or the directories listed in the require.paths array.
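The three kinds of identifiers can be told apart purely by their prefix, which a few lines of JavaScript make explicit. The classify helper below is illustrative only, not part of Node's API:

```javascript
// Sketch: distinguishing the three module identifier types by prefix.
function classify(id) {
  if (id.startsWith('./') || id.startsWith('../')) return 'relative';
  if (id.startsWith('/')) return 'absolute';
  return 'top-level';
}

console.log(classify('./simple'));      // relative
console.log(classify('../utils'));      // relative
console.log(classify('/opt/app/mod'));  // absolute
console.log(classify('fs'));            // top-level
```

Only the top-level case triggers a search through the designated module directories; the other two resolve directly against the file system.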
Local modules within your application

The universe of all possible modules splits neatly into two kinds: modules that are part of a specific application, and modules that aren't. Hopefully, the modules that aren't part of a specific application were written to serve a generalized purpose. Let's begin with the implementation of modules used within your application. Typically your application will have a directory structure of module files sitting next to each other in the source control system, which is then deployed to servers. These modules know the relative paths to their sibling modules within the application, and should use that knowledge to refer to each other using relative module identifiers.

For example, to help us understand this, let's look at the structure of an existing Node package, the Express web application framework. It includes several modules structured in a hierarchy that the Express developers found useful. You can imagine creating a similar hierarchy for applications reaching a certain level of complexity, subdividing the application into chunks larger than a module but smaller than an application. Unfortunately there isn't a word in Node to describe such a chunk, so we're left with the clumsy phrase "subdivide into chunks larger than a module". Each such chunk would be implemented as a directory with a few modules in it. In this example, the most likely relative module reference is to utils.js. Depending on the source file that wants to use utils.js, it would use one of the following require statements:

var utils = require('./lib/utils');
var utils = require('./utils');
var utils = require('../utils');


Integrating phpList 2 with Facebook

Packt
02 Aug 2011
3 min read
PHPList 2 E-mail Campaign Manager

Prerequisites

For this section, we'll make the following assumptions: we already have a Facebook account, we already have a Facebook page, and we already have the Facebook developer's application installed (http://www.facebook.com/developers).

Preparing phpList

Facebook allows us to include any content on our page in an iframe with a maximum width of 520 px. phpList's default theming won't fit in 520 px, so it will either get cut off and look odd, or will invoke awkward scroll bars in the middle of the page. To resolve this, we'll need to change the styling of phpList. Note that this will also affect how the public URLs of our phpList site are displayed outside of Facebook (that is, the default sign-up page).

Navigate to the configure page in phpList's admin interface, using the right-hand navigation panel. Scroll down to the section starting with Header of public pages and click on edit. This is the HTML code that is added to the top of every public (not admin) page. In the following example, we've removed the background image, the HTML table, and the images, but left the CSS styling unchanged. Having saved your changes, edit the next section, labeled Footer of public pages, and make the corresponding changes. Remember that the actual content the user sees will be "sandwiched" between the header and the footer. This means that if you open a tag in the header, you need to make sure it's closed again in the footer; in this example, we just closed the HTML body tag in the footer. Having changed the header and footer of the public pages, browse to your default public page (http://your-site.com/lists/, for example) to see how the page looks. Note that there is hardly any styling now, but also that there are no fixed-width elements to cause issues with our iframe. Tweaking the design using the public header and footer code is left as a task for the reader.
Creating the Facebook app

Before we can add phpList to our Facebook page, we need to create an App. From http://www.facebook.com/developers/, click on the Set Up New App button. Click the radio button to indicate that you agree to the Facebook terms, click on Create App, and then prove you're a human by completing the CAPTCHA. You're taken to the edit page for your new app. Under the Facebook Integration section, enter at least the following details:

IFrame Size: Auto-resize (we don't want scrollbars)
Tab Name: what the tab will be called when your page is displayed
Tab URL: the full URL to the page you want loaded in your iframe; generally, this would be a phpList subscribe page

Once you've saved your changes, you're taken to your list of applications. Click on Application Profile Page to see the application as other Facebook users would see it (as opposed to the developer's view). On the application's profile page, click on Add to My Page to add this application to your Facebook page. When prompted, choose which of your pages you want to add the application to. In this example, we've created a page for this article, so we add the application to that page.


Integrating phpList 2 with WordPress

Packt
29 Jul 2011
3 min read
Prerequisites for this WordPress tutorial

For this tutorial, we'll make the following assumptions: we already have a working instance of WordPress (version 3.x), and our phpList site is accessible over HTTP/HTTPS from our WordPress site.

Installing and configuring the phpList Integration plugin

Download the latest version of Jesse Heap's phpList Integration plugin from http://wordpress.org/extend/plugins/phplist-form-integration/, unpack it, and upload the contents to the wp-content/plugins/ directory in WordPress. Activate the plugin from within your WordPress dashboard. Under the Settings menu, click on the new PHPlist link to configure the plugin.

General Settings

Under the General Settings heading, enter the URL of your phpList installation, as well as an admin username/password combination. Enter the ID and name of at least one list that you want to allow your WordPress users to subscribe to.

Why does the plugin require my admin login and password? The admin login and password are used to bypass the confirmation e-mail that would normally be sent to a subscriber. Effectively, the plugin "logs into" phpList as the administrator and then subscribes the user, bypassing confirmation. If you don't want to bypass confirmation e-mails, then you don't need to enter your username and password.

Form Settings

The plugin will work with this section unmodified. However, let's imagine that we also want to capture the subscriber's name. We already have an attribute in phpList called first name, so change the first field label to First Name and the Text Field ID to first name (the same as our phpList attribute name).

Adding a phpList Integration page

The plugin will replace the HTML comment <!--phplist form--> with the generated phpList form. Let's say we wanted our phpList form to show up at http://ourblog.com/signup.
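The marker-replacement idea is simple enough to sketch in a few lines. This is an illustration in JavaScript (the plugin itself is PHP): the HTML comment marker is swapped for the generated subscribe form, and pages without the marker pass through untouched. The form markup below is an invented example, not the plugin's actual output.

```javascript
// Sketch of the plugin's marker substitution on a page's HTML source.
const MARKER = '<!--phplist form-->';

function injectForm(pageHtml, formHtml) {
  return pageHtml.split(MARKER).join(formHtml);  // replace every occurrence
}

const page = '<h1>Signup</h1><p>Join our list:</p>' + MARKER;
const form = '<form action="/lists/?p=subscribe"><input name="email"></form>';
console.log(injectForm(page, form).includes('<form')); // true
console.log(injectForm('<p>no marker</p>', form));     // <p>no marker</p>
```

Because the marker is an HTML comment, a page where the plugin is disabled still renders cleanly; browsers simply ignore the comment.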
Create a new WordPress page called Signup, add the content you want displayed, and then click on the HTML tab to edit the HTML source. You will see the HTML source of your page displayed. Insert the text "<!--phplist form-->" where you want the form to be displayed and save the page.

HTML comments: the "<!--some text-->" syntax designates an HTML comment, which is not displayed when the HTML is processed by the browser/viewer. This means that you won't see your comment when you view your page in Visual mode.

Once the page has been updated, click on the View page link to display the page in WordPress. The subscribe form will be inserted in the page at the location where you added the comment.

Adding a phpList Integration widget

Instead of a dedicated page to sign up new subscribers, you may want to use a sidebar widget, so that the subscription options can show up on multiple pages of your WordPress site. To add the phpList Integration widget, go to your WordPress site's Appearance option and then to the Widgets page. Drag the PHPList Integration widget to your preferred widget location (these vary depending on your theme). You can change the Title of the widget before you click on Close to finish. Now that you've added the PHPList Integration widget to the widget area, your sign-up form will be displayed on all WordPress pages that include that widget area.

Further resources on this subject:

Integrating phpList 2 with Drupal
phpList 2 E-mail Campaign Manager: Personalizing E-mail Body
Tcl: Handling Email
Email, Languages, and JFile with Joomla!


Integrating phpList 2 with Drupal

Packt
28 Jul 2011
3 min read
PHPList 2 E-mail Campaign Manager: get to grips with the PHPList e-mail announcement delivery system!

Prerequisites

For this article, we'll make the following assumptions: we already have a working instance of Drupal (version 6), and we are hosting our Drupal site and our phpList site on the same web server with the same URL base. That is, our Drupal site is accessible at http://yoursite.com and our phpList installation is accessible at http://yoursite.com/lists/. We chose to document the Drupal-phpList integration using Drupal 6, even though Drupal 7 has recently been released. This is because (a) the phpList module for Drupal 7 is still marked as "development" and (b) Drupal 6 has been the official stable version for three years and has a more familiar interface than 7 at this time. However, the method described below for Drupal 6 will also work on Drupal 7.

Installing and configuring the phpList integration module

Go to http://drupal.org/project/phplist and download the latest stable version of the module for Drupal 6.x. Unpack the tar.gz file and you should have a folder called phplist inside. Upload this folder to your Drupal installation's modules directory and then navigate to Administer | Site building | Modules. At the bottom of the modules list, you'll find the Mail and phpList headings with a single phpList module under each. Check both and click on Save configuration.

External phpList configuration

Navigate to Administer | Site configuration | PHPlist to set up the database credentials and other options required for the integration. You are prompted for your phpList database details: enter your database host, database name, username, and password. Unless you've done a non-standard installation of phpList, the default entries for the prefix and user table prefix will already be correct. Under PHPList URL, enter the URL of your phpList installation.
Because we are using phpList and Drupal at the same base URL, we set /lists/ (with a trailing slash) as our PHPList URL: The database check illustrated in the preceding screenshot (red text reading Password is not set) is done after you save your settings. The module tries to connect to the phpList database using the details provided and will warn you if it fails. Scroll to the bottom, ignoring the other options for now, and click on Save configuration: If the database connection test was successful, the External PHPList configuration options will be collapsed into a single, clickable field, hiding them from normal view, as you configure the remaining options: Attribute mapping The module can auto-create attributes in phpList to match any attributes created in your Drupal instance. For example, you may ask your Drupal members to enter demographic information when registering and this mapping would allow these details to be transferred to the phpList. Note that the module warns you that this mapping only works well with textline attributes and not select attributes or radio buttons. Use of this feature is not covered in this article, as it requires advanced pre-configuration of your Drupal instance:
Packt
26 Jul 2011
5 min read

phpList 2 E-mail Campaign Manager: Personalizing E-mail Body

Enhancing messages using built-in placeholders

For simple functionality's sake, we generally want our phpList messages to contain at least a small amount of customization. For example, even the default footer, which phpList attaches to messages, contains three placeholders, customizing each message for each recipient:

--
If you do not want to receive any more newsletters, [UNSUBSCRIBE]
To update your preferences and to unsubscribe, visit [PREFERENCES]
Forward a Message to Someone [FORWARD]

The placeholders [UNSUBSCRIBE], [PREFERENCES], and [FORWARD] will be replaced with unique URLs per subscriber, allowing any subscriber to immediately unsubscribe, adjust their preferences, or forward a message to a friend simply by clicking on a link. There's a complete list of available placeholders documented on phpList's wiki page at http://docs.phplist.com/Placeholders. Here are some of the most frequently used ones:

[CONTENT]: Use this while creating standard message templates. You can design a styled template which is re-used for every mailing, and the [CONTENT] placeholder will be replaced with the unique content for that particular message.

[EMAIL]: This is replaced by the user's e-mail address. It can be very helpful in the footer of an e-mail, so that subscribers know which e-mail address they used to sign up for list subscription.

[LISTS]: The lists to which a member is subscribed. Having this information attached to system confirmation messages makes it easy for subscribers to manage their own subscriptions. Note that this placeholder is only applicable in system messages and not in general list messages.

[UNSUBSCRIBEURL]: Almost certainly, you'll want to include some sort of "click here to unsubscribe" link on your messages, either as a pre-requisite for sending bulk mail (perhaps imposed by your ISP) or to avoid users inadvertently reporting you for spamming.

[UNSUBSCRIBE]: This placeholder generates the entire hyperlink for you (including the link text, "unsubscribe"), whereas the [UNSUBSCRIBEURL] placeholder simply generates the URL. You would use the URL only if you wanted to link an image to the unsubscription page, as opposed to a simple link, or if you wanted the HTML link text to be something other than "unsubscribe".

[USERTRACK]: This inserts an invisible tracker image into HTML messages, helping you to measure the effectiveness of your newsletter.

You might combine several of these placeholders to add a standard signature to your messages, as follows:

--
You ([EMAIL]) are receiving this message because you subscribed to one or more of our mailing lists. We only send messages to subscribers who have requested and confirmed their subscription (double-opt-in). You can adjust your list membership at any time by clicking on [PREFERENCES] or unsubscribe altogether by clicking on [UNSUBSCRIBE].
--

Placeholders in confirmation messages

Some placeholders (such as [LISTS]) are only applicable in confirmation messages (that is, "thank you for subscribing to the following lists..."). These placeholders allow you to customize the following:

Request to confirm: Sent initially to users when they subscribe, confirming their e-mail address and subscription request
Confirmation of subscription: Sent to users to confirm that they've been successfully added to the requested lists (after they've confirmed their e-mail address)
Confirmation of preferences update: Sent to users to confirm their updates when they change their list subscriptions/preferences themselves
Confirmation of unsubscription: Sent to users after they've unsubscribed, to confirm that their e-mail address will no longer receive messages from phpList

Personalizing messages using member attributes

Apart from the built-in placeholders, you can also use any member attributes to further personalize your messages. Say you captured the following attributes from your new members:

First Name
Last Name
Hometown
Favorite Food

You could craft a personalized message as follows:

Dear [FIRST NAME],
Hello from your friends at the Funky Town Restaurant. We hope the [LAST NAME] family is well in the friendly town of [HOMETOWN]. If you're ever in the mood for a fresh [FAVORITE FOOD], please drop in - we'd be happy to have you!
...

This would appear to different subscribers as:

Dear Bart,
Hello from your friends at the Funky Town Restaurant. We hope the Simpson family is well in the friendly town of Springfield. If you're ever in the mood for a fresh pizza, please drop in - we'd be happy to have you!
...

Or:

Dear Clark,
Hello from your friends at the Funky Town Restaurant. We hope the Kent family is well in the friendly town of Smallville. If you're ever in the mood for a fresh Krypto-Burger, please drop in - we'd be happy to have you!
...

If a user doesn't have an attribute for a particular placeholder, it will be replaced with a blank space. For example, if user "Mary" hadn't entered any attributes, her message would look like:

Dear ,
Hello from your friends at the Funky Town Restaurant. We hope the family is well in the friendly town of . If you're ever in the mood for a fresh , please drop in - we'd be happy to have you!
...

If the attributes on your subscription form are optional, try to structure your content in such a way that a blank placeholder substitution won't ruin the text. For example, the following text will look awkward with blank substitutions:

Your name is [FIRST NAME], your favorite food is [FAVORITE FOOD], and your last name is [LAST NAME]

Whereas the following text would at least "degrade gracefully":

Your name: [FIRST NAME]
Your favorite food: [FAVORITE FOOD]
Your last name: [LAST NAME]
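Under the hood, this kind of personalization is plain string replacement. The following is a toy Java sketch — not phpList's actual PHP implementation — showing both the attribute substitution and the blank-on-missing behaviour described above; the method and class names are invented for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of placeholder substitution (NOT phpList's real code).
// A placeholder with no matching attribute degrades to a blank, as in the
// "Mary" example above.
public class PlaceholderDemo {

    public static String substitute(String template, Map<String, String> attrs) {
        String result = template;
        for (Map.Entry<String, String> e : attrs.entrySet()) {
            result = result.replace("[" + e.getKey() + "]", e.getValue());
        }
        // Any placeholder left over (subscriber had no value) becomes blank.
        return result.replaceAll("\\[[A-Z ]+\\]", "");
    }

    public static void main(String[] args) {
        Map<String, String> bart = new LinkedHashMap<>();
        bart.put("FIRST NAME", "Bart");
        bart.put("LAST NAME", "Simpson");
        String template =
            "Dear [FIRST NAME], greetings to the [LAST NAME] family in [HOMETOWN].";
        // HOMETOWN is missing, so it degrades to a blank:
        System.out.println(substitute(template, bart));
        // -> Dear Bart, greetings to the Simpson family in .
    }
}
```

This also makes the "degrade gracefully" advice concrete: a template written as label-colon lines still reads sensibly after blank substitutions.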
Packt
19 Jul 2011
10 min read

Tips for Deploying Sakai

Sakai CLE Courseware Management: The Official Guide

The benefits of knowing that frameworks exist

Sakai is built on top of numerous third-party open source libraries and frameworks. Why write code for converting XML text files to Java objects, or for connecting to and managing databases, when others have specialized in these technical problems and found appropriate and consistent solutions? This reuse of code saves effort and decreases the complexity of creating new functionality. Using third-party frameworks has other benefits as well; you can choose the best from a series of external libraries, increasing the quality of your own product. The external frameworks have their own communities, who test them actively. Outsourcing generic requirements, such as the rudiments of generating indexes for searching, allows the Sakai community to concentrate on higher-level goals, such as building new tools.

For developers, but also for course instructors and system administrators, it is useful background to know, roughly, what the underlying frameworks do:

For a developer, it makes sense to look at reuse first. Why re-invent the wheel? Why write your own code for manipulating XML files when other developers have already extensively tried, tested, and are running framework Y? Knowing what others have done saves time. This knowledge is especially handy for new-to-Sakai developers who could be tempted to write from scratch.

For the system administrator, each framework has its own strengths, weaknesses, and terminology. Understanding the terminology and technologies gives you a head start in debugging glitches and communicating with the developers.

For a manager, knowing that Sakai has chosen solid and well-respected open source libraries should help influence buying decisions in favor of this platform.

For the course instructor, knowing which frameworks exist and what their potential is helps inform the debate about adding interesting new features. Knowing what Sakai uses and what is possible sharpens the instructors' focus and the ability to define realistic requirements.

For the software engineering student, Sakai represents a collection of best practices and frameworks that will make the student more saleable in the labor market.

Using the third-party frameworks

This section details the frameworks that Sakai is heavily dependent on: Spring (http://www.springsource.org/), Hibernate (http://www.hibernate.org/), and numerous Apache projects (http://www.apache.org/). Java application builders generally understand these frameworks, which makes it relatively easy to hire programmers with experience. All of these projects are open source, and their use does not clash with Sakai's open source license (http://www.opensource.org/licenses/ecl2.php).

The benefit of using Spring

Spring is a tightly architected set of frameworks designed to support the main goals of building modern business applications. Spring has a broad set of abilities, from connecting to databases, to transaction management, business logic, validation, security, and remote access. It fully supports the most modern architectural design patterns. The framework takes away a lot of drudgery for a programmer and enables pieces of code to be plugged in or removed by editing XML configuration files rather than refactoring the raw code base itself.

You can see this for yourself in the user provider within Sakai. When a user logs in, you may want to validate the credentials using a piece of code that connects to a directory service such as LDAP, or replace that code with another piece that gets credentials from an external database, or even one that reads from a text file. Because Sakai's services rely on Spring, you can hand (inject) the wanted code to a service manager, which then calls the code when needed.
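The injection idea just described can be sketched in plain Java. This is a toy, hand-rolled illustration: the class and method names (UserProvider, UserServiceManager, and so on) are hypothetical, not Sakai's actual interfaces, and in real Sakai the wiring shown in main() is performed by Spring from XML bean definitions rather than by hand.

```java
// Hand-rolled sketch of setter injection: the service manager is handed
// whichever provider the deployer configured, instead of hard-coding one.
public class SpringStyleInjectionDemo {

    interface UserProvider {
        boolean authenticate(String user, String password);
    }

    // One possible provider: would talk to a directory service such as LDAP.
    public static class LdapUserProvider implements UserProvider {
        public boolean authenticate(String user, String password) {
            return false; // directory lookup omitted in this sketch
        }
    }

    // A drop-in replacement: reads credentials from a (pretend) text file.
    public static class TextFileUserProvider implements UserProvider {
        public boolean authenticate(String user, String password) {
            return "demo".equals(user) && "demo".equals(password);
        }
    }

    public static class UserServiceManager {
        private UserProvider provider;

        // Spring would call this setter based on the XML configuration,
        // e.g. <property name="provider" ref="textFileUserProvider"/>
        public void setProvider(UserProvider provider) {
            this.provider = provider;
        }

        public boolean login(String user, String password) {
            return provider.authenticate(user, password);
        }
    }

    public static void main(String[] args) {
        UserServiceManager mgr = new UserServiceManager();
        // Swapping the provider here needs no change to UserServiceManager:
        mgr.setProvider(new TextFileUserProvider());
        System.out.println(mgr.login("demo", "demo"));
    }
}
```

The point of the pattern is exactly what the text says: the authentication strategy can be exchanged by configuration, without recompiling the code that uses it.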
In Sakai terminology, within a running application a service manager manages services for a particular type of data. For example, a course service manager allows programmers to add, modify, or delete courses; a user service manager does the same for users. Spring is responsible for deciding which pieces of code it injects into which service manager, so developers need to program only the configuration, not the heavy lifting. The advantage is that later, as part of adapting Sakai to a specific organization, system administrators can also reconfigure authentication or many other services to suit local preferences without recompilation.

Spring abstracts away the underlying differences between databases. This allows you to program once and run against MySQL, Oracle, and so on, without taking the databases' differences into account. Spring can sit on top of Hibernate and other lower-level frameworks, such as JDBC (yet another standard for connecting to databases). This adaptability gives architects more freedom to change and refactor (the process of changing the structure of the code to improve it) without affecting other parts of the code. As Sakai grows in code size, Spring and good architectural design patterns diminish the chance of breaking older code.

To sum up, the Spring framework makes programming more efficient, and Sakai relies heavily on it. Many tasks that programmers would previously have hard-coded are now delegated to XML configuration files.

Hibernate for database coupling

Hibernate is all about coupling databases to the code. It is a powerful, high-performance object/relational persistence and query service. That is to say, a designer describes Java objects in a specific structure within XML files. After reading these files, Hibernate gains the ability to save or load instances of the objects from the database. Hibernate supports complex data structures, such as Java collections and arrays of objects. Again, it is a choice of an external framework that does the programmer's donkey work, mostly via XML configuration.

The many Apache frameworks

Sakai is rightfully biased towards projects associated with the Apache Software Foundation (ASF) (http://www.apache.org/). Sakai instances run within a Tomcat server, and many institutes place an Apache web server in front of the Tomcat server to deal with dishing out static content (content that does not change, such as an ordinary web page), SSL/TLS, ease of configuration, and log parsing. Further, individual internal and external frameworks make use of the Apache Commons frameworks (http://commons.apache.org/), which have reusable libraries for all kinds of specific needs, such as validation, encoding, e-mailing, uploading files, and so on. Even if a developer does not use the Commons libraries directly, they are often called by other frameworks and have a significant impact on the wellbeing, for example the security, of a Sakai instance.

To ensure look-and-feel consistency, designers use common technologies, such as Apache Velocity, Apache Wicket, Apache MyFaces (an implementation of JavaServer Faces), Reasonable Server Faces (RSF), and plain old JavaServer Pages (JSP). Apache Velocity places much of the look and feel in text templates that non-programmers can then manipulate with text editors. The use of Velocity is mostly superseded by JSF. However, as Sakai moves forward, technologies such as RSF and Wicket (http://wicket.apache.org/) are playing a predominant role.

Sakai uses XML as the format of choice to support much of its functionality, from configuration files, to the backing up of sites, to the storage of internal data representations, RSS feeds, and so on. There is a lot of runtime effort in converting to and from XML and translating XML into other formats.
Here are the gory technical details: there are two main methods for parsing XML. You can parse (another word for process) XML into a Document Object Model (DOM) in memory that you can later traverse and manipulate programmatically. Alternatively, XML can be parsed via an event-driven mechanism, where Java methods are called, for example, when an XML tag begins or ends, or when there is a body to the tag. Simple API for XML (SAX) libraries support the second approach in Java. Generally, it is easier to program with DOM than SAX, but as you need a model of the XML in memory, DOM, by its nature, is more memory intensive. Why would that matter? In large-scale deployments, the amount of memory tends to limit a Sakai instance's performance rather than the computational power of the servers. Therefore, as Sakai heavily uses XML, whenever possible, a developer should consider using SAX and avoid keeping the whole model of the XML document in memory.

Looking at dependencies

As Sakai adapts and expands its feature set, expect the range of external libraries to expand. The following list mentions the libraries used, the links to their home pages, and a very brief description of their functionality.

Apache-Axis (http://ws.apache.org/axis/): SOAP web services
Apache-Axis2 (http://ws.apache.org/axis2): SOAP and REST web services. A total rewrite of Apache-Axis; however, not currently used within Entity Broker, a Sakai-specific component
Apache Commons (http://commons.apache.org): Lower-level utilities
Batik (http://xmlgraphics.apache.org/batik/): A Java-based toolkit for applications or applets that want to use images in the Scalable Vector Graphics (SVG) format
Commons-beanutils (http://commons.apache.org/beanutils/): Methods for Java bean manipulation
Commons-codec (http://commons.apache.org/codec): Implementations of common encoders and decoders, such as Base64, Hex, Phonetic, and URLs
Commons-digester (http://commons.apache.org/digester): Common methods for initializing objects from XML configuration
Commons-httpclient (http://hc.apache.org/httpcomponents-client): Supports HTTP-based standards with the client side in mind
Commons-logging (http://commons.apache.org/logging/): Logging support
Commons-validator (http://commons.apache.org/validator): Support for verifying the integrity of received data
Excalibur (http://excalibur.apache.org): Utilities
FOP (http://xmlgraphics.apache.org/fop): Print formatting ready for conversion to PDF and a number of other formats
Hibernate (http://www.hibernate.org): ORM database framework
Log4j (http://logging.apache.org/log4j): For logging
Jackrabbit (http://jackrabbit.apache.org, http://jcp.org/en/jsr/detail?id=170): Content repository — a hierarchical content store with support for structured and unstructured content, full-text search, versioning, transactions, observation, and more
James (http://james.apache.org): A mail server
JavaServer Faces (http://java.sun.com/javaee/javaserverfaces): Simplifies building user interfaces for JavaServer applications
Lucene (http://lucene.apache.org): Indexing
MyFaces (http://myfaces.apache.org): JSF implementation with implementation-specific widgets
Pluto (http://portals.apache.org/pluto): The reference implementation of the Java Portlet Specification
Quartz (http://www.opensymphony.com/quartz): Scheduling
Reasonable Server Faces (RSF) (http://www2.caret.cam.ac.uk/rsfwiki): Built on the Spring framework; simplifies the building of views via XHTML
ROME (https://rome.dev.java.net): A set of open source Java tools for parsing, generating, and publishing RSS and Atom feeds
SAX (http://www.saxproject.org): Event-based XML parser
Struts (http://struts.apache.org/): Heavyweight MVC framework, not used in the core of Sakai, but some components are used as part of the occasional tool
Spring (http://www.springsource.org): Used extensively within the code base of Sakai; a broad framework designed to make building business applications simpler
Tomcat (http://tomcat.apache.org): Servlet container
Velocity (http://velocity.apache.org): Templating
Wicket (http://wicket.apache.org): Web app development framework
Xalan (http://xml.apache.org/xalan-j): An XSLT (Extensible Stylesheet Language Transformations) processor for transforming XML documents into HTML, text, or other XML document types
Xerces (http://xerces.apache.org/xerces-j): XML parser

For the reader who has downloaded and built Sakai from source code, you can automatically generate a list of the current external dependencies via Maven. First, you will need to build the binary version and then print out the dependency report. To achieve this from within the top-level directory of the source code, run the following commands:

mvn -Ppack-demo install
mvn dependency:list

The list above is based on an abbreviated version of the dependency report, generated from the source code from March 2009. For those of you wishing to dive into the depths of Sakai, you can search the home pages mentioned above. In summary, Spring is the most important underlying third-party framework, and Sakai spends a lot of its time manipulating XML.
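As an illustration of the SAX style recommended earlier for memory-conscious parsing, here is a minimal, self-contained Java sketch that counts elements as parse events arrive, without ever building a DOM tree in memory. The XML snippet and class name are invented for the example.

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

// Event-driven SAX parsing: the handler's callbacks fire as tags are read,
// so only a counter lives in memory, not a model of the whole document.
public class SaxCountDemo {

    public static int countElements(String xml) {
        final int[] count = {0};
        DefaultHandler handler = new DefaultHandler() {
            @Override
            public void startElement(String uri, String localName,
                                     String qName, Attributes attributes) {
                count[0]++; // one event per opening tag
            }
        };
        try {
            SAXParserFactory.newInstance().newSAXParser()
                    .parse(new InputSource(new StringReader(xml)), handler);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return count[0];
    }

    public static void main(String[] args) {
        String site = "<site><page/><page/><resource/></site>";
        System.out.println(countElements(site)); // prints 4
    }
}
```

For a small string the difference is academic, but for the large XML exports Sakai deals with, this streaming style is what keeps memory usage flat.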
Read more
  • 0
  • 0
  • 2671

article-image-alice-3-controlling-behavior-animations
Packt
18 Jul 2011
11 min read

Alice 3: Controlling the Behavior of Animations

Alice 3 Cookbook
79 recipes to harness the power of Alice 3 for teaching students to build attractive and interactive 3D scenes and videos

Introduction

You need to organize the statements that request the different actors to perform actions. Alice 3 provides blocks that allow us to configure the order in which many statements should be executed. This article provides many tasks that will allow us to start controlling the behavior of animations with many actors performing different actions. We will execute many actions in a specific order. We will use counters to run one or more statements many times. We will execute actions for many actors of the same class. We will run code for different actors at the same time to render complex animations.

Performing many statements in order

In this recipe, we will execute many statements for an actor in a specific order. We will add eight statements to control a sequence of movements for a bee.

Getting ready

We have to be working on a project with at least one actor. Therefore, we will create a new project and set up a simple scene with a few actors. Select File | New... in the main menu to start a new project. A dialog box will display the six predefined templates with their thumbnail previews in the Templates tab. Select GrassyProject.a3p as the desired template for the new project and click on OK. Alice will display a grassy ground with a light blue sky. Click on Edit Scene, at the lower right corner of the scene preview. Alice will show a bigger preview of the scene and will display the Model Gallery at the bottom. Add an instance of the Bee class to the scene, and enter bee for the name of this new instance. First, Alice will create the MyBee class to extend Bee. Then, Alice will create an instance of MyBee named bee.
Follow the steps explained in the Creating a new instance from a class in a gallery recipe, in the article, Alice 3: Making Simple Animations with Actors. Add an instance of the PurpleFlower class, and enter purpleFlower for the name of this new instance. Add another instance of the PurpleFlower class, and enter purpleFlower2 for the name of this new instance. The additional flower may be placed on top of the previously added flower. Add an instance of the ForestSky class to the scene. Place the bee and the two flowers as shown in the next screenshot: How to do it... Follow these steps to execute many statements for the bee with a specific order: Open an existing project with one actor added to the scene. Click on Edit Code, at the lower-right corner of the big scene preview. Alice will show a smaller preview of the scene and will display the Code Editor on a panel located at the right-hand side of the main window. Click on the class: MyScene drop-down list and the list of classes that are part of the scene will appear. Select MyScene | Edit run. Select the desired actor in the instance drop-down list located at the left-hand side of the main window, below the small scene preview. For example, you can select bee. Make sure that part: none is selected in the drop-down list located at the right-hand side of the chosen instance. Activate the Procedures tab. Alice will display the procedures for the previously selected actor. Drag the pointAt procedure and drop it in the drop statement here area located below the do in order label, inside the run tab. Because the instance name is bee, the pointAt statement contains the this.bee and pointAt labels followed by the target parameter and its question marks ???. A list with all the possible instances to pass to the first parameter will appear. Click on this.purpleFlower. 
The following code will be displayed, as shown in the next screenshot: this.bee.pointAt(this.purpleFlower) Drag the moveTo procedure and drop it below the previously dropped procedure call. A list with all the possible instances to pass to the first parameter will appear. Select this.purpleFlower getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal01, as shown in the following screenshot: Click on the more... drop-down menu button that appears at the right-hand side of the recently dropped statement. Click on duration and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears. Click on style and then on BEGIN_AND_END_ABRUPTLY. The following code will be displayed as the second statement: this.bee.moveTo(this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY) Drag the delay procedure and drop it below the previously dropped procedure call. A list with all the predefined direction values to pass to the first parameter will appear. Select 2.0 and the following code will be displayed as the third statement: this.bee.delay(2.0) Drag the moveAwayFrom procedure and drop it below the previously dropped procedure call. Select 0.25 for the first parameter. Click on the more... drop-down menu button that appears and select this.purpleFlower getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal01. Click on the additional more... drop-down menu button, on duration and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears, on style and then on BEGIN_ABRUPTLY_AND_END_GENTLY. The following code will be displayed as the fourth statement: this.bee.moveAwayFrom(0.25, this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_ABRUPTLY_AND_END_GENTLY) Drag the turnToFace procedure and drop it below the previously dropped procedure call. Select this.purpleFlower2 getPart ??? 
and then IStemMiddle_IStemTop_IHPistil_IHPetal05. Click on the additional more... drop-down menu button, on duration and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears, on style and then on BEGIN_ABRUPTLY_AND_END_GENTLY. The following code will be displayed as the fifth statement: this.bee.turnToFace(this.purpleFlower2.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal05), duration: 1.0, style: BEGIN_ABRUPTLY_AND_END_GENTLY) Drag the moveTo procedure and drop it below the previously dropped procedure call. Select this.purpleFlower2 getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal05. Click on the additional more... drop-down menu button, on duration and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears, on style and then on BEGIN_AND_END_ABRUPTLY. The following code will be displayed as the sixth statement: this.bee.moveTo(this.purpleFlower2.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal05), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY) Drag the delay procedure and drop it below the previously dropped procedure call. A list with all the predefined values to pass to the first parameter will appear. Select 2.0 and the following code will be displayed as the seventh statement: this.bee.delay(2.0) Drag the move procedure and drop it below the previously dropped procedure call. Select FORWARD and then 10.0. Click on the more... drop-down menu button, on duration and then on 10.0 in the cascade menu that appears. Click on the additional more... drop-down menu that appears, on asSeenBy and then on this.bee. Click on the new more... drop-down menu that appears, on style and then on BEGIN_ABRUPTLY_AND_END_GENTLY. The following code will be displayed as the eighth and final statement.
this.bee.move(FORWARD, duration: 10.0, asSeenBy: this.bee, style: BEGIN_ABRUPTLY_AND_END_GENTLY)

The following screenshot shows the eight statements that compose the run procedure. Select File | Save as... from Alice's main menu and give a new name to the project. Then you can make changes to the project according to your needs.

How it works...

When we run a project, Alice creates the scene instance, creates and initializes all the instances that compose the scene, and finally executes the run method defined in the MyScene class. By default, the statements we add to a procedure are included within the do in order block. We added eight statements to the do in order block, and therefore Alice will begin with the first statement:

this.bee.pointAt(this.purpleFlower)

Once the bee finishes executing the pointAt procedure, the execution flow goes on with the next statement specified in the do in order block. Thus, Alice will execute the following second statement after the first one finishes:

this.bee.moveTo(this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)

The do in order statement encapsulates a group of statements with synchronous execution. Thus, when we add many statements within a do in order block, these statements will run one after the other. Each statement requires its previous statement to finish before starting its execution, and therefore we can use the do in order block to group statements that must run in a specific order. The moveTo procedure moves the 3D model that represents the actor until it reaches the position of the other actor. The value for the target parameter is the instance of the other actor.
We want the bee to move to one of the petals of the first flower, purpleFlower, and therefore we passed this value to the target parameter:

this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01)

We called the getPart function for purpleFlower with IStemMiddle_IStemTop_IHPistil_IHPetal01 as the name of the part to return. This function allows us to retrieve one petal from the flower as an instance. We used the resulting instance as the target parameter for the moveTo procedure, and so we could make the bee move to the specific petal of the flower. Once the bee finishes executing the moveTo procedure, the execution flow goes on with the next statement specified in the do in order block. Thus, Alice will execute the following third statement after the second one finishes:

this.bee.delay(2.0)

The delay procedure puts the actor to sleep in its current position for the specified number of seconds. The next statement specified in the do in order block will run after waiting for two seconds.

The statements added to the run procedure will perform the following visible actions in the specified order:

1. Point the bee at purpleFlower.
2. Begin and end abruptly a movement for the bee from its position to the petal named IStemMiddle_IStemTop_IHPistil_IHPetal01 of purpleFlower. The total duration for the animation must be 1 second.
3. Make the bee stay in its position for 2 seconds.
4. Move the bee away 0.25 units from the position of the petal named IStemMiddle_IStemTop_IHPistil_IHPetal01 of purpleFlower. Begin the movement abruptly but end it gently. The total duration for the animation must be 1 second.
5. Turn the bee to face the petal named IStemMiddle_IStemTop_IHPistil_IHPetal05 of purpleFlower2. Begin the movement abruptly but end it gently. The total duration for the animation must be 1 second.
6. Begin and end abruptly a movement for the bee from its position to the petal named IStemMiddle_IStemTop_IHPistil_IHPetal05 of purpleFlower2. The total duration for the animation must be 1 second.
7. Make the bee stay in its position for 2 seconds.
8. Move the bee forward 10 units. Begin the movement abruptly but end it gently. The total duration for the animation must be 10 seconds. The bee will disappear from the scene.

The following screenshot shows six of the rendered frames.

There's more...

When you work with the Alice code editor, you can temporarily disable statements. Alice doesn't execute the disabled statements; however, you can enable them again later. It is useful to disable one or more statements when you want to test the results of running the project without these statements, and you might want to enable them back to compare the results. To disable a statement, right-click on it and deactivate the IsEnabled option, as shown in the following screenshot. The disabled statements will appear with diagonal lines, as shown in the next screenshot, and won't be considered at run-time. To enable a disabled statement, right-click on it and activate the IsEnabled option.
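The do in order semantics — each statement must finish before the next one starts — can be mimicked in plain Java. The sketch below uses hypothetical stand-in classes, not Alice's API; the logged strings merely echo the bee statements from this recipe.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of Alice's "do in order" block (hypothetical, not
// Alice's API): actions run synchronously, one after the other, so the
// recorded order always matches the order the statements were added in.
public class DoInOrderDemo {

    public static final List<String> log = new ArrayList<>();

    public static void doInOrder(Runnable... actions) {
        for (Runnable a : actions) {
            a.run(); // synchronous: the next action waits for this one
        }
    }

    public static void main(String[] args) {
        doInOrder(
            () -> log.add("bee.pointAt(purpleFlower)"),
            () -> log.add("bee.moveTo(petal01)"),
            () -> log.add("bee.delay(2.0)")
        );
        // The log reflects exactly the order the statements were listed in:
        System.out.println(log);
    }
}
```

Alice's do together block would, by contrast, start all of its actions at once; modeling that faithfully would need threads, which is exactly the complexity the visual blocks hide from students.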


EJB 3.1: Working with Interceptors

Packt
06 Jul 2011
3 min read
EJB 3.1 Cookbook

Build real world EJB solutions with a collection of simple but incredibly effective recipes with this book and eBook

The recipes in this article are based largely around a conference registration application as developed in the first recipe of the previous article on Introduction to Interceptors. It will be necessary to create this application before the other recipes in this article can be demonstrated.

Using interceptors to enforce security

While security is an important aspect of many applications, the use of programmatic security can clutter up business logic. The use of declarative annotations has come a long way in making security easier to use and less intrusive. However, there are still times when programmatic security is necessary. When it is, the use of interceptors can help remove the security code from the business logic.

Getting ready

The process for using an interceptor to enforce security involves:

Configuring and enabling security for the application server
Adding a @DeclareRoles annotation to the target class and the interceptor class
Creating a security interceptor

How to do it...

Configure the application to handle security as detailed in the Configuring the server to handle security recipe. Add the @DeclareRoles("employee") annotation to the RegistrationManager class. Add a SecurityInterceptor class to the packt package. Inject a SessionContext object into the class. We will use this object to perform programmatic security. Also use the @DeclareRoles annotation. Next, add an interceptor method, verifyAccess, to the class. Use the SessionContext object and its isCallerInRole method to determine if the user is in the "employee" role. If so, invoke the proceed method and display a message to that effect. Otherwise, throw an EJBAccessException.
@DeclareRoles("employee")
public class SecurityInterceptor {
    @Resource
    private SessionContext sessionContext;

    @AroundInvoke
    public Object verifyAccess(InvocationContext context) throws Exception {
        System.out.println("SecurityInterceptor: Invoking method: " +
            context.getMethod().getName());
        if (sessionContext.isCallerInRole("employee")) {
            Object result = context.proceed();
            System.out.println("SecurityInterceptor: Returned from method: " +
                context.getMethod().getName());
            return result;
        } else {
            throw new EJBAccessException();
        }
    }
}

Execute the application. The user should be prompted for a username and password as shown in the following screenshot. Provide a user in the employee role. The application should execute to completion. Depending on the interceptors in place, you will see console output similar to the following:

INFO: Default Interceptor: Invoking method: register
INFO: SimpleInterceptor entered: register
INFO: SecurityInterceptor: Invoking method: register
INFO: InternalMethod: Invoking method: register
INFO: register
INFO: Default Interceptor: Invoking method: create
INFO: Default Interceptor: Returned from method: create
INFO: InternalMethod: Returned from method: register
INFO: SecurityInterceptor: Returned from method: register
INFO: SimpleInterceptor exited: register
INFO: Default Interceptor: Returned from method: register

How it works...

The @DeclareRoles annotation was used to specify that users in the employee role are associated with the class. The isCallerInRole method checked to see if the current user is in the employee role. When the target method is called, if the user is authorized then the InvocationContext's proceed method is executed. If the user is not authorized, then the target method is not invoked and an exception is thrown.

See also

EJB 3.1: Controlling Security Programmatically Using JAAS

EJB 3.1: Introduction to Interceptors

Packt
06 Jul 2011
7 min read
Introduction

Most applications have cross-cutting functions which must be performed. These cross-cutting functions may include logging, managing transactions, security, and other aspects of an application. Interceptors provide a way to achieve these cross-cutting activities. The use of interceptors provides a way of adding functionality to a business method without modifying the business method itself. The added functionality is not intermeshed with the business logic, resulting in a cleaner and easier to maintain application.

Aspect Oriented Programming (AOP) is concerned with providing support for these cross-cutting functions in a transparent fashion. While interceptors do not provide as much support as other AOP languages, they do offer a good level of support. Interceptors can be:

Used to keep business logic separate from non-business related activities
Easily enabled/disabled
Used to provide consistent behavior across an application

Interceptors are specific methods invoked around a method or methods of a target EJB. We will use the term target to refer to the class containing the method(s) an interceptor will be executing around. The interceptor's method will be executed before the EJB's method is executed. When the interceptor method executes, it is passed an InvocationContext object. This object provides information relating to the state of the interceptor and the target. Within the interceptor method, the InvocationContext's proceed method can be called, which will result in the target's business method being executed or, as we will see shortly, the next interceptor in the chain. When the business method returns, the interceptor continues execution. This permits execution of code before and after the execution of a business method.
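The around-invoke flow just described can be illustrated outside of a container. The following plain-Java sketch is a simplification, not the EJB API: the Chain class and its proceed method stand in for InvocationContext, and the interceptor and method names (trace, A, B) are invented for the illustration.

```java
import java.util.List;
import java.util.concurrent.Callable;

// Plain-Java sketch of an around-invoke chain. Chain stands in for
// InvocationContext: proceed() invokes the next interceptor in the
// chain, or the business method once the chain is exhausted.
public class InterceptorChainSketch {

    interface Interceptor {
        Object aroundInvoke(Chain chain) throws Exception;
    }

    static class Chain {
        private final List<Interceptor> interceptors;
        private final Callable<Object> businessMethod;
        private int next = 0;

        Chain(List<Interceptor> interceptors, Callable<Object> businessMethod) {
            this.interceptors = interceptors;
            this.businessMethod = businessMethod;
        }

        Object proceed() throws Exception {
            if (next < interceptors.size()) {
                return interceptors.get(next++).aroundInvoke(this);
            }
            return businessMethod.call(); // end of chain: run the target
        }
    }

    // Runs two interceptors around a business method and returns the
    // trace of before/after calls.
    static String trace() throws Exception {
        StringBuilder log = new StringBuilder();
        Interceptor a = chain -> {
            log.append("A-in ");
            Object r = chain.proceed();
            log.append("A-out");
            return r;
        };
        Interceptor b = chain -> {
            log.append("B-in ");
            Object r = chain.proceed();
            log.append("B-out ");
            return r;
        };
        Chain chain = new Chain(List.of(a, b), () -> {
            log.append("target ");
            return "result";
        });
        chain.proceed();
        return log.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(trace()); // A-in B-in target B-out A-out
    }
}
```

Running the sketch prints the call trace, showing that each interceptor brackets everything after it in the chain, which is exactly how an @AroundInvoke method wraps the target's business method.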
Interceptors can be used with:

Stateless session EJBs
Stateful session EJBs
Singleton session EJBs
Message-driven beans

The @Interceptors annotation defines which interceptors will be executed for all or individual methods of a class. Interceptor classes share the lifecycle of the EJB they are applied to; in the case of stateful EJBs, this means the interceptor could be passivated and activated. In addition, they support the use of dependency injection. The injection is done using the EJB's naming context.

More than one interceptor can be used at a time. The sequence of interceptor execution is referred to as an interceptor chain. For example, an application may need to start a transaction based on the privileges of a user. These actions should also be logged. An interceptor can be defined for each of these activities: validating the user, starting the transaction, and logging the event. The use of interceptor chaining is illustrated in the Using interceptors to handle application statistics recipe.

Lifecycle callbacks such as @PreDestroy and @PostConstruct can also be used within interceptors. They can access interceptor state information as discussed in the Using lifecycle methods in interceptors recipe.

Interceptors are useful for:

Validating parameters and potentially changing them before they are sent to a method
Performing security checks
Performing logging
Performing profiling
Gathering statistics

An example of parameter validation can be found in the Using the InvocationContext to verify parameters recipe. Security checks are illustrated in the Using interceptors to enforce security recipe. The use of interceptor chaining to record a method's hit count and the time spent in the method is discussed in the Using interceptors to handle application statistics recipe. Interceptors can also be used in conjunction with timer services.

The recipes in this article are based largely around a conference registration application as developed in the first recipe.
It will be necessary to create this application before the other recipes can be demonstrated.

Creating the Registration Application

A RegistrationApplication is developed in this recipe. It provides attendees with the ability to register for a conference. The application will record their personal information using an entity and other supporting EJBs. This recipe details how to create this application.

Getting ready

The RegistrationApplication consists of the following classes:

Attendee – An entity representing a person attending the conference
AbstractFacade – A facade-based class
AttendeeFacade – The facade class for the Attendee class
RegistrationManager – Used to control the registration process
RegistrationServlet – The GUI interface for the application

The steps used to create this application include:

Creating the Attendee entity and its supporting classes
Creating a RegistrationManager EJB to control the registration process
Creating a RegistrationServlet to drive the application

The RegistrationManager will be the primary vehicle for the demonstration of interceptors.

How to do it...

Create a Java EE application called RegistrationApplication. Add a packt package to the EJB module and a servlet package in the application's WAR module. Next, add an Attendee entity to the packt package. This entity possesses four fields: name, title, company, and id. The id field should be auto generated. Add getters and setters for the fields. Also add a default constructor and a three argument constructor for the first three fields. The major components of the class are shown below without the getters and setters.
@Entity
public class Attendee implements Serializable {
    private String name;
    private String title;
    private String company;
    private static final long serialVersionUID = 1L;
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    public Attendee() {
    }

    public Attendee(String name, String title, String company) {
        this.name = name;
        this.title = title;
        this.company = company;
    }
}

Next, add an AttendeeFacade stateless session bean which is derived from the AbstractFacade class. The AbstractFacade class is not shown here.

@Stateless
public class AttendeeFacade extends AbstractFacade<Attendee> {
    @PersistenceContext(unitName = "RegistrationApplication-ejbPU")
    private EntityManager em;

    protected EntityManager getEntityManager() {
        return em;
    }

    public AttendeeFacade() {
        super(Attendee.class);
    }
}

Add a RegistrationManager stateful session bean to the packt package. Add a single method, register, to the class. The method should be passed three strings for the name, title, and company of the attendee. It should return an Attendee reference. Use dependency injection to add a reference to the AttendeeFacade. In the register method, create a new Attendee and then use the AttendeeFacade class to create it. Next, return a reference to the Attendee.

@Stateful
public class RegistrationManager {
    @EJB
    AttendeeFacade attendeeFacade;
    Attendee attendee;

    public Attendee register(String name, String title, String company) {
        attendee = new Attendee(name, title, company);
        attendeeFacade.create(attendee);
        return attendee;
    }
}

In the servlet package of the WAR module, add a servlet called RegistrationServlet. Use dependency injection to add a reference to the RegistrationManager. In the try block of the processRequest method, use the register method to register an attendee and then display the attendee's name.
public class RegistrationServlet extends HttpServlet {
    @EJB
    RegistrationManager registrationManager;

    protected void processRequest(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            out.println("<html>");
            out.println("<head>");
            out.println("<title>Servlet RegistrationServlet</title>");
            out.println("</head>");
            out.println("<body>");
            Attendee attendee = registrationManager.register(
                "Bill Schroder", "Manager", "Acme Software");
            out.println("<h3>" + attendee.getName() + " has been registered</h3>");
            out.println("</body>");
            out.println("</html>");
        } finally {
            out.close();
        }
    }
    ...
}

Execute the servlet. The output should appear as shown in the following screenshot:

How it works...

The Attendee entity holds the registration information for each participant. The RegistrationManager session bean only has a single method at this time. In later recipes we will augment this class to add other capabilities. The RegistrationServlet is the client for the EJBs.


An overview of web services in Sakai

Packt
06 Jul 2011
16 min read
Connecting to Sakai is straightforward, and simple tasks, such as automatic course creation, take only a few lines of programming effort. There are significant advantages to having web services in the enterprise. If a developer writes an application that calls a number of web services, then the application does not need to know the hidden details behind the services. It just needs to agree on what data to send. This loosely couples the application to the services. Later, if you can replace one web service with another, programmers do not need to change the code on the application side.

SOAP works well with most organizations' firewalls, as SOAP uses the same protocol as web browsers. System administrators have a tendency to protect an organization's network by closing unused ports to the outside world. This means that most of the time there is no extra network configuration effort required to enable web services. Another simplifying factor is that a programmer does not need to know the details of SOAP or REST, as there are libraries and frameworks that hide the underlying magic. For the Sakai implementation of SOAP, adding a new service is as simple as writing a small amount of Java code within a text file, which is then compiled automatically and run the first time the service is called. This is great for rapid application development and deployment, as the system administrator does not need to restart Sakai for each change. Just as importantly, the Sakai services use the well-known libraries from the Apache Axis project.

SOAP is an XML message passing protocol that, in the case of Sakai, sits on top of the Hyper Text Transfer Protocol (HTTP). HTTP is the protocol used by web browsers to obtain web pages from a server. The client sends messages in XML format to a service, including the information that the service needs. Then the service returns a message with the results or an error message.
The architects introduced SOAP-based web services to Sakai first, adding RESTful services later. Unlike SOAP, instead of sending XML via HTTP posts to one URL that points to a service, REST sends to a URL that includes information about the entity, such as a user, with which the client wishes to interact. For example, a REST URL for viewing an address book item could look similar to http://host/direct/addressbook_item/15. Applying URLs in this way makes for understandable, human-readable address spaces. This more intuitive approach simplifies coding. Further, SOAP XML passing requires that the client and the server parse the XML, and at times the parsing effort is expensive in CPU cycles and response times.

The Entity Broker is an internal service that makes life easier for programmers and helps them manipulate entities. Entities in Sakai are managed pieces of data such as representations of courses, users, grade books, and so on. In the newer versions of Sakai, the Entity Broker has the power to expose entities as RESTful services. In contrast, for SOAP services, if you wanted a new service, you would need to write it yourself. Over time, the Entity Broker exposes more and more entities RESTfully, delivering more hooks free to integrate with other enterprise systems. Both SOAP and REST services sit on top of the HTTP protocol.

Protocols

This section explains how web browsers talk to servers in order to gather web pages. It explains how to use the telnet command and a visual tool called TCPMON (http://ws.apache.org/commons/tcpmon/tcpmontutorial.html) to gain insight into how web services and Web 2.0 technologies work.

Playing with Telnet

It turns out that message passing occurs via text commands between the browser and the server. Web browsers use HTTP to get web pages and the embedded content from the server and to send form information to the server. HTTP talks between the client and server via text (7-bit ASCII) commands.
When humans talk with each other, they have a wide vocabulary. However, HTTP uses fewer than twenty words. You can directly experiment with HTTP using a Telnet client to send your commands to a web server. For example, if your demonstration Sakai instance is running on port 8080, the following commands will get you the login page:

telnet localhost 8080
GET /portal/login

The GET command does what it sounds like and gets a web page. Forms can use the GET verb to send data at the end of the URL. For example, GET /portal/login?name=alan&age=15 is sending the variables name=alan and age=15 to the server.

Installing TCPMON

You can use the TCPMON tool to view requests and responses from a web browser such as Firefox. One of TCPMON's abilities is that it can act as an invisible man in the middle, recording the messages between the web browser and the server. Once set up, the requests sent from the browser go to TCPMON and it passes the request on to the server. The server passes back a response and then TCPMON, a transparent proxy, returns the response to the web browser. This allows us to look at all requests and responses graphically.

First, you can set up TCPMON to listen on a given port number, by convention normally port 8888, and then you can configure your web browser to send its requests through the proxy. Then, you can type the address of a given page into the web browser, but instead of going directly to the relevant server, the browser sends the request to the proxy, which then passes it on and passes the response back. TCPMON displays both the request and the responses in a window.

You can download TCPMON from the address given above. After downloading and unpacking, you can, from within the build directory, run either tcpmon.bat for the Windows environment or tcpmon.sh for the UNIX/Linux environment. To configure a proxy, you can click on the Admin tab and then set the Listen Port to 8888 and select the Proxy radio button.
After that, clicking on Add will create a new tab, where the requests and responses will be displayed later. Your favorite web browser now has to recognize the newly-set-up proxy. For Firefox 3, you can do this by selecting the menu option Edit/Preferences, and then choosing the Advanced tab and the Network tab, as shown in the next screenshot. You will need to set the proxy options: HTTP proxy to 127.0.0.1, and the port number to 8888. If you do this, you will need to ensure that the No proxies text input is blank. Clicking on the OK button enables the new settings.

To use the proxy from within Internet Explorer 7 for a Local Area Network (LAN), you can edit the dialog box found under Tools | Internet Options | Connections | LAN settings. Once the proxy is working, typing http://localhost:8080/portal/login in the address bar will seamlessly return the login page of your local Sakai instance. Otherwise, you will see an error message similar to Proxy Server Refused Connection for Firefox or Internet Explorer cannot display the webpage. To turn off the proxy settings, simply select the No Proxies radio box and click on OK for Firefox 3; for Internet Explorer 7, unselect the Use a proxy server for the LAN tick box and click on OK.

Requests and returned status codes

When TCPMON is running a proxy on port 8888, it allows you to view the requests from the browser and the response in an extra tab, as shown in the following screenshot. Notice the extra information that the browser sends as part of the request. HTTP/1.1 defines the protocol and version level, and the lines below GET are the header variables. The User-Agent defines which client sends the request. The Accept headers tell the server what the capabilities of the browser are, and the Cookie header defines the value stored in a cookie. HTTP is stateless, in principle; each response is based only on the current request.
However, to get around this, persistent information can be stored in cookies. Web browsers normally store their representation of a cookie as a little text file or in a small database on the end users' computers. Sakai uses the supporting features of a servlet container, such as Tomcat, to maintain state in cookies. A cookie stores a session ID, and when the server sees the session ID, it can look up the request's server-side state. This state contains information such as whether the user is logged in, or what he or she has ordered. The web browser deletes the local representation of the cookie each time the browser closes. A cookie that is deleted when a web browser closes is known as a session cookie. The server response starts with the protocol followed by a status number. HTTP/1.1 200 OK tells the web browser that the server is using HTTP version 1.1 and was able to return the requested web page successfully. 2xx status codes imply success. 3xx status codes imply some form of redirection and tell the web browser where to try to pick up the requested resource. 4xx status codes are for client errors, such as malformed requests or lack of permission to obtain the resource. 4xx states are fertile grounds for security managers to look in log files for attempted hacking. 5xx status codes mostly have to do with a failure of the server itself and are mostly of interest to system administrators and programmers during the debugging cycle. In most cases, 5xx status numbers are about either high server load or a broken piece of code. Sakai is changing rapidly and even with the most vigorous testing, there are bound to be the occasional hiccups. You will find accurate details of the full range of status codes at: http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html. Another important part of the response is Content-Type, which tells the web browser which type of material the response is returning, so the browser knows how to handle it. 
For example, the web browser may want to run a plug-in for video types and display text natively. Content-Length in characters is normally also given. After the header information is finished, there is a newline followed by the content itself. Web browsers interpret any redirects that are returned by sending extra requests. Web browsers also interpret any HTML pages and make multiple requests for resources such as JavaScript files and images. Modern browsers do not wait until the server returns all the requests, but render the HTML page live as the server returns the parts.

The GET verb is not very efficient for posting a large amount of data, as the URL has a length limit of around 2000 characters. Further, the end user can see the form data, and the browser may encode entities such as spaces to make the URL unreadable. There is also a security aspect: if you are typing passwords in forms using GET, others may see your password or other details. This is not a good idea, especially at Internet Cafés where the next user who logs on can see the password in the browsing history. The POST verb is a better choice. Let us take as an example the Sakai demonstration login page (http://localhost:8080/portal/login). The login page itself contains a form tag that points to the relogin page with the POST method.

<form method="post" action="http://localhost:8080/portal/relogin" enctype="application/x-www-form-urlencoded">

Note that the HTML tag also defines the content type.
Key features of the POST request compared to GET are:

The form values are stored as content after the header values
There is a newline between the end of the header and the data
The request mentions data and the amount of data by the use of the Content-Length header value

The essential POST values for a login form with user admin (eid=admin) and password admin (pw=admin) will look like:

POST http://localhost:8080/portal/relogin HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Content-Length: 31

eid=admin&pw=admin&submit=Login

POST requests can contain much more information than GET requests, and the requests hide the values from the address bar of the web browser. This is not secure. The header is just as visible as the URL, so POST values are also neither hidden nor secure. The only viable solution is for your web browser to encrypt your transactions using SSL/TLS (http://www.ietf.org/rfc/rfc2246.txt) for security, and this occurs every time you connect to a server using an HTTPS URL.

SOAP

Sakai uses the Apache Axis framework, which the developers have configured to accept SOAP calls via POST. SOAP sends messages in a specific XML format with the Content-Type, otherwise known as MIME type, application/soap+xml. A programmer does not need to know more than that, as the client libraries take care of the majority of the excruciating low-level details. An example SOAP message generated by the Perl module, SOAP::Lite (http://www.soaplite.com/), for creating a login session in Sakai will look like the following POST data:

<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope
    soap:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <soap:Body>
    <login>
      <c-gensym3 xsi:type="xsd:string">admin</c-gensym3>
      <c-gensym5 xsi:type="xsd:string">admin</c-gensym5>
    </login>
  </soap:Body>
</soap:Envelope>

There is an envelope with a body containing data for the service to consume.
The important point to remember is that both the client and the server have to be able to parse the specific XML schema. SOAP messages can include extra security features, but Sakai does not require these. The architects expect organizations to encrypt web services using SSL/TLS. The last extra SOAP-related complexity is the Web Service Description Language (http://www.w3.org/TR/wsdl). Web services may change location or exist in multiple locations for redundancy. The service writer can define the location of the services and the data types involved with those services in another file, in XML format.

JSON

Also worth mentioning is JavaScript Object Notation (JSON), which is another popular format, passed using HTTP. When web developers realized that they could force browsers to load parts of a web page at a time, it significantly improved the quality of the web browsing experience for the end user. This asynchronous loading enables all kinds of whiz-bang features, such as when you type in a search term and can choose from a set of search term completions before pressing on the Submit button. Asynchronous loading delivers more responsive and richer web pages that feel more like traditional desktop applications than a plain old web page. JSON is one of the formats of choice for passing asynchronous requests and responses. The asynchronous communication normally occurs through HTTP GET or POST, but with a specific content structure that is designed to be human readable and script language parser-friendly. JSON calls have the file extension .json as part of the URL.
As mentioned in RFC 4627, an example image object communicated in JSON looks like:

{
  "Image": {
    "Width": 800,
    "Height": 600,
    "Title": "View from 15th Floor",
    "Thumbnail": {
      "Url": "http://www.example.com/image/481989943",
      "Height": 125,
      "Width": "100"
    },
    "IDs": [116, 943, 234, 38793]
  }
}

By blurring the boundaries between client and server, a lot of the presentation and business logic is locked on the client side in scripting languages such as JavaScript. The scripting language orchestrates the loading of parts of pages and the generation of widget sets. Frameworks such as jQuery (http://jquery.com/) and MyFaces (http://myfaces.apache.org/) significantly ease the client-side programming burden.

REST

To understand REST, you need to understand the other verbs in HTTP (http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html). The full HTTP set is OPTIONS, GET, HEAD, POST, PUT, DELETE, and TRACE. The HEAD verb returns from the server only the headers of the response without the content, and is useful for clients that want to see if the content has changed since the last request. PUT requests that the content in the request be stored at a particular location mentioned in the request. DELETE is for deleting the entity. REST uses the URL of the request to route to the resource, and the HTTP verb GET is used to get a resource, PUT to update, DELETE to delete, and POST to add a new resource. In general, a POST request is for creating an item, PUT for updating an item, DELETE for deleting an item, and GET for returning information on the item. In SOAP, you are pointing directly towards the service the client calls or indirectly via the web service description. However, in REST, part of the URL describes the resource or resources you wish to work with.
For example, a hypothetical address book application that lists all e-mail addresses in HTML format would look similar to the following:

GET /email

To list the addresses in XML format or JSON format:

GET /email.xml
GET /email.json

To get the first e-mail address in the list:

GET /email/1

To create a new e-mail address, of course remembering to add the rest of the e-mail details to the request:

POST /email

In addition, to delete address 5 from the list, use the following command:

DELETE /email/5

To obtain address 5 in other formats such as JSON or XML, use file extensions at the end of the URL, for example:

GET /email/5.json
GET /email/5.xml

RESTful services are intuitively more descriptive than SOAP services, and they enable easy switching of the format from HTML to JSON to fuel the dynamic and asynchronous loading of websites. Due to the direct use of HTTP verbs by REST, this methodology also fits well with the most common application type: CRUD (Create, Read, Update, and Delete) applications, such as the site or user tools within Sakai. Now that we have discussed the theory, in the next section we shall discuss which Sakai-related SOAP services already exist.
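The verb-to-action mapping behind the address book example can be sketched as a small dispatcher. This is an illustrative sketch only, not Sakai's Entity Broker routing; the dispatch method and its action strings are invented for the example:

```java
// Illustration of REST routing: the HTTP verb selects the CRUD action,
// the URL names the resource, and an optional extension picks the format.
public class RestDispatchSketch {

    static String dispatch(String verb, String path) {
        // Split off an optional format extension, e.g. "/email/5.json".
        String format = "html";
        int dot = path.lastIndexOf('.');
        if (dot > path.lastIndexOf('/')) {
            format = path.substring(dot + 1);
            path = path.substring(0, dot);
        }
        String[] parts = path.split("/"); // "", resource, optional id
        String resource = parts[1];
        String id = parts.length > 2 ? parts[2] : null;

        switch (verb) {
            case "GET":
                return (id == null ? "list " : "read " + id + " from ")
                        + resource + " as " + format;
            case "POST":
                return "create new item in " + resource;
            case "PUT":
                return "update " + id + " in " + resource;
            case "DELETE":
                return "delete " + id + " from " + resource;
            default:
                return "405 Method Not Allowed";
        }
    }

    public static void main(String[] args) {
        System.out.println(dispatch("GET", "/email.json"));  // list email as json
        System.out.println(dispatch("GET", "/email/5.xml")); // read 5 from email as xml
        System.out.println(dispatch("DELETE", "/email/5"));  // delete 5 from email
    }
}
```

A real RESTful framework layers content negotiation, security, and entity lookup on top of this routing, but the verb-plus-URL convention is the same.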


Moodle: History Teaching using Chats, Books and Plugins

Packt
29 Jun 2011
4 min read
The Chat Module

Students naturally gravitate towards the Chat module in Moodle. It is one of the modules that they effortlessly use whilst working on another task. I often find that they have logged in and are discussing work related tasks in a way that enables them to move forward on a particular task. Another use for the Chat module is to conduct a discussion outside the classroom timetabled lesson when students know that you are available to help them with issues. This is especially relevant to students who embark on study leave in preparation for examinations. It can be a lonely and stressful period. Knowing that they can log in to a chat that has been planned in advance means that they can prepare issues that they wish to discuss about their workload and find out how their peers are tackling the same issues. The teacher can ensure that the chat stays on message and provide useful input at the same time.

Setting up a Chatroom

We want to set up a chat with students who are on holiday but have some examination preparation to do for a lesson that will take place straight after their return to school. Ideally we would have informed the students prior to starting their holiday that this session would be available to anyone who wished to take part.

Log in to the Year 7 History course and turn on editing
In the Introduction section, click the Add an activity dropdown
Select Chat
Enter an appropriate name for the chat
Enter some relevant information in the Introduction text
Select the date and time for the chat to begin
Beside Repeat sessions select No repeats – publish the specified time only
Leave other elements at their default settings
Click Save changes

The following screenshot is the result of clicking Add an activity from the drop-down menu:

If we wanted to set up the chatroom so that the chat took place at the same time each day or each week then it is possible to select the appropriate option from the Repeat sessions dropdown.
The remaining options make it possible for students to go back and view sessions that they have taken part in.

Entering the chatroom

When a student or teacher logs in to the course for the appointed chat, they will see the chat symbol in the Introduction section. Clicking on the symbol enables them to enter the chatroom via a simple chat window, or a more accessible version where checking the box ensures that only new messages appear on the screen, as shown in the following screenshot:

As long as another student or teacher has entered the chatroom, a chat can begin when users type a message and await a response.

The Chat module is a useful way for students to collaborate with each other and with their teacher if they need to. It comes into its own when students are logging in to discuss how to make progress with their collaborative wiki story about a murder in the monastery, or when students preparing for an examination share tips and advice to help each other through the experience. Collaboration is the key to effective use of the Chat module, and teachers need not fear its potential for timewasting if this point is emphasized in the activities that they are working on.

Plugins

A brief visit to www.moodle.org and a search for 'plugins' reveals an extensive list of modules that are available for use with Moodle but stand outside the standard installation. If you have used a blogging tool such as WordPress you will be familiar with the concept of plugins. Over the last few years, developers have built up a library of plugins which can be used to enhance your Moodle experience. Every teacher has different ways of doing things, and it is well worth exploring the plugins database and related forums to find out what teachers are using and how they are using it.
There is, for example, a plugin for writing individual learning plans for students, and another plugin called Quickmail which enables you to send an email to everyone on your course even more quickly than the conventional way.

Installing plugins

Plugins need to be installed, and they need administrator rights to run at all. The Book module, for example, requires a zip file to be downloaded from the plugins database onto your computer; the files then need to be extracted to a folder in the Mod folder of your Moodle software directory. Once it is in the correct folder, the administrator then needs to run the installation. Installation has been successful if you are able to log in to the course and see the Book module as an option in the Add a resource dropdown.

Android Application Testing: Adding Functionality to the UI

Packt
27 Jun 2011
10 min read
Android Application Testing Guide: build intensively tested and bug-free Android applications.

The user interface is in place. Now we start adding some basic functionality. This functionality will include the code to handle the actual temperature conversion.

Temperature conversion

From the list of requirements from the previous article we can obtain this statement: when one temperature is entered in one field, the other one is automatically updated with the conversion.

Following our plan, we must implement this as a test to verify that the correct functionality is there. Our test would look something like this:

    @UiThreadTest
    public final void testFahrenheitToCelsiusConversion() {
        mCelsius.clear();
        mFahrenheit.clear();

        final double f = 32.5;
        mFahrenheit.requestFocus();
        mFahrenheit.setNumber(f);
        mCelsius.requestFocus();

        final double expectedC =
            TemperatureConverter.fahrenheitToCelsius(f);
        final double actualC = mCelsius.getNumber();
        final double delta = Math.abs(expectedC - actualC);
        final String msg = "" + f + "F -> " + expectedC + "C but was "
            + actualC + "C (delta " + delta + ")";

        assertTrue(msg, delta < 0.005);
    }

Firstly, as we already know, to interact with the UI and change its values we should run the test on the UI thread, so the test is annotated with @UiThreadTest.

Secondly, we are using a specialized class to replace EditText, providing some convenience methods like clear() or setNumber(). This will improve our application design.

Next, we invoke a converter, named TemperatureConverter, a utility class providing the different methods to convert between different temperature units and using different types for the temperature values.

Finally, as we will be truncating the results to present them in a suitable format in the user interface, we should compare against a delta to assert the value of the conversion.

Creating the test as it is will force us to follow the planned path.
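The delta comparison in the assertion above matters because binary floating point cannot represent most decimal fractions exactly, so comparing doubles with == is fragile. A minimal plain-Java sketch of the idea (the class and helper name are ours, not part of the article's code):

```java
public class DeltaAssert {
    // Compare two doubles within a tolerance instead of with ==,
    // because values such as 0.1 + 0.2 are not exactly 0.3 in binary floating point.
    public static boolean nearlyEqual(double expected, double actual, double delta) {
        return Math.abs(expected - actual) < delta;
    }

    public static void main(String[] args) {
        System.out.println(0.1 + 0.2 == 0.3);                  // false
        System.out.println(nearlyEqual(0.3, 0.1 + 0.2, 1e-9)); // true
    }
}
```

The same pattern appears in the test above as `assertTrue(msg, delta < 0.005)`, with the tolerance chosen to match the two-decimal display format.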
Our first objective is to add the needed code to get the test to compile, and then to satisfy the test's needs.

The EditNumber class

In our main project, not in the test one, we should create the class EditNumber extending EditText, as we need to extend its functionality. We use Eclipse's help to create this class using File | New | Class or its shortcut in the toolbar. This screenshot shows the window that appears after using this shortcut.

The following table describes the most important fields and their meaning in the previous screen:

Source folder: The source folder for the newly-created class. In this case the default location is fine.
Package: The package where the new class is created. In this case the default package com.example.aatg.tc is fine too.
Name: The name of the class. In this case we use EditNumber.
Modifiers: Modifiers for the class. In this particular case we are creating a public class.
Superclass: The superclass for the newly-created type. We are creating a custom View and extending the behavior of EditText, so this is precisely the class we select for the supertype. Remember to use Browse... to find the correct package.
Which method stubs would you like to create?: These are the method stubs we want Eclipse to create for us. Selecting Constructors from superclass and Inherited abstract methods would be of great help. As we are creating a custom View, we should provide the constructors that are used in different situations, for example when the custom View is used inside an XML layout.
Do you want to add comments?: Some comments are added automatically when this option is selected. You can configure Eclipse to personalize these comments.
Once the class is created, we need to change the type of the fields, first in our test:

    public class TemperatureConverterActivityTests extends
            ActivityInstrumentationTestCase2<TemperatureConverterActivity> {

        private TemperatureConverterActivity mActivity;
        private EditNumber mCelsius;
        private EditNumber mFahrenheit;
        private TextView mCelsiusLabel;
        private TextView mFahrenheitLabel;
        ...

Then change any cast that is present in the tests. Eclipse will help you do that.

If everything goes well, there are still two problems we need to fix before being able to compile the test:

- We still don't have the methods clear() and setNumber() in EditNumber
- We don't have the TemperatureConverter utility class

To create the methods we use Eclipse's quick-fix actions. Let's choose Create method clear() in type EditNumber, and do the same for setNumber() and getNumber(). Finally, we must create the TemperatureConverter class. Be sure to create it in the main project and not in the test project. Having done this, in our test select Create method fahrenheitToCelsius in type TemperatureConverter. This fixes our last problem and leads us to a test that we can now compile and run.

Surprisingly, or not, when we run the tests, they will fail with an exception:

    09-06 13:22:36.927: INFO/TestRunner(348): java.lang.ClassCastException: android.widget.EditText
    09-06 13:22:36.927: INFO/TestRunner(348):   at com.example.aatg.tc.test.TemperatureConverterActivityTests.setUp(TemperatureConverterActivityTests.java:41)
    09-06 13:22:36.927: INFO/TestRunner(348):   at junit.framework.TestCase.runBare(TestCase.java:125)

That is because we updated all of our Java files to include our newly-created EditNumber class but forgot to change the XMLs, and this could only be detected at runtime.
Let's proceed to update our UI definition:

    <com.example.aatg.tc.EditNumber
        android:layout_height="wrap_content"
        android:id="@+id/celsius"
        android:layout_width="match_parent"
        android:layout_margin="@dimen/margin"
        android:gravity="right|center_vertical"
        android:saveEnabled="true" />

That is, we replace the original EditText with com.example.aatg.tc.EditNumber, a View extending the original EditText.

Now we run the tests again and we discover that all tests pass. But wait a minute, we haven't implemented any conversion or any handling of values in the new EditNumber class, and all tests passed with no problem. Yes, they passed because we don't have enough restrictions in our system, and the ones in place simply cancel each other out.

Before going further, let's analyze what just happened. Our test invoked the mFahrenheit.setNumber(f) method to set the temperature entered in the Fahrenheit field, but setNumber() is not implemented: it is an empty method, as generated by Eclipse, and does nothing at all. So the field remains empty.

Next, the value for expectedC, the expected temperature in Celsius, is calculated by invoking TemperatureConverter.fahrenheitToCelsius(f), but this is also an empty method as generated by Eclipse. In this case, because Eclipse knows about the return type, it returns a constant 0. So expectedC becomes 0.

Then the actual value for the conversion is obtained from the UI, in this case by invoking getNumber() on EditNumber. But once again this method was automatically generated by Eclipse, and to satisfy the restriction imposed by its signature it must return a value, which Eclipse fills in as 0.

The delta value is again 0, as calculated by Math.abs(expectedC - actualC). And finally our assertion assertTrue(msg, delta < 0.005) is true because delta = 0 satisfies the condition, and the test passes.

So, is our methodology flawed, as it cannot detect a simple situation like this? No, not at all.
The problem here is that we don't have enough restrictions, and those that exist are satisfied by the default values Eclipse uses to complete auto-generated methods. One alternative could be to throw an exception from every auto-generated method, something like RuntimeException("not yet implemented"), to detect their use while unimplemented. But we will be adding enough restrictions to our system to easily trap this condition.

TemperatureConverter unit tests

It seems, from our previous experience, that the default conversion implemented by Eclipse always returns 0, so we need something more robust; otherwise the converter would only return a valid result when the parameter takes the value 32F. The TemperatureConverter is a utility class not related to the Android infrastructure, so a standard unit test will be enough to test it.

We create our tests using Eclipse's File | New | JUnit Test Case, filling in some appropriate values and selecting the method to generate a test, as shown in the next screenshot. Firstly, we create the unit test by extending junit.framework.TestCase and selecting com.example.aatg.tc.TemperatureConverter as the class under test. Then, by pressing the Next > button, we can obtain the list of methods we may want to test. We have implemented only one method in TemperatureConverter, so it's the only one appearing in the list. Other classes implementing more methods will display all the options here.

It's good to note that even though the test method is auto-generated by Eclipse, it won't pass. It will fail with the message Not yet implemented to remind us that something is missing. Let's start by changing this:

    /**
     * Test method for {@link com.example.aatg.tc.TemperatureConverter#fahrenheitToCelsius(double)}.
     */
    public final void testFahrenheitToCelsius() {
        for (double c: conversionTableDouble.keySet()) {
            final double f = conversionTableDouble.get(c);
            final double ca = TemperatureConverter.fahrenheitToCelsius(f);
            final double delta = Math.abs(ca - c);
            final String msg = "" + f + "F -> " + c + "C but is " + ca
                + " (delta " + delta + ")";
            assertTrue(msg, delta < 0.0001);
        }
    }

Creating a conversion table with values for different temperature conversions that we know from other sources is a good way to drive this test:

    private static final HashMap<Double, Double> conversionTableDouble =
        new HashMap<Double, Double>();

    static {
        // initialize (c, f) pairs
        conversionTableDouble.put(0.0, 32.0);
        conversionTableDouble.put(100.0, 212.0);
        conversionTableDouble.put(-1.0, 30.20);
        conversionTableDouble.put(-100.0, -148.0);
        conversionTableDouble.put(32.0, 89.60);
        conversionTableDouble.put(-40.0, -40.0);
        conversionTableDouble.put(-273.0, -459.40);
    }

We may just run this test to verify that it fails, giving us this trace:

    junit.framework.AssertionFailedError: -40.0F -> -40.0C but is 0.0 (delta 40.0)
        at com.example.aatg.tc.test.TemperatureConverterTests.testFahrenheitToCelsius(TemperatureConverterTests.java:62)
        at java.lang.reflect.Method.invokeNative(Native Method)
        at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:169)
        at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:154)
        at android.test.InstrumentationTestRunner.onStart(InstrumentationTestRunner.java:520)
        at android.app.Instrumentation$InstrumentationThread.run(Instrumentation.java:1447)

Well, this was something we were expecting, as our conversion always returns 0.
Implementing our conversion, we discover that we need an ABSOLUTE_ZERO_F constant:

    public class TemperatureConverter {
        public static final double ABSOLUTE_ZERO_C = -273.15d;
        public static final double ABSOLUTE_ZERO_F = -459.67d;
        private static final String ERROR_MESSAGE_BELOW_ZERO_FMT =
            "Invalid temperature: %.2f%c below absolute zero";

        public static double fahrenheitToCelsius(double f) {
            if (f < ABSOLUTE_ZERO_F) {
                throw new InvalidTemperatureException(
                    String.format(ERROR_MESSAGE_BELOW_ZERO_FMT, f, 'F'));
            }
            return ((f - 32) / 1.8d);
        }
    }

Absolute zero is the theoretical temperature at which entropy would reach its minimum value. According to the laws of thermodynamics, a system would have to be isolated from the rest of the universe to reach it, so it is an unreachable state. However, by international agreement, absolute zero is defined as 0K on the Kelvin scale, which corresponds to -273.15°C on the Celsius scale and -459.67°F on the Fahrenheit scale.

We are creating a custom exception, InvalidTemperatureException, to indicate a failure to provide a valid temperature to the conversion method. This exception is created simply by extending RuntimeException:

    public class InvalidTemperatureException extends RuntimeException {
        public InvalidTemperatureException(String msg) {
            super(msg);
        }
    }

Running the tests again, we now discover that the testFahrenheitToCelsiusConversion test fails while testFahrenheitToCelsius succeeds. This tells us that conversions are now correctly handled by the converter class, but there are still some problems with the UI's handling of this conversion. A closer look at the failure trace reveals that something is still returning 0 when it shouldn't. This reminds us that we are still lacking a proper EditNumber implementation. Before proceeding to implement the mentioned methods, let's create the corresponding tests to verify that what we are implementing is correct.
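The article implements only the Fahrenheit-to-Celsius direction, but the first requirement also demands the inverse. A sketch of what that counterpart might look like, under the same absolute-zero guard; the class is repeated here so the snippet stands alone, and a standard IllegalArgumentException stands in for the article's InvalidTemperatureException:

```java
public class TemperatureConverter {
    public static final double ABSOLUTE_ZERO_C = -273.15d;

    // Inverse of fahrenheitToCelsius: F = C * 1.8 + 32.
    public static double celsiusToFahrenheit(double c) {
        if (c < ABSOLUTE_ZERO_C) {
            // The article uses a custom InvalidTemperatureException here;
            // a standard exception keeps this sketch self-contained.
            throw new IllegalArgumentException(
                String.format("Invalid temperature: %.2fC below absolute zero", c));
        }
        return c * 1.8d + 32d;
    }
}
```

The same conversion table used above can drive a table-driven test for this direction too, simply by iterating over the (C, F) pairs in the opposite order.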


Android application testing: TDD and the temperature converter

Packt
27 Jun 2011
7 min read
Getting started with TDD

Briefly, Test Driven Development is the strategy of writing tests alongside the development process. These test cases are written in advance of the code that is supposed to satisfy them. A single test is added, then the code needed to make this test compile, and finally the full set of test cases is run to verify the results. This contrasts with other approaches to the development process, where the tests are written at the end, when all the coding has been done.

Writing the tests in advance of the code that satisfies them has several advantages. First, the tests get written in one way or another, whereas if they are left till the end it is highly probable that they will never be written. Second, developers take more responsibility for the quality of their work. Design decisions are taken in single steps, and finally the code satisfying the tests is improved by refactoring it.

This UML activity diagram depicts Test Driven Development, to help us understand the process. The following sections explain the individual activities depicted in the diagram.

Writing a test case

We start our development process by writing a test case. This apparently simple process will put some machinery to work inside our heads. After all, it is not possible to write code, tested or not, if we don't have a clear understanding of the problem domain and its details. Usually, this step will bring you face to face with the aspects of the problem you don't understand, which you need to grasp if you want to model them and write the code.

Running all tests

Once the test is written, the obvious next step is to run it, together with the other tests we have written so far. Here, the importance of an IDE with built-in support for the testing environment is perhaps more evident than in other situations, and this can cut the development time by a good fraction. It is expected that, at first, our test fails as we still haven't written any code!
To be able to complete our test, we usually write additional code and take design decisions. The additional code written is the minimum possible to get our test to compile. Consider here that not compiling is failing. When we get the test to compile and run, if the test fails, we then write the minimum amount of code necessary to make it succeed. This may sound awkward at this point, but the code examples later in this article will help you understand the process.

Optionally, instead of running all tests again, you can run just the newly added test first to save some time, as running the tests on the emulator can be rather slow. Then run the whole test suite to verify that everything is still working properly. We don't want to add a new feature by breaking an existing one.

Refactoring the code

When the test succeeds, we refactor the code added to keep it tidy, clean, and minimal. We run all the tests again to verify that our refactoring has not broken anything, and if the tests are satisfied and no more refactoring is needed, we finish our task.

Running the tests after refactoring is an incredible safety net put in place by this methodology. If we made a mistake while refactoring an algorithm, extracting variables, introducing parameters, changing signatures, or whatever else the refactoring consists of, this testing infrastructure will detect the problem. Furthermore, if some refactoring or optimization is not valid for every possible case, we can verify it for every case used by the application and expressed as a test case.

What is the advantage?

Personally, the main advantage I've seen so far is that you reach your destination quickly, and it is much more difficult to get diverted into implementing options in your software that will never be used. This implementation of unneeded features is a waste of your precious development time and effort.
And as you may already know, judiciously administering these resources may be the difference between successfully reaching the end of the project or not.

Test Driven Development probably cannot be indiscriminately applied to every project. I think that, as with any other technique, you should use your judgment and expertise to recognize where it can be applied and where it cannot. But keep this in mind: there are no silver bullets.

The other advantage is that you always have a safety net for your changes. Every time you change a piece of code, you can be absolutely sure that other parts of the system are not affected, as long as there are tests verifying that the conditions haven't changed.

Understanding the testing requirements

To be able to write a test about any subject, we should first understand the subject under test. We also mentioned that one of the advantages is that you reach your destination quickly instead of circling around the requirements. Translating requirements into tests and cross-referencing them is perhaps the best way to understand the requirements, and to be sure that there is always an implementation and verification for all of them. Also, when the requirements change (something that is very frequent in software development projects), we can change the tests verifying those requirements and then change the implementation, to be sure that everything was correctly understood and mapped to code.

Creating a sample project—the Temperature Converter

Our examples will revolve around an extremely simple Android sample project. It doesn't try to show all the fancy Android features but focuses on testing, gradually building the application from the tests and applying the concepts learned before. Let's pretend that we have received a list of requirements to develop an Android temperature converter application. Though oversimplified, we will be following the steps you normally would to develop such an application.
However, in this case we will introduce the Test Driven Development techniques in the process.

The list of requirements

More often than not, the list of requirements is very vague and a high number of details are not fully covered. As an example, let's pretend that we receive this list from the project owner:

- The application converts temperatures from Celsius to Fahrenheit and vice versa
- The user interface presents two fields to enter the temperatures, one for Celsius and the other for Fahrenheit
- When one temperature is entered in one field, the other one is automatically updated with the conversion
- If there are errors, they should be displayed to the user, possibly using the same fields
- Some space in the user interface should be reserved for the on-screen keyboard, to ease the application's operation when several conversions are entered
- Entry fields should start empty
- Values entered are decimal values with two digits after the point
- Digits are right-aligned
- Last entered values should be retained even after the application is paused

User interface concept design

Let's assume that we receive this conceptual user interface design from the User Interface Design team:

Creating the projects

Our first step is to create the project. As we mentioned earlier, we are creating a main project and a test project. The following screenshot shows the creation of the TemperatureConverter project (all values are typical Android project values). When you are ready to continue, press the Next > button to create the related test project. The creation of the test project is displayed in the second screenshot; all values will be selected for you based on your previous entries.
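Translating a requirement into an executable check, as described under "Understanding the testing requirements", can be sketched in plain Java. Here the "two digits after the point" requirement becomes a check against a formatter; TemperatureFormat and its method name are illustrative, not part of the project's real code:

```java
import java.util.Locale;

public class TemperatureFormat {
    // "Values entered are decimal values with two digits after the point":
    // the requirement expressed as a formatting rule we can assert against.
    public static String twoDecimals(double value) {
        return String.format(Locale.US, "%.2f", value);
    }
}
```

A TDD run would start by asserting that twoDecimals(32.5) yields "32.50" before any Activity code exists, keeping the requirement cross-referenced with a test from the very first step.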