How-To Tutorials - Web Development

1797 Articles

Yii: Adding Users and User Management to Your Site

Packt
21 Feb 2013
9 min read
(For more resources related to this topic, see here.)

Mission Checklist

This project assumes that you have a web development environment prepared. The files for this project include a Yii project directory with a database schema. To prepare for the project, carry out the following steps, replacing the username lomeara with your own:

1. Copy the project files into your working directory:

```
cp -r ~/Downloads/project_files/"Chapter 3"/project_files ~/projects/ch3
```

2. Make the directories that Yii uses web-writeable:

```
cd ~/projects/ch3/
sudo chown -R lomeara:www-data protected/runtime assets protected/models protected/controllers protected/views
```

3. If you have a link for a previous project, remove it from the webroot directory:

```
rm /opt/lampp/htdocs/cddb
```

4. Create a link in the webroot directory to the copied directory:

```
cd /opt/lampp/htdocs
sudo ln -s ~/projects/ch3 cbdb
```

5. Import the project into NetBeans (remember to set the project URL to http://localhost/cbdb) and configure it for Yii development with PHPUnit.

6. Create a database named cbdb and load the database schema (~/projects/ch3/protected/data/schema.sql) into it.

7. If you are not using the XAMPP stack, or if your access to MySQL is password protected, review and update the Yii configuration file (in NetBeans it is ch3/Source Files/protected/config/main.php).

Adding a User Object with CRUD

As a foundation for our user management system, we will add a User table to the database and then use Gii to build a quick functional interface.

Engage Thrusters

Let's set the first building block by adding a User table containing the following information:

- A username
- A password hash
- A reference to a person entry for first name and last name

1. In NetBeans, open a SQL Command window for the cbdb database and run the following command:

```sql
CREATE TABLE `user` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `username` varchar(20) NOT NULL,
  `pwd_hash` char(34) NOT NULL,
  `person_id` int(10) unsigned NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `username` (`username`),
  CONSTRAINT `userperson_ibfk_2` FOREIGN KEY (`person_id`)
    REFERENCES `person` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB;
```

2. Open a web browser to the Gii URL http://localhost/cbdb/index.php/gii (the password configured in the sample code is yiibook) and use Gii to generate a model from the user table. Then, use Gii to generate CRUD from the user model.

3. Back in NetBeans, add a link to the user index in your site's logged-in menu (ch3 | Source Files | protected | views | layouts | main.php). It should look like this:

```php
} else {
    $this->widget('zii.widgets.CMenu',array(
        'activeCssClass' => 'active',
        'activateParents' => true,
        'items'=>array(
            array('label'=>'Home', 'url'=>array('/site/index')),
            array('label'=>'Comic Books', 'url'=>array('/book'),
                'items' => array(
                    array('label'=>'Publishers', 'url'=>array('/publisher')),
                )
            ),
            array('label'=>'Users', 'url'=>array('/user/index')),
            array('label'=>'Logout ('.Yii::app()->user->name.')',
                  'url'=>array('/site/logout'))
        ),
    ));
} ?>
```

4. Right-click on the project name, run the site, and log in with the default username and password (admin/admin). You will see a menu that includes a link named Users.

If you click on the Users link in the menu and then click on Create User, you will see a pretty awful-looking user-creation screen. We are going to fix that. First, we will update the user form to include fields for first name, last name, password, and repeat password. Edit ch3 | Source Files | protected | views | user | _form.php and add those fields.
Start by changing all instances of $model to $user. Then, add a call to errorSummary on the person data under the errorSummary call on user:

```php
<?php echo $form->errorSummary($user); ?>
<?php echo $form->errorSummary($person); ?>
```

Add rows for first name and last name at the beginning of the form:

```php
<div class="row">
    <?php echo $form->labelEx($person,'fname'); ?>
    <?php echo $form->textField($person,'fname',array('size'=>20,'maxlength'=>20)); ?>
    <?php echo $form->error($person,'fname'); ?>
</div>
<div class="row">
    <?php echo $form->labelEx($person,'lname'); ?>
    <?php echo $form->textField($person,'lname',array('size'=>20,'maxlength'=>20)); ?>
    <?php echo $form->error($person,'lname'); ?>
</div>
```

Replace the pwd_hash row with the following two rows:

```php
<div class="row">
    <?php echo $form->labelEx($user,'password'); ?>
    <?php echo $form->passwordField($user,'password',array('size'=>20,'maxlength'=>64)); ?>
    <?php echo $form->error($user,'password'); ?>
</div>
<div class="row">
    <?php echo $form->labelEx($user,'password_repeat'); ?>
    <?php echo $form->passwordField($user,'password_repeat',array('size'=>20,'maxlength'=>64)); ?>
    <?php echo $form->error($user,'password_repeat'); ?>
</div>
```

Finally, remove the row for person_id. These changes are going to completely break the User create/update form for the time being.

We want to capture the password data and ultimately make a hash out of it to store securely in the database. To collect the form inputs, we will add password fields to the User model that do not correspond to values in the database. Edit the User model ch3 | Source Files | protected | models | User.php and add two public variables to the class:

```php
class User extends CActiveRecord
{
    public $password;
    public $password_repeat;
```

In the same User model file, modify the attributeLabels function to include labels for the new password fields:

```php
public function attributeLabels()
{
    return array(
        'id' => 'ID',
        'username' => 'Username',
        'password' => 'Password',
        'password_repeat' => 'Password Repeat'
    );
}
```

In the same User model file, update the rules function with the following rules:

- Require username
- Limit the length of username and password
- Compare password with password repeat
- Accept only safe values for username and password

We will come back to this and improve it, but for now, it should look like the following:

```php
public function rules()
{
    // NOTE: you should only define rules for those attributes
    // that will receive user inputs.
    return array(
        array('username', 'required'),
        array('username', 'length', 'max'=>20),
        array('password', 'length', 'max'=>32),
        array('password', 'compare'),
        array('password_repeat', 'safe'),
    );
}
```

In order to store the user's first and last name, we must change the Create action in the User controller ch3 | Source Files | protected | controllers | UserController.php to create a Person object in addition to a User object. Change the variable name $model to $user, and add an instance of the Person model:

```php
public function actionCreate()
{
    $user=new User;
    $person=new Person;

    // Uncomment the following line if AJAX validation is needed
    // $this->performAjaxValidation($user);

    if(isset($_POST['User']))
    {
        $user->attributes=$_POST['User'];
        if($user->save())
            $this->redirect(array('view','id'=>$user->id));
    }

    $this->render('create',array(
        'user'=>$user,
        'person'=>$person,
    ));
}
```

Don't reload the create user page yet. First, update the last line of the User Create view ch3 | Source Files | protected | views | user | create.php to send a User object and a Person object:
```php
<?php echo $this->renderPartial('_form', array('user'=>$user, 'person'=>$person)); ?>
```

Make a change to the attributeLabels function in the Person model (ch3 | Source Files | protected | models | Person.php) to display clearer labels for first name and last name:

```php
public function attributeLabels()
{
    return array(
        'id' => 'ID',
        'fname' => 'First Name',
        'lname' => 'Last Name',
    );
}
```

The resulting user form looks pretty good, but if you try to submit it, you will receive an error. To fix this, we will change the User Create action in the User controller ch3 | Source Files | protected | controllers | UserController.php to check and save both User and Person data:

```php
if(isset($_POST['User'], $_POST['Person']))
{
    $person->attributes=$_POST['Person'];
    if($person->save())
    {
        $user->attributes=$_POST['User'];
        $user->person_id = $person->id;
        if($user->save())
            $this->redirect(array('view','id'=>$user->id));
    }
}
```

Great! Now you can create users, but if you try to edit a user entry, you see another error. This fix will require a couple more changes. First, in the User controller ch3 | Source Files | protected | controllers | UserController.php, change the loadModel function to load the user model with its related person information:

```php
$model=User::model()
    ->with('person')
    ->findByPk((int)$id);
```

Next, in the same file, change the actionUpdate function. Add a call to save the person data if the user save succeeds:

```php
if($model->save())
{
    $model->person->attributes=$_POST['Person'];
    $model->person->save();
    $this->redirect(array('view','id'=>$model->id));
}
```

Then, in the user update view ch3 | Source Files | protected | views | user | update.php, add the person information to the form render:

```php
<?php echo $this->renderPartial('_form', array('user'=>$model, 'person'=>$model->person)); ?>
```

One more piece of user management housekeeping: try deleting a user, then look in the database for the user and the person info. Oops. It didn't clean up after itself, did it? Update the User controller ch3 | Source Files | protected | controllers | UserController.php once again, and change the call to delete in the User delete action:

```php
$this->loadModel($id)->person->delete();
```

Objective Complete - Mini Debriefing

We have added a new object, User, to our site, and associated it with the Person object to capture the user's first and last name. Gii helped us get the basic structure of our user management function in place, and then we altered the model, view, and controller to bring the pieces together.
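One loose end the walkthrough leaves open is how the collected password actually becomes the value stored in pwd_hash. A common way to close that loop in Yii 1.x is to override beforeSave() in the User model. The following is a sketch only, not code from the book; the salting scheme is an assumption, chosen because PHP's crypt() with an MD5-style salt produces a 34-character string, which matches the char(34) pwd_hash column defined earlier:

```php
// In protected/models/User.php -- hypothetical sketch, not from the original text
public function beforeSave()
{
    if (parent::beforeSave()) {
        // Only re-hash when a password was actually entered on the form
        if ($this->password !== null && $this->password !== '') {
            // crypt() with a '$1$' (MD5) salt yields a 34-character string,
            // matching the char(34) pwd_hash column created above
            $salt = '$1$' . substr(md5(uniqid(mt_rand(), true)), 0, 8) . '$';
            $this->pwd_hash = crypt($this->password, $salt);
        }
        return true;
    }
    return false;
}
```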


Apache Solr Configuration

Packt
19 Feb 2013
17 min read
(For more resources related to this topic, see here.)

During the writing of this article, I used Solr version 4.0 and Jetty version 8.1.5. If another version of Solr is mandatory for a feature to run, it will be mentioned. If you don't have any experience with Apache Solr, please refer to the Apache Solr tutorial, which can be found at http://lucene.apache.org/solr/tutorial.html.

Running Solr on Jetty

The simplest way to run Apache Solr on a Jetty servlet container is to run the provided example configuration based on embedded Jetty. But that's not the case here. In this recipe, I would like to show you how to configure and run Solr on a standalone Jetty container.

Getting ready

First of all, you need to download the Jetty servlet container for your platform. You can get your download package from an automatic installer (such as apt-get), or you can download it yourself from http://jetty.codehaus.org/jetty/

How to do it...

The first thing is to install the Jetty servlet container, which is beyond the scope of this article, so we will assume that you have Jetty installed in the /usr/share/jetty directory or that you copied the Jetty files to that directory.

Let's start by copying the solr.war file to the webapps directory of the Jetty installation (so the whole path would be /usr/share/jetty/webapps). In addition, we need to create a temporary directory in the Jetty installation, so let's create the temp directory in the Jetty installation directory.

Next, we need to copy and adjust the solr.xml file from the context directory of the Solr example distribution to the context directory of the Jetty installation. The final file contents should look like the following code:

```xml
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN"
  "http://www.eclipse.org/jetty/configure.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <Set name="contextPath">/solr</Set>
  <Set name="war"><SystemProperty name="jetty.home"/>/webapps/solr.war</Set>
  <Set name="defaultsDescriptor"><SystemProperty name="jetty.home"/>/etc/webdefault.xml</Set>
  <Set name="tempDirectory"><Property name="jetty.home" default="."/>/temp</Set>
</Configure>
```

Downloading the example code: You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Now we need to copy the jetty.xml, webdefault.xml, and logging.properties files from the etc directory of the Solr distribution to the configuration directory of Jetty, in our case the /usr/share/jetty/etc directory.

The next step is to copy the Solr configuration files to the appropriate directory. I'm talking about files such as schema.xml, solrconfig.xml, solr.xml, and so on. Those files should be in the directory specified by the solr.solr.home system variable (in my case this was the /usr/share/solr directory).
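Taken together, the copy steps above might look like the following shell session. This is a sketch with assumed paths (a Solr 4.0 example distribution unpacked in ~/solr-4.0.0, Jetty's context files in /usr/share/jetty/contexts); adjust both to your layout, as file locations can vary between releases:

```
# assumed: Solr example distribution in ~/solr-4.0.0, Jetty in /usr/share/jetty
cp ~/solr-4.0.0/example/webapps/solr.war /usr/share/jetty/webapps/
mkdir /usr/share/jetty/temp
# copy the context file, then edit it to match the listing above
cp ~/solr-4.0.0/example/contexts/solr.xml /usr/share/jetty/contexts/
# configuration files from the Solr distribution's etc directory
cp ~/solr-4.0.0/example/etc/jetty.xml \
   ~/solr-4.0.0/example/etc/webdefault.xml \
   ~/solr-4.0.0/example/etc/logging.properties \
   /usr/share/jetty/etc/
```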
Please remember to preserve the directory structure you'll see in the example deployment. For example, the /usr/share/solr directory should contain the solr.xml file (and, in addition, zoo.cfg in case you want to use SolrCloud) with contents like so:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="true">
  <cores adminPath="/admin/cores" defaultCoreName="collection1">
    <core name="collection1" instanceDir="collection1" />
  </cores>
</solr>
```

All the other configuration files should go to the /usr/share/solr/collection1/conf directory (place the schema.xml and solrconfig.xml files there along with any additional configuration files your deployment needs). Your cores may have names other than the default collection1, so please be aware of that.

The last configuration step is to update the /etc/default/jetty file and add -Dsolr.solr.home=/usr/share/solr to the JAVA_OPTIONS variable of that file. The whole line with that variable could look like the following:

```
JAVA_OPTIONS="-Xmx256m -Djava.awt.headless=true -Dsolr.solr.home=/usr/share/solr/"
```

If you didn't install Jetty with apt-get or similar software, you may not have the /etc/default/jetty file. In that case, add the -Dsolr.solr.home=/usr/share/solr parameter to the Jetty startup.

We can now run Jetty to see if everything is OK. To start Jetty that was installed, for example, using the apt-get command, use the following command:

```
/etc/init.d/jetty start
```

You can also run Jetty with a java command. Run the following command in the Jetty installation directory:

```
java -Dsolr.solr.home=/usr/share/solr -jar start.jar
```

If there were no exceptions during startup, we have a running Jetty with Solr deployed and configured. To check if Solr is running, try going to http://localhost:8983/solr/ with your web browser. You should see the Solr front page with your cores, or a single core, mentioned. Congratulations! You just successfully installed, configured, and ran the Jetty servlet container with Solr deployed.

How it works...

For the purpose of this recipe, I assumed that we needed a single-core installation with only schema.xml and solrconfig.xml configuration files. A multicore installation is very similar – it differs only in terms of the Solr configuration files.

The first thing we did was copy the solr.war file and create the temp directory. The WAR file is the actual Solr web application. The temp directory will be used by Jetty to unpack the WAR file.

The solr.xml file we placed in the context directory enables Jetty to define the context for the Solr web application. As you can see in its contents, we set the context to be /solr, so our Solr application will be available under http://localhost:8983/solr/. We also specified where Jetty should look for the WAR file (the war property), where the web application descriptor file is (the defaultsDescriptor property), and finally where the temporary directory will be located (the tempDirectory property).

The next step is to provide configuration files for the Solr web application. Those files should be in the directory specified by the solr.solr.home system variable. I decided to use the /usr/share/solr directory to ensure that I'll be able to update Jetty without needing to override or delete the Solr configuration files. When copying the Solr configuration files, you should remember to include all the files and the exact directory structure that Solr needs.
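The resulting layout under solr.solr.home would therefore look roughly like this (a single-core sketch based on the recipe; your core names and extra config files may differ):

```
/usr/share/solr/
├── solr.xml            (core definitions, shown above)
├── zoo.cfg             (only if you use SolrCloud)
└── collection1/
    └── conf/
        ├── schema.xml
        ├── solrconfig.xml
        └── ...         (any additional configuration files)
```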
So, in the directory specified by the solr.solr.home variable, the solr.xml file should be available – the one that describes the cores of your system. The solr.xml file is pretty simple – its root element is called solr. Inside it there should be a cores tag (with the adminPath attribute set to the address where Solr's cores administration API is available, and the defaultCoreName attribute that says which core is the default one). The cores tag is a parent for the core definitions – each core should have its own core tag with a name attribute specifying the core name and an instanceDir attribute specifying the directory where the core-specific files will be available (such as the conf directory).

If you installed Jetty with the apt-get command or similar, you will need to update the /etc/default/jetty file to include the solr.solr.home variable for Solr to be able to see its configuration directory.

After all those steps we are ready to launch Jetty. If you installed Jetty with apt-get or similar software, you can run Jetty with the first command shown in the example. Otherwise, you can run Jetty with a java command from the Jetty installation directory. After opening the example address in your web browser, you should see the Solr front page with a single core. Congratulations! You just successfully configured and ran the Jetty servlet container with Solr deployed.

There's more...

There are a few things you can do to counter some common problems when running Solr within the Jetty servlet container. Here are the ones I encountered most often during my work.

I want Jetty to run on a different port

Sometimes it's necessary to run Jetty on a port other than the default one. We have two ways to achieve that:

- Adding an additional startup parameter, jetty.port. The startup command would then look like the following:

```
java -Djetty.port=9999 -jar start.jar
```

- Changing the jetty.xml file – to do that you need to change the following line:

```xml
<Set name="port"><SystemProperty name="jetty.port" default="8983"/></Set>
```

to:

```xml
<Set name="port"><SystemProperty name="jetty.port" default="9999"/></Set>
```

Buffer size is too small

Buffer overflow is a common problem when our queries get too long and too complex – for example, when we use many logical operators or long phrases. When the standard header buffer is not enough, you can resize it to meet your needs. To do that, add the following line to the Jetty connector in the jetty.xml file. Of course, the value shown in the example can be changed to the one that you need:

```xml
<Set name="headerBufferSize">32768</Set>
```

After adding the value, the connector definition should look more or less like the following snippet:

```xml
<Call name="addConnector">
  <Arg>
    <New class="org.mortbay.jetty.bio.SocketConnector">
      <Set name="port"><SystemProperty name="jetty.port" default="8080"/></Set>
      <Set name="maxIdleTime">50000</Set>
      <Set name="lowResourceMaxIdleTime">1500</Set>
      <Set name="headerBufferSize">32768</Set>
    </New>
  </Arg>
</Call>
```

Running Solr on Apache Tomcat

Sometimes you need to choose a servlet container other than Jetty. Maybe your client has other applications running on another servlet container, or maybe you just don't like Jetty. Whatever your requirements are that put Jetty out of scope, the first thing that comes to mind is a popular and powerful servlet container – Apache Tomcat. This recipe will give you an idea of how to properly set up and run Solr in the Apache Tomcat environment.
Getting ready

First of all, we need an Apache Tomcat servlet container. It can be found at the Apache Tomcat website – http://tomcat.apache.org. I concentrated on Tomcat version 7.x because at the time of writing this book it was mature and stable. The version I used while writing this recipe was Apache Tomcat 7.0.29, the newest one at the time.

How to do it...

To run Solr on Apache Tomcat we need to follow these simple steps:

1. Firstly, you need to install Apache Tomcat. The Tomcat installation is beyond the scope of this book, so we will assume that you have already installed this servlet container in the directory specified by the $TOMCAT_HOME system variable.

2. The second step is preparing the Apache Tomcat configuration files. To do that, we need to add the following attribute to the connector definition in the server.xml configuration file:

```
URIEncoding="UTF-8"
```

The portion of the modified server.xml file should look like the following code snippet:

```xml
<Connector port="8080" protocol="HTTP/1.1"
   connectionTimeout="20000"
   redirectPort="8443"
   URIEncoding="UTF-8" />
```

3. The third step is to create a proper context file. To do that, create a solr.xml file in the $TOMCAT_HOME/conf/Catalina/localhost directory. The contents of the file should look like the following code:

```xml
<Context path="/solr" docBase="/usr/share/tomcat/webapps/solr.war"
   debug="0" crossContext="true">
  <Environment name="solr/home" type="java.lang.String"
     value="/usr/share/solr/" override="true"/>
</Context>
```

4. The next thing is the Solr deployment. For that we need the apache-solr-4.0.0.war file, which contains the necessary files and libraries to run Solr; it is to be copied to the Tomcat webapps directory and renamed solr.war.

5. The last thing we need to do is add the Solr configuration files. The files you need to copy are files such as schema.xml, solrconfig.xml, and so on. Those files should be placed in the directory specified by the solr/home variable (in our case /usr/share/solr/). Please don't forget that you need to ensure the proper directory structure. If you are not familiar with the Solr directory structure, please take a look at the example deployment that is provided with the standard Solr package. Please remember to preserve the directory structure you'll see in the example deployment. For example, the /usr/share/solr directory should contain the solr.xml file (and, in addition, zoo.cfg in case you want to use SolrCloud) with contents like so:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="true">
  <cores adminPath="/admin/cores" defaultCoreName="collection1">
    <core name="collection1" instanceDir="collection1" />
  </cores>
</solr>
```

All the other configuration files should go to the /usr/share/solr/collection1/conf directory (place the schema.xml and solrconfig.xml files there along with any additional configuration files your deployment needs). Your cores may have names other than the default collection1, so please be aware of that.

6. Now we can start the servlet container by running the following command:

```
bin/catalina.sh start
```

In the log file you should see a message like this:

```
Info: Server startup in 3097 ms
```

7. To ensure that Solr is running properly, you can open a browser and point it to the address where Solr should be visible, such as http://localhost:8080/solr/

If you see the page with links to the administration pages of each of the cores defined, your Solr is up and running.
How it works...

Let's start from the second step, as the installation part is beyond the scope of this book. As you probably know, Solr uses UTF-8 file encoding. That means we need to ensure that Apache Tomcat is informed that all requests and responses should use that encoding. To do that, we modified the server.xml file in the way shown in the example.

The Catalina context file (called solr.xml in our example) says that our Solr application will be available under the /solr context (the path attribute). We also specified the WAR file location (the docBase attribute), said that we are not using debug (the debug attribute), and allowed the Solr application to access other contexts (the crossContext attribute). The last thing is to specify the directory where Solr should look for the configuration files. We do that by adding the solr/home environment variable with the value attribute set to the path of the directory where we have put the configuration files.

The solr.xml file is pretty simple – its root element is called solr. Inside it there should be the cores tag (with the adminPath attribute set to the address where the Solr cores administration API is available and the defaultCoreName attribute describing which core is the default one). The cores tag is a parent for the core definitions – each core should have its own core tag with a name attribute specifying the core name and an instanceDir attribute specifying the directory where the core-specific files will be available (such as the conf directory).

The shell command shown starts Apache Tomcat. There are some other options of the catalina.sh (or catalina.bat) script; they are as follows:

- stop: This stops Apache Tomcat
- restart: This restarts Apache Tomcat
- debug: This starts Apache Tomcat in debug mode
- run: This runs Apache Tomcat in the current window, so you can see the output on the console from which you ran Tomcat

After opening the example address in the web browser, you should see the Solr front page with a core (or cores, if you have a multicore deployment). Congratulations! You just successfully configured and ran the Apache Tomcat servlet container with Solr deployed.

There's more...

There are some other tasks that address common problems when running Solr on Apache Tomcat.

Changing the port on which we see Solr running on Tomcat

Sometimes it is necessary to run Apache Tomcat on a port other than 8080, the default one. To do that, you need to modify the port attribute of the connector definition in the server.xml file located in the $TOMCAT_HOME/conf directory. If you would like your Tomcat to run on port 9999, the definition should look like the following code snippet:

```xml
<Connector port="9999" protocol="HTTP/1.1"
   connectionTimeout="20000"
   redirectPort="8443"
   URIEncoding="UTF-8" />
```

while the original definition looks like the following snippet:

```xml
<Connector port="8080" protocol="HTTP/1.1"
   connectionTimeout="20000"
   redirectPort="8443"
   URIEncoding="UTF-8" />
```

Installing a standalone ZooKeeper

You may know that in order to run SolrCloud – the distributed Solr installation – you need to have Apache ZooKeeper installed. ZooKeeper is a centralized service for maintaining configuration, naming, and distributed synchronization. SolrCloud uses ZooKeeper to synchronize configuration and cluster state (such as elected shard leaders), and that's why it is crucial to have a highly available and fault-tolerant ZooKeeper installation.
If you have a single ZooKeeper instance and it fails, then your SolrCloud cluster will crash too. So, this recipe will show you how to install ZooKeeper so that it's not a single point of failure in your cluster configuration.

Getting ready

The installation instructions in this recipe cover ZooKeeper version 3.4.3, but they should be usable for any minor release of Apache ZooKeeper. To download ZooKeeper, please go to http://zookeeper.apache.org/releases.html. This recipe will show you how to install ZooKeeper in a Linux-based environment. You also need Java installed.

How to do it...

Let's assume that we have decided to install ZooKeeper in the /usr/share/zookeeper directory of our server, and that we want three servers (with IP addresses 192.168.1.1, 192.168.1.2, and 192.168.1.3) to host the distributed ZooKeeper installation.

1. After downloading the ZooKeeper archive, we create the necessary directory:

```
sudo mkdir /usr/share/zookeeper
```

2. Then we unpack the downloaded archive to the newly created directory. We do that on all three servers.

3. Next we need to change our ZooKeeper configuration file and specify the servers that will form the ZooKeeper quorum, so we edit the /usr/share/zookeeper/conf/zoo.cfg file and add the following entries:

```
clientPort=2181
dataDir=/usr/share/zookeeper/data
tickTime=2000
initLimit=10
syncLimit=5
server.1=192.168.1.1:2888:3888
server.2=192.168.1.2:2888:3888
server.3=192.168.1.3:2888:3888
```

4. Now we can start the ZooKeeper servers with the following command:

```
/usr/share/zookeeper/bin/zkServer.sh start
```

5. If everything went well, you should see something like the following:

```
JMX enabled by default
Using config: /usr/share/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
```

And that's all. Of course, you can also add the ZooKeeper service to start automatically during your operating system startup, but that's beyond the scope of the recipe and the book itself.

How it works...

Let's skip the first part, because creating the directory and unpacking the ZooKeeper server there is quite simple. What I would like to concentrate on are the configuration values of the ZooKeeper server.

The clientPort property specifies the port on which our SolrCloud servers should connect to ZooKeeper. The dataDir property specifies the directory where ZooKeeper will hold its data. So far, so good, right? Now the more advanced properties: the tickTime property, specified in milliseconds, is the basic time unit for ZooKeeper. The initLimit property specifies how many ticks the initial synchronization phase can take. Finally, the syncLimit property specifies how many ticks can pass between sending a request and receiving an acknowledgement.

There are also three additional properties: server.1, server.2, and server.3. These define the addresses of the ZooKeeper instances that will form the quorum. Each value consists of three parts separated by colons: the first part is the IP address of the ZooKeeper server, and the second and third parts are the ports used by the ZooKeeper instances to communicate with each other.
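One detail worth adding to the recipe: when several server.N entries are listed in zoo.cfg, each ZooKeeper instance must also be told which entry it is. This is done with a myid file in the dataDir, containing just the server's number. A sketch for the first server, using the paths assumed above (repeat with 2 and 3 on the other two machines before starting the servers):

```
# on 192.168.1.1; use "2" on 192.168.1.2 and "3" on 192.168.1.3
mkdir -p /usr/share/zookeeper/data
echo "1" > /usr/share/zookeeper/data/myid
```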


Rich Internet Application (RIA) – Canvas

Packt
19 Feb 2013
6 min read
(For more resources related to this topic, see here.)

RIA — Canvas (Become an expert)

If you started your career in web design or development in the late 90s to early 2000s, there is a good chance that at some point you were asked to build a zany, cool, and bouncy website using (then) Macromedia Flash. After it was acquired by Adobe in 2005, Flash transformed from a stage-based, procedural-script-running, hard-coded, embedded object into a platform for Rich Internet Applications (RIA). With the arrival of Adobe Flex as an SDK for Flash's ActionScript 3.0, the company tried to lure more developers into Flash development. Later, Adobe donated Flex to the Apache foundation. All this, and yet no browser vendor ever shipped the Flash plugin integrated with their product. Flash-based applications took time to develop, never had any open source frameworks supporting the final product, and suffered many memory-hogging security threats. The biggest blow to the technology came when Apple decided not to support Flash on any of the iPad, iPhone, or iPod devices. The message was loud and clear: the web needed a new platform to support Rich Internet Applications, one that could be seamlessly integrated in browsers without any third-party plugin requirement at the visitor's end. Presenting HTML5 Canvas.

Getting ready

The HTML5 canvas element provides a canvas (surprise!) with a specified height and width inside a web page, which can be manipulated with JavaScript to generate graphics and Rich Internet Applications.

How to do it...

It is just the same as it was with video or audio:

```html
<canvas id="TestCanvas" width="300" height="300">
  Your browser does not support the Canvas element from HTML5.
</canvas>
```

The preceding syntax gives us a blank block element with the specified height and width, which can now be identified from JavaScript by the ID TestCanvas:

```html
<script>
  var test = document.getElementById("TestCanvas");
  var col1 = test.getContext("2d");
  col1.fillStyle = "#808";
  col1.fillRect(0, 0, 20, 80);
</script>
```

A variable named test is defined with the method document.getElementById() to identify the canvas on the web page. The getContext object, a built-in HTML5 object, is assigned to another variable called col1. The value 2d provides properties and methods for drawing paths, boxes, circles, text, images, and more. The fillRect(x,y,width,height) method takes four parameters to draw a rectangle at the x and y coordinates. Similarly, the fillStyle property defines the fill color of the drawing, and must be set before the shape is drawn.

The origin of the x and y coordinates lies at the top-left corner of the canvas, unlike the graph paper most of us are used to, where it lies in the bottom-left corner.

Extending the graph to multiple columns with additional getContext variables can be done as follows:

```html
<script>
  var test = document.getElementById("TestCanvas");
  var col1 = test.getContext("2d");
  col1.fillStyle = "#808";
  col1.fillRect(10, 0, 20, 80);
  var col2 = test.getContext("2d");
  col2.fillStyle = "#808";
  col2.fillRect(40, 0, 20, 100);
  var col3 = test.getContext("2d");
  col3.fillStyle = "#808";
  col3.fillRect(70, 0, 20, 120);
</script>
```

The getContext variables can be used with different drawing methods as well. To draw a line we use the moveTo(x,y) and lineTo(x,y) methods:

```js
line.moveTo(10, 10);
line.lineTo(150, 250);
line.stroke();
```

The moveTo() method defines the starting point of the line and the lineTo() method defines the end point on the x and y coordinates.
The stroke() method, without any value assigned to it, connects the two assigned points with a line stroke. stroke() and fill() are the "ink" methods used to make the graphic visible.

To draw a circle we use the arc(x,y,r,start,stop) method:

```js
circle.beginPath();
circle.arc(150, 150, 80, 0, 2 * Math.PI);
circle.fill();
```

With the arc() method, we must use either the fill() method or the stroke() method for a visible area.

For further exploration, here are a few more canvas members for text that can be tried out:

- font: This specifies font properties for text
- fillText(text,x,y): This draws normal text on the canvas
- strokeText(text,x,y): This draws stroked text without any fill color

Here are the syntaxes for the preceding properties:

```js
text.font = "30px Arial";
text.fillText("HTML5 Canvas", 10, 50);
text.strokeText("HTML5 Canvas Text", 10, 100);
```

And for the last example, we will draw a raster image into the canvas, selected by its ID:

```js
var img = document.getElementById("canvas-bg");
draw.drawImage(img, 10, 10);
```

As with the ID for the canvas, the image ID is selected with the document.getElementById() method, and then the image can be used as a background for the selected canvas. The image with the ID canvas-bg can be placed in a hidden div tag and later used as a background for any graph, chart, or other graphic. One of the most practical applications of text and image drawing on a canvas is the customization of a product with a label image and text over it.

How it works...

There are many places where Canvas may be implemented in regular web development practice. It can be used to generate real-time charts, product customization applications, and more complex or simpler applications, depending on the requirement. Canvas is an HTML5 element, and the key (for Canvas) always remains with the JavaScript driving it. It is supported by all browsers apart from IE8 and below.

There's more...

It always helps when a developer knows about the resources available at their disposal.

Open source JS frameworks for Canvas

There are many open source JavaScript frameworks and libraries available for easy development of graphics with Canvas. A few noteworthy ones are KineticJS and GoJS. Another framework is ThreeJS, which uses WebGL and allows 3D rendering for your web graphics.
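As a closing sketch, here are the fragments above combined into one self-contained page. The article's snippets each assume their own context variable (col1, line, circle, text); this version reuses a single context, which works because getContext("2d") always returns the same object for a given canvas. Coordinates and colors are taken from the earlier examples:

```html
<canvas id="TestCanvas" width="300" height="300">
  Your browser does not support the Canvas element from HTML5.
</canvas>
<script>
  var test = document.getElementById("TestCanvas");
  var ctx = test.getContext("2d");

  // The three columns of the bar example
  ctx.fillStyle = "#808";
  ctx.fillRect(10, 0, 20, 80);
  ctx.fillRect(40, 0, 20, 100);
  ctx.fillRect(70, 0, 20, 120);

  // A line from (10,10) to (150,250)
  ctx.beginPath();
  ctx.moveTo(10, 10);
  ctx.lineTo(150, 250);
  ctx.stroke();

  // A circle centered at (150,150) with radius 80
  ctx.beginPath();
  ctx.arc(150, 150, 80, 0, 2 * Math.PI);
  ctx.fill();

  // Filled and stroked text
  ctx.font = "30px Arial";
  ctx.fillText("HTML5 Canvas", 10, 50);
  ctx.strokeText("HTML5 Canvas Text", 10, 100);
</script>
```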
Summary

This article discussed the Rich Internet Application (RIA) platform with HTML5 and CSS3. We also saw how Canvas can be used in regular web development practice.

Resources for Article:

Further resources on this subject:
- Building HTML5 Pages from Scratch [Article]
- HTML5: Generic Containers [Article]
- HTML5: Developing Rich Media Applications using Canvas [Article]


An Introduction to Risk Analysis

Packt
18 Feb 2013
21 min read
(For more resources related to this topic, see here.)

Risk analysis

First, we must understand what risk is and how it is calculated, and then implement a solution to mitigate or reduce the calculated risk. At this point in the process of developing agile security architecture, we have already defined our data. The following sections assume we know what the data is, just not the true impact to the enterprise if a threat is realized.

What is risk analysis?

Simply stated, risk analysis is the process of assessing the components of risk (threats, impact, and probability) as they relate to an asset, in our case enterprise data. To ascertain risk, the probability of impact to enterprise data must first be calculated. A simple risk analysis output may be the decision to spend capital to protect an asset, based on the value of the asset and the scope of impact if the risk is not mitigated. This is the most general form of risk analysis, and there are several methods that can be applied to produce a meaningful output.

Risk analysis is directly impacted by the maturity of the organization, in terms of being able to show value to the enterprise as a whole and understanding the applied risk methodology. If the enterprise does not have a formal risk analysis capability, it will be difficult for the security team to use this method to properly implement security architecture for enterprise initiatives. Without this capability, the enterprise will either spend on the products with the best marketing, or not spend at all. Let's take a closer look at the risk analysis components and figure out where useful analysis data can be obtained.

Assessing threats

First, we must define what a threat is in order to identify probable threats. It may be difficult to determine threats to enterprise data if this analysis has never been completed. A threat is anything that can act negatively towards enterprise assets: a person, a virus, malware, or a natural disaster. Because the scope of threats is broad, actions may be purposeful or unintentional in nature, adding to the unpredictability of impact. Once a threat is defined, the attributes of threats must be identified and documented. The documentation of threats should include the type of threat, identified threat groupings, motivations if any, and methods of action.

To understand the threats pertinent to the enterprise, researching past events may be helpful. Historically, it has been challenging to get realistic breach data, but better reporting of post-breach findings continues to reduce the uncertainty of analysis. Another way to get data is to leverage the security technologies already implemented, to build a realistic perspective of threats. The following are a few sample questions to guide you in the discovery of threats:

- What is being detected by the existing infrastructure?
- What are others in the same industry observing?
- What post-breach data is available in the same industry vertical?
- Who would want access to this data?
- What would motivate a person to attempt unauthorized access to the data?
Typical motivations include:

- Data theft
- Destruction
- Notoriety
- Hacktivism
- Retaliation

A sample table of data type, threat, and motivation is shown as follows:

| Data | Threat | Motivation |
| --- | --- | --- |
| Credit card numbers | Hacker | Theft, Cybercrime |
| Trade secrets | Competitor | Competitive advantage |
| Personally Identifiable Information (PII) | Disgruntled employee | Retaliation, Destruction |
| Company confidential documents | Accidental leak | None |
| Client list | Natural disaster | None |

This should be developed in as much detail as possible to form a realistic view of threats to the enterprise. There may also be several variations of threats and motivations for threat action on enterprise data. For example, a competitor may access trade secrets for competitive advantage, or a hacker may act as part of hacktivism to bring negative press to the enterprise. The more you can elaborate on the possible threats and motivations that exist, the better you will be able to reduce the list to probable threats by challenging the data you have gathered. It is important to continually challenge the logic used, to maintain the most realistic perspective.

Assessing impact

Now that the probable threats have been identified, what kind of damage can be done, or what negative impact can be enacted upon the enterprise and its data? Impact is the outcome of threats acting against the enterprise. This could be a denial-of-service state, where the agent, a hacker, uses a tool to starve the enterprise's Internet web servers of resources, denying service to legitimate users. Another impact could be the loss of customer credit cards, resulting in online fraud, reputation loss, and countless dollars in cleanup and remediation efforts.

There are immediate impacts and residual impacts. Immediate impacts are rather easy to determine because, typically, this is what we see in the news if the issue is big enough. Hopefully, the impact data does not come from first-hand experience, but if it does, executives should take action and learn from their mistakes. If there is no real-life experience with the impact, researching breach data will help, using Internet sites such as DataLossDB (http://datalossdb.org). Also, understanding the value of the data to the enterprise and its customers will aid in impact calculation. I think the latter impact analysis is more useful, but if the enterprise is unsure, relying on breach data may be the only option. The following are a few sample discovery questions for business impact analysis:

- How is the enterprise affected by threat actions?
- Will we go out of business?
- Will we lose market share?
- If the data is deleted or manipulated, can it be recovered or restored?
- If the building is destroyed, do we have disaster recovery and business continuity capabilities?

To get a more accurate assessment of the probable impact or total cost to the enterprise, map out what data is most desirable to steal, destroy, or manipulate. Align the identified threats to the identified data, and apply an impact level to the data, indicating whether the enterprise would suffer critical to minor loss. These should be as accurate as possible. Work the scenarios out on paper and base the impact analysis on the outcome of the exercises. The following is a sample table presenting the identification and assessment of impact based on threat, for a retailer. This is generally called a business impact analysis.
| Data | Threat | Impact |
| --- | --- | --- |
| Credit card numbers | Hacker | Critical |
| Trade secrets | Competitor | Medium |
| PII | Disgruntled employee | High |
| Company confidential documents | Accidental leak | Low |
| Client list | Natural disaster | Medium |

The enterprise's industry vertical may affect the impact analysis. For instance, a retailer may suffer greater impact if credit card numbers are stolen than if its client list is stolen. Both scenarios have impact, but one may warrant greater protection and more restricted access to limit the scope of impact and reduce immediate and residual loss. Business impact should be measured in terms of how the threat actions affect the business overall: is it an annoyance, or does it mean the business can no longer function? Natural disasters should also be accounted for when assessing enterprise risk.

Assessing probability

Now that all conceivable threats have been identified, along with the business impact for each scenario, how do we really determine risk? Shouldn't risk be based on how likely the threat is to take action, succeed, and cause an impact? Yes! The threat can be the most perilous thing imagined, but if threat actions may only occur once in three thousand years, investment in protecting against the threat may not be warranted, at least in the near term.

Probability data is as difficult to find as threat data, if not more so. However, this calculation has the most influence on the derived risk. If the identified impact is expected to happen twice a year and the business impact is critical, perhaps security budget should be allocated to security mechanisms that mitigate or reduce the impact. The risk of the latter scenario would be higher because it is more probable; not merely possible, but probable. Anything is possible.

I have heard an analogy that makes the point. In the game of Russian roulette, a semi-automatic pistol either has a bullet in the chamber or it does not; this is possibility. With a revolver and a quick spin of the cylinder, you now have a 1 in 6 chance that a bullet will be fired when the firing pin strikes forward. This is oversimplified to illustrate possibility versus probability. There are several variables in the example that could affect the outcome, such as a misfire, or the safety catch being enabled, stopping the gun's ability to fire. These would be factored in to form an accurate risk value. Make sense? This is how we need to approach probability. Technically, it is a semi-accurate estimation, because there is just not enough detailed information on breaches and attacks to draw absolute conclusions.

One approach may be to research what is happening in the same industry using online resources and peer groups, and then make intelligent estimates to determine whether the enterprise could be affected too. Generally, there are outlier scenarios that require the utmost attention regardless; start here if these have not been identified as probable risk scenarios for the enterprise. The following are a few sample probability estimation questions:

- Has this event occurred before in the enterprise?
- Is there data to suggest it is happening now?
- Are there documented instances for similar enterprises?
- Do we know anything in regards to occurrence?
- Is the identified threat and impact really probable?
The following table is the continuation of our risk analysis for our fictional retailer:

| Data | Threat | Impact | Probability |
| --- | --- | --- | --- |
| Credit card numbers | Hacker | Critical | High |
| Trade secrets | Competitor | Medium | Low |
| PII | Disgruntled employee | High | Medium |
| Company confidential documents | Accidental leak | Low | Low |
| Client list | Natural disaster | Medium | High |

Based on the outcome of the probability exercises of identified threats and impacts, risk can be calculated and the appropriate course of action(s) developed and implemented.

Assessing risk

Now that the enterprise has agreed on what data has value, identified threats to the data, rated the impact to the enterprise, and estimated the probability of the impact occurring, the next logical step is to calculate the risk of the scenarios. Essentially, there are two methods to analyze and present risk: qualitative and quantitative. The decision to use one over the other should be based on the maturity of the enterprise's risk office. In general, a quantitative risk analysis will use descriptive labels like a qualitative method, however, there is more financial and mathematical analysis in quantitative analysis.

Qualitative risk analysis

Qualitative risk analysis provides a perspective of risk in levels with labels such as Critical, High, Medium, and Low. The enterprise must still define what each level means in a general financial perspective. For instance, a Low risk level may equate to a monetary loss of $1,000 to $100,000. The dollar ranges associated with each risk level will vary by enterprise. This must be agreed on by the entire enterprise, so when risk is discussed, everyone is knowledgeable of what each label means financially. Do not confuse the estimated financial loss with the more detailed quantitative risk analysis approach; it is a simple valuation metric for deciding how much investment should be made based on probable monetary loss.

The following section is an example qualitative risk analysis presenting the type of input required for the analysis. Notice that this is not a deep analysis of each of these inputs; it is designed to provide a relatively accurate perspective of risk associated with the scenario being analyzed.

Qualitative risk analysis exercise

- Scenario: Hacker attacks website to steal credit card numbers located in backend database.
- Threat: External hacker.
- Threat capability: Novice to pro.
- Threat capability logic: There are several script-kiddie level tools available to wage SQL injection attacks. SQL injection is also well documented and professional hackers can use advanced techniques in conjunction with the automated tools.
- Vulnerability: 85 percent (how effective would the threat be with current mitigating mechanisms).
- Estimated impact: High, Medium, Low (as indicated in the following table).

| Risk | Estimated loss ($) |
| --- | --- |
| High | > 1,000,000 |
| Medium | 500,000 to 900,000 |
| Low | < 500,000 |

Quantitative risk analysis

Quantitative risk analysis is an in-depth assessment of what the monetary loss would be to the enterprise if the identified risk were realized. In order to facilitate this analysis, the enterprise must have a good understanding of its processes to determine a relatively accurate dollar amount for items such as systems, data restoration services, and man-hour break down for recovery or remediation of an impacting event.
Typically, enterprises with a mature risk office will undertake this type of analysis to drive priority budget items, or to find areas where insurance should be increased, effectively transferring business risk. This also allows accurate communication to the board and enterprise executives, so that at any given time they know the amount of risk the enterprise has assumed. With the quantitative approach, a more accurate assessment of the threat types, threat capabilities, vulnerability, threat action frequency, and expected loss per threat action is required, and must be as accurate as possible. As with qualitative risk analysis, the output of this analysis has to be compared to the cost of mitigating the identified threat. Ideally, the cost to mitigate would be less than the loss expectancy over a determined period of time. This is a simple return on investment (ROI) calculation.

Let's look again at the scenario used in the qualitative analysis and run it through a quantitative analysis. We will then compare against the price of a security product that would mitigate the risk, to see if it is worth the capital expense. Before we begin the quantitative risk analysis, there are a couple of terms that need to be explained:

- Annual loss expectancy (ALE): The ALE is the calculation of what the financial loss to the enterprise would be from the threat event over a single year. This is directly related to threat frequency. In our scenario the event is expected once every three years, so dividing the single loss expectancy by the years between occurrences gives the ALE.
- Cost of protection (COP): The COP is the capital expense associated with the purchase or implementation of a security mechanism that mitigates or reduces the risk scenario. An example would be a firewall that costs $150,000, or $50,000 per year of protection over the loss expectancy period. If the cost of protection over the same period is lower than the loss, this is a good indication that the capital expense is financially worthwhile.

Quantitative risk analysis exercise

- Scenario: Hacker attacks website to steal credit card numbers located in backend database.
- Threat: External hacker.
- Threat capability: Novice to pro.
- Threat capability logic: There are several script-kiddie level tools available to wage SQL injection attacks. SQL injection is also well documented and professional hackers can use advanced techniques in conjunction with the automated tools.
- Vulnerability: 85 percent (how effective would the threat be with current mitigating mechanisms).
- Single loss expectancy (SLE): $250,000.
- Threat frequency: once every three years (roughly 0.33 occurrences per year).
- ALE: $83,000.
- COP: $150,000 (over 3 years).

We divide both the total loss and the cost of protection over three years because capital expenses are typically depreciated over three to four years, and the loss is expected once every three years. This gives us the ALE and COP for the equation to determine the cost-benefit analysis. This is a simplified example, but the math looks as follows:

$83,000 (ALE) - $50,000 (COP) = $33,000 (cost benefit)

The loss is $33,000 per year more than the cost of protecting against the threat. The assumption in our example is that the $250,000 figure is 85% of the total asset value; because we have 15% protection capability, the full asset value is approximately $294,000. This step can be shortcut out of the equation if the ALE and rate of occurrence are known.
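For reference, the arithmetic of the exercise can be restated compactly. ARO (annualized rate of occurrence) is standard risk terminology for the once-every-three-years frequency, introduced here only for notation:

```latex
\begin{aligned}
\mathrm{ALE} &= \mathrm{SLE} \times \mathrm{ARO}
              = \$250{,}000 \times \tfrac{1}{3} \approx \$83{,}000\\
\mathrm{COP}_{\text{annual}} &= \$150{,}000 \,/\, 3\ \text{years} = \$50{,}000\\
\text{annual cost benefit} &= \mathrm{ALE} - \mathrm{COP}_{\text{annual}}
              = \$83{,}000 - \$50{,}000 = \$33{,}000
\end{aligned}
```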
When trying to figure out threat capability, be as realistic about the threat as possible. This will help you better assess vulnerability, because you will have a more accurate perspective on how realistic the threat is to the enterprise. For instance, if your scenario requires cracking advanced encryption and extensive system experience, the threat capability would be expert, indicating that current security controls may be acceptable for the majority of threat agents, reducing probability and calculated risk. We tend to exaggerate in security to justify a purchase. We need to stop this trend and focus on the best areas to spend precious budget dollars.

The ultimate goal of a quantitative risk analysis is to ensure that spend for protection does not far exceed the threat the enterprise is protecting against. This is beneficial for the security team in justifying the expense of security budget line items. When the analysis is complete, there should still be a qualitative risk label associated with the risk. Using the above scenario, an annualized risk of $50,000 indicates that this scenario is extremely low risk based on the risk levels defined in the qualitative risk exercise, even if the SLE is used. Does this analysis accurately represent acceptable loss? After an assessment is complete, it is good practice to ensure all assumptions still hold true, especially the risk labels and associated monetary amounts.

Applying risk analysis to trust models

Now we can apply our risk methodology to our trust models, to decide whether we can continue with our implementation as is, or whether we need to change our approach based on risk. Our trust models, which are essentially use cases, rely on completing the risk analysis, which in turn decides the trust level and the security mechanisms required to reduce the enterprise risk to an acceptable level. It would be foolish to think that we can shove all requests for similar access directly into one of these buckets without further analysis to determine the real risk associated with the request. After completing one of the risk analysis types we just covered, risk guidance can be provided for the scenario (and I stress guidance). For the sake of simplicity an implementation path may be chosen, but it will lead to compromises in the overall security of the enterprise, and is cautioned against.

I have re-presented the table for one scenario, the external application user. This is a better representation of how a trust model should look, with risk and security enforcement established for the scenario. If an enterprise is aware of how it conducts business, then a focused effort in this area should produce a realistic list of interactions with data: by whom, with what level of trust, and, based on risk, what controls need to be present and enforced by policy and standards.

| User type | External |
| --- | --- |
| Allowed access | Tier 1 DMZ only, Least privilege |
| Trust level | 1 - Not trusted |
| Risk | Medium |
| Policy | Acceptable use, Monitoring, Access restrictions |
| Required security mechanisms | FW, IPS, Web application firewall |

The user is assumed to have access to log in to the web application and to have further possible interaction with the backend database(s). This should be a focal point for testing, because it is the biggest area of risk in this scenario. Threats such as SQL injection, which can be waged against a web application with little to no experience, are commonplace. Enterprises that have e-commerce websites typically do not restrict who can create an account. This should be an input to the trust decision and, ultimately, the security architecture applied.
Deciding on a risk analysis methodology

We have covered the two general types of risk analysis, qualitative and quantitative, but which is best? It depends on several factors: the risk awareness of the enterprise, the risk analysts' capabilities, the available risk analysis data, and the influence of risk in the enterprise. If the idea of risk analysis or IT risk analysis is new to the enterprise, then a slow approach with qualitative analysis is recommended to get everyone thinking of risk and what it means to the business. It will be imperative to get enterprise-wide agreement on the risk labels. Using the less involved method does not mean you will not be questioned on the data used in the analysis, so be prepared to defend that data and explain the estimation methods leveraged. If it is decided to use a quantitative risk analysis method, a considerable amount of effort is required, along with meticulous loss figures and knowledge of the environment. This method is considered the most effective, requiring risk expertise, resources, and an enterprise-wide commitment to risk analysis. It is also more accurate, though it can be argued that since both methods require some level of estimation, the accuracy lies in accurate estimation skills. I use the Douglas Hubbard school of thought on estimating with 90 percent confidence. You will find his works at his website http://www.hubbardresearch.com/. I highly recommend his title How to Measure Anything: Finding the Value of "Intangibles" in Business, Tantor Media, to learn estimation skills. It may be beneficial to have an external firm perform the analysis if the engagement is significant in size. The benefit of both methods should be that the enterprise is able to make risk-aware decisions on how to securely implement IT solutions. Both should be presented with common risk levels such as High, Medium, and Low; essentially the common language everyone can speak, knowing the range of financial risk without all the intimate details of how the risk level was arrived at.

Other thoughts on risk and new enterprise endeavors

Now that you have been presented with types of risk analysis, they should be applied as tools to best approach the new technologies being implemented in the networks of our enterprises. Unfortunately, broad brush strokes of trusted and untrusted approaches are being applied that may or may not be accurate without risk analysis as a decision input. Two examples where this can be very costly are the new bring your own device (BYOD) and cloud initiatives. At first glance these are the two most risky business maneuvers an enterprise can attempt from an information security perspective. Deciding if this really is the case requires an analysis based on trust models and data-centric security architecture. If the proper security mechanisms are implemented and security is applied from users to data, the risk can be reduced to a tolerable level. The BYOD business model has many positive benefits to the enterprise, especially capital expense reduction. However, implementing a BYOD or cloud solution without further analysis of risk can introduce significant risk beyond the benefit of the initiative. Do not be quick to spread fear in order to avoid facing the changing landscape we have worked so hard to build and secure. It is different, but at one time, what we know today as the norm was new too. Be cautious but creative, or IT security will be discredited for what will be perceived as a difficult interaction. This is not the desired perception for IT security.
Strive to understand the business case, the risk to business assets (data, systems, people, processes, and so on), and then apply sound security architecture as we have discussed so far. Begin evangelizing the new approach to security in the enterprise by developing trust models that everyone can understand. Use this as the introduction to agile security architecture and get input to create models based on risk. By providing a risk-based perspective on emerging technologies and other radical requests, a methodical approach can bring better adoption and overall increased security in the enterprise.

Summary

In this article, we took a look at analyzing risk by presenting quantitative and qualitative methods, including an exercise to understand the approach. The overall goal of security is to be integrated into business processes, so it is truly a part of the business and not an expensive afterthought simply there to patch a security problem.

Resources for Article:

Further resources on this subject:
Microsoft Enterprise Library: Security Application Block [Article]
Microsoft Enterprise Library: Authorization and Security Cache [Article]
Getting Started with Enterprise Library [Article]

Meteor.js JavaScript Framework: Why Meteor Rocks!

Packt
08 Feb 2013
18 min read
(For more resources related to this topic, see here.)

Modern web applications

Our world is changing. With continual advancements in display, computing, and storage capacities, what wasn't possible just a few years ago is now not only possible, but critical to the success of a good application. The Web in particular has undergone significant change.

The origin of the web app (client/server)

From the beginning, web servers and clients have mimicked the dumb terminal approach to computing, where a server with significantly more processing power than a client will perform operations on data (writing records to a database, math calculations, text searches, and so on), transform the data into a readable format (turn a database record into HTML, and so on), and then serve the result to the client, where it's displayed for the user. In other words, the server does all the work, and the client acts as more of a display, or dumb terminal. The design pattern for this is called...wait for it…the client/server design pattern. This design pattern, borrowed from the dumb terminals and mainframes of the 60s and 70s, was the beginning of the Web as we know it, and has continued to be the design pattern we think of, when we think of the Internet.

The rise of the machines (MVC)

Before the Web (and ever since), desktops were able to run a program such as a spreadsheet or a word processor without needing to talk to a server. This type of application could do everything it needed to, right there on the big and beefy desktop machine. During the early 90s, desktop computers got faster and better. Even more and more beefy. At the same time, the Web was coming alive. People started having the idea that a hybrid between the beefy desktop application (a fat app) and the connected client/server application (a thin app) would produce the best of both worlds. This kind of hybrid app, quite the opposite of a dumb terminal, was called a smart app. There were many business-oriented smart apps created, but the easiest examples are found in computer games. Massively Multiplayer Online games (MMOs), first-person shooters, and real-time strategies are smart apps where information (the data model) is passed between machines through a server. The client in this case does a lot more than just display the information. It performs most of the processing (or controls) and transforms the data into something to be displayed (the view). This design pattern is simple, but very effective. It's called the Model View Controller (MVC) pattern. The model is all the data. In the context of a smart app, the model is provided by a server. The client makes requests for the model from the server. Once the client gets the model, it performs actions/logic on this data, and then prepares it to be displayed on the screen. This part of the application (talk to the server, modify the data model, and prep data for display) is called the controller. The controller sends commands to the view, which displays the information, and reports back to the controller when something happens on the screen (a button click, for example). The controller receives that feedback, performs logic, and updates the model. Lather, rinse, repeat. Because web browsers were built to be "dumb clients" the idea of using a browser as a smart app was out of the question. Instead, smart apps were built on frameworks such as Microsoft .NET, Java, or Macromedia (now Adobe) Flash. As long as you had the framework installed, you could visit a web page to download/run a smart app.
Sometimes you could run the app inside the browser, sometimes you could download it first, but either way, you were running a new type of web app, where the application could talk to the server and share the processing workload.

The browser grows up (MVVM)

Beginning in the early 2000s, a new twist on the MVC pattern started to emerge. Developers started to realize that, for connected/enterprise "smart apps", there was actually a nested MVC pattern. The server (controller) was performing business logic on the database information (model) through the use of business objects, and then passing that information on to a client application (a "view"). The client was receiving this information from the server, and treating it as its own personal "model." The client would then act as a proper controller, perform logic, and send the information to the view to be displayed on the screen. So, the "view" for the server MVC was the "model" for the second MVC. Then came the thought, "why stop at two?" There was no reason an application couldn't have multiple nested MVCs, with each view becoming the model for the next MVC. In fact, on the client side, there's actually a good reason to do so. Separating actual display logic (such as "this submit button goes here" and "the text area changed value") from the client-side object logic (such as "user can submit this record" and "the phone # has changed") allows a large majority of the code to be reused. The object logic can be ported to another application, and all you have to do is change out the display logic to extend the same model and controller code to a different application or device. From 2004-2005, this idea was refined and modified for smart apps by Martin Fowler (who called it the presentation model) and by Microsoft (who called it the Model View View-Model). While not strictly the same thing as a nested MVC, the MVVM design pattern applied the concept of a nested MVC to the frontend application. As browser technologies (HTML and JavaScript) matured, it became possible to create smart apps that use the MVVM design pattern directly inside an HTML web page. This pattern makes it possible to run a full-sized application directly from a browser. No more downloading multiple frameworks or separate apps. You can now get the same functionality from visiting a URL as you previously could from buying a packaged product.

A giant Meteor appears!

Meteor takes the MVVM pattern to the next level. By applying templating through handlebars.js (or other template libraries) and using instant updates, it truly enables a web application to act and perform like a complete, robust smart application. Let's walk through some concepts of how Meteor does this, and then we'll begin to apply this to our Lending Library application.

Cached and synchronized data (the model)

Meteor supports a cached-and-synchronized data model that is the same on the client and the server. When the client notices a change to the data model, it first caches the change locally, and then tries to sync with the server. At the same time, it is listening to changes coming from the server. This allows the client to have a local copy of the data model, so it can send the results of any changes to the screen quickly, without having to wait for the server to respond. In addition, you'll notice that this is the beginning of the MVVM design pattern, within a nested MVC. In other words, the server publishes data changes, and treats those data changes as the "view" in its own MVC pattern. The client subscribes to those changes, and treats the changes as the "model" in its MVVM pattern. A code example of this is very simple inside of Meteor (although you can make it more complex and therefore more controlled if you'd like):

var lists = new Meteor.Collection("lists");

What this one line does is declare that there is a lists data model. Both the client and server will have a version of it, but they treat their versions differently. The client will subscribe to changes announced by the server, and update its model accordingly. The server will publish changes, and listen to change requests from the client, and update its model (its master copy) based on those change requests. Wow. One line of code that does all that! Of course there is more to it, but that's beyond the scope of this article, so we'll move on. To better understand Meteor data synchronization, see the Publish and subscribe section of the Meteor documentation at http://docs.meteor.com/#publishandsubscribe.
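To make the publish/subscribe relationship a little more concrete, here is a rough sketch of what the explicit form looks like. This is illustrative only and not part of the Lending Library code; by default, a new Meteor app auto-publishes all collections, so code like this is only needed once that behavior is turned off:

// On the server: publish the lists collection to interested clients.
if (Meteor.isServer) {
  Meteor.publish("lists", function () {
    return lists.find();
  });
}

// On the client: subscribe, so changes flow into the local cached copy.
if (Meteor.isClient) {
  Meteor.subscribe("lists");
}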
Templated HTML (the view)

The Meteor client renders HTML through the use of templates. Templates in HTML are also called view data bindings. Without getting too deep, a view data binding is a shared piece of data that will be displayed differently if the data changes. The HTML code has a placeholder. In that placeholder different HTML code will be placed, depending on the value of a variable. If the value of that variable changes, the code in the placeholder will change with it, creating a different view. Let's look at a very simple data binding – one that you don't technically need Meteor for – to illustrate the point. In LendLib.html, you will see an HTML (Handlebar) template expression:

<div id="categories-container">
  {{> categories}}
</div>

That expression is a placeholder for an HTML template, found just below it:

<template name="categories">
  <h2 class="title">my stuff</h2>...

So, {{> categories}} is basically saying "put whatever is in the template categories right here." And the HTML template with the matching name is providing that. If you want to see how data changes will change the display, change the h2 tag to an h4 tag, and save the change:

<template name="categories">
  <h4 class="title">my stuff</h4>...

You'll see the effect in your browser ("my stuff" becomes itsy bitsy). That's a template – or view data binding – at work! Change the h4 back to an h2 and save the change. Unless you like the change. No judgment here...okay, maybe a little bit of judgment. It's ugly, and tiny, and hard to read. Seriously, you should change it back before someone sees it and makes fun of you!! Alright, now that we know what a view data binding is, let's see how Meteor uses them. Inside the categories template in LendLib.html, you'll find even more Handlebars templates:

<template name="categories">
  <h4 class="title">my stuff</h4>
  <div id="categories" class="btn-group">
    {{#each lists}}
      <div class="category btn btn-inverse">
        {{Category}}
      </div>
    {{/each}}
  </div>
</template>

The first Handlebars expression is part of a pair, and is a for-each statement. {{#each lists}} tells the interpreter to perform the action below it (in this case, make a new div) for each item in the lists collection. lists is the piece of data. {{#each lists}} is the placeholder. Now, inside the #each lists expression, there is one more Handlebars expression:

{{Category}}

Because this is found inside the #each expression, Category is an implied property of lists.
That is to say that {{Category}} is the same as saying this.Category, where this is the current item in the for each loop. So the placeholder is saying "Add the value of this.Category here." Now, if we look in LendLib.js, we will see the values behind the templates:

Template.categories.lists = function () {
  return lists.find(...

Here, Meteor is declaring a template variable named lists, found inside a template called categories. That variable happens to be a function. That function is returning all the data in the lists collection, which we defined previously. Remember this line?

var lists = new Meteor.Collection("lists");

That lists collection is returned by the declared Template.categories.lists, so that when there's a change to the lists collection, the variable gets updated, and the template's placeholder is changed as well. Let's see this in action. On your web page pointing to http://localhost:3000, open the browser console and enter the following line:

> lists.insert({Category:"Games"});

This will update the lists data collection (the model). The template will see this change, and update the HTML code/placeholder. The for each loop will run one additional time, for the new entry in lists, and you'll see the new entry displayed on screen. In regards to the MVVM pattern, the HTML template code is part of the client's view. Any changes to the data are reflected in the browser automatically.

Meteor's client code (the View-Model)

As discussed in the preceding section, LendLib.js contains the template variables, linking the client's model to the HTML page, which is the client's view. Any logic that happens inside of LendLib.js as a reaction to changes from either the view or the model is part of the View-Model. The View-Model is responsible for tracking changes to the model and presenting those changes in such a way that the view will pick up the changes. It's also responsible for listening to changes coming from the view. By changes, we don't mean a button click or text being entered. Instead, we mean a change to a template value. A declared template is the View-Model, or the model for the view. That means that the client controller has its model (the data from the server) and it knows what to do with that model, and the view has its model (a template) and it knows how to display that model.

Let's create some templates

We'll now see a real-life example of the MVVM design pattern, and work on our Lending Library at the same time. Adding categories through the console has been a fun exercise, but it's not a long-term solution. Let's make it so we can do that on the page instead. Open LendLib.html and add a new button just before the {{#each lists}} expression:

<div id="categories" class="btn-group">
  <div class="category btn btn-inverse" id="btnNewCat">+</div>
  {{#each lists}}

This will add a plus button to the page. Now, we'll want to change out that button for a text field if we click on it. So let's build that functionality using the MVVM pattern, and make it based on the value of a variable in the template. Add the following lines of code:

<div id="categories" class="btn-group">
  {{#if new_cat}}
  {{else}}
    <div class="category btn btn-inverse" id="btnNewCat">+</div>
  {{/if}}
  {{#each lists}}

The first line {{#if new_cat}} checks to see if new_cat is true or false. If it's false, the {{else}} section triggers, and it means we haven't yet indicated we want to add a new category, so we should be displaying the button with the plus sign.
In this case, since we haven't defined it yet, new_cat will be false, and so the display won't change. Now let's add the HTML code to display if we want to add a new category:

<div id="categories" class="btn-group">
  {{#if new_cat}}
    <div class="category">
      <input type="text" id="add-category" value="" />
    </div>
  {{else}}
    <div class="category btn btn-inverse" id="btnNewCat">+</div>
  {{/if}}
  {{#each lists}}

Here we've added an input field, which will show up when new_cat is true. The input field won't show up unless it is, so for now it's hidden. So how do we make new_cat equal true? Save your changes if you haven't already, and open LendLib.js. First, we'll declare a Session variable, just below our lists template declaration:

Template.categories.lists = function () {
  return lists.find({}, {sort: {Category: 1}});
};
// We are declaring the 'adding_category' flag
Session.set('adding_category', false);

Now, we declare the new template variable new_cat, which will be a function returning the value of adding_category:

// We are declaring the 'adding_category' flag
Session.set('adding_category', false);
// This returns true if adding_category has been assigned a value
// of true
Template.categories.new_cat = function () {
  return Session.equals('adding_category', true);
};

Save these changes, and you'll see that nothing has changed. Ta-daaa! In reality, this is exactly as it should be, because we haven't done anything to change the value of adding_category yet. Let's do that now. First, we'll declare our click event, which will change the value in our Session variable:

Template.categories.new_cat = function () {
  return Session.equals('adding_category', true);
};
Template.categories.events({
  'click #btnNewCat': function (e, t) {
    Session.set('adding_category', true);
    Meteor.flush();
    focusText(t.find("#add-category"));
  }
});

Let's take a look at the following line:

Template.categories.events({

This line is declaring that there will be events found in the category template. Now let's take a look at the next line:

'click #btnNewCat': function (e, t) {

This line tells us that we're looking for a click event on the HTML element with an id="btnNewCat" (which we already created on LendLib.html).

Session.set('adding_category', true);
Meteor.flush();
focusText(t.find("#add-category"));

We set the Session variable adding_category = true, we flush the DOM (clear up anything wonky), and then we set the focus onto the input box with the expression id="add-category". One last thing to do, and that is to quickly add the helper function focusText(). Just before the closing tag for the if (Meteor.isClient) function, add the following code:

/////Generic Helper Functions/////
//this function puts our cursor where it needs to be.
function focusText(i) {
  i.focus();
  i.select();
};
} //------closing bracket for if(Meteor.isClient){}

Now when you save the changes, and click on the plus (+) button, you'll see the new input box. Fancy! It's still not useful, but we want to pause for a second and reflect on what just happened. We created a conditional template in the HTML page that will either show an input box or a plus button, depending on the value of a variable. That variable belongs to the View-Model. That is to say that if we change the value of the variable (like we do with the click event), then the view automatically updates. We've just completed an MVVM pattern inside a Meteor application! To really bring this home, let's add a change to the lists collection (also part of the View-Model, remember?)
and figure out a way to hide the input field when we're done. First, we need to add a listener for the keyup event. Or to put it another way, we want to listen when the user types something in the box and hits Enter. When that happens, we want to have a category added, based on what the user typed. First, let's declare the event handler. Just after the click event for #btnNewCat, let's add another event handler:

focusText(t.find("#add-category"));
},
'keyup #add-category': function (e, t) {
  if (e.which === 13) {
    var catVal = String(e.target.value || "");
    if (catVal) {
      lists.insert({Category: catVal});
      Session.set('adding_category', false);
    }
  }
}
});

We add a "," at the end of the click function, and then added the keyup event handler.

if (e.which === 13)

This line checks to see if we hit the Enter/return key.

var catVal = String(e.target.value || "");
if (catVal)

This checks to see if the input field has any value in it.

lists.insert({Category: catVal});

If it does, we want to add an entry to the lists collection.

Session.set('adding_category', false);

Then we want to hide the input box, which we can do by simply modifying the value of adding_category. One more thing to add, and we're all done. If we click away from the input box, we want to hide it, and bring back the plus button. We already know how to do that inside the MVVM pattern by now, so let's add a quick function that changes the value of adding_category. Add one more comma after the keyup event handler, and insert the following event handler:

Session.set('adding_category', false);
    }
  }
},
'focusout #add-category': function (e, t) {
  Session.set('adding_category', false);
}
});

Save your changes, and let's see this in action! In your web browser, on http://localhost:3000, click on the plus sign, add the word Clothes, and hit Enter; the new Clothes category will appear. Feel free to add more categories if you want. Also, experiment with clicking on the plus button, typing something in, and then clicking away from the input field.

Summary

In this article you've learned about the history of web applications, and seen how we've moved from a traditional client/server model to a full-fledged MVVM design pattern. You've seen how Meteor uses templates and synchronized data to make things very easy to manage, providing a clean separation between our view, our view logic, and our data. Lastly, you've added more to the Lending Library, making a button to add categories, and you've done it all using changes to the View-Model, rather than directly editing the HTML.

Resources for Article:

Further resources on this subject:
How to Build a RSS Reader for Windows Phone 7 [Article]
Applying Special Effects in 3D Game Development with Microsoft Silverlight 3: Part 2 [Article]
Top features of KnockoutJS [Article]

Blocking versus Non blocking scripts

Packt
08 Feb 2013
7 min read
(For more resources related to this topic, see here.)

Blocking versus non-blocking

The reason we've put this library into the head of the HTML page and not the footer is because we actually want Modernizr to be a blocking script; this way it will test for, and if applicable create or shim, any elements before the DOM is rendered. We also want to be able to tell what features are available to us before the page is rendered. The script will load, Modernizr will test the availability of the new semantic elements, and if necessary, shim in the ones that fail the tests, and the rest of the page load will be on its merry way. Now when I say that we want the script to be "blocking", what I mean by that is the browser will wait to render or download any more of the page content until the script has finished loading. In essence, everything will move in a serial process and future processes will be "blocked" from occurring until after this takes place. This is also referred to as single threaded. More commonly, as you may already be aware, scripts are called upon in the footer of the page, typically just before the closing body tag, or by way of self-construction through use of an anonymous or immediate function, which only builds itself once the DOM is already parsed.

The async attribute

Even more recently, included page scripts can have the async attribute added to their tag elements, which will tell the browser to download other scripts in parallel. I like to think of serial versus parallel script downloading in terms of a phone conversation, each script being a single phone call. For each call made, a conversation is held, and once complete, the next phone call is made until there aren't any numbers left to be dialled. Parallel or asynchronous would be like having all of the callers on a conference call at one time. The browser, as the person making all these calls at once, has the superpower to hold all these conversations at the same time. I like to think of blocking scripts as phone calls whose conversations contain pieces of information that the person, or browser, needs to know before dialling up the other scripts on this metaphorical conference call.

Blocking to allow shimming

For our needs, however, we want Modernizr to block, so that all feature tests and shimming can be done before the DOM is rendered. The piece of information the browser needs before calling out to the other scripts and parts of the page is what features exist, and whether or not semantic HTML5 elements need to be simulated. Doing otherwise could mean tragedy: styles targeting an element that doesn't exist, because our shim wasn't there to serve its purpose. It would be similar to a roofer trying to attach shingles to a roof without any nails. Think of shimming as the nails that let the CSS attach certain selectors to their respective DOM nodes. Browsers such as IE typically ignore elements they don't recognize by default, so the shims make the styles hold to the replicated semantic elements, and blocking the page ensures that happens in a timely manner. Shimming, which is also referred to as a "shiv", is when JavaScript recreates an HTML5 element that doesn't exist natively in the browser. The elements are thus "shimmed" in for use in styling. The browser will often ignore elements that don't exist natively otherwise. Say, for example, the browser that was used to render the page did not support the new HTML5 section element tag. If the page wasn't shimmed to accommodate this before the render tree was constructed, you would run the risk of the CSS not working on those section elements. Looking at the reference chart on http://caniuse.com, this is somewhat likely for anyone using IE 8 or earlier.
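The shim technique itself is surprisingly small. As a rough illustration (not the article's code, and simplified compared to what Modernizr actually ships), old versions of IE will begin to parse and style an unknown element once it has been created via script:

// A minimal sketch of the HTML5 "shiv" idea: creating each unknown
// element once teaches old IE to recognize and style it.
var html5Elements = ['section', 'article', 'aside', 'nav', 'header', 'footer'];
for (var i = 0; i < html5Elements.length; i++) {
    document.createElement(html5Elements[i]);
}

Because this must run before the render tree is built, it is exactly the kind of work that justifies a blocking script in the head.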
Now that we've adequately covered how to load Modernizr in the page header, we can move back on to the HTML.

Adding the navigation

Now that we have verified all of the JavaScript that is connected, we can start adding in more visual HTML elements. I'm going to add in five sections to the page and a fixed navigation header to scroll to each of them. Once that is all in place and working, we'll disable the default HTML actions in the navigation and control everything with JavaScript. By doing this, there will be a nice graceful fallback for the two people on the planet that have JavaScript disabled. Just kidding, maybe it's only one person. All joking aside, a no-JavaScript fallback will be in place in the event that it is disabled on the page. If everything checks out as it should, you'll see the confirmation printed in the JavaScript console in developer tools. While we're at it let's remove the h1 tag as well. Since we now know for a fact that Modernizr is great, we don't need to "hello world" it. Once the h1 tag is removed, it's time for a bit of navigation. The HTML used is as follows:

<!-- Placing everything in the <header> html5 tag. -->
<header>
  <div id="navbar">
    <div id="nav">
      <!-- Wrap the navigation in the new html5 nav element -->
      <nav>
        <a href="#frame-1">Section One</a>
        <a href="#frame-2">Section Two</a>
        <a href="#frame-3">Section Three</a>
        <a href="#frame-4">Section Four</a>
        <a href="#frame-5">Section Five</a>
      </nav>
    </div>
  </div>
</header>

This is a fairly straightforward navigation at the moment. The entire fragment is placed inside the HTML5 header element of the page. A div tag with the id field of navbar will be used for targeting. I prefer to use HTML5 purely for semantic markup of the page as much as possible and to use div tags to target with styles. You could just as easily add CSS selectors to the new elements and they would be picked up as if they were any other inline or block element.

The section frames

After the nav element we'll add the page section frames. Each frame will be a div element, and each div element will have an id field matching the href attribute of the element from the navigation. For example, the first frame will have the id field of frame-1, which matches the href attribute of the first anchor tag in the navigation. Everything will also be wrapped in a div tag with the id field of main. Each panel or section will have the class name of frame, which allows us to apply common styles across sections, as shown in the following code snippet:

<div id="main">
  <div id="frame-1" class="frame"></div>
  <div id="frame-2" class="frame"></div>
  <div id="frame-3" class="frame"></div>
  <div id="frame-4" class="frame"></div>
  <div id="frame-5" class="frame"></div>
</div>

Summary

In this article we saw the basics of blocking versus non-blocking scripts. We saw how the async attribute allows the browser to download other scripts in parallel. We blocked scripts to allow shimming, and also added navigation to the page. Lastly, we saw the use of section frames.

Resources for Article:

Further resources on this subject:
HTML5: Generic Containers [Article]
HTML5 Games Development: Using Local Storage to Store Game Data [Article]
Building HTML5 Pages from Scratch [Article]

Why CoffeeScript?

Packt
31 Jan 2013
9 min read
(For more resources related to this topic, see here.)

CoffeeScript

CoffeeScript compiles to JavaScript and follows its idioms closely. It's quite possible to rewrite any CoffeeScript code in JavaScript and it won't look drastically different. So why would you want to use CoffeeScript? As an experienced JavaScript programmer, you might think that learning a completely new language is simply not worth the time and effort. But ultimately, code is for programmers. The compiler doesn't care how the code looks or how clear its meaning is; either it will run or it won't. We aim to write expressive code as programmers so that we can read, reference, understand, modify, and rewrite it. If the code is too complex or filled with needless ceremony, it will be harder to understand and maintain. CoffeeScript gives us an advantage to clarify our ideas and write more readable code. It's a misconception to think that CoffeeScript is very different from JavaScript. There might be some drastic syntax differences here and there, but in essence, CoffeeScript was designed to polish the rough edges of JavaScript to reveal the beautiful language hidden beneath. It steers programmers towards JavaScript's so-called "good parts" and holds strong opinions of what constitutes good JavaScript. One of the mantras of the CoffeeScript community is: "It's just JavaScript", and I have also found that the best way to truly comprehend the language is to look at how it generates its output, which is actually quite readable and understandable code. Throughout this article, we'll highlight some of the differences between the two languages, often focusing on the things in JavaScript that CoffeeScript tries to improve. In this way, I would not only like to give you an overview of the major features of the language, but also prepare you to be able to debug your CoffeeScript from its generated code once you start using it more often, as well as being able to convert existing JavaScript. Let's start with some of the things CoffeeScript fixes in JavaScript.

CoffeeScript syntax

One of the great things about CoffeeScript is that you tend to write much shorter and more succinct programs than you normally would in JavaScript. Some of this is because of the powerful features added to the language, but it also makes a few tweaks to the general syntax of JavaScript to transform it to something quite elegant. It does away with all the semicolons, braces, and other cruft that usually contributes to a lot of the "line noise" in JavaScript. To illustrate this, let's look at an example. First the CoffeeScript, then the JavaScript it generates:

CoffeeScript:

fibonacci = (n) ->
  return 0 if n == 0
  return 1 if n == 1
  (fibonacci n-1) + (fibonacci n-2)

alert fibonacci 10

JavaScript:

var fibonacci;

fibonacci = function(n) {
  if (n === 0) {
    return 0;
  }
  if (n === 1) {
    return 1;
  }
  return (fibonacci(n - 1)) + (fibonacci(n - 2));
};

alert(fibonacci(10));

To run the code examples in this article, you can use the great Try CoffeeScript online tool, at http://coffeescript.org. It allows you to type in CoffeeScript code, which will then display the equivalent JavaScript in a side pane. You can also run the code right from the browser (by clicking the Run button in the upper-left corner). At first, the two languages might appear to be quite drastically different, but hopefully as we go through the differences, you'll see that it's all still JavaScript with some small tweaks and a lot of nice syntactical sugar.
Semicolons and braces

As you might have noticed, CoffeeScript does away with all the trailing semicolons at the end of a line. You can still use a semicolon if you want to put two expressions on a single line. It also does away with enclosing braces (also known as curly brackets) for code blocks such as if statements, switch, and the try..catch block.

Whitespace

You might be wondering how the parser figures out where your code blocks start and end. The CoffeeScript compiler does this by using syntactical whitespace. This means that indentation is used for delimited code blocks instead of braces. This is perhaps one of the most controversial features of the language. If you think about it, in almost all languages, programmers tend to already use indentation of code blocks to improve readability, so why not make it part of the syntax? This is not a new concept, and was mostly borrowed from Python. If you have any experience with a significant whitespace language, you will not have any trouble with CoffeeScript indentation. If you don't, it might take some getting used to, but it makes for code that is wonderfully readable and easy to scan, while shaving off quite a few keystrokes. I'm willing to bet that if you do take the time to get over some initial reservations you might have, you might just grow to love block indentation. Blocks can be indented with tabs or spaces, but be careful about being consistent using one or the other, or CoffeeScript will not be able to parse your code correctly.

Parenthesis

You'll see that the clause of the if statement does not need to be enclosed within parentheses. The same goes for the alert function; you'll see that the single string parameter follows the function call without parentheses as well. In CoffeeScript, parentheses are optional in function calls with parameters, clauses for if..else statements, as well as while loops. Although functions with arguments do not need parentheses, it is still a good idea to use them in cases where ambiguity might exist. The CoffeeScript community has come up with a nice idiom: wrapping the whole function call in parenthesis. Compare these two uses of the alert function and their generated JavaScript:

alert square 2 * 2.5 + 1      becomes:  alert(square(2 * 2.5 + 1));
alert (square 2 * 2.5) + 1    becomes:  alert((square(2 * 2.5)) + 1);

Functions are first class objects in JavaScript. This means that when you refer to a function without parentheses, it will return the function itself, as a value. Thus, in CoffeeScript you still need to add parentheses when calling a function with no arguments. By making these few tweaks to the syntax of JavaScript, CoffeeScript arguably already improves the readability and succinctness of your code by a big factor, and also saves you quite a lot of keystrokes. But it has a few other tricks up its sleeve. Most programmers who have written a fair amount of JavaScript would probably agree that one of the phrases that gets typed the most frequently would have to be the function definition function(){}. Functions are really at the heart of JavaScript, yet not without their many warts.

CoffeeScript has great function syntax

The fact that you can treat functions as first class objects as well as being able to create anonymous functions is one of JavaScript's most powerful features. However, the syntax can be very awkward and make the code hard to read (especially if you start nesting functions). But CoffeeScript has a fix for this. Have a look at the following snippets and their generated JavaScript:

CoffeeScript:

-> alert 'hi there!'
square = (n) -> n * n

JavaScript:

var square;

(function() {
  return alert('hi there!');
});

square = function(n) {
  return n * n;
};

Here, we are creating two anonymous functions, the first just displays a dialog and the second will return the square of its argument. You've probably noticed the funny -> symbol and might have figured out what it does. Yep, that is how you define a function in CoffeeScript. I have come across a couple of different names for the symbol but the most accepted term seems to be a thin arrow or just an arrow. Notice that the first function definition has no arguments and thus we can drop the parenthesis. The second function does have a single argument, which is enclosed in parenthesis, which goes in front of the -> symbol. With what we now know, we can formulate a few simple substitution rules to convert JavaScript function declarations to CoffeeScript. They are as follows (a worked conversion follows this list):

Replace the function keyword with ->
If the function has no arguments, drop the parenthesis
If it has arguments, move the whole argument list with parenthesis in front of the -> symbol
Make sure that the function body is properly indented and then drop the enclosing braces
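As a quick illustration of those rules in action, here is a small, hypothetical JavaScript function and its CoffeeScript equivalent after applying each substitution (this example is ours, not from the original text):

JavaScript:

var greet = function(name) {
  return "Hello, " + name + "!";
};

CoffeeScript, after replacing function with ->, moving the argument list in front of the arrow, indenting the body, and dropping the braces:

greet = (name) ->
  "Hello, " + name + "!"

Note that the explicit return can be dropped too, for the reason explained in the next section.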
Return isn't required

You might have noted that in both functions, we left out the return keyword. By default, CoffeeScript will return the last expression in your function. It will try to do this in all the paths of execution. CoffeeScript will try turning any statement (fragment of code that returns nothing) into an expression that returns a value. CoffeeScript programmers will often refer to this feature of the language by saying that everything is an expression. This means you don't need to type return anymore, but keep in mind that this can, in many cases, alter your code subtly, because of the fact that you will always return something. If you need to return a value from a function before the last statement, you can still use return.

Function arguments

Function arguments can also take an optional default value. In the following snippet you'll see that the optional value specified is assigned in the body of the generated JavaScript:

CoffeeScript:

square = (n=1) -> alert(n * n)

JavaScript:

var square;

square = function(n) {
  if (n == null) {
    n = 1;
  }
  return alert(n * n);
};

In JavaScript, each function has an array-like structure called arguments with an indexed property for each argument that was passed to the function. You can use arguments to pass in a variable number of parameters to a function. Each parameter will be an element in arguments and thus you don't have to refer to parameters by name. Although the arguments object acts somewhat like an array, it is not in fact a "real" array and lacks most of the standard array methods. Often, you'll find that arguments doesn't provide the functionality needed to inspect and manipulate its elements the way you would with an array.

Summary

We saw how CoffeeScript can help you write shorter, cleaner, and more elegant code than you normally would in JavaScript, and avoid many of its pitfalls. We came to realize that even though CoffeeScript's syntax seems to be quite different from JavaScript, it actually maps pretty closely to its generated output.

Resources for Article:

Further resources on this subject:
ASP.Net Site Performance: Improving JavaScript Loading [Article]
Build iPhone, Android and iPad Applications using jQTouch [Article]
An Overview of the Node Package Manager [Article]

Getting Started with Modernizr

Packt
31 Jan 2013
4 min read
(For more resources on this subject, see here.)

Detect and design with features, not User Agents (browsers)

What if you could build your website based on features instead of for the individual browser idiosyncrasies by manufacturer and version, making your website not just backward compatible but also forward compatible? You could quite potentially build a fully backward and forward compatible experience using a single code base across the entire UA spectrum. What I mean by this is instead of baking in an MSIE 7.0 version, an MSIE 8.0 version, a Firefox version, and so on of your website, and then using JavaScript to listen for, or sniff out, the browser version in play, it would be much simpler to instead build a single version of your website that supports all of the older generation, latest generation, and in many cases even future generation technologies, such as a video API, box-shadow, and first-of-type. Think of your website as a full-fledged cable television network broadcasting over 130 channels, and your users as customers that sign up for only the most basic package available, of only 15 channels. Any time that they upgrade their cable (browser) package to one offering additional channels (features), they can begin enjoying them immediately because you have already been broadcasting to each one of those channels the entire time. What happens now is that a proverbial line is drawn in the sand, and the site is built on the assumption that a particular set of features will exist and are thus supported. If not, fallbacks are in place to allow a smooth degradation of the experience as usual, but more importantly the site is built to adopt features that the browser will eventually have. Modernizr can detect CSS features, such as @font-face, box-shadow, and CSS gradients. It can also detect HTML5 elements, such as canvas, localstorage, and application cache. In all it can detect over 40 features for you, the developer. Another term commonly used to describe this technique is "progressive enhancement". When the time finally comes that the user decides to upgrade their browser, the new features that the more recent browser version brings with it, for example text-shadow, will automatically be detected and picked up by your website, to be leveraged by your site with no extra work or code from you when they do. Without any additional work on your part, any text that is assigned text-shadow attributes will turn on at the flick of a switch, so that the user's experience will smoothly and progressively be enhanced.

What is Modernizr? More importantly, why should you use it?

At its foundation, Modernizr is a feature-detection library powered by none other than JavaScript. Here is an example of conditionally adding CSS classes based on the browser, also known as the User Agent. When the browser parses this HTML document and finds a match, that class will be conditionally added to the page.
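The markup being referred to did not survive conversion of the original article, but the pattern described is the well-known conditional-comment technique popularized by HTML5 Boilerplate. A reconstruction might look roughly like this, matching the lt-ie* classes used in the CSS that follows:

<!--[if lt IE 7]> <html class="lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
<!--[if IE 7]>    <html class="lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]>    <html class="lt-ie9"> <![endif]-->
<!--[if gt IE 8]><!--> <html> <!--<![endif]-->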
Now that the browser version has been found, the developer can use CSS to alter the page based on the version of the browser that was used to parse the page. In the following example, IE 7, IE 8, and IE 9 all use a different method for a drop shadow attribute on an anchor element:

/* Conditional classes using UA sniffing */
.lt-ie7 a {
  display: block;
  float: left;
  background: url( drop-shadow.gif );
}
.lt-ie8 a {
  display: inline-block;
  background: url( drop-shadow.png );
}
.lt-ie9 a {
  display: inline-block;
  box-shadow: 10px 5px 5px rgba(0,0,0,0.5);
}

The problem with the conditional method of applying styles is that not only does it require more code, but it also leaves a burden on the developer to know what browser version is capable of a given feature, in this case box-shadow. Here is the same example using Modernizr. Note how Modernizr has done the heavy lifting for you, irrespective of whether or not the box-shadow feature is supported by the browser:

/* Box shadow using Modernizr CSS feature detected classes */
.box-shadow a {
  box-shadow: 10px 5px 5px rgba(0,0,0,0.5);
}
.no-box-shadow a {
  background: url( drop-shadow.gif );
}

The Modernizr namespace

The Modernizr JavaScript library in your web page's header is a lightweight feature library that will test your browser for the features that it may or may not support, and store those results in a JavaScript namespace aptly named Modernizr. You can then use those test results as you build your website. From this point on, everything you need to know about what features your user's browser can and cannot support can be checked for in two separate places. Going back to the cable television analogy, you now know what channels (features) your user does and does not have, and can act accordingly.


Layout with Ext.NET

Packt
30 Jan 2013
16 min read
(For more resources related to this topic, see here.)

Border layout

The Border layout is perhaps one of the more popular layouts. While quite complex at first glance, it is popular because it turns out to be quite flexible to design and to use. It offers common elements often seen in complex web applications, such as an area for header content, footer content, a main content area, plus areas to either side. All are separately scrollable and resizable if needed, among other benefits. In Ext speak, these areas are called Regions, and are given names of North, South, Center, East, and West regions. Only the Center region is mandatory. It is also the one without any given dimensions; it will resize to fit the remaining area after all the other regions have been set. A West or East region must have a width defined, and North or South regions must have a height defined. These can be defined using the Width or Height property (in pixels) or using the Flex property which helps provide ratios. Each region can be any Ext.NET component; a very common option is Panel or a subclass of Panel. There are limits, however: for example, a Window is intended to be floating so cannot be one of the regions. This offers a lot of flexibility and can help avoid nesting too many Panels in order to show other components such as GridPanels or TabPanels, for example. Here is a simple Border layout being applied to the entire page (that is, the viewport) using a 2-column style layout. We have configured a Border layout with two regions; a West region and a Center region. The Border layout is applied to the whole page (this is an example of using it with Viewport). Here is the code:

<%@ Page Language="C#" %>
<!DOCTYPE html>
<html>
<head runat="server">
    <title>Border Layout Example</title>
</head>
<body>
    <ext:ResourceManager runat="server" Theme="Gray" />
    <ext:Viewport runat="server" Layout="border">
        <Items>
            <ext:Panel Region="West" Split="true" Title="West"
                Width="200" Collapsible="true" />
            <ext:Panel Region="Center" Title="Center content" />
        </Items>
    </ext:Viewport>
</body>
</html>

The code has a Viewport configured with a Border layout via the Layout property. Then, into the Items collection two Panels are added, for the West and Center regions. The value of the Layout property is case insensitive and can take variations, such as Border, border, borderlayout, BorderLayout, and so on. As regions of a Border layout we can also configure options such as whether you want split bars, whether Panels are collapsible, and more. Our example uses the following:

The West region Panel has been configured to be collapsible (using Collapsible="true"). This creates a small button in the title area which, when clicked, will smoothly animate the collapse of that region (which can then be clicked again to open it). When collapsed, the title area itself can also be clicked which will float the region into appearance, rather than permanently opening it (allowing the user to glimpse at the content and mouse away to close the region). This floating capability can be turned off by using Floatable="false" on the Panel.

Split="true" gives a split bar with a collapse button between the regions.
This next example shows a more complex Border layout where all regions are used. The markup is very similar to the first example, so we will only show the Viewport portion:

<ext:Viewport runat="server" Layout="border">
    <Items>
        <ext:Panel Region="North" Split="true" Title="North"
            Height="75" Collapsible="true" />
        <ext:Panel Region="West" Split="true" Title="West"
            Width="150" Collapsible="true" />
        <ext:Panel runat="server" Region="Center" Title="Center content" />
        <ext:Panel Region="East" Split="true" Title="East"
            Width="150" Collapsible="true" />
        <ext:Panel Region="South" Split="true" Title="South"
            Height="75" Collapsible="true" />
    </Items>
</ext:Viewport>

Although each Panel has a title set via the Title property, it is optional. For example, you may want to omit the title from the North region if you want an application header or banner bar, where the title bar could be superfluous.

Different ways to create the same components

The previous examples were shown using the specific Layout="Border" markup. However, there are a number of ways this can be marked up or written in code. For example:

You can code these entirely in markup as we have seen
You can create these entirely in code
You can use a mixture of markup and code to suit your needs

Here are some quick examples:

Border layout from code

This is the code version of the first two-panel Border layout example:

<%@ Page Language="C#" %>

<script runat="server">
    protected void Page_Load(object sender, EventArgs e)
    {
        var viewport = new Viewport
        {
            Layout = "border",
            Items =
            {
                new Ext.Net.Panel
                {
                    Region = Region.West,
                    Title = "West",
                    Width = 200,
                    Collapsible = true,
                    Split = true
                },
                new Ext.Net.Panel
                {
                    Region = Region.Center,
                    Title = "Center content"
                }
            }
        };

        this.Form.Controls.Add(viewport);
    }
</script>

<!DOCTYPE html>
<html>
<head runat="server">
    <title>Border Layout Example</title>
</head>
<body>
    <form runat="server">
        <ext:ResourceManager runat="server" Theme="Gray" />
    </form>
</body>
</html>

There are a number of things going on here worth mentioning:

The appropriate panels have been added to the Viewport's Items collection
Finally, the Viewport is added to the page via the form's Controls collection

If you are used to programming with ASP.NET, you normally add a control to the Controls collection of an ASP.NET control. However, when Ext.NET controls add themselves to each other, it is usually done via the Items collection. This helps create a more optimal initialization script. This also means that only Ext.NET components participate in the layout logic. There is also the Content property in markup (or ContentControls property in code-behind) which can be used to add non-Ext.NET controls or raw HTML, though they will not take part in the layout. It is important to note that configuring Items and Content together should be avoided, especially if a layout is set on the parent container. This is because the parent container will only use the Items collection. Some layouts may hide the Content section altogether or have other undesired results. In general, use only one at a time, not both. Because the Viewport is the outer-most control, it is added to the Controls collection of the form itself. Another important thing to bear in mind is that the Viewport must be the only top-level visible control. That means it cannot be placed inside a div, for example; it must be added directly to the body or to the <form runat="server"> only. In addition, there should not be any sibling controls (except floating widgets, like Window).
Mixing markup and code

The same two-panel Border layout can also be built with a mixture of markup and code. For example:

<%@ Page Language="C#" %>
<script runat="server">
    protected void Page_Load(object sender, EventArgs e)
    {
        this.WestPanel.Title = "West";
        this.WestPanel.Split = true;
        this.WestPanel.Collapsible = true;

        this.Viewport1.Items.Add(new Ext.Net.Panel
        {
            Region = Region.Center,
            Title = "Center content"
        });
    }
</script>
<!DOCTYPE html>
<html>
<head runat="server">
    <title>Border Layout Example</title>
</head>
<body>
    <ext:ResourceManager runat="server" />
    <ext:Viewport ID="Viewport1" runat="server" Layout="Border">
        <Items>
            <ext:Panel ID="WestPanel" runat="server" Region="West" Width="200" />
        </Items>
    </ext:Viewport>
</body>
</html>

In this example, the Viewport and the initial part of the West region are defined in markup. The Center region Panel is then added in code, and the rest of the West Panel's properties are set in code-behind. As with most ASP.NET controls, you can mix and match these as you need.

Loading layout items via User Controls

A powerful capability that Ext.NET provides is the ability to load layout components from User Controls. This is achieved using the UserControlLoader component. Consider this example:

<ext:Viewport runat="server" Layout="Border">
    <Items>
        <ext:UserControlLoader Path="WestPanel.ascx" />
        <ext:Panel Region="Center" />
    </Items>
</ext:Viewport>

In this code, we have replaced the West region Panel used in the earlier examples with a UserControlLoader component, setting its Path property to load a user control from the same directory as this page. For our example, that user control is very simple:

<%@ Control Language="C#" %>
<ext:Panel runat="server" Region="West" Split="true" Title="West" Width="200" Collapsible="true" />

In other words, we have simply moved the Panel from our earlier example into a user control and loaded that instead. Though a small example, this demonstrates a useful reuse capability. Note also that although we used the UserControlLoader in this Border layout example, it can be used anywhere else as needed, as it is an Ext.NET component.

The containing component does not have to be a Viewport

The containing component can be any other appropriate container, such as another Panel or a Window. Let's do just that:

<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="150" Collapsible="true" />
        <ext:Panel Region="Center" Title="Center content" />
    </Items>
</ext:Window>

The container has changed from a Viewport to a Window (with dimensions). It will produce this:

More than one item with the same region

In previous versions of Ext JS and Ext.NET you could only have one component in a given region: one North region Panel, one West region Panel, and so on. New in Ext.NET 2 is the ability to have more than one item in the same region. This is very flexible and can improve performance slightly. In the past, if you wanted the appearance of, say, multiple West columns, you would need to create nested Border layouts, whereas now you can simply add two components to a Border layout and give them the same region value. Nested Border layouts are still possible when that flexibility is needed (and they make porting from an earlier version easier).
First, here is an example using nested Border layouts to achieve three vertical columns:

<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" />
        <ext:Panel Region="Center" Layout="Border" Border="false">
            <Items>
                <ext:Panel Region="West" Split="true" Title="Inner West" Width="100" Collapsible="true" />
                <ext:Panel Region="Center" Title="Inner Center" />
            </Items>
        </ext:Panel>
    </Items>
</ext:Window>

This code will produce the following output:

The preceding code is only a slight variation of the example before it, but it has a few notable changes:

- The Center region Panel has itself been given a Border layout. So although it is the Center region of the Window it belongs to, this Panel is itself another Border layout.
- The nested Border layout then has two further Panels: an additional West region and an additional Center region.
- The Title has also been removed from the outer Center region so that, when rendered, the result lines up to look like three Panels next to each other.

Here is the same example, but without a nested border Panel; instead, another West region Panel is simply added to the containing Window:

<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" />
        <ext:Panel Region="West" Split="true" Title="Inner West" Width="100" Collapsible="true" />
        <ext:Panel Region="Center" Title="Center content" Border="false" />
    </Items>
</ext:Window>

Regions are not limited to Panels only

A common problem with layouts is starting off with more deeply nested controls than needed, and the example above shows that such nesting is not always necessary. Allowing multiple items with the same region helps prevent nesting Border layouts unnecessarily.

Another inefficiency typical of Border layout usage is using too many containing Panels in each region. For example, there may be a Center region Panel which then contains a TabPanel. However, as TabPanel is a subclass of Panel, it can be given a region directly, avoiding an unnecessary wrapping Panel:

<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" />
        <ext:TabPanel Region="Center">
            <Items>
                <ext:Panel Title="First Tab" />
                <ext:Panel Title="Second Tab" />
            </Items>
        </ext:TabPanel>
    </Items>
</ext:Window>

This code will produce the following output:

The differences from the nested Border layout example shown earlier are:

- The outer Center region has been changed from a Panel to a TabPanel.
- TabPanels manage their own items' layout, so Layout="Border" is removed.
- The TabPanel also has Border="false" taken out (so it is true by default).
- The inner Panels have had their regions, Split, and other border-related attributes taken out. They are not inside a nested Border layout now; they are tabs.

Other Panels, such as TreePanel or GridPanel, can also be used, as we will see; a quick sketch follows.
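To illustrate that last point, here is a rough sketch of a TreePanel used directly as the West region of the same Window. This is a sketch only: the Root/Node markup is an assumption based on Ext.NET 2's tree API, and the node names are invented for illustration:

<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:TreePanel runat="server" Region="West" Title="Navigation" Width="150"
                       Split="true" Collapsible="true">
            <Root>
                <!-- A static root node with a couple of leaf children, purely for demonstration -->
                <ext:Node Text="Reports" Expanded="true">
                    <Children>
                        <ext:Node Text="Sales" Leaf="true" />
                        <ext:Node Text="Inventory" Leaf="true" />
                    </Children>
                </ext:Node>
            </Root>
        </ext:TreePanel>
        <ext:Panel runat="server" Region="Center" Title="Center content" />
    </Items>
</ext:Window>

As with the TabPanel, no wrapping Panel is needed; the TreePanel takes the Region attribute directly.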
Something that can be fiddly from time to time is knowing which borders to take off and which ones to keep when you have nested layouts and controls like this. There is a logic to it, but sometimes a quick bit of trial and error can also help figure it out! As a programmer this may sound minor and unimportant, but usually you want to prevent the borders becoming too thick: aesthetically that can be off-putting, whereas just the right amount of border helps make the application look clean and professional. You can always give components a class via the Cls property and then fine-tune the borders (and other styles, of course) in CSS as you need.

Weighted regions

Another feature new to Ext.NET 2 is that regions can be given a weighting to influence how they are rendered and spaced out. Prior versions would require nested Border layouts to achieve this. To see how this works, consider this example, which puts a South region underneath the Center Panel only:

To achieve this output using the old way (nested Border layouts), we would do something like this:

<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" />
        <ext:Panel Region="Center" Layout="Border" Border="false">
            <Items>
                <ext:Panel Region="Center" Title="Center" />
                <ext:Panel Region="South" Split="true" Title="South" Height="100" Collapsible="true" />
            </Items>
        </ext:Panel>
    </Items>
</ext:Window>

In the preceding code, we make the Center region itself a Border layout with an inner Center region and a South region. This way, the outer West region takes up all the space on the left. If the South region were part of the outer Border layout, it would span the entire bottom of the window.

The same effect can be achieved using weighting. This means you do not need nested Border layouts; the three Panels can all be items of the containing Window, which means a few less objects being created on the client:

<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" Weight="10" />
        <ext:Panel Region="Center" Title="Center" />
        <ext:Panel Region="South" Split="true" Title="South" Height="100" Collapsible="true" />
    </Items>
</ext:Window>

The way region weights work is that the region with the highest weight is assigned space from the border before the other regions. If more than one region has the same weight, they are assigned space based on their position in the owner's Items collection (that is, first come, first served). In the preceding code, we set the Weight property to 10 on the West region only, so it is rendered first and thus takes up all the space it can before the other two are rendered.

This allows for many flexible options, and Ext.NET has an example where you can configure different values to see the effects of different weights: http://examples.ext.net/#/Layout/BorderLayout/Regions_Weights/

As the previous examples show, there are many ways to define the layout, offering you a lot of flexibility, especially when generating layouts from code-behind in a very dynamic way. Knowing that there are so many ways to define the layout, we can now speed up our look at many other types of layouts.

Summary

This article covered one of the numerous layout options available in Ext.NET, the Border layout, to help you organize your web applications.

Resources for Article:

Further resources on this subject:
- Your First ASP.NET MVC Application [Article]
- Customizing and Extending the ASP.NET MVC Framework [Article]
- Tips & Tricks for Ext JS 3.x [Article]
Downloading and setting up Bootstrap
Packt | 30 Jan 2013 | 4 min read
(For more resources related to this topic, see here.)

Getting ready

Twitter Bootstrap is more than a set of code; it is an online community. To get started, you will do well to familiarize yourself with Twitter Bootstrap's home base: http://twitter.github.com/bootstrap/

Here you'll find the following:

- The documentation: If this is your first visit, grab a cup of coffee and spend some time perusing the pages, scanning the components, reading the details, and soaking it in. (You'll see this is going to be fun.)
- The download button: You can get the latest and greatest versions of Twitter Bootstrap's CSS, JavaScript plugins, and icons, compiled and ready for action, coming to you in a convenient ZIP folder. This is where we'll start.

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.

How to do it…

Whatever your experience level, as promised, I'll walk you through all the necessary steps. Here goes!

1. Go to the Bootstrap homepage: http://twitter.github.com/bootstrap/
2. Click on the large Download Bootstrap button.
3. Locate the download file and unzip or extract it. You should get a folder named simply bootstrap. Inside this folder you should find the folders and files shown in the following screenshot:
4. From the homepage, click on the main navigation item: Get started.
5. Scroll down, or use the secondary navigation, to navigate to the heading: Examples. The direct link is: http://twitter.github.com/bootstrap/getting-started.html#examples
6. Right-click and download the leftmost example, labeled Basic Marketing Site. You'll see that it is an HTML file, named hero.html.
7. Save (or move) it to your main bootstrap folder, right alongside the folders named css, img, and js.
8. Rename the file index.html (a standard name for what will become our homepage). You should now see something similar to the following screenshot:
9. Next, we need to update the links to the stylesheets. Why? When you downloaded the starter template file, you changed the relationship between the file and its stylesheets. We need to let the file know where to find the stylesheets in this new file structure.
10. Open index.html (formerly, hero.html) in your code editor.

Need a code editor? Windows users: You might try Notepad++ (http://notepad-plus-plus.org/download/). Mac users: Consider TextWrangler (http://www.barebones.com/products/textwrangler/).

11. Find these lines near the top of the file (lines 11-18 in version 2.0.2):
12. Update the href attributes in both link tags so they point at the css folder next to index.html (a sketch of the updated tags follows at the end of this section).
13. Save your changes!
14. You're set to go! Open it up in your browser! (Double-click on index.html.) You should see something like this:

Congratulations! Your first Bootstrap site is underway.

Problems? Don't worry. If your page doesn't look like this yet, let me help you spot the problem. Revisit the steps above and double-check a couple of things:

- Are your folders and files in the right relationship? (See step 3 as detailed previously.)
- In your index.html, did you update the href attributes in both stylesheet links? (These should be lines 11 and 18 as of Twitter Bootstrap version 2.1.0.)
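For reference, here is roughly what steps 11 and 12 look like in the markup. The original article showed these lines as screenshots, so the exact tags below, the bootstrap-responsive stylesheet in particular, are an assumption based on the Bootstrap 2.x starter templates rather than a verbatim copy:

<!-- Before: hero.html, as downloaded, points at the docs' assets folder -->
<link href="../assets/css/bootstrap.css" rel="stylesheet">
<link href="../assets/css/bootstrap-responsive.css" rel="stylesheet">

<!-- After: index.html points at the css folder sitting alongside it -->
<link href="css/bootstrap.css" rel="stylesheet">
<link href="css/bootstrap-responsive.css" rel="stylesheet">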
There's more…

Of course, this is not the only way you could organize your files. Some developers prefer to place stylesheets, images, and JavaScript files all within a larger folder named assets or library. The organization method I've presented is recommended by the developers who contribute to the HTML5 Boilerplate. One advantage of this approach is that it reduces the length of the paths to our site assets. Thus, whereas others might have a path to a background image such as this:

url('assets/img/bg.jpg');

in the organization scheme I've recommended it will be shorter:

url('img/bg.jpg');

This is not a big deal for a single line of code. However, when you consider that there will be many links to stylesheets, JavaScript files, and images running throughout your site files, shaving a few characters off each path adds up. And in a world where speed matters, every bit counts: shorter paths save characters, reduce file size, and help support faster web browsing.

Summary

This article gave us a quick introduction to Twitter Bootstrap. We now have a fair idea of how to download and set up Bootstrap, and by following these simple steps we can easily create our first Bootstrap site.

Resources for Article:

Further resources on this subject:
- Starting Up Tomcat 6: Part 1 [Article]
- Build your own Application to access Twitter using Java and NetBeans: Part 2 [Article]
- Integrating Twitter with Magento [Article]

Creating Your Course with Presenter
Packt | 30 Jan 2013 | 16 min read
(For more resources related to this topic, see here.)

Animating images and objects

Animating your objects is done on the Animations ribbon. Please note that this ribbon is different in PowerPoint 2010 than it is in PowerPoint 2007. Just about everything you do will be the same; however, the locations and names of some things may differ. For the purposes of this book, we're going to look at PowerPoint 2010, but we will tell you about the differences.

Getting ready

To start, we'll need a slide with a couple of objects on it. You can copy a slide from an existing presentation into a new presentation to experiment with. I'm going to use the first slide from our "Mary Had a Little Lamb" presentation. So that we can do anything we want with this slide, I'm going to add a few buttons and an arrow to it, giving us some more objects to work with.

When animating objects, it can be extremely useful to group them. This causes an animation to act on the group of objects as if they were one object. To group two or more objects, select them together, then right-click on one of the objects. Click on the Group button; it will open another fly-out menu, then click on Group.

How to do it...

Animations can only be applied to objects that already exist. It is often easiest to place all the objects on the slide before beginning the animations; that makes it easier to see how the animations will interact with one another. For animating images and objects, perform the following steps:

1. Open the Selection pane by clicking on the Selection Pane button in the Arrange section of the Format ribbon. This will help you keep track of the various objects on the slide and their layering order.
2. To rename an object, click on its name in the Selection pane and type in a new name. The small button to the right-hand side of each object's name makes the object visible or invisible.
3. Open the Animation pane from the Animations ribbon. In PowerPoint 2010, click on the Animation Pane button in the Advanced Animation section of the ribbon. In PowerPoint 2007, click on the Custom Animation button in the Animations section of the Animations ribbon.
4. Adding an animation to a particular object consists of selecting the object and then selecting the animation for it. On the 2010 ribbon, you can add an animation either by clicking on the animation's icon in the center part of the ribbon, or by clicking on the Add Animation button in the Advanced Animations section of the ribbon. Clicking on this button opens the Animation drop-down menu, as shown in the following screenshot:
5. Select the animation you want by clicking on it.

How it works...

When you first open the Selection pane, the objects will all appear with names such as Rounded Rectangle 12, TextBox 13, and Picture 14. This numbering is sequential, in the order that the objects were added to the slide. The order in which the objects appear in the pane is the order in which they are layered on the slide; the object listed at the top is the top of the stack. To change the order, right-click on an object and select Bring to Top or Send to Bottom from the context-sensitive menu.

PowerPoint handles the ordering and layering of objects much like a desktop publishing program. If you were doing the same design on paper, gluing down blocks of text, pictures, and other objects, some of them would overlap others. This is the concept of "layering".
Every object that you put on a slide is layered, even if none of them overlap. When they do overlap, the objects added later overlap the earlier ones. This can be changed by changing the object order, using Bring to Top, Send Forward, Send Backward, and Send to Bottom in the context-sensitive menu.

The Animation dropbox itself shows the most popular animations and is divided into four color-coded sections according to the type of animation being added, as follows:

- The Entrance animations are green
- The Emphasis animations are yellow or gray
- The Exit animations are red
- The Motion Paths animations are multicolored

In addition, there are links at the bottom of the dialog box for accessing more animations than the ones shown in the dropbox.

Selecting an animation automatically adds it to the Animation pane, so that you can see it. The animations you have added are listed in the Animation pane in the order in which they run. There will be an icon of a mouse as well, to show that the animation is intended to run with the click of a mouse. The numbers to the left-hand side of each animation tell you what order they run in. If an animation has no number to its left, it is set to run with the previous one; if it has a clock face to its left, it is set to run after the previous animation, as shown in the following screenshot:

Even though we're setting the animations to start with a mouse click, that won't actually happen in the Flash video. Later, we'll be sequencing the animations, syncing them with the narration, and we will need them to be activated by a mouse click in order to do this.

There's more…

There are a number of different changes you can make to how your animations run.

Animation timing

Articulate offers the capability to time your animations precisely, which is much easier than what PowerPoint offers. This makes it easier to match the timing of the animations with the narration. Perform the following steps:

1. By right-clicking on any of the animations, you can access a context-sensitive menu which allows you to choose whether the animation runs on a mouse click, concurrently with the previous animation, or after the previous animation, using the options Start On Click, Start With Previous, and Start After Previous, as shown in the following screenshot:
2. For some animations you can change other settings, such as the direction from which the object enters the slide. This is done by clicking on Effect Options... in the context-sensitive menu. When the dialog box opens, you can set Direction, or any other setting particular to that animation, in the upper part of the dialog box. You can also add sound effects to the animation using the Sound drop-down menu in the lower part of the dialog box.
3. Clicking on Timing in the context-sensitive menu opens a dialog box that allows you to change the speed at which the animations run, as shown in the following screenshot:
4. There are five settings available in the Speed drop-down menu, from very slow to very fast. As you can see from the preceding screenshot, the program tells you how long the animation will run at each speed.
5. Using the Repeat drop-down menu, you can set the animation to repeat itself a set number of times, until the slide ends, or until the next mouse click.
The context-sensitive menu also allows you to open an advanced timeline by clicking on Show Advanced Timeline, which lets you further fine-tune your animation and its timing, including adding a delay between it and the previous animation. The timeline view lets you see, in seconds, how your animations overlap and run.

You never want the first animation on a slide to run automatically; you only want it to run on a mouse click. Setting it to Start With Previous or Start After Previous is not compatible with Flash animation.

Multiple animations

Multiple animations can be applied to a single object. You can bring the item in with an entrance animation, have a second animation performed on it for emphasis, and then have it exit with an exit animation. These can run one after the other, or with other things happening in between. Additional animations are added in the same way the first animation was added in the main part of this recipe. They will show up in the list in the order in which they were added, which is also the order in which they play.

Checking your animations

To check your animation, you can click on the Play button at the top of the Animation pane. If you would like to run it as a slide show, click on the Slide Show icon, which is located in the information bar at the bottom of the PowerPoint window, as shown in the following screenshot:

A word about style

Your viewers are not going to be impressed by presentations filled with too many of the all-too-common animations, so it is of utmost importance to use animations with discretion. They are a great way to add objects to a slide, as long as they can do so without being a distraction.

Adding audio narration to your slides

Articulate Presenter allows the use of two different types of audio in your presentation. The first is the audio track, which provides background music for your presentation. The second is narration. The program automatically adjusts the volume of your background music whenever there is narration, avoiding competition between the two.

There are two ways of creating your narration in Articulate Presenter: either by recording the narration right into the presentation, or by having your narration recorded professionally and importing it into your presentation. In this section, we will look at how both of these methods are accomplished.

Getting ready

To record narration directly into your presentation, you will need a microphone connected to your computer. It is worth buying a good quality microphone, especially if you are going to be doing a large number of presentations; the sound quality you get from a good microphone is better than what you get from a cheap one. You don't want to use the microphone that's in your webcam. Not only is it not a high-quality microphone, but the distance between you and the microphone will make you sound like you're speaking from inside a tunnel. Ideally, a microphone should be three to six inches from your mouth, pointed directly at it. Avoid moving your head from side to side as you speak, as this will make your volume level rise and fall.

The following are some key points to consider before you add audio narration to your slides:

- A windscreen for your microphone is a good investment, as it will help prevent noise from your breathing.
- You may also want to consider a desktop mic stand so that you don't have to hold the microphone in your hand.
- Before recording your narration, it's a good idea to have it written out. Many people think they can do it off the cuff, but when they get in front of the mic, they forget everything they were going to say. A great place to write out your script is in the notes area at the bottom of the PowerPoint screen.

How to do it...

We are going to record the narration directly into the presentation using Articulate Presenter's built-in recording function. Perform the following steps:

1. Before recording your narration, ensure that your presentation is ready to receive the recording. To do this, open the Presentation Options dialog box from the Articulate ribbon. On the Other tab, make sure that the Show notes pane on narration window and Record narration for one slide at a time checkboxes under the Recording section are both checked, as shown in the following screenshot:
2. Open the recording screen by clicking on the Record Narration button in the Narration section of the Articulate ribbon. If you have not yet saved your presentation, you will be asked to do so before the Record Narration screen opens. Your PowerPoint screen will seem to disappear when you open the recording screen. Don't worry, you haven't lost it; it will reappear when you finish recording your narration. If you are using multiple monitors, the narration recording screen will always appear on the far right monitor, so if there is something there you will need to access, you may want to move it before entering record mode.
3. To begin recording, click on the START RECORDING button on the Articulate ribbon.
4. While you are recording, the START RECORDING button changes to STOP RECORDING. When you have finished recording the narration for the slide, click on the STOP RECORDING button. If you click on the START RECORDING button again after you've stopped, it will start the recording again, overwriting what you just recorded.
5. As you are recording, the length of the recording will show in the yellow message bar, broken down into hours, minutes, seconds, and tenths of a second.
6. You can check your recording using the Play and Stop buttons to the right-hand side of the START RECORDING button. These buttons are identified with the standard graphical symbols for play and stop.
7. Now that you have recorded the narration for the first slide, you can move to the next slide using the right and left arrow buttons to the right-hand side of the Play and Stop buttons. You can also select which slide to edit using the drop-down menu located below these buttons.
8. To verify which slides you have already recorded, click on the small downward-pointing arrow below the slide number in the ribbon. This opens a dropbox listing all the slides with thumbnails. Every slide that has a narration recorded shows a small microphone icon, along with the duration of the narration you have recorded, as shown in the following screenshot:
9. To exit the narration recorder and return to PowerPoint, click on the Save & Close button on the ribbon.

How it works...

Your narrations are saved in a new file, which has the same name as your presentation, with the .ppta filename extension.
The file is created automatically at this point if the program has not already created it. If you have to move your presentation for any reason, be sure to move this .ppta file along with the presentation file itself; otherwise you will lose your narrations.

There's more...

Not only can you record narrations, but you can also import them into the presentation. You may choose to do this using professional voice talent, for a specific voice style or a more polished presentation.

Importing narrations into your presentation

If you decide to use professionally recorded narrations, you will probably not be able to record them with Articulate's recorder. This isn't a problem, as you can very easily import those recordings into your presentation.

There are some technical requirements for your recordings. They must be recorded in either the .wav or .mp3 format. Between the two, you are better off using .wav files, as they are not compressed like .mp3 files. This means your finished presentation will be a bigger file, but it provides better sound quality for editing. They must be recorded at a sampling rate of 44.1 kHz, 16-bit resolution, and either stereo or mono. Many recording studios and artists prefer to use a resolution of 32 bits; however, if you attempt to import 32-bit files into an Articulate presentation, all you will hear is a screech.

Perform the following steps for importing narrations:

1. To import these files, click on the Import Audio button in the Narration section of the Articulate ribbon. This will open the Import Audio dialog box. The dialog box contains a simple chart showing the slide numbers, the slide names, and the audio track for each narration. If you have recorded a narration for a slide, the existing narration is shown; if not, this column is empty.
2. Select the slide you wish to import an audio file to by clicking on it, then click on the Browse... button at the bottom of the screen. This opens a standard Windows open-file dialog box, where you can find and select the audio file for that particular narration.
3. You can select multiple narration files to be imported at once. Simply select the first file you need in the dialog box, then hold down the Shift key and select the last. If the files are not located sequentially in the folder, you can hold down the Ctrl key, select each file individually, and then import them all together.
4. When you do this, a new dialog box opens, allowing you to put the audio files in the correct order. The list of files is shown in the central part of the dialog box. To change the order, select the file you wish to move and use the Up, Down, Top, Bottom, and Reverse buttons on the right-hand side of the dialog box to move it as needed. If you do not get the order of your narration files correct in this dialog box, you will need to change the audio file associated with each slide individually, as there is no way of moving them around in the Import Audio dialog box.

Summary

This article covered the basics of creating a simple course using Presenter by itself. It taught us the basics of inserting media elements and assets.

Resources for Article:

Further resources on this subject:
- Python Multimedia: Video Format Conversion, Manipulations and Effects [Article]
- Using Web Pages in UPK 3.5 [Article]
- Adding Flash to your WordPress Theme [Article]

Eloquent relationships
Packt | 28 Jan 2013 | 12 min read
(For more resources related to this topic, see here.)

Eloquent relationships

ActiveRecord is a design pattern that describes an object-oriented way of interacting with your database. For example, your database's users table contains rows, and each of these rows represents a single user of your site. Your User model is a class that extends the Eloquent Model class. When you query a record from your database, an instantiation of your User model class is created and populated with the information from the database.

A distinct advantage of ActiveRecord is that your data and the business logic related to that data are housed within the same object. For example, it's typical to store the user's password in your model as a hash, to prevent it from being stored as plaintext. It's also typical to store the method which creates this password hash within your User class.

Another powerful aspect of the ActiveRecord pattern is the ability to define relationships between models. Imagine that you're building a blog site and your users are authors who must be able to post their writings. Using an ActiveRecord implementation, you are able to define the parameters of the relationship, and the task of maintaining that relationship is then simplified dramatically.

Simple code is easy to change; difficult-to-understand code is easy to break.

As a PHP developer, you're probably already familiar with the concept of database normalization. If you're not, normalization is the process of designing databases so that there is little redundancy in the stored data. For example, you wouldn't want both a users table which contains the user's name and a table of blog posts which also contains the author's name. Instead, your blog post record would refer to the user by their user ID. In this way we avoid synchronization problems, and a lot of extra work!

There are a number of ways in which relationships can be established in normalized database schemas.

One-to-one relationship

When a relationship connects two records in a way that doesn't allow for more records to be related, it is a one-to-one relationship. For example, a user record might have a one-to-one relationship with a passport record. In this example, a user record is not permitted to be linked to more than one passport record. Similarly, a passport record may not relate to more than one user record.

How would the database look? Your users table contains information about each user in your database. Your passports table contains passport numbers and a link to the user who owns each passport. In this example, each user has no more than one passport, and each passport must have an owner. The passports table contains its own id column, which it uses as a primary key. It also contains the column user_id, which holds the ID of the user to whom the passport belongs. Last but not least, the passports table contains a column for the passport number (a migration sketch of this table appears a little further down).

First, let's model this relationship in the User class:

class User extends Eloquent {

    public function passport()
    {
        return $this->has_one('Passport');
    }

}

We created a method named passport() that returns a relationship. It might seem strange to return relationships at first, but you'll soon come to love it for the flexibility it offers. You'll notice that we're using the has_one() method and passing the name of the model as a parameter. In this case, a user has one passport, so the parameter is the name of the passport model class. This is enough information for Eloquent to understand how to acquire the correct passport record for each user.
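As an aside, if you prefer migrations to building tables by hand, the passports table described above could be created along these lines. This is a sketch only, assuming the Schema builder of the Laravel version this article targets; the table and column names follow the schema just described, while column sizes and indexes are left to taste:

Schema::create('passports', function($table)
{
    // Auto-incrementing primary key for the passports table
    $table->increments('id');

    // Foreign key pointing at the owning row in the users table
    $table->integer('user_id');

    // The passport number itself
    $table->string('number');
});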
Now, let's look at the Passport class:

class Passport extends Eloquent {

    public function user()
    {
        return $this->belongs_to('User');
    }

}

(Note that the relationship method is singular here, since a passport belongs to exactly one user.) We're defining the passport's relationship differently. In the User class, we used the has_one() method; in the Passport class we use belongs_to(). It's vital to identify the difference early, so that understanding the rest of the relationships is simpler. When a database table contains a foreign key, it is said to belong to a record in another table. In this example, our passports table refers to records in the users table through the foreign key user_id. Consequently, we say that a passport belongs to a user. Since this is a one-to-one relationship, the user has one (has_one()) passport.

Let's say that we want to view the passport number of the user with the id of 1:

$user = User::find(1);

if (is_null($user))
{
    echo "No user found.";
    return;
}

if ($user->passport)
{
    echo "The user's passport number is " . $user->passport->number;
}
else
{
    echo "This user has no passport.";
}

In this example, we're dutifully checking to make sure that our user object was returned as expected. This is a necessary step that should not be overlooked. Then, we check whether or not the user has a passport record associated with it. If a passport record for this user exists, the related object is returned; if it doesn't exist, $user->passport returns null. In the preceding example, we test for the existence of a record and return the appropriate response.

One-to-many relationships

One-to-many relationships are similar to one-to-one relationships. In this relationship type, one model has many of another, which in turn belongs to the former. One example of a one-to-many relationship is a professional sports team's relationship to its players: one team has many players, but each player can only belong to one team. The database tables are structured the same way as before. Now, let's look at the code which describes this relationship:

class Team extends Eloquent {

    public function players()
    {
        return $this->has_many('Player');
    }

}

class Player extends Eloquent {

    public function team()
    {
        return $this->belongs_to('Team');
    }

}

This example is almost identical to the one-to-one example. The only difference is that the team's players() relationship uses has_many() rather than has_one(). The has_one() relationship returns a model object; the has_many() relationship returns an array of model objects.

Let's display all of the players on a specific team:

$team = Team::find(2);

if (is_null($team))
{
    echo "The team could not be found.";
    exit;
}

if (!$team->players)
{
    echo "The team has no players.";
    exit;
}

foreach ($team->players as $player)
{
    echo "$player->name is on team $team->name. ";
}

Again, we test to make sure that our team could be found, and then that the team has players. Once we know both for sure, we can loop through those players and echo their names. If we tried to loop through the players of a team that could not be found, without testing first, we'd get an error.

Many-to-many relationships

The last relationship type that we're going to cover is the many-to-many relationship. This relationship is different in that each record in each table could potentially be tied, simultaneously, to each record in the other. We aren't storing foreign keys in either of these tables.
Instead, we have a third table that exists solely to store our foreign keys. Let's take a look at the schema. Here we have a students table and a courses table. A student can be enrolled in many courses, and a course can contain many students. The connection between students and courses is stored in a pivot table.

A pivot table is a table that exists specifically to connect two tables in a many-to-many relationship. The standard convention for naming a pivot table is to combine the names of both related tables, singularized, alphabetically ordered, and connected with an underscore. This gives us the table name course_student. This convention is not unique to Laravel, and it's a good idea to follow the naming conventions covered here as strictly as possible, as they're widely used in the web-development industry.

It's important to notice that we're not creating a model for the pivot table. Laravel allows us to manage these tables without needing to interact with a model. This is especially nice because it doesn't make sense to model a pivot table with business logic. Only the students and courses are part of our business; the connection between them is important only to the students and the courses, not for its own sake.

Let's define these models, shall we?

class Student extends Eloquent {

    public function courses()
    {
        return $this->has_many_and_belongs_to('Course');
    }

}

class Course extends Eloquent {

    public function students()
    {
        return $this->has_many_and_belongs_to('Student');
    }

}

We have two models, each with the same type of relationship to the other. has_many_and_belongs_to is a long name, but it represents a fairly simple concept: a course has many students, but it also belongs to (belongs_to) student records, and vice versa. In this way, the two sides are considered equal.

Let's look at how we'll interact with these models in practice:

$student = Student::find(1);

if (is_null($student))
{
    echo "The student can't be found.";
    exit;
}

if (!$student->courses)
{
    echo "The student $student->name is not enrolled in any courses.";
    exit;
}

foreach ($student->courses as $course)
{
    echo "The student $student->name is enrolled in the course $course->name.";
}

Here you can see that we can loop through the courses in much the same way as with the one-to-many relationship. Any time a relationship includes the word "many", you know that you'll be receiving an array of models. Conversely, let's pull a course and see which students are part of it:

$course = Course::find(1);

if (is_null($course))
{
    echo "The course can't be found.";
    exit;
}

if (!$course->students)
{
    echo "The course $course->name seems to have no students enrolled.";
    exit;
}

foreach ($course->students as $student)
{
    echo "The student $student->name is enrolled in the course $course->name.";
}

The relationship works exactly the same way from the course side. Now that we have established this relationship, we can do some fun things with it. Let's look at how we'd enroll a new student into an existing course:

$course = Course::find(13);

if (is_null($course))
{
    echo "The course can't be found.";
    exit;
}

$new_student_information = array(
    'name' => 'Danielle'
);

$course->students()->insert($new_student_information);

Here we're adding a new student to our course by using the insert() method. This method is specific to this relationship type: it creates a new student record and also adds a record to the course_student table linking the course and the new student. Very handy! But hold on. What's this new syntax?
$course->students()->insert($new_student_information);

Notice how we're not using $course->students->insert(). Our reference to students is a method call rather than a property reference. That's because Eloquent handles methods that return relationship objects differently from other model methods. When you access a property of a model that doesn't exist, Eloquent looks to see whether you have a function matching that property's name. For example, if we try to access the property $course->students, Eloquent won't find a member variable named $students, so it looks for a function named students(). We do have one of those. Eloquent then receives the relationship object from that method, processes it, and returns the resulting student records.

If we access a relationship method as a method and not as a property, we directly receive the relationship object back. The relationship's class extends the Query class. This means that you can operate on a relationship object in the same way you can operate on a query object, except that it has additional methods specific to the relationship type. The implementation details aren't important at this point; it's just important to know that we're calling the insert() method on the relationship object returned from $course->students().

Imagine that you have a user model with a has-many-and-belongs-to relationship to a role model. Roles represent different permission groupings; example roles might include customer, admin, super admin, and ultra admin. It's easy to imagine a user form for managing a user's roles. It would contain a number of checkboxes, one for each potential role. The checkboxes are named role_ids[], and each value represents the ID of a role in the roles table.

When that form is posted, we'll retrieve those values with the Input::get() method:

$role_ids = Input::get('role_ids');

$role_ids is now an array that contains the values 1, 2, 3, and 4.

$user->roles()->sync($role_ids);

The sync() method is specific to this relationship type and is perfectly suited to our needs. We're telling Eloquent to connect our current $user to the roles whose IDs exist within the $role_ids array. Let's look at what's going on here in further detail. $user->roles() returns a has_many_and_belongs_to relationship object, and we call the sync() method on that object. Eloquent then treats the $role_ids array as the authoritative list of roles for this user: it removes any records that shouldn't exist in the role_user pivot table and adds records for any role that should exist there.

Summary

In this article we discussed three types of Eloquent relationships: the one-to-one relationship, the one-to-many relationship, and the many-to-many relationship.

Resources for Article:

Further resources on this subject:
- Modeling Relationships with GORM [Article]
- Working with Simple Associations using CakePHP [Article]
- NHibernate 2: Mapping relationships and Fluent Mapping [Article]
Packaging Content Types and Feeds Importers
Packt | 25 Jan 2013 | 8 min read
(For more resources related to this topic, see here.)

Features

Let's get started. First, we will look at some background on what Features does. The code that the Features module gives us takes the form of module files sitting in a module folder, which we can save to our /sites/all/modules directory as we would any other contributed module. Using this method, the entire configuration that we spent hours building is saved into a module file, in code.

The Features module will keep track of the tweaks we make to our content type configuration or importer. If we make changes to our type or importer, we simply save a new version of our feature module.

The Features configuration and setup screen is at Structure | Features, or you can go directly to the path admin/structure/features. There is no generic configuration for Features that you need to worry about setting up. If you have the Feeds module installed, as we do, you'll see two example features that the Feeds module provides: Feeds Import and Feeds News. You can use these provided features or create your own. We're going to create our own in the next section. You should see the following screen at this point:

Building a content type feature

We have two custom content types so far on our site, Fire Department and Organization Type. Let's package up the Fire Department content type as a feature, so that the Features module can start keeping track of its configuration and any changes we make going forward.

Creating and enabling the feature

First, click on the Create Feature tab on your Features administration screen. The screen will load a new create-feature form. Now follow these steps to create your first feature; we're going to package up our Fire Department content type:

1. Enter a name for your feature. This should be something specific, such as Fire Department Content Type.
2. Add a description for the feature, something like: This feature packages up our Fire Department Content type configuration.
3. You can create a specific package for your feature. This will help to organize and group your features on the main Features admin screen. Let's call this package Content Types.
4. Version your feature. This is very important, as your feature is going to be a module, and it's a good idea to bump the version number each time you change it. Our first version will be 7.x-1.0.
5. Leave the URL of update XML blank for now. By this point you should see the following:
6. Now we're going to add our components to the feature. As this feature will be our Fire Department content type configuration, we need to choose this content type as our component. In the drop-down box select Content types: node, then check the Fire Department checkbox. When you do this, you'll see a timer icon appear for a second, and then all of your content type's fields, associated taxonomy, and dependencies will magically appear in the table to the right. This means that your feature is adding the entire content type configuration.

Features is a smart module. It will automatically associate any fields, taxonomy, or other dependencies and requirements with your specific feature configuration. As our content type has taxonomy vocabularies associated with it (in the form of the term reference fields), you'll notice that both country and fire_department_type appear in the Taxonomy row of the feature table.
You should now see the following:

7. Now click on the Download feature button at the bottom of the screen to download the actual module code for our Fire Department feature module. Clicking on Download feature will download the module's .tar file to your local computer.
8. Find the .tar file and extract it into your /sites/all/modules directory. For organizational best practice, I recommend placing it into a /custom directory within /sites/all/modules, as this is really a custom module. You should now see a folder called fire_department_content_type in your /sites/all/modules/custom folder. This folder contains the feature module files that you just downloaded.

Now, if you go back to your main Features administration screen, you will see a new tab titled Content Types containing your new feature module, Fire Department Content Type. Currently this feature is disabled; you'll notice the version number in the same row. Go ahead and check the checkbox next to your feature and then click on the Save settings button. What you are doing here is enabling your feature as a module on your site, and from now on your content type's configuration will always run from this codebase.

When you click on Save settings, your feature should be enabled and showing Default status. When a feature is in the Default state, your configuration (in this case the Fire Department content type) matches your feature module's codebase. The feature is now set up to keep track of any changes that occur to the content type. For example, if you added a new field to your content type, or tweaked any of its existing fields, display formatters, or any other part of its configuration, the feature module would change to a status of Overridden. We'll demonstrate this in the next section.

The custom feature module

Before we show the Overridden status, however, let's take a look at the actual custom feature module code that we've saved. You'll recall that we added a new folder for our Fire Department content type feature to our /sites/all/modules/custom folder. If you look inside the feature module's folder, you'll see the following files, which follow the structure of any Drupal module:

- fire_department_content_type.features.field.inc
- fire_department_content_type.features.inc
- fire_department_content_type.features.taxonomy.inc
- fire_department_content_type.info
- fire_department_content_type.module

Anyone familiar with Drupal modules will recognize that this is indeed a Drupal module, with .info, .module, and .inc files. If you inspect the .info file in an editor, you'll see the following code (this is an excerpt):

name = Fire Department Content Type
description = This feature packages up our Fire Department Content type configuration
core = 7.x
package = Content Types
version = 7.x-1.0
project = fire_department_content_type
dependencies[] = features

The brunt of our module is in the fire_department_content_type.features.field.inc file. This file contains all of our content type's fields, defined as a series of entries in a $fields array (see the following excerpt):

/**
 * @file
 * fire_department_content_type.features.field.inc
 */

/**
 * Implements hook_field_default_fields().
 */
function fire_department_content_type_field_default_fields() {
  $fields = array();

  // Exported field: 'node-fire_department-body'.
  $fields['node-fire_department-body'] = array(
    'field_config' => array(
      'active' => '1',
      'cardinality' => '1',
      'deleted' => '0',
      'entity_types' => array(
        0 => 'node',
      ),
      'field_name' => 'body',
      'foreign keys' => array(
        'format' => array(
          'columns' => array(
            'format' => 'format',
          ),
          'table' => 'filter_format',

If you view the taxonomy.inc file, you'll see two arrays that return the vocabularies we're referencing via the content type's term reference fields. As you can see, this module has packaged up our entire content type configuration. It's beyond the scope of this book to go into more detail about the actual module files, but you can see how powerful this can be. If you are a module developer, you could add code to the feature module's files to extend and expand your content type directly from code, which would then be synced to your feature module codebase. Generally you do not tweak a feature this way, but you do have access to the code and can make changes to it. What we'll be doing instead is overriding our feature from the content type configuration level.

Additionally, if you load your site's modules admin screen and scroll down to the new package called Content Types, you'll see your feature module enabled there:

If you disable the feature module here, it will also be disabled on your Features admin screen. Best practice dictates that you should first disable a feature module via the Features admin screen; this will then disable the module on the modules admin screen.
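Though this article works entirely through the admin UI, it's worth noting that the Features module also integrates with drush, which is handy once a feature reports Overridden. The commands below are the standard Features drush commands; the machine name matches the feature created above, but double-check yours, and note that features-diff relies on the Diff module being installed:

# List all features and their states (Default / Overridden / Disabled)
drush features-list

# Show what changed when a feature is Overridden (requires the Diff module)
drush features-diff fire_department_content_type

# Re-export the feature so its code matches the site's current configuration
drush features-update fire_department_content_type

# Or revert the site's configuration back to what the feature's code defines
drush features-revert fire_department_content_type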

Securing Portal Contents
Packt | 24 Jan 2013 | 8 min read
(For more resources related to this topic, see here.)

Introduction

This article discusses configurations aimed at providing security features to portals and all their related components. We will see that we can work using either the web console or the XML configuration files; as you would expect, the latter is more flexible in most instances. Many of the configuration snippets shown in the article are based on Enterprise Deployment Descriptors (DDs), and XML remains the best option for configuring the product. We will configure GateIn in different ways to show how to adapt some of its internal components to your needs.

Enterprise Deployment Descriptors (DDs) are configuration files for enterprise application components that are deployed in an application server. The goal of a deployment descriptor is to define how a component must be deployed in the container, configuring the state of the application and its internal components. These configuration files were introduced in the Java Enterprise Platform to manage the deployment of components such as web applications, Enterprise JavaBeans, web services, and so on. Typically, for each specific container, the descriptor definition differs according to the vendor and the standard specifications.

Typically, a portal consists of pages belonging to a public section and a private section; depending on the purpose, we can of course also work with a completely private portal. The two main mechanisms used in any user-based application are the following:

- Authentication
- Authorization

In this article we will discuss authorization: how to configure and manage permissions for all the objects involved in the portal. As an example, a User is a member of a Group, and that membership provides the user with some authorizations: the things that members of the Group can do in the portal. On the other side, a page is defined with permissions that say which Groups can access it. Now we are going to see how to configure and manage these permissions for pages, components within a page, and so on.

Securing portals

The authorization model of the portal is based on the association between the following actors: groups, memberships, users, and any content inside the portal (pages, categories, or portlets). In this recipe, we will assign the admin role to a set of pages under a specific URL of the portal. This configuration can be found in the default portal provided with GateIn, so you can take the complete code from there.

Getting ready

Locate the web.xml file inside your portal application.

How to do it...

We need to configure the web.xml file, assigning the admin role to the pages under the URL http://localhost:8080/portal/admin/* in the following way:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>admin authentication</web-resource-name>
        <url-pattern>/admin/*</url-pattern>
        <http-method>POST</http-method>
        <http-method>GET</http-method>
    </web-resource-collection>
    <auth-constraint>
        <role-name>admin</role-name>
    </auth-constraint>
    <user-data-constraint>
        <transport-guarantee>NONE</transport-guarantee>
    </user-data-constraint>
</security-constraint>

The role must be declared separately, outside the security-constraint element, using the security-role tag. The role-name tag defines the ID of the role:

<security-role>
    <description>the admin role</description>
    <role-name>admin</role-name>
</security-role>

How it works...
There's more...

Configuring GateIn with JAAS

GateIn uses JAAS (Java Authentication and Authorization Service) as its security model. JAAS is the most common framework used in the Java world to manage authentication and authorization. The goal of this framework is to separate the responsibility for users' permissions from the Java application. In this way, you have a bridge for permission management between your application and the security provider. For more information about JAAS, please see the following URL:

http://docs.oracle.com/javase/6/docs/technotes/guides/security/jaas/JAASRefGuide.html

Java EE application servers and JSP/servlet containers, such as JBoss and Tomcat, also support JAAS with specific deployment descriptors. The default JAAS module implemented in GateIn synchronizes the users and roles from the database. In order to add your portal to a specific realm, add the following snippet to web.xml:

<login-config>
. . .
  <realm-name>gatein-domain</realm-name>
. . .
</login-config>

Notice that a realm can be managed by JAAS or by another authorization framework; from the Java EE point of view, it does not matter which one is used. gatein-domain is the ID of the default GateIn domain that we will use as the default reference in the following recipes.

See also

The Securing with JBoss AS recipe
The Securing with Tomcat recipe

Securing with JBoss AS

In this recipe, we will configure GateIn with JAAS using JBoss AS (5.x and 6.x).

Getting ready

Locate the WEB-INF folder inside your portal application.

How to do it...

Create a new file named jboss-web.xml in the WEB-INF folder with the following content:

<jboss-web>
  <security-domain>java:/jaas/gatein-domain</security-domain>
</jboss-web>

How it works...

java:/jaas/gatein-domain is the JNDI URL through which the JAAS modules will be referenced. This URL automatically resolves to the JAAS modules registered under the name gatein-domain. The configuration of these modules can be found inside the gatein-jboss-beans.xml file. Usually, this file sits inside the deployed <PORTAL_WAR_ROOT>/META-INF, but it could be placed anywhere inside the deploy directory of JBoss, thanks to the auto-discovery feature provided by the JBoss AS. Here is an example:

<deployment >
  <application-policy name="gatein-domain">
    <authentication>
      <login-module code="org.gatein.wci.security.WCILoginModule"
        flag="optional">
        <module-option name="portalContainerName">portal</module-option>
        <module-option name="realmName">gatein-domain</module-option>
      </login-module>
      <login-module code="org.exoplatform.web.security.PortalLoginModule"
        flag="required">
      ………..
  </application-policy>
</deployment>

JAAS allows you to chain several login modules, which are executed in cascade according to their flag attribute. The valid values for the flag attribute and their respective semantics, as defined in the standard Java API, are the following:

Required: The LoginModule is required to succeed. Whether it succeeds or fails, authentication still proceeds to the next LoginModule in the list.
Requisite: The LoginModule is required to succeed. If it succeeds, authentication continues with the next LoginModule in the list. If it fails, control immediately returns to the application and authentication does not proceed to the next LoginModule.
Sufficient: The LoginModule is not required to succeed. If it does succeed, control immediately returns to the application and authentication does not proceed to the next LoginModule. If it fails, authentication continues with the next LoginModule.
Optional: The LoginModule is not required to succeed. Whether it succeeds or fails, authentication still proceeds to the next LoginModule.
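To see how these flags interact, consider the following minimal application-policy sketch; the module class names are hypothetical placeholders, not real GateIn modules:

<application-policy name="example-domain">
  <authentication>
    <!-- Tried first. If it succeeds, control returns to the application
         immediately and the module below is skipped. -->
    <login-module code="com.example.TokenLoginModule"
      flag="sufficient"/>
    <!-- Reached only when the sufficient module fails; it must then
         succeed for the overall login to succeed. -->
    <login-module code="com.example.DatabaseLoginModule"
      flag="required"/>
  </authentication>
</application-policy>

In other words, a token-based login is attempted first and, failing that, the user falls back to a database check, which then becomes mandatory.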
Look at the Choosing the JAAS modules recipe for details about each login module.

See also

The Securing portals recipe
The Securing with Tomcat recipe
The Choosing the JAAS modules recipe

Securing with Tomcat

In this recipe, we will configure a JAAS realm using Tomcat 6.x.x/7.x.x.

Getting ready

Locate the declaration of the realm inside <PORTAL_WAR_ROOT>/META-INF/context.xml.

How to do it...

Adapt the default configuration to your needs, as described in the previous recipe. The default configuration is the following:

<Context path='/portal' docBase='portal' debug='0'
  reloadable='true' crossContext='true' privileged='true'>
  <Realm className='org.apache.catalina.realm.JAASRealm'
    appName='gatein-domain'
    userClassNames='org.exoplatform.services.security.jaas.UserPrincipal'
    roleClassNames='org.exoplatform.services.security.jaas.RolePrincipal'
    debug='0' cache='false'/>
  <Valve className='org.apache.catalina.authenticator.FormAuthenticator'
    characterEncoding='UTF-8'/>
</Context>

Then change the default configuration of the JAAS domain, which is defined in the TOMCAT_HOME/conf/jaas.conf file. Here is the default configuration:

gatein-domain {
  org.gatein.wci.security.WCILoginModule optional;
  org.exoplatform.services.security.jaas.SharedStateLoginModule required;
  org.exoplatform.services.security.j2ee.TomcatLoginModule required;
};

How it works...

As we saw in the previous recipe, we can configure the login modules in Tomcat using a different configuration file. This means that we can change and add login modules related to a specific JAAS realm. The context.xml file is stored inside the web application. If you don't want to modify this file, you can add a new file called portal.xml in the conf folder to override the current configuration, as sketched below.
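As a sketch, and assuming the usual Tomcat layout, such an override file would typically live at TOMCAT_HOME/conf/Catalina/localhost/portal.xml (the engine and host directory names may differ in your installation) and could mirror the realm settings shown above:

<!-- conf/Catalina/localhost/portal.xml: in a per-context descriptor,
     the file name, not a path attribute, determines the context path -->
<Context docBase='portal' reloadable='true' crossContext='true'
  privileged='true'>
  <Realm className='org.apache.catalina.realm.JAASRealm'
    appName='gatein-domain'
    userClassNames='org.exoplatform.services.security.jaas.UserPrincipal'
    roleClassNames='org.exoplatform.services.security.jaas.RolePrincipal'
    debug='0' cache='false'/>
  <Valve className='org.apache.catalina.authenticator.FormAuthenticator'
    characterEncoding='UTF-8'/>
</Context>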
See also

The Securing with JBoss AS recipe
The Choosing the JAAS modules recipe