

Using third-party plugins (non-native plugins)

Packt
30 Aug 2013
4 min read
(For more resources related to this topic, see here.) We want to focus on a particular case here. We have already seen how to add a new property, and for some components we can easily add the plugins or features property and then add the plugin configuration. But components whose native plugins are supported by the API, such as the grid panel from Ext JS, do not allow us to do so: we can only use the plugins and features that are available within Sencha Architect. What if we want to use a third-party plugin or feature such as the Filter Plugin? It is possible, but we need to use an advanced feature of Sencha Architect: creating overrides.

A disclaimer about overrides: avoid them whenever you can. Whenever a set method can change a property, use it. Overrides should be your last resort, and they should be used very carefully; used carelessly, they can change the behavior of a component and something may stop working. But we will demonstrate how to do it in a safe way!

We will use the BooksGrid as an example in this topic. Let's say we need to use the Filter Plugin on it, so we need to create an override first. To do so, select the BooksGrid in the project inspector, open the code editor, and click on the Create Override button (Step 1). Sencha Architect will display a warning (Step 2); we can click on Yes to continue. The code editor will open the override class (Step 3) so we can enter our code. In this file we have complete freedom to do whatever we need, so let's add the features() function with the declaration of the plugin, and also the initComponent() function, as shown in the following screenshot (Step 4). One thing that is very important: we must call the callParent() function (callOverridden() is deprecated in Ext JS 4.1 and later versions) to make sure we keep all the original behavior of the component (in this case, the BooksGrid class).
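The exact override file is generated by Sencha Architect and shown in the book's screenshot; as a plain-JavaScript sketch of the mechanism only (the names BaseGrid and applyOverride are ours, not Architect's output), the key idea is that the wrapped initComponent still invokes the original implementation, which is exactly what callParent() guarantees:

```javascript
// Minimal stand-in for the override pattern: BaseGrid plays the role of
// the BooksGrid class, applyOverride plays the role of the override file.
// The wrapped initComponent adds the new feature and then still calls the
// original, so no base behavior is lost.
function BaseGrid() {}
BaseGrid.prototype.initComponent = function () {
  this.columnsReady = true; // original behavior we must not lose
};

function applyOverride(proto, members) {
  var parentInit = proto.initComponent;
  proto.initComponent = function () {
    this.features = members.features(); // add the Filter feature...
    parentInit.call(this);              // ...then defer to the original
  };
}

applyOverride(BaseGrid.prototype, {
  features: function () {
    return [{ ftype: 'filters', local: true }];
  }
});

var grid = new BaseGrid();
grid.initComponent();
// grid keeps its original behavior (columnsReady) and gains the feature
```

If the override forgot to call the original function, columnsReady would never be set; that is precisely the kind of silent breakage the callParent() rule protects against.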
The only thing we want to do is add a new feature to it. And we are done with the override! To go back to the original class we can use the navigator, as shown in the following screenshot. Notice that a requires entry was added for the class Packt.view.override.BooksGrid, which is the class we just wrote.

The next step is to add the plugin to the class requires. To do so, we need to select the BooksGrid, go to the config panel, and add the requires entry with the name of the plugin (Ext.ux.grid.FiltersFeature). Some developers like to add the plugin file directly as a JavaScript file in app.html/index.html. Sencha provides the dynamic loading feature, so let's take advantage of it and use it! First, we cannot forget to add the ux folder with the plugin to the project root folder, as shown in the following screenshot.

Next, we need to set the application loader. Select the Application from the project inspector (Step 5), then go to the config panel, locate the Loader Config property, click on the + icon (Step 6), then click on the arrow icon (Step 7). The details of the loader will be available on the config panel. Locate the paths property and click on it (Step 8). The code editor will open with the loader path's default value, which is {"Ext": "."} (Step 9). Do not remove it; simply add the path of the Ext.ux namespace, which is the ux folder (Step 10).

And we are almost done! We need to add the filterable option to each column whose values we want the user to be able to filter (Step 11). We can use the config panel to add a new property (we need to select the desired column from the project inspector first—always remember to do this), and then we can choose what type of property we want to add (Step 12 and Step 14). For example, we can add filterable: true (Step 13) for the Id column and filterable: {type: 'string'} (Step 15 and Step 16) for the Name column, as shown in the following screenshot. And the plugin is ready to be used!
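To tie the loader and column steps together in code form, here is a plain-JavaScript sketch of what the two configurations amount to (the Ext.Loader stub exists only so the snippet is self-contained; in a real project the framework provides Ext, and Architect writes the loader config for you when you edit the paths property):

```javascript
// Stub so the sketch runs standalone; in a real app Ext comes from the framework.
var Ext = { Loader: { setConfig: function (cfg) { this.cfg = cfg; } } };

// Loader paths as stored in Steps 8-10: keep the default "Ext" entry and
// add the Ext.ux namespace, mapped to the ux folder in the project root.
Ext.Loader.setConfig({
  enabled: true,
  paths: {
    'Ext': '.',      // default entry, do not remove
    'Ext.ux': 'ux'   // maps the Ext.ux namespace to the ux folder
  }
});

// Column configs corresponding to Steps 11-16: a boolean enables the
// default filter; an object picks the filter type explicitly.
var columns = [
  { dataIndex: 'id',   filterable: true },
  { dataIndex: 'name', filterable: { type: 'string' } }
];
```

With this in place, the dynamic loader can resolve Ext.ux.grid.FiltersFeature from the ux folder instead of requiring a manual script tag in index.html.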
Summary

In this article we learned some useful tricks that can help with our everyday tasks when working on Sencha projects with Sencha Architect. We also covered advanced topics such as creating overrides to use third-party plugins and features, and implementing multilingual apps.

Resources for Article: Further resources on this subject: Sencha Touch: Layouts Revisited [Article] Sencha Touch: Catering Form Related Needs [Article] Creating a Simple Application in Sencha Touch [Article]


Customization

Packt
29 Aug 2013
18 min read
(For more resources related to this topic, see here.) Now that you've got a working multisite installation, we can start to add some customizations. Customizations can come in a few different forms. You're probably aware of the customizations that can be made via WordPress plugins and custom WordPress themes. Another way to customize a multisite installation is by creating a landing page that displays information about each blog in the multisite network, as well as information about the author of each individual blog. I wrote a blog post shortly after WordPress 3.0 came out detailing how to set this landing page up. At the time, I was working for a local newspaper and we were setting up a blog network for some of our reporters to blog about politics (being in Iowa, politics are a pretty big deal here, especially around Caucus time). You can find the post at http://www.longren.org/how-to-wordpress-3-0-multi-site-blog-directory/ if you'd like to read it. There's also a blog-directory.zip file attached to the post that you can download and use as a starting point.

Before we get into creating the landing page, let's get the really simple stuff out of the way and briefly go over how themes and plugins are managed in WordPress multisite installations. We'll start with themes. Themes can be activated network-wide, which is really nice if you have a theme that you want every site in your blog network to use. You can also activate a theme for an individual blog instead of activating it for the entire network. This is helpful if one or two individual blogs need a totally unique theme that you don't want to be available to the other blogs.

Theme management

You can install themes on a multisite installation the same way you would with a regular WordPress install. Just upload the theme folder to your wp-content/themes folder to install the theme.
Installing a theme is only part of the process; for individual blogs to use the themes, you'll need to activate them for the entire blog network or for specific blogs. To activate a theme for an entire network, click on Themes and then click on Installed Themes in the Network Admin dashboard. Check the themes that you want to enable, select Network Enable in the Bulk Actions drop-down menu, and then click on the Apply button. That's all there is to activating a theme (or multiple themes) for an entire multisite network. The individual blog owners can then apply the theme just as they would in a regular, non-multisite WordPress installation.

To activate a theme for just one specific blog and not the entire network, locate the target blog using the Sites menu option in the Network Admin dashboard. After you've found it, put your mouse cursor over the blog URL or domain. You should see the action menu appear immediately under the blog URL or domain; it includes options such as Edit, Dashboard, and Deactivate. Click on the Edit action menu item and then navigate to the Themes tab. To activate an individual theme, just click on Enable below the theme that you want to activate. Or, if you want to activate multiple themes for the blog, check all the themes you want through the checkboxes on the left-hand side of each theme in the list, select Enable in the Bulk Actions drop-down menu, and then click on the Apply button. An important thing to keep in mind is that themes that have been activated for the entire network won't be shown here. Now the blog administrator can apply the theme to their blog just as they normally would.

Plugin management

To install a plugin for network use, upload the plugin folder to wp-content/plugins/ as you normally would. Unlike themes, plugins cannot be activated on a per-site basis. As network administrator, you can add a plugin to the Plugins page for all sites, but you can't make a plugin available to one specific site.
It's all or nothing. You'll also want to make sure that you've enabled the Plugins page for the sites that need it. You can enable the Plugins page by visiting the Network Admin dashboard and then navigating to the Network Settings page. At the bottom of that page you should see a Menu Settings section where you can check a box next to Plugins to enable the Plugins page. Make sure to click on the Save Changes button at the bottom or nothing will change. You can see the Menu Settings section in the following screenshot; that's where you'll want to enable the Plugins page. Enabling the Plugins page

After you've ensured that the Plugins page is enabled, specific site administrators will be able to enable or disable plugins as they normally would. To enable a plugin for the entire network, go to the Network Admin dashboard, mouse over the Plugins menu item, and then click on Installed Plugins. This page will look pretty familiar to you; it looks much like the Installed Plugins page on a typical WordPress single-site installation. The following screenshot shows the installed Plugins page: Enable plugins for the entire network

You'll notice that below each plugin there's some text that reads Network Activate. I bet you can guess what clicking that will do: clicking on the Network Activate link will activate that plugin for the entire network. That's all there is to the basic plugin setup in WordPress multisite. There's another plugin feature that is often overlooked in WordPress multisite, and that's must-use plugins. These are plugins that are required for every blog or site on the network. Must-use plugins can be installed in the wp-content/mu-plugins/ folder, but they must be single-file plugins; files within folders won't be read. You can't activate or deactivate must-use plugins: if they exist in the mu-plugins folder, they're used.
They're entirely hidden from the Plugin pages, so individual site administrators won't even see them or know they're there. I don't think must-use plugins are commonly used, but it's nice information to have just in case. Some plugins, especially domain mapping plugins, need to be installed in mu-plugins and need to be activated before the normal plugins.

Third-party plugins and plugins for plugin management

We should also discuss some of the plugins that are available for making the management of plugins and themes on WordPress multisite installations a bit easier. One of the most popular is called Multisite Plugin Manager, developed by Aaron Edwards of UglyRobot.com. The Multisite Plugin Manager plugin was previously known as WPMU Plugin Manager. The plugin can be obtained from the WordPress Plugin Directory at http://wordpress.org/plugins/multisite-plugin-manager/. Here's a quick rundown of some of the plugin's features:

- Select which plugins specific sites have access to
- Set certain plugins to autoactivate themselves for new blogs or sites
- Activate/deactivate a plugin on all network sites
- Assign special plugin access permissions to specific network sites

Another plugin that you may find useful is called WordPress MU Domain Mapping. It allows you to easily map any blog or site to an external domain. You can find this plugin in the WordPress Plugin Directory at http://wordpress.org/plugins/wordpress-mu-domain-mapping/. There's one other plugin I want to mention; the only drawback is that it's not a free plugin. It's called WP Multisite Replicator, and you can probably guess what it does. This plugin will allow you to set up a "template" blog or site and then replicate that site when adding new sites or blogs. The idea is that you'd create a blog or site that has all the features that other sites in your network will need. Then, you can easily replicate that site when creating a new site or blog.
It will copy widgets, themes, and plugin settings to the new site or blog, which makes deploying new, identical sites extremely easy. It's not an expensive plugin, costing about $36 at the time of writing, which is well worth it in my opinion if you're going to be creating lots of sites that share the same basic feature set. WP Multisite Replicator can be found at http://wpebooks.com/replicator/.

Creating a blog directory / landing page

Now that we've got the basic theme and plugin stuff taken care of, it's time to move on to creating a blog directory or a landing page, whichever you prefer to call it. From this point on I'll be referring to it as a blog directory. You can see a basic version of what we're going to make in the following screenshot. The users on my example multisite installation, at http://multisite.longren.org/, are Kayla and Sydney, my wife and daughter. Blog directory example

As I mentioned earlier in this article, I wrote a post about creating this blog directory back when WordPress 3.0 was first released in 2010. I'll be using that post as the basis for most of what we'll do to create the blog directory, with some things changed around so this will integrate more nicely into whatever theme you're using on the main network site. The first thing we need to do is create a basic WordPress page template that we can apply to a newly created WordPress page. This template will contain the HTML structure for the blog directory and will dictate where the blog names will be shown and where the recent posts and blog description will be displayed. There's no reason that you need to stick with the following blog directory template specifically. You can take the code and add or remove various elements, such as the recent posts if you don't want to show them. You'll want to implement this blog directory template as a child theme in WordPress. To do that, just make a new folder in wp-content/themes/.
I typically name my child theme folders after their parent themes, so the child theme folder I made was wp-content/themes/twentythirteen-tyler/. Once you've got the child theme folder created, make a new file called style.css and make sure it has the following code at the top:

/*
Theme Name: Twenty Thirteen Child Theme
Theme URI: http://yourdomain.com
Description: Child theme for the Twenty Thirteen theme
Author: Your name here
Author URI: http://example.com/about/
Template: twentythirteen
Version: 0.1.0
*/

/* ================ */
/* = The 1Kb Grid = */
/* 12 columns, 60 pixels each, with 20 pixel gutter */
/* ================ */
.grid_1 { width: 60px; }
.grid_2 { width: 140px; }
.grid_3 { width: 220px; }
.grid_4 { width: 300px; }
.grid_5 { width: 380px; }
.grid_6 { width: 460px; }
.grid_7 { width: 540px; }
.grid_8 { width: 620px; }
.grid_9 { width: 700px; }
.grid_10 { width: 780px; }
.grid_11 { width: 860px; }
.grid_12 { width: 940px; }

.column {
    margin: 0 10px;
    overflow: hidden;
    float: left;
    display: inline;
}
.row {
    width: 960px;
    margin: 0 auto;
    overflow: hidden;
}
.row .row {
    margin: 0 -10px;
    width: auto;
    display: inline-block;
}
.author_bio {
    border: 1px solid #e7e7e7;
    margin-top: 10px;
    padding-top: 10px;
    background: #ffffff url('images/sign.png') no-repeat right bottom;
    z-index: -99999;
}
small { font-size: 12px; }
.post_count {
    text-align: center;
    font-size: 10px;
    font-weight: bold;
    line-height: 15px;
    text-transform: uppercase;
    float: right;
    margin-top: -65px;
    margin-right: 20px;
}
.post_count a {
    color: #000;
}
#content a {
    text-decoration: none;
    -webkit-transition: text-shadow .1s linear;
    outline: none;
}
#content a:hover {
    color: #2DADDA;
    text-shadow: 0 0 6px #278EB3;
}

The preceding code adds the styling to your child theme and also tells WordPress the name of your child theme. You can set a custom theme name by changing the Theme Name line to whatever you like. The only fields in that big comment block that are required are Theme Name and Template.
Template should be set to the parent theme's folder name. Now create another file in your child theme folder and name it blog-directory.php. The remaining blocks of code need to go into that blog-directory.php file:

<?php
/**
 * Template Name: Blog Directory
 *
 * A custom page template with a sidebar.
 * Selectable from a dropdown menu on the add/edit page screen.
 *
 * @package WordPress
 * @subpackage Twenty Thirteen
 */
?>
<?php get_header(); ?>
<div id="container" class="onecolumn">
<div id="content" role="main">
<?php the_post(); ?>
<div id="post-<?php the_ID(); ?>" <?php post_class(); ?>>

<?php if ( is_front_page() ) { ?>
    <h2 class="entry-title"><?php the_title(); ?></h2>
<?php } else { ?>
    <h1 class="entry-title"><?php the_title(); ?></h1>
<?php } ?>

<div class="entry-content">
<!-- start blog directory -->
<?php
// Get the authors from the database
global $wpdb;
$query = "SELECT ID, user_nicename from $wpdb->users WHERE ID != '1' ORDER BY 1 LIMIT 50";
$author_ids = $wpdb->get_results($query);

// Loop through each author
foreach ($author_ids as $author) {
    // Get user data
    $curauth = get_userdata($author->ID);
    // Get link to author page
    $user_link = get_author_posts_url($curauth->ID);
    // Get blog details for the author's primary blog ID
    $blog_details = get_blog_details($curauth->primary_blog);

    $postText = "posts";
    if ($blog_details->post_count == "1") {
        $postText = "post";
    }
    $updatedOn = strftime("%m/%d/%Y at %l:%M %p", strtotime($blog_details->last_updated));
    if ($blog_details->post_count == "") {
        $blog_details->post_count = "0";
    }

    $posts = $wpdb->get_col("SELECT ID FROM wp_" . $curauth->primary_blog . "_posts WHERE post_status='publish' AND post_type='post' AND post_author='$author->ID' ORDER BY ID DESC LIMIT 5");
    $postHTML = "";
    $i = 0;
    foreach ($posts as $p) {
        $postdetail = get_blog_post($curauth->primary_blog, $p);
        if ($i == 0) {
            $updatedOn = strftime("%m/%d/%Y at %l:%M %p", strtotime($postdetail->post_date));
        }
        $postHTML .= "&#149; <a href=\"$postdetail->guid\">$postdetail->post_title</a><br />";
        $i++;
    }
?>

The preceding code sets up the template and queries the WordPress database for authors. In WordPress multisite, users who have the Author permission type have a blog on the network. There's also code for grabbing the most recent posts from each of the network sites so they can be displayed:

<div class="author_bio">
<div class="row">
    <div class="column grid_2">
        <a href="<?php echo $blog_details->siteurl; ?>"><?php echo get_avatar($curauth->user_email, '96', 'http://www.gravatar.com/avatar/ad516503a11cd5ca435acc9bb6523536'); ?></a>
    </div>
    <div class="column grid_6">
        <a href="<?php echo $blog_details->siteurl; ?>" title="<?php echo $curauth->display_name; ?> - <?=$blog_details->blogname?>"><?php //echo $curauth->display_name; ?><?=$curauth->display_name;?></a><br />
        <small><strong>Updated <?=$updatedOn?></strong></small><br />
        <?php echo $curauth->description; ?>
    </div>
    <div class="column grid_3">
        <h3>Recent Posts</h3>
        <?=$postHTML?>
    </div>
</div>
<span class="post_count"><a href="<?php echo $blog_details->siteurl; ?>" title="<?php echo $curauth->display_name; ?>"><?=$blog_details->post_count?><br /><?=$postText?></a></span>
</div>
<?php } ?>
<!-- end blog directory -->

<?php wp_link_pages( array( 'before' => '<div class="page-link">' . __( 'Pages:', 'twentythirteen' ), 'after' => '</div>' ) ); ?>
<?php edit_post_link( __( 'Edit', 'twentythirteen' ), '<span class="edit-link">', '</span>' ); ?>
</div><!-- .entry-content -->
</div><!-- #post-<?php the_ID(); ?> -->
<?php comments_template( '', true ); ?>
</div><!-- #content -->
</div><!-- #container -->
<?php //get_sidebar(); ?>
<?php get_footer(); ?>

Once you've got your blog-directory.php template file created, we can get started setting up the page that will serve as our blog directory. You'll need to set the root site's theme to your child theme; do it just as you would on a non-multisite WordPress installation. Before we go further, let's create a couple of network sites so we have something to see on our blog directory.
Go to the Network Admin dashboard, mouse over the Sites menu option in the left-hand side menu, and then click on Add New. If you're using a directory network type, as I am, the value you enter for the Site Address field will be the path to the directory that site sits in. So, if you enter tyler as the Site Address value, the site can be reached at http://multisite.longren.org/tyler/. The settings that I used to set up multisite.longren.org/tyler/ can be seen in the following screenshot. You'll probably want to add a couple of sites just so you get a good idea of what your blog directory page will look like. Example individual site setup

Now we can set up the actual blog directory page. On the main dashboard (that is, /wp-admin/index.php), mouse over the Pages menu item on the left-hand side of the page and then click on Add New to create a new page. I usually name this page Home, as I use the blog directory as the first page that visitors see when visiting the site. From there, visitors can choose which blog they want to visit and are also shown a list of the most recent posts from each blog. There's no need to enter any content on the page, unless you want to. The important part is selecting the Blog Directory template: before you publish your new Home / blog directory page, make sure that you select Blog Directory as the Template value in the Page Attributes section. An example of a Home / blog directory page can be seen in the following screenshot: Example Home / blog directory page setup

Once you've got your page looking like the example shown in the previous screenshot, you can go ahead and publish that page. The Update button in the previous screenshot will say Publish if you've not yet published the page. Next you'll want to set the newly created Home / blog directory page as the front page for the site. To do this, mouse over the Settings menu option on the left-hand side of the page and then click on Reading.
For the Front page displays value, check A static page (select below); previously, Your latest posts was checked. Then, in the Front Page drop-down menu, select the Home page that we just created and click on the Save Changes button at the bottom of the page. I usually don't set anything for the Posts page drop-down menu because I never post to the "parent" site. If you do intend to make posts on the parent site, I'd suggest that you create a new blank page titled Posts and then select that page as your Posts page. The reading settings I use at multisite.longren.org are shown in the following screenshot: Reading settings setup

After you've saved your reading settings, open up your parent site in your browser and you should see something similar to what I showed in the Blog directory example screenshot. Again, there's no need for you to keep the exact setup that I've used in the example blog-directory.php file. You can give it any style/design that you want and rearrange the various pieces on the page as you prefer. You should probably have a decent working knowledge of HTML and CSS to accomplish this, however. You should have a basic blog directory at this point. If you have any experience with PHP, HTML, and CSS, you can probably extend this basic code and do a whole lot more with it. The number of plugins available is astounding, and they are generally of very good quality; no other CMS can claim to have anything like the number of plugins that WordPress does. And I think Automattic has done great things for WordPress in general.

Summary

You should be able to effectively manage themes and plugins in a multisite installation now. If you set up the code, you've got a directory showcasing network member content and, more importantly, you now know how to set up and customize a WordPress child theme.
Resources for Article: Further resources on this subject: Customization using ADF Meta Data Services [Article] Overview of Microsoft Dynamics CRM 2011 [Article] Customizing an Avatar in Flash Multiplayer Virtual Worlds [Article]


Getting started with your first jQuery plugin

Packt
29 Aug 2013
9 min read
(For more resources related to this topic, see here.)

Getting ready

Before we dive into our development, we need to have a good idea of how our plugin is going to work. For this, we will write some simple HTML to declare a shape and a button. Each shape will be declared in the CSS, and then we will use the JavaScript to toggle which shape is shown by toggling the CSS class appended to it. The aim of this recipe is to help you familiarize yourself with both jQuery plugin development and the jQuery Boilerplate template.

How to do it

Our first step is to set up our HTML. For this we need to open up our index.html file. We will need to add two elements to the HTML: a shape, and a wrapper to contain it. The button for changing the shape element will be added dynamically by our JavaScript, and we will then add an event listener to it so that we can change the shape. The HTML code for this is as follows:

<div class="shape_wrapper">
    <div class="shape">
    </div>
</div>

This should be placed in the div tag with class="container" in our index.html file. We then need to define each of the shapes we intend to use using CSS. In this example, we will draw a square, a circle, a triangle, and an oval, all of which can be defined using CSS. The shape we will be manipulating will be 100px * 100px. The following CSS should be placed in your main.css file:

.shape {
    width: 100px;
    height: 100px;
    background: #ff0000;
    margin: 10px 0px;
}
.shape.circle {
    border-radius: 50px;
}
.shape.triangle {
    width: 0;
    height: 0;
    background: transparent;
    border-left: 50px solid transparent;
    border-right: 50px solid transparent;
    border-bottom: 100px solid #ff0000;
}
.shape.oval {
    width: 100px;
    height: 50px;
    margin: 35px 0;
    border-radius: 50px / 25px;
}

Now it's time to get onto the JavaScript. The first step in creating the plugin is to name it; in this case we will call it shapeShift. In the jQuery Boilerplate code, we will need to set the value of the pluginName variable to shapeShift.
This is done as:

var pluginName = "shapeShift";

Once we have named the plugin, we can edit our main.js file to call the plugin. We will call the plugin by selecting the element using jQuery and creating an instance of our plugin by running .shapeShift(); as follows:

(function () {
    $('.shape_wrapper').shapeShift();
}());

For now this will do nothing, but it will enable us to test our plugin once we have written the code. To ensure the flexibility of our plugin, we will store our shapes as part of the defaults object literal, meaning that, in the future, the shapes used by the plugin can be changed without the plugin code being changed. We will also set the class name of the shape in the defaults object literal so that this can be chosen by the plugin user as well. After doing this, your defaults object should look like the following:

defaults = {
    shapes: ["square", "circle", "triangle", "oval"],
    shapeClass: ".shape"
};

When the .shapeShift() function is triggered, it will create an instance of our plugin and then fire the init function. For this instance of our plugin, we will store the current shape location in the array; this is done by adding it to this using this.shapeRef = 0. The reason we are storing the shape reference on this is that it attaches the reference to this instance of the plugin, so it will not be available to other instances of the same plugin on the same page. Once we have stored the shape reference, we need to apply the first shape class to the div element. The simplest way to do this is to use jQuery to get the shape and then use addClass to add the shape class as follows:

$(this.element).find(this.options.shapeClass).addClass(this.options.shapes[this.shapeRef]);

The final step in our init function is to add the button that enables the user to change the shape.
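Before wiring up the button, it is worth noting how jQuery Boilerplate combines these defaults with any user-supplied options (it uses $.extend for this). The effect, sketched in plain JavaScript with an illustrative helper name of our own:

```javascript
var defaults = {
  shapes: ["square", "circle", "triangle", "oval"],
  shapeClass: ".shape"
};

// Shallow merge, as $.extend({}, defaults, options) would do: later
// sources overwrite earlier ones, key by key.
function mergeOptions(options) {
  var merged = {};
  [defaults, options || {}].forEach(function (src) {
    Object.keys(src).forEach(function (k) { merged[k] = src[k]; });
  });
  return merged;
}

var opts = mergeOptions({ shapes: ["square", "circle"] });
// opts.shapes is the caller's two-item list; opts.shapeClass falls back to ".shape"
```

This is why a plugin user can swap in their own shapes array without touching the plugin code: any key they omit simply keeps its default.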
To do this, we simply append a button element to the shape container as follows:

$(this.element).append('<button>Change Shape</button>');

Once we have our button element, we then need to add the behavior that changes the shape of the elements. To do this we will create a separate function called changeShape. While we are still in our init function, we can add an event handler to the button to call the changeShape function. For reasons that will become apparent shortly, we will use the event delegation form of jQuery's .on() function to do this:

$(this.element).on('click', 'button', this.changeShape);

We now need to create our changeShape function; the first thing we will do is change the function name to changeShape. We will then change the function declaration to accept a parameter, in this case e. The first thing to note is that this function is called from an event listener on a DOM element, and therefore this is actually the element that was clicked on. This function was called using event delegation; the reason for this becomes apparent here, as it allows us to find out which instance of the plugin the clicked button belongs to. We do this by using the e parameter passed to the function, which is the jQuery event object related to the click event that was fired. Inside it, we will find a reference to the original element that the click event was set on, which in this case is the element that the instance of the plugin is tied to. To retrieve the instance of the plugin, we can simply use the jQuery .data() function.
The instance of the plugin is stored on the element as data under the key plugin_pluginName, so we are able to retrieve it as follows:

var plugin = $(e.delegateTarget).data("plugin_" + pluginName);

Now that we have the plugin instance, we are able to access everything it contains. The first thing we need to do is remove the current shape class from the shape element in the DOM. To do this, we will find the shape element, look up the currently displayed shape in the shapes array, and then use the jQuery .removeClass function to remove that class. The code for doing this starts with a simple jQuery selector that allows us to work with the plugin element; we do this using $(plugin.element). We then look inside the plugin element to find the actual shape. As the name of the shape class is configurable, we read it from our plugin options, so when we are finding the shape we use .find(plugin.options.shapeClass). Finally, we remove the class; so that we know which shape is current, we look up the shape class in the shapes array stored in the plugin options, selecting the item indicated by plugin.shapeRef. The full command then looks as follows:

$(plugin.element).find(plugin.options.shapeClass).removeClass(plugin.options.shapes[plugin.shapeRef]);

We then need to work out which shape we should show next. We know that the current shape reference can be found in plugin.shapeRef, so we just need to work out if we have any more shapes left in the shapes array or if we should start from the beginning. To do this, we compare the value of plugin.shapeRef to the length of the shapes array minus 1 (we subtract 1 because array indexes start at 0); if the shape reference is equal to the length of the shapes array minus 1, we know that we have reached the last shape, so we reset the plugin.shapeRef parameter to 0.
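This reset branch, together with the increment branch described next, amounts to a simple wrap-around that is easy to check in isolation. A plain-JavaScript sketch (the helper name nextShapeRef is ours, not part of the plugin):

```javascript
var shapes = ["square", "circle", "triangle", "oval"];

// Advance the shape reference, wrapping back to 0 after the last shape,
// mirroring the if/else inside changeShape.
function nextShapeRef(ref, shapes) {
  return (ref === shapes.length - 1) ? 0 : ref + 1;
}

// Five simulated clicks starting from the initial square:
var ref = 0, seen = [];
for (var i = 0; i < 5; i++) {
  seen.push(shapes[ref]);
  ref = nextShapeRef(ref, shapes);
}
// seen: square, circle, triangle, oval, square
```

The same helper could also be written as (ref + 1) % shapes.length; the explicit comparison just matches the shape of the plugin's code.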
Otherwise, we simply increment the shapeRef parameter by 1, as shown in the following snippet:

```javascript
if (plugin.shapeRef === (plugin.options.shapes.length - 1)) {
  plugin.shapeRef = 0;
} else {
  plugin.shapeRef = plugin.shapeRef + 1;
}
```

Our final step is to add the new shape class to the shape element; this can be achieved by finding the shape element and using the jQuery .addClass() function to add the shape from the shapes array. This is very similar to the removeClass command we used earlier, with addClass replacing removeClass:

```javascript
$(plugin.element)
  .find(plugin.options.shapeClass)
  .addClass(plugin.options.shapes[plugin.shapeRef]);
```

At this point we should have a working plugin; if we fire up the browser and navigate to the index.html file, we should see a square with a button beneath it. Clicking the button should show the next shape. If the code is working correctly, the shapes should appear in the order square, circle, triangle, oval, and then loop back to square. As a final test to show that each plugin instance is tied to one element, we will add a second element to the page. This is as simple as duplicating the original shape_wrapper to create a second one, as shown:

```html
<div class="shape_wrapper">
  <div class="shape"></div>
</div>
```

If everything is working correctly, when loading the index.html page we will have two squares, each with a button underneath it, and clicking a button changes only the shape above it.

Summary

This article explained how to create your first jQuery plugin, one that manipulates the shape of a div element. We achieved this by writing some HTML to declare a shape and a button, declaring each shape in the CSS, and then using JavaScript to toggle which shape is shown by toggling the CSS class applied to it.

Resources for Article:

Further resources on this subject:
- Using jQuery and jQueryUI Widget Factory plugins with RequireJS [Article]
- jQuery Animation: Tips and Tricks [Article]
- New Effects Added by jQuery UI [Article]
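Returning to the shape-cycling logic above: the if/else wrap-around can also be expressed as a single modulo operation. The helper below is a sketch of that equivalence; the name nextShapeRef is ours and is not part of the plugin's code:

```javascript
// Return the index of the next shape, wrapping back to 0 after the last one.
// A plain modulo expresses the same branch shown above in one line.
function nextShapeRef(currentRef, shapeCount) {
  return (currentRef + 1) % shapeCount;
}

// With the four shapes used in the article: square, circle, triangle, oval.
var shapes = ['square', 'circle', 'triangle', 'oval'];
console.log(nextShapeRef(0, shapes.length)); // 1 (circle)
console.log(nextShapeRef(3, shapes.length)); // 0 (back to square)
```

Whether you keep the explicit branch or the modulo is a matter of taste; the branch reads more literally, while the modulo cannot forget the reset case.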
Packt
27 Aug 2013
8 min read

Nokogiri

(For more resources related to this topic, see here.)

Spoofing browser agents

When you request a web page, you send meta information along with your request in the form of headers. One of these headers, User-agent, informs the web server which web browser you are using. By default open-uri, the library we are using to scrape, will report your browser as Ruby. There are two issues with this. First, it makes it very easy for an administrator to look through their server logs and see if someone has been scraping the server, since Ruby is not a standard web browser. Second, some web servers will deny requests that are made by a nonstandard browsing agent. We are going to spoof our browser agent so that the server thinks we are just another Mac using Safari. An example is as follows:

```ruby
# import nokogiri to parse and open-uri to scrape
require 'nokogiri'
require 'open-uri'

# this string is the browser agent for Safari running on a Mac
browser = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/536.30.1 (KHTML, like Gecko) Version/6.0.5 Safari/536.30.1'

# create a new Nokogiri HTML document from the scraped URL, passing the
# browser agent to open-uri as the User-Agent header
doc = Nokogiri::HTML(open('http://nytimes.com', 'User-Agent' => browser))

# you can now go along with your request as normal
# you will show up as just another Safari user in the logs
puts doc.at_css('h2 a').to_s
```

Caching

It's important to remember that every time we scrape content, we are using someone else's server resources. While it is true that we are not using any more resources than a standard web browser request, the automated nature of our requests leaves the potential for abuse. In the previous examples we have searched for the top headline on The New York Times website. What if we took this code and put it in a loop because we always want to know the latest top headline?
The code would work, but we would be launching a mini denial of service (DoS) attack on the server by hitting their page potentially thousands of times every minute. Many servers, Google being one example, have automatic blocking set up to prevent these rapid requests: they ban IP addresses that access their resources too quickly. This is known as rate limiting. To avoid being rate limited, and in general to be a good netizen, we need to implement a caching layer. Traditionally in a large app this would be implemented with a database. That's a little out of scope for this article, so we're going to build our own caching layer with a simple text file. We will store the headline in the file and then check the file's modification date to see if enough time has passed before checking for new headlines. Start by creating the cache.txt file in the same directory as your code:

```
$ touch cache.txt
```

We're now ready to craft our caching solution:

```ruby
# import nokogiri to parse and open-uri to scrape
require 'nokogiri'
require 'open-uri'

# set how long in minutes until our data is expired
# multiplied by 60 to convert to seconds
expiration = 1 * 60

# file to store our cache in
cache = "cache.txt"

# Calculate how old our cache is by subtracting its modification time
# from the current time.
# Time.new gets the current time
# The mtime method gets the modification time on a file
cache_age = Time.new - File.new(cache).mtime

# if the cache age is greater than our expiration time
if cache_age > expiration
  # our cache has expired
  puts "cache has expired. fetching new headline"

  # we will now use our code from the quick start to snag a new headline
  # scrape the web page
  data = open('http://nytimes.com')

  # create a Nokogiri HTML Document from our data
  doc = Nokogiri::HTML(data)

  # parse the top headline and clean it up
  headline = doc.at_css('h2 a').content.gsub(/\n/, " ").strip

  # we now need to save our new headline
  # the second File.open parameter "w" tells Ruby to overwrite the old file
  File.open(cache, "w") do |file|
    # we then simply puts our text into the file
    file.puts headline
  end
  puts "cache updated"
else
  # we should use our cached copy
  puts "using cached copy"
  # read the cache into a string using the read method
  headline = IO.read("cache.txt")
end

puts "The top headline on The New York Times is ..."
puts headline
```

Our cache is set to expire in one minute, so assuming it has been one minute since you created your cache.txt file, let's fire up our Ruby script:

```
$ ruby cache.rb
cache has expired. fetching new headline
cache updated
The top headline on The New York Times is ...
Supreme Court Invalidates Key Part of Voting Rights Act
```

If we run our script again before another minute passes, it should use the cached copy:

```
$ ruby cache.rb
using cached copy
The top headline on The New York Times is ...
Supreme Court Invalidates Key Part of Voting Rights Act
```

SSL

By default, open-uri does not support scraping a page with SSL. This means any URL that starts with https will give you an error. We can get around this by adding one line below our require statements:

```ruby
# import nokogiri to parse and open-uri to scrape
require 'nokogiri'
require 'open-uri'

# disable SSL checking to allow scraping
OpenSSL::SSL::VERIFY_PEER = OpenSSL::SSL::VERIFY_NONE
```

Mechanize

Sometimes you need to interact with a page before you can scrape it. The most common examples are logging in or submitting a form. Nokogiri is not set up to interact with pages; Nokogiri doesn't even scrape or download the page. That duty falls on open-uri.
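Looking back at the caching section for a moment: the freshness test at its heart (compare the file's age against an expiration window) is language independent. Here is the same check sketched as a pure function in JavaScript for illustration; the function name isCacheExpired is our own, not from the article's code:

```javascript
// Decide whether a cached file is stale.
// mtimeMs: the file's modification time, nowMs: the current time (both in
// milliseconds), maxAgeSeconds: how long the cache stays fresh.
function isCacheExpired(mtimeMs, nowMs, maxAgeSeconds) {
  var cacheAgeSeconds = (nowMs - mtimeMs) / 1000;
  return cacheAgeSeconds > maxAgeSeconds;
}

// A file touched 90 seconds ago with a 60-second window has expired.
console.log(isCacheExpired(0, 90 * 1000, 60)); // true
console.log(isCacheExpired(0, 30 * 1000, 60)); // false
```

Keeping the rule pure like this makes it trivial to unit test without touching the filesystem; in the Ruby version, the equivalent comparison is Time.new - File.new(cache).mtime > expiration.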
If you need to interact with a page, there is another gem you will have to use: Mechanize. Mechanize is created by the same team as Nokogiri and is used for automating interactions with websites. Mechanize includes a functioning copy of Nokogiri. To get started, install the mechanize gem:

```
$ gem install mechanize
Successfully installed mechanize-2.7.1
```

We're going to recreate the code sample from the installation where we parsed the top Google results for "packt", except this time we are going to start by going to the Google home page and submitting the search form:

```ruby
# mechanize takes the place of Nokogiri and open-uri
require 'mechanize'

# create a new mechanize agent
# think of this as launching your web browser
agent = Mechanize.new

# open a URL in your agent / web browser
page = agent.get('http://google.com/')

# the google homepage has one big search box
# if you inspect the HTML, you will find a form with the name 'f'
# inside of the form you will find a text input with the name 'q'
google_form = page.form('f')

# tell the page to set the q input inside the f form to 'packt'
google_form.q = 'packt'

# submit the form
page = agent.submit(google_form)

# loop through an array of objects matching a CSS selector.
# mechanize uses the search method instead of xpath or css;
# search supports both xpath and css, and you can use the
# search method in Nokogiri too if you like it
page.search('h3.r').each do |link|
  # print the link text
  puts link.content
end
```

Now execute the Ruby script and you should see the titles for the top results:

```
$ ruby mechanize.rb
Packt Publishing: Home
Books
Latest Books
Login/register
PacktLib
Support
Contact
Packt - Wikipedia, the free encyclopedia
Packt Open Source (PacktOpenSource) on Twitter
Packt Publishing (packtpub) on Twitter
Packt Publishing | LinkedIn
Packt Publishing | Facebook
```

For more information refer to the site: http://mechanize.rubyforge.org/

People and places you should get to know

If you need help with Nokogiri, here are some people and places that will prove invaluable.
Official sites

The following are the sites you can refer to:
- Homepage and documentation: http://nokogiri.org
- Source code: https://github.com/sparklemotion/nokogiri/

Articles and tutorials

The top five Nokogiri resources are as follows:
- Nokogiri History, Present, and Future, presentation slides from Nokogiri co-author Mike Dalessio: http://bit.ly/nokogiri-goruco-2013
- An in-depth tutorial covering Ruby, Nokogiri, Sinatra, and Heroku, complete with a 90-minute behind-the-scenes screencast, written by me: http://hunterpowers.com/data-scraping-and-more-with-ruby-nokogiri-sinatra-and-heroku
- RailsCasts episode 190: Screen Scraping with Nokogiri, an excellent Nokogiri quick start video: http://railscasts.com/episodes/190-screen-scraping-with-nokogiri
- RailsCasts episode 191: Mechanize, an excellent Mechanize quick start video: http://railscasts.com/episodes/191-mechanize
- Nokogiri co-author Mike Dalessio's blog: http://blog.flavorjon.es

Community

The community sites are as follows:
- Listserve: http://groups.google.com/group/nokogiri-talk
- GitHub: https://github.com/sparklemotion/nokogiri/
- Wiki: http://github.com/sparklemotion/nokogiri/wikis
- Known issues: http://github.com/sparklemotion/nokogiri/issues
- Stack Overflow: http://stackoverflow.com/search?q=nokogiri

Twitter

Nokogiri leaders on Twitter are:
- Nokogiri co-author Mike Dalessio: @flavorjones
- Nokogiri co-author Aaron Patterson: @tenderlove
- Me: @TheHunter

For more information on open source, follow Packt Publishing: @PacktOpenSource

Summary

Thus, we learnt about the Nokogiri open source library in this article.

Resources for Article:

Further resources on this subject:
- URL Shorteners – Designing the TinyURL Clone with Ruby [Article]
- Introducing RubyMotion and the Hello World app [Article]
- Building the Facebook Clone using Ruby [Article]
Packt
27 Aug 2013
8 min read

Creating a Camel project (Simple)

(For more resources related to this topic, see here.)

Getting ready

For the examples in this article, we are going to use Apache Camel version 2.11 (http://camel.apache.org/) and Apache Maven version 2.2.1 or newer (http://maven.apache.org/) as a build tool. Both of these projects can be downloaded for free from their websites. The complete source code for all the examples in this article is available on GitHub in the https://github.com/bibryam/camel-message-routing-examples repository. It contains Camel routes in Spring XML and Java DSL with accompanying unit tests. The source code for this tutorial is located under the project: camel-message-routing-examples/creating-camel-project.

How to do it...

In a new Maven project, add the following Camel dependency to the pom.xml:

```xml
<dependency>
  <groupId>org.apache.camel</groupId>
  <artifactId>camel-core</artifactId>
  <version>${camel-version}</version>
</dependency>
```

With this dependency in place, creating our first route requires only a couple of lines of Java code:

```java
public class MoveFileRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("file://source")
            .to("log://org.apache.camel.howto?showAll=true")
            .to("file://target");
    }
}
```

Once the route is defined, the next step is to add it to CamelContext, which is the actual routing engine, and run it as a standalone Java application:

```java
public class Main {
    public static void main(String[] args) throws Exception {
        CamelContext camelContext = new DefaultCamelContext();
        camelContext.addRoutes(new MoveFileRoute());
        camelContext.start();
        Thread.sleep(10000);
        camelContext.stop();
    }
}
```

That's all it takes to create our first Camel application. Now we can run it using a Java IDE or from the command line with Maven: mvn exec:java.

How it works...
Camel has a modular architecture; its core (the camel-core dependency) contains all the functionality needed to run a Camel application: DSLs for various languages, the routing engine, implementations of EIPs, a number of data converters, and core components. This is the only dependency needed to run this application. Then there are optional technology-specific connector dependencies (called components) such as JMS, SOAP, JDBC, Twitter, and so on, which are not needed for this example, as the file and log components we used are both part of camel-core. Camel routes are created using a Domain Specific Language (DSL) specifically tailored for application integration. Camel DSLs are high-level languages that allow us to easily create routes, combining various processing steps and EIPs without going into low-level implementation details. In the Java DSL, we create a route by extending RouteBuilder and overriding the configure method. A route represents a chain of processing steps applied to a message based on some rules. The route has a beginning defined by the from endpoint, and one or more processing steps, commonly called "Processors" (which implement the Processor interface). Most of these ideas and concepts originate from the Pipes and Filters pattern described in the book Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf. The book provides an extensive list of patterns, which are also available at http://www.enterpriseintegrationpatterns.com, and the majority of which are implemented by Camel. With the Pipes and Filters pattern, a large processing task is divided into a sequence of smaller, independent processing steps (Filters) that are connected by channels (Pipes). Each filter processes messages received from its inbound channel and publishes the result to its outbound channel.
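The Pipes and Filters idea is easy to sketch outside of Camel. The toy pipeline below is our own illustration in JavaScript, not Camel code: each filter is a plain function, and the "pipes" are just function composition in sequence:

```javascript
// A "pipe" is just sequential composition: each filter takes a message and
// returns the transformed message for the next filter in the chain.
function pipeline(filters) {
  return function (message) {
    return filters.reduce(function (msg, filter) {
      return filter(msg);
    }, message);
  };
}

// Two tiny filters standing in for Camel processors.
var trim = function (msg) { return msg.trim(); };
var upper = function (msg) { return msg.toUpperCase(); };

var route = pipeline([trim, upper]);
console.log(route('  hello camel  ')); // "HELLO CAMEL"
```

Camel's DSL expresses the same shape declaratively (from ... to ... to ...), while the engine takes care of threading, conversions, and error handling between the steps.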
In our route, the processing steps are reading the file using a polling consumer, logging it, and writing the file to the target folder, all of them piped together by Camel in the sequence specified in the DSL. We can visualize the individual steps in the application with the following diagram: A route has exactly one input, called a consumer and identified by the keyword from. A consumer receives messages from producers or external systems, wraps them in a Camel-specific format called an Exchange, and starts routing them. There are two types of consumers: a polling consumer that fetches messages periodically (for example, reading files from a folder), and an event-driven consumer that listens for events and gets activated when a message arrives (for example, an HTTP server). All the other processor nodes in the route are either a type of integration pattern or producers used for sending messages to various endpoints. Producers are identified by the keyword to, and they are capable of converting exchanges and delivering them to other channels using the underlying transport mechanism. In our example, the log producer logs the files using the log4j API, whereas the file producer writes them to a target folder. The route is not enough to have a running application; it is only a template that defines the processing steps. The engine that runs and manages the routes is called the CamelContext. A high-level view of CamelContext looks like the following diagram: CamelContext is a dynamic, multithreaded route container, responsible for managing all aspects of the routing: route lifecycle, message conversions, configurations, error handling, monitoring, and so on. When CamelContext is started, it starts the components and endpoints and activates the routes. The routes are kept running until CamelContext is stopped, at which point it performs a graceful shutdown, giving time for all in-flight messages to complete processing.
CamelContext is dynamic: it allows us to start and stop routes, add new routes, or remove running routes at runtime. In our example, after adding the MoveFileRoute, we start CamelContext and let it copy files for 10 seconds, and then the application terminates. If we check the target folder, we should see files copied from the source folder.

There's more...

Camel applications can run as standalone applications or can be embedded in other containers such as Spring or Apache Karaf. To make development and deployment to various environments easy, Camel provides a number of DSLs, including Spring XML, Blueprint XML, Groovy, and Scala. Next, we will have a look at the Spring XML DSL.

Using Spring XML DSL

Java and Spring XML are the two most popular DSLs in Camel. Both provide access to all Camel features, and the choice is mostly a matter of taste. The Java DSL is more flexible and requires fewer lines of code, but can easily become complicated and harder to understand with the use of anonymous inner classes and other Java constructs. The Spring XML DSL, on the other hand, is easier to read and maintain, but it is more verbose, and testing it requires a little more effort. My rule of thumb is to use the Spring XML DSL only when Camel is going to be part of a Spring application (to benefit from other Spring features available in Camel), or when the routing logic has to be easily understood by many people. For the routing examples in this article, we are going to show a mixture of Java and Spring XML DSL, but the source code accompanying this article has all the examples in both DSLs.
In order to use Spring, we also need the following dependency in our projects:

```xml
<dependency>
  <groupId>org.apache.camel</groupId>
  <artifactId>camel-spring</artifactId>
  <version>${camel-version}</version>
</dependency>
```

The same application for copying files, written in Spring XML DSL, looks like the following (the standard Spring and Camel namespace declarations, garbled in the scraped version, are restored here):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd
         http://camel.apache.org/schema/spring
         http://camel.apache.org/schema/spring/camel-spring.xsd">
  <camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
      <from uri="file://source"/>
      <to uri="log://org.apache.camel.howto?showAll=true"/>
      <to uri="file://target"/>
    </route>
  </camelContext>
</beans>
```

Notice that this is a standard Spring XML file with an additional camelContext element containing the route. We can launch the Spring application as part of a web application, an OSGi bundle, or as a standalone application:

```java
public static void main(String[] args) throws Exception {
    AbstractApplicationContext springContext = new ClassPathXmlApplicationContext(
        "META-INF/spring/move-file-context.xml");
    springContext.start();
    Thread.sleep(10000);
    springContext.stop();
}
```

When the Spring container starts, it will instantiate a CamelContext, start it, and add the routes, without any other code required. That is the complete application written in Spring XML DSL. More information about Spring support in Apache Camel can be found at http://camel.apache.org/spring.html.

Summary

This article provided a high-level overview of the Camel architecture and demonstrated how to create a simple message-driven application.

Resources for Article:

Further resources on this subject:
- Binding Web Services in ESB—Web Services Gateway [Article]
- Drools Integration Modules: Spring Framework and Apache Camel [Article]
- Routing to an external ActiveMQ broker [Article]
Packt
27 Aug 2013
6 min read

IRC-style chat with TCP server and event bus

(For more resources related to this topic, see here.)

Step 1 – fresh start

In a new folder called, for example, 1_PubSub_Chat, let's open our editor of choice and create a file called pubsub_chat.js. Also, make sure that you have a terminal window open and have moved into the newly created project directory.

Step 2 – creating the TCP server

TCP servers are called net servers in Vert.x. Creating and using a net server is really similar to HTTP servers:

```javascript
var vertx = require('vertx');            /* 1 */
var netServer = vertx.createNetServer(); /* 2 */
netServer.listen(1234);                  /* 3 */
```

1. Obtain the vertx bridge object to access the framework features.
2. Ask Vert.x to create a TCP server (called a NetServer in Vert.x).
3. Start the server by telling it to listen on TCP port 1234.

Let's test whether this works. This time we need another terminal to run the telnet command:

```
$ telnet localhost 1234
```

The terminal should now be connected and waiting to send/receive characters. If you get "connection refused" errors, make sure the server is running.

Step 3 – adding a connect handler

Now we need to place a block of code to be executed as soon as a client connects. Define a handler function; this function will be called every time a client connects to the server:

```javascript
var vertx = require('vertx');
var server = vertx.createNetServer().connectHandler(function(socket) {
  // Composing a client address string
  addr = socket.remoteAddress();
  addr = addr.ipaddress + addr.port;
  socket.write('Welcome to the chat ' + addr + '!');
}).listen(1234);
```

A NetServer connect handler accepts the socket object as a parameter; this object is our gateway to reading, writing, or closing the connection to a client. We use the socket object to write a greeting to newly connected clients. If we test this as in Step 2, we see that the server now welcomes us with a message containing an identifier of the client, built from its origin host and origin port.
Step 4 – adding a data handler

We just learned how to execute a block of code at the moment a client connects. Now we are interested in doing something else when we receive new data from a client connection. The socket object we used in the previous step for writing data back to the client accepts a handler function too: the data handler. Let's add one. This handler is going to be called every time the client sends a new string of data:

```javascript
var vertx = require('vertx');
var server = vertx.createNetServer().connectHandler(function(socket) {
  // Composing a client address string
  addr = socket.remoteAddress();
  addr = addr.ipaddress + addr.port;
  socket.write('Welcome to the chat ' + addr + '!');
  socket.dataHandler(function(data) {
    var now = new Date();
    now = now.getHours() + ':' + now.getMinutes() + ':' + now.getSeconds();
    var msg = now + ' <' + addr + '> ' + data;
    socket.write(msg);
  });
}).listen(1234);
```

We react to the new data event by writing the same data back to the socket (plus a prefix). What we have now is a sort of echo server, which returns the same message to the sender with a prefix string.

Step 5 – adding the event bus magic

The basic requirement of a chat server is that every time a client sends a message, the rest of the connected clients should receive it. We will use the event bus, the messaging service provided by the framework, to publish received messages to a broadcast address.
Each client will subscribe to the address upon connection and receive other clients' messages from there. Add a data handler function to the socket object:

```javascript
var vertx = require('vertx');
var server = vertx.createNetServer().connectHandler(function(socket) {
  // Composing a client address string
  addr = socket.remoteAddress();
  addr = addr.ipaddress + addr.port;
  socket.write('Welcome to the chat ' + addr + '!');
  vertx.eventBus.registerHandler('broadcast_address', function(event) {
    socket.write(event);
  });
  socket.dataHandler(function(data) {
    var now = new Date();
    now = now.getHours() + ':' + now.getMinutes() + ':' + now.getSeconds();
    var msg = now + ' <' + addr + '> ' + data;
    vertx.eventBus.publish('broadcast_address', msg);
  });
}).listen(1234);
```

As soon as a client connects, it listens on the event bus for new data published to the address broadcast_address. When a client sends a string of characters to the server, this data is published to the broadcast address, triggering a handler function that writes the string to every client's socket. The chat server is now complete! To try it out, just open three terminals:

Terminal 1: $ vertx run pubsub_chat.js
Terminal 2: $ telnet localhost 1234
Terminal 3: $ telnet localhost 1234

Now we have a server and two clients running and connected. Type something in terminal 2 or 3 and see the message being broadcast to both the other windows:

```
$ telnet localhost 1234
Trying ::1...
Connected to localhost.
Escape character is '^]'.
Hello from terminal two!
13:6:56 <0:0:0:0:0:0:0:155991> Hello from terminal two!
13:7:24 <0:0:0:0:0:0:0:155992> Hi there, here's terminal three!
13:7:56 <0:0:0:0:0:0:0:155992> Great weather today!
```

Step 6 – organizing a more complex project

Since Vert.x is a polyglot platform, we can choose to write an application (or a part of it) in any of the many supported languages. The granularity of the language choice is at the verticle level.
It's important to give a good architecture to a non-trivial project from the beginning. Follow this list of generic principles to avoid performance bottlenecks or the need for massive refactoring in the future:

- Wrap synchronous libraries or legacy code inside a worker verticle (or a module). This will keep blocking code away from the event loop threads.
- Divide the problem into isolated domains and write a verticle to handle each of them (for example, a database persistor verticle, web server verticle, authenticator verticle, and cache manager verticle).
- Use a startup verticle. This will be the single entry point to the application. Its responsibilities will be to:
  - Validate the configuration file
  - Programmatically deploy other verticles in the correct order
  - Decide how many instances of a verticle to create (the decision might depend on the environment, for example, the number of available processors)
  - Register periodic tasks

Summary

In this article, we learned, in a step-wise procedure, how to create an IRC-style chat using a TCP server, interconnect the server with the clients using the event bus, and enable different types of communication between them.

Resources for Article:

Further resources on this subject:
- Getting Started with Zombie.js [Article]
- Building a Chat Application [Article]
- Accessing and using the RDF data in Stanbol [Article]
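The publish/subscribe pattern that the event bus provides for the chat above can be sketched in a few lines of plain JavaScript. This toy bus is our own illustration, not the Vert.x API; it only mimics the shape of the registerHandler/publish calls used in the chat server:

```javascript
// A minimal in-memory publish/subscribe bus, mimicking the shape of the
// event-bus calls used in the chat server above.
function createBus() {
  var handlers = {}; // address -> array of handler functions
  return {
    registerHandler: function (address, handler) {
      (handlers[address] = handlers[address] || []).push(handler);
    },
    publish: function (address, message) {
      (handlers[address] || []).forEach(function (handler) {
        handler(message);
      });
    }
  };
}

// Two "clients" subscribe to the broadcast address; one publish reaches both.
var bus = createBus();
var received = [];
bus.registerHandler('broadcast_address', function (msg) { received.push('client1: ' + msg); });
bus.registerHandler('broadcast_address', function (msg) { received.push('client2: ' + msg); });
bus.publish('broadcast_address', 'hello');
console.log(received); // [ 'client1: hello', 'client2: hello' ]
```

This is exactly why the chat works with no per-client bookkeeping in the server code: each connect handler registers one subscription, and a single publish fans the message out to every open socket.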
Packt
26 Aug 2013
5 min read

Publishing the project for mobile

(For more resources related to this topic, see here.)

Standard HTML5 publishing

You will first publish your project using the standard HTML5 publishing options:

1. Open the HDSB_publish.cptx file.
2. Click on the publish icon situated right next to the preview icon in the main toolbar. Alternatively, you can also go to the File | Publish menu item. The Publish dialog contains all of the publishing options of Captivate 7, as shown in the following screenshot. In the left column of the dialog, the six icons, marked as 1, represent the main publishing formats supported by Captivate. The area in the center, marked as 2, displays the options pertaining to the selected format.
3. Take some time to click on each of the six icons of the left column one by one. While doing so, take a close look at the right area of the dialog to see how the set of available options changes based on the selected format.
4. When done, return to the SWF/HTML5 format, which is the first icon at the top of the left column.
5. Type hdStreet_standard in the Project Title field.
6. Click on the Browse button associated with the Folder field and choose the /published folder of your exercises as the publish location.
7. In the Output Format Options section, make sure that the HTML5 checkbox is the only one selected. If necessary, adjust the other options so that the Publish dialog looks like the previous screenshot.
8. When ready, click on the Publish button at the bottom-right corner of the dialog box to trigger the actual publishing process. This process can take some time depending on the size of the project to publish and on the overall performance of your computer system.
9. When done, Captivate displays a message acknowledging the successful completion of the publishing process and asking if you want to view the output.
10. Click on No to close both the message and the Publish dialog. Make sure you save the file before proceeding.

Publishing your project to HTML5 is that easy!
We will also use Windows Explorer (Windows) or Finder (Mac) to take a closer look at the generated files.

Examining the HTML5 output

By publishing the project to HTML5, Captivate has generated a whole set of HTML, CSS, and JavaScript files:

1. Use Windows Explorer (Windows) or Finder (Mac) to go to the /published/hdStreet_standard folder of your exercises.
2. Note that Captivate has created a subfolder in the /published folder we specified as the publish destination. Also notice that the name of that subfolder is what we typed in the Project Title field of the Publish dialog.
3. The /published/hdStreet_standard folder should contain the index.html file and five subfolders, as illustrated by the following screenshot:
   - The index.html file is the main HTML file. It is the file to open in a modern web browser in order to view the e-learning content.
   - The /ar folder contains the audio assets of the project. These assets include the voice-over narrations and the mouse-click sound in .mp3 format.
   - Every HTML5 Captivate project includes the same /assets folder. It contains the standard images, CSS, and JavaScript files used to power the objects and features that can be included in a project. The web developers reading these lines will probably recognize some of these files; for example, the jQuery library is included in the /assets/js folder.
   - The /dr folder contains the images that are specific to this project. These images include the slide backgrounds in .png format, the mouse pointers, and the various states of the buttons used in this project.
   - Finally, the /vr folder contains the video assets. These include the video we inserted on slide 2, as well as all of the full motion recording slides of the project.

All of these files and folders are necessary for your HTML5 project to work as expected. In other words, you need to upload all of these files and folders to the web server (or to the LMS) to make the project available to your students.
Never try to delete, rename, or move any of these files! Double-click on the index.html file to open the project in the default web browser. Make sure everything works as expected. When done, close the web browser and return to Captivate. This concludes our overview of the standard HTML5 publishing feature of Captivate 7.

Testing the HTML5 content

Producing content for mobile devices raises the issue of testing the content in a situation as close as possible to reality. Most of the time, you'll test the HTML5 output of Captivate only on the mobile device you own or, even worse, in the desktop version of an HTML5 web browser. If you are a Mac user, I've written a blog post on how to test Captivate HTML5 content on iOS devices without even owning such a device, at http://www.dbr-training.eu/index.cfm/blog/test-your-html5-elearning-on-an-ios-device-without-an-ios-device/.

Summary

You learned about the publishing step of the typical Captivate production workflow: how to publish your project using the standard HTML5 publishing options, and how to use Windows Explorer (Windows) or Finder (Mac) to take a closer look at the generated files. By publishing the project to HTML5, Captivate generates a whole set of HTML, CSS, and JavaScript files.

Resources for Article:

Further resources on this subject:
- Top features you'll want to know about [Article]
- Remotely Preview and test mobile web pages on actual devices with Adobe Edge Inspect [Article]
- An Introduction to Flash Builder 4-Network Monitor [Article]

Packt
23 Aug 2013
26 min read

Scalability, Limitations, and Effects

(For more resources related to this topic, see here.)

HTML5 limitations

If you haven't noticed by now, many of the HTML5 features you will use either have failsafes, multiple versions, or special syntax to enable your code to cover the entire spectrum of browsers and the HTML5 feature sets they support. As time passes and standards solidify, one can assume that many of these failsafes and other content display measures will mature into a single standard that all browsers share. In reality, however, this process may take a while, and even at its best, developers may still have to utilize many of these failsafe features indefinitely. Therefore, a solid understanding of when, where, and why to use these failsafe measures will enable you to develop your HTML5 web pages in a way that can be viewed as intended in all modern browsers.

To aid developers in overcoming these issues, many frameworks and external scripts have been created and open sourced, allowing for a more universal development environment and saving developers countless hours when starting each new project. Modernizr (http://modernizr.com) has quickly become a must-have addition for many HTML5 developers, as it contains many of the conditions and verifications needed to allow developers to write less code and cover more browsers. Modernizr does all this by checking for a large majority (more than 40) of the new features available in HTML5 in the client's browser and reporting back, in a matter of milliseconds, whether or not they are available. This allows you, as the developer, to determine whether you should display an alternate version of your content or a warning to the user.

Getting your web content to display properly in all browsers is, and always has been, the biggest challenge for any web developer, and when it comes to creating cutting-edge, interesting content, the challenge usually becomes harder.
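To make the Modernizr idea concrete, here is a small sketch of the branching you would typically write around its boolean feature flags (Modernizr.video is one of its documented checks; the strategy names and the stub objects below are purely illustrative, not part of this article's example code):

```javascript
// Decide which playback strategy to use based on a Modernizr-style
// feature-flag object. In the browser you would pass the global
// Modernizr object itself.
function chooseVideoStrategy(modernizr) {
  if (modernizr.video) {
    // The browser can play HTML5 <video>
    return 'html5-video';
  }
  // Fall back to an alternate delivery method (e.g. a Flash player)
  return 'fallback-flash';
}

// Stubbed flags to show both branches outside a browser:
console.log(chooseVideoStrategy({ video: true }));  // → html5-video
console.log(chooseVideoStrategy({ video: false })); // → fallback-flash
```

The same pattern works for any of the other flags Modernizr exposes: test the flag once, then render either the HTML5 version of the content or your fallback.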
To help you better understand how these features look without third-party integration, we will avoid using external libraries for the time being. It is worth noting how each of these features looks in all browsers, so make sure to test the examples, as well as your own work, not just in your favorite browser but in many of the other popular choices as well.

Object manipulation with CSS3

Prior to the advent of CSS3, web developers used a laundry list of content manipulation, asset preparation, and asset presentation techniques in order to get their web page layouts the way they wanted in every browser. Most of these techniques would be considered "hacks", as they would pretty much be workarounds to make the browser do something it normally wouldn't. Features such as rounded corners, drop shadows, and transforms were all absent from a web developer's arsenal, and the process of getting things the way you want could get mind-numbing. Understandably, the excitement level surrounding CSS3 is very high for all web developers, as it enables them to perform more content manipulation techniques than ever before, without the need for prior preparation or special browser hacks. Although the list of available properties in CSS3 is massive, let's cover some of the newest and most exciting of the lot.

box-shadow

It's true that some designers and developers say drop shadows are a part of the past, but the usage of shadowing HTML elements is still a popular design choice for many. In the past, web developers needed to perform tricks such as stretching small gradient images or creating the shadow directly in their background images to achieve this effect in their HTML documents. CSS3 has solved this issue by creating the box-shadow property to allow for drop-shadow-like effects on your HTML elements.
To remind us how this effect was accomplished in ActionScript 3, let's review this code snippet:

var dropShadow:DropShadowFilter = new DropShadowFilter();
dropShadow.distance = 0;
dropShadow.angle = 45;
dropShadow.color = 0x333333;
dropShadow.alpha = 1;
dropShadow.blurX = 10;
dropShadow.blurY = 10;
dropShadow.strength = 1;
dropShadow.quality = 15;
dropShadow.inner = false;

var mySprite:Sprite = new Sprite();
mySprite.filters = new Array(dropShadow);

As mentioned before, the new box-shadow property in CSS3 allows you to append these shadowing effects with relative ease, using many of the same configuration properties:

.box-shadow-example {
  box-shadow: 3px 3px 5px 6px #000000;
}

Despite the lack of property names on each of the values applied to this style, you can see that many of the value types coincide with what was appended to the drop shadow we created in ActionScript 3. This box-shadow property is assigned to the .box-shadow-example class and therefore will be applied to any element that has that class name appended to it. By creating a div element with the box-shadow-example class, we can alter our content to look something like the following:

<div class="box-shadow-example">CSS3 box-shadow Property</div>

As straightforward as this CSS property is to add to your project, it declares a lot of values in a single line. Let's review each of these values in order, so that we can understand them better for future usage. To simplify the identification of each of the variables in the property, each of them has been updated to be different:

box-shadow: 1px 2px 3px 4px #000000;

These variables are explained as follows:

The initial value (1px) is the shadow's horizontal offset, that is, whether the shadow is going to the left or to the right. A positive value places the shadow on the right of the element; a negative offset puts the shadow on the left.
The second value (2px) is the vertical offset; just like the horizontal offset value, a negative number generates a shadow going up, and a positive value generates a shadow going down.

The third value (3px) is the blur radius, which controls how much blur effect is added to the shadow. Declaring a value of, for example, 0 would create no blur and display a very sharp-looking shadow. Negative values placed in the blur radius are ignored and render no differently than using 0.

The fourth value (4px), and the last of the numerical properties, is the spread radius. The spread radius controls how far the drop shadow blur spreads past the initial shadow size declaration. If a value of 0 is used, the shadow displays with the default blur radius set and applies no changes. Positive numerical values yield a shadow that blurs further, and negative values make the shadow blur smaller.

The final value is the hexadecimal color value, which states the color the shadow will be.

Alternatively, you could use box-shadow to apply the shadow effect to the interior of your element rather than the exterior. With ActionScript 3, this was accomplished by appending dropShadow.inner = true; to the list of parameters in your DropShadowFilter object. The CSS3 syntax to apply box-shadow properties in this manner is very similar, as all that is required is the addition of the inset keyword. Consider the following code snippet, for example:

.box-shadow-example {
  box-shadow: 3px 3px 5px 6px #666666 inset;
}

This would produce a shadow that would look like the following screenshot:

text-shadow

Just like the box-shadow property, text-shadow lives up to its name by creating the same drop-shadowing effect, specifically for text:

text-shadow: 2px 2px 6px #ff0000;

Like box-shadow, the initial two values for text-shadow are the horizontal and vertical offsets for the shadow placement.
The third value, which is optional, is the blur size, and the fourth value is the hexadecimal color:

border-radius

Just like element or text shadowing, adding rounded corners to your elements prior to CSS3 was a chore. Developers would usually append separate images or use other object manipulation techniques to achieve this effect on the typically square- or rectangle-shaped elements. With the addition of the border-radius setting in CSS3, developers can easily and dynamically set element corner roundness with only a couple of lines of CSS, all without the usage of vector 9-slicing as in Flash.

Since HTML elements have four corners, when appending the border-radius styling, we can either target each corner individually or all the corners at once. In order to easily append a border radius setting to all the corners at once, we would create our CSS properties as follows:

#example {
  background-color: #ff0000; /* Red background */
  width: 200px;
  height: 200px;
  border-radius: 10px;
}

The preceding CSS not only appends a 10px border radius to all of the corners of the #example element, but by using all the properties that the modern browsers use, we can be assured that the effect will be visible to all users attempting to view this content:

As mentioned above, each of the individual corners of the element can be targeted to append the radius to only a specific part of the element:

#example {
  border-top-left-radius: 0px; /* This is doing nothing */
  border-top-right-radius: 5px;
  border-bottom-right-radius: 20px;
  border-bottom-left-radius: 100px;
}

The preceding CSS now removes our #example element's top-left border radius by setting it to 0px and sets a specific radius for each of the other corners. It is worth noting here that setting a border radius equal to 0 is no different than leaving that property completely out of the CSS styles:

Fonts

Dealing with customized fonts in Flash has had its ups and downs over the years.
Any Flash developer who has needed to incorporate and use customized fonts in their Flash applications probably knows the pain that comes with choosing a font embedding method, as well as making sure it works properly for users who don't have the font installed on the computer viewing the Flash application. CSS3 font embedding has implemented a "no fuss" way to include custom fonts in your HTML5 documents with the addition of the @font-face declaration:

@font-face {
  font-family: ClickerScript;
  src: url('ClickerScript-Regular.ttf'),
       url('ClickerScript-Regular.otf'),
       url('ClickerScript-Regular.eot');
}

CSS can now directly reference your TTF, OTF, or EOT font, which can be placed on your web server for accessibility. With the font source declared in our CSS document and a unique font-family identification applied to it, we can start using it on specific elements by using the font-family property:

#example {
  font-family: ClickerScript;
}

Since we declared a specific font family name in the @font-face property, we can use that custom name on pretty much any element henceforth. Custom fonts can be applied to almost anything that contains text in your HTML document. Form elements such as button labels and text inputs can also be styled to use your custom fonts. You can even remake assets such as website logos in pure HTML and CSS with the same custom fonts used in the original asset creation.

Acceptable font formats

Like many of the other embedding methods for assets online, fonts need to be converted into multiple formats to enable all common modern browsers to display them properly. Almost all of the available browsers can handle the common TrueType Fonts (.ttf file type) or OpenType Fonts (.otf file type), so embedding one of those two formats will be all that is needed. Unfortunately, Internet Explorer 9 does not have built-in support for either of those two popular formats and requires fonts to be saved in the EOT file format.
External font libraries

Many great services have appeared online in the last couple of years, allowing web developers to painlessly prepare and embed fonts into their websites. Google's Web Fonts archive, available at http://www.google.com/webfonts, hosts a large set of open source fonts which can be added to your project without the need to worry about licensing or payment issues. Simply add a couple of extra lines of code to your HTML document and you are ready to go.

Another great site that is worth checking out is Font Squirrel, which can be found at http://www.fontsquirrel.com. Like Google Web Fonts, Font Squirrel hosts a large archive of web-ready fonts with copy-and-paste-ready code snippets to add them to your document. Another great feature on this site is the @font-face generator, which gives you the ability to convert your preexisting fonts into all the web-compatible formats.

Before getting carried away and converting all your favorite fonts into web-ready formats and integrating them into your work, it is worth noting the End User License Agreement, or EULA, that came with the font to begin with. Converting many available fonts for use on the web will break license agreements and could cause legal issues for you down the road.

Opacity

More commonly known as "alpha" to the Flash developer, setting the opacity of an element not only allows you to change the look and feel of your designs, but also allows you to add features like content that fades in and out. As simple as this concept seems, it is relatively new to the list of CSS properties available to web developers. Setting the opacity of an element is extremely easy and looks something like the following:

#example {
  opacity: 0.5;
}

As you can see from the preceding example, like ActionScript 3, the opacity value is a numerical value between 0 and 1. The preceding example would display an element at 50 percent transparency.
The opacity property in CSS3 is now supported in all the major browsers, so there is no need to worry about using alternative property syntax when declaring it.

RGB and RGBA coloring

When dealing with color values in CSS, many developers typically use hexadecimal values, which resemble something like #000000 to declare the usage of the color black. Colors can also be implemented in their RGB representation in CSS by utilizing the rgb() or rgba() calls in place of the hexadecimal value. As you can see from the method name, the rgba color declaration in CSS also requires a fourth parameter, which declares the color's alpha transparency or opacity amount. Using RGBA in CSS3 rather than hexadecimal colors can be beneficial for a couple of reasons.

Consider that you have just created a div element which will be displayed on top of existing content within your web page layout. If you ever wanted to set the background of the div to a specific color, but wished for only that background to be semitransparent and not the interior content, the RGBA color declaration now allows you to do this easily, as you can set the color's transparency:

#example {
  /* Background opacity */
  background: rgba(0, 0, 0, 0.5); /* Black, 50% opacity */

  /* Box-shadow */
  box-shadow: 1px 2px 3px 4px rgba(255, 255, 255, 0.8); /* White, 80% opacity */

  /* Text opacity */
  color: rgba(255, 255, 255, 1); /* White, no transparency */
  color: rgb(255, 255, 255); /* This would accomplish the same styling */

  /* Text drop shadows (with opacity) */
  text-shadow: 5px 5px 3px rgba(135, 100, 240, 0.5);
}

As you can see in the preceding example, you can freely use RGB and RGBA values rather than hexadecimal anywhere color values are required in CSS syntax.

Element transforms

Personally, I find CSS3 transforms to be one of the most exciting and fun new features in CSS. Transforming assets in the Flash IDE, as well as with ActionScript, has always been easily accessible and easy to implement.
Transforming HTML elements is a relatively new feature in CSS and is still gaining full support across the modern browsers. Transforming an element allows you to manipulate its shape and size, opening up a ton of possibilities for animations and visual effects without the need to prepare the source beforehand. When we refer to "transforming an element", we are actually describing a number of properties that can be applied to the transformation to give it different characteristics. If you have transformed objects in Flash, or possibly in Photoshop, these properties may be familiar to you.

Translate

For a Flash developer used to primarily dealing with X and Y coordinates when positioning elements, the CSS3 translate transform property is a very handy way of placing elements, and it works on the same principle. The translate property takes two parameters, which are the X and the Y values by which to translate, or effectively move, the element:

transform: translate(-25px, -25px);

Unfortunately, to get your transforms to work in all browsers, you will need to target each of them when you append transform styles. Therefore, the standard transform style and property would now look something like this:

transform: translate(-25px, -25px);
-ms-transform: translate(-25px, -25px); /* IE 9 */
-moz-transform: translate(-25px, -25px); /* Firefox */
-webkit-transform: translate(-25px, -25px); /* Safari and Chrome */
-o-transform: translate(-25px, -25px); /* Opera */

Rotate

Rotation is pretty self-explanatory and extremely easy to implement.
The rotate property takes a single parameter to specify the amount of rotation, in degrees, to apply to the specific element:

transform: rotate(45deg);
-ms-transform: rotate(45deg); /* IE 9 */
-moz-transform: rotate(45deg); /* Firefox */
-webkit-transform: rotate(45deg); /* Safari and Chrome */
-o-transform: rotate(45deg); /* Opera */

It is worth noting that, regardless of the fact that the supplied value is always intended to be a value in degrees, the value must always have deg appended for it to be properly recognized.

Scale

Just like rotate transforms, scaling is pretty straightforward. The scale property requires two parameters, which declare the scale amount for both X and Y:

transform: scale(0.5, 2);
-ms-transform: scale(0.5, 2); /* IE 9 */
-moz-transform: scale(0.5, 2); /* Firefox */
-webkit-transform: scale(0.5, 2); /* Safari and Chrome */
-o-transform: scale(0.5, 2); /* Opera */

Skew

Skewing an element will result in the angling of the X and Y axes:

transform: skew(10deg, 20deg);
-ms-transform: skew(10deg, 20deg); /* IE 9 */
-moz-transform: skew(10deg, 20deg); /* Firefox */
-webkit-transform: skew(10deg, 20deg); /* Safari and Chrome */
-o-transform: skew(10deg, 20deg); /* Opera */

The following illustration is a representation of skewing an image with the preceding properties:

Matrix

The matrix property combines all of the preceding transforms into a single property and can easily eliminate many extra lines of CSS in your source:

transform: matrix(0.586, 0.8, -0.8, 0.586, 40, 20);
-ms-transform: matrix(0.586, 0.8, -0.8, 0.586, 40, 20); /* IE 9 */
-moz-transform: matrix(0.586, 0.8, -0.8, 0.586, 40, 20); /* Firefox */
-webkit-transform: matrix(0.586, 0.8, -0.8, 0.586, 40, 20); /* Safari and Chrome */
-o-transform: matrix(0.586, 0.8, -0.8, 0.586, 40, 20); /* Opera */

The preceding example utilizes the CSS transform matrix property to apply multiple transform styles in a single call. The matrix property requires six parameters to rotate, scale, move, and skew the element.
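If you ever need to derive those six numbers yourself, the relationship to the individual transforms is straightforward: for a rotation by an angle θ combined with a uniform scale s, the parameters are a = s·cos θ, b = s·sin θ, c = -s·sin θ, d = s·cos θ, with tx and ty as the translation. The following JavaScript sketch (a hypothetical helper, not part of this article's example code) computes a matrix() value from those inputs:

```javascript
// Build a CSS matrix(a, b, c, d, tx, ty) string for a rotation
// (in degrees, matching the deg units used in CSS) combined with
// a uniform scale and a translation.
function cssMatrix(angleDeg, scale, tx, ty) {
  var rad = angleDeg * Math.PI / 180;
  var cos = Math.cos(rad) * scale;
  var sin = Math.sin(rad) * scale;
  // a and d carry scale + rotation, b and c carry the rotation terms,
  // tx and ty are the translation offsets.
  return 'matrix(' + [cos.toFixed(3), sin.toFixed(3),
                      (-sin).toFixed(3), cos.toFixed(3),
                      tx, ty].join(', ') + ')';
}

console.log(cssMatrix(45, 1, 40, 20));
// → matrix(0.707, 0.707, -0.707, 0.707, 40, 20)
```

A helper like this can then feed an element's style directly, rather than hand-computing the six values for each combination of transforms.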
Using the matrix property is only really useful when you actually need to implement all of the transform properties at once. If you only need to utilize one aspect of element transforms, you will be better off using just that CSS style property.

3D transforms

Up until now, all of the transform properties we have reviewed have been two-dimensional transformations. CSS3 now supports 3D as well as 2D transforms. One of the best parts of CSS3 3D transforms is the fact that many devices and browsers support hardware acceleration, allowing this complex graphical processing to be done on your video card's GPU. At the time of writing this, only Chrome, Safari, and Firefox have support for CSS 3D transforms.

Interested in what browsers will support all these great HTML5 features before you start developing? Check out http://caniuse.com to see what popular browsers support in a simple, easy-to-use website.

When dealing with elements in a 3D world, we make use of the Z coordinate, which allows the use of some new transform properties:

transform: rotateX(angle)
transform: rotateY(angle)
transform: rotateZ(angle)
transform: translateZ(px)
transform: scaleZ(px)

Let's create a 3D cube from HTML elements to put all of these properties into a working example. To start creating our 3D cube, we will begin by writing the HTML elements which will contain the cube, as well as the elements making up the cube itself:

<body>
  <div class="container">
    <div id="cube">
      <div class="front"></div>
      <div class="back"></div>
      <div class="right"></div>
      <div class="left"></div>
      <div class="top"></div>
      <div class="bottom"></div>
    </div>
  </div>
</body>

This HTML creates a simple layout for our cube by creating not only each of the six sides that make up a cube, with specific class names, but also the container for the entire cube, as well as the main container to display all of our page content.
Of course, since there is no internal content in these containers and no styling yet, opening this HTML file in your browser would yield an empty page. So let's start writing our CSS to make all of these elements visible and position each to form our three-dimensional cube. We will start by setting up our main containers, which will position our content and contain our cube's sides:

.container {
  width: 640px;
  height: 360px;
  position: relative;
  margin: 200px auto;

  /* Currently only supported by WebKit browsers. */
  -webkit-perspective: 1000px;
  perspective: 1000px;
}

#cube {
  width: 640px;
  height: 320px;
  position: absolute;

  /* Let the transformed child elements preserve the 3D transformations: */
  transform-style: preserve-3d;
  -webkit-transform-style: preserve-3d;
  -moz-transform-style: preserve-3d;
}

The container class is our main element, which contains all of the other elements in this example. After appending a width and height, we set the top margin to 200px to push the display down the page a bit for better viewing, and the left and right margins to auto, which will align this element in the center of the page:

#cube div {
  display: block;
  position: absolute;
  border: 1px solid #000000;
  width: 640px;
  height: 320px;
  opacity: 0.8;
}

By defining properties for the #cube div selector, we set the styles for every div element within the #cube element. We are also kind of cheating the system of a cube by setting the width and height to rectangular proportions, as the intention is to add videos to each of the cube's sides once we structure and position it. With the basic cube-side styles appended, it's time to start transforming each of the sides to form the three-dimensional cube.
We will start with the front of the cube by translating it on the Z axis, bringing it closer to the perspective:

#cube .front {
  -webkit-transform: translateZ(320px);
  -moz-transform: translateZ(320px);
  transform: translateZ(320px);
}

In order to append this style to our element in all modern browsers, we need to specify the property in multiple syntaxes, one for each browser that doesn't support the default transform property.

The preceding screenshot shows what has happened to the .front div after appending a Z translation of 320px. The larger rectangle is the .front div, which is now 320px closer to our perspective. For simplicity's sake, let's do the same to the .back div and push it 320px away from the perspective:

#cube .back {
  -webkit-transform: rotateX(-180deg) rotate(-180deg) translateZ(320px);
  -moz-transform: rotateX(-180deg) rotate(-180deg) translateZ(320px);
  transform: rotateX(-180deg) rotate(-180deg) translateZ(320px);
}

As you can see from the preceding code, to properly move the .back element into place without placing it upside down, we flip the element by 180 degrees on the X axis and then translate Z by 320px, just as we did for .front. Note that we didn't set a negative value on the Z translation because the element was flipped. With the .back CSS styles in place, our cube should look like the following:

Now the smallest rectangle visible is the element with the class name .back, the largest is our .front element, and the middle rectangle is the remaining elements to be transformed. To position the sides of our cube, we will need to rotate the side elements on the Y axis to get them to face the proper direction.
Once they are rotated into place, we can translate their position on the Z axis to push them out from the center, as we did with the front and back faces:

#cube .right {
  -webkit-transform: rotateY(90deg) translateZ(320px);
  -moz-transform: rotateY(90deg) translateZ(320px);
  transform: rotateY(90deg) translateZ(320px);
}

With the right side in place, we can do the same for the left side, but rotate it in the opposite direction to get it facing the other way:

#cube .left {
  -webkit-transform: rotateY(-90deg) translateZ(320px);
  -moz-transform: rotateY(-90deg) translateZ(320px);
  transform: rotateY(-90deg) translateZ(320px);
}

Now that we have all four sides of our cube aligned properly, we can finalize the cube positioning by aligning the top and bottom sides. To properly size the top and bottom, we will set their own width and height to override the initial values set in the #cube div styles:

#cube .top {
  width: 640px;
  height: 640px;
  -webkit-transform: rotateX(90deg) translateZ(320px);
  -moz-transform: rotateX(90deg) translateZ(320px);
  transform: rotateX(90deg) translateZ(320px);
}

#cube .bottom {
  width: 640px;
  height: 640px;
  -webkit-transform: rotateX(-90deg) translateZ(0px);
  -moz-transform: rotateX(-90deg) translateZ(0px);
  transform: rotateX(-90deg) translateZ(0px);
}

To properly position the top and bottom sides, we rotate the .top and .bottom elements ±90 degrees on the X axis to get them to face up and down, and we only need to translate the top on the Z axis to raise it to the proper height to connect with all of the other sides. With all of these transforms appended to our layout, the resulting cube should look like the following:

Although it looks 3D, since there is nothing in the containers, the perspective isn't really showing off our cube very well. So let's add some content, such as a video, to each of the sides of the cube to get a better visualization of our work.
Within each of the sides, let's add the same HTML5 video element code:

<video width="640" height="320" autoplay="true" loop="true">
  <source src="cube-video.mp4" type="video/mp4">
  <source src="cube-video.webm" type="video/webm">
  Your browser does not support the video tag.
</video>

Since we have not added the element playback controls (in order to display more visible area of the cube), our video element is set to autoplay the video as well as loop the playback on completion. Now we get a result that properly demonstrates what 3D transforms can do, and it is a little more visually appealing.

Since we set the opacity of each of the cube's sides, we can now see all four videos playing on each side. Pretty cool! Since we are already here, why not kick it up one more notch and add user interaction to this cube, so we can spin it around and see the video on each side? To perform this user interaction, we need to use JavaScript to translate the mouse coordinates on the page document into the X and Y 3D rotation of our cube. So let's start by creating the JavaScript to listen for mouse events:

window.addEventListener("load", init, false);

function init() {
  // Listen for mouse movement
  window.addEventListener('mousemove', onMouseMove, false);
}

function onMouseMove(e) {
  var mouseX = 0;
  var mouseY = 0;

  // Get the mouse position
  if (e.pageX || e.pageY) {
    mouseX = e.pageX;
    mouseY = e.pageY;
  } else if (e.clientX || e.clientY) {
    mouseX = e.clientX + document.body.scrollLeft + document.documentElement.scrollLeft;
    mouseY = e.clientY + document.body.scrollTop + document.documentElement.scrollTop;
  }

  console.log("Mouse Position: x:" + mouseX + " y:" + mouseY);
}

As you can see from the preceding code example, when the mousemove event fires and calls the onMouseMove function, we need to run some conditionals to properly parse the mouse position.
Since, like so many other parts of web development, retrieving the mouse coordinates differs from browser to browser, we have added a simple condition to attempt to gather the mouse X and Y in a couple of different ways.

With the mouse position ready to be translated into the transform rotation of our cube, there is one final bit of preparation we need to complete prior to setting the CSS style updates. Since different browsers support the application of CSS transforms with different syntaxes, we need to figure out, in JavaScript, which syntax to use at runtime to allow our script to run in all browsers. The following code example does just that. By setting a predefined array of the possible property values and checking the type of each as an element style property, we can find which one is not undefined and know it can be used for CSS transform styles:

// Get the supported transform property
var availableProperties = [
  'transform',
  'MozTransform',
  'WebkitTransform',
  'msTransform',
  'OTransform'
];

// Loop over each of the properties
for (var i = 0; i < availableProperties.length; i++) {
  // Check if the type of the property style is a string (i.e. valid)
  if (typeof document.documentElement.style[availableProperties[i]] == 'string') {
    // If we found the supported property, assign it to a variable
    // for later use.
    var supportedTranformProperty = availableProperties[i];
  }
}

Now that we have the user's mouse position and the proper syntax for CSS transform updates for our cube, we can put it all together and finally have 3D rotational control of our video cube:

<script>
  var supportedTranformProperty;

  window.addEventListener("load", init, false);

  function init() {
    // Get the supported transform property
    var availableProperties = ['transform', 'MozTransform', 'WebkitTransform', 'msTransform', 'OTransform'];
    for (var i = 0; i < availableProperties.length; i++) {
      if (typeof document.documentElement.style[availableProperties[i]] == 'string') {
        supportedTranformProperty = availableProperties[i];
      }
    }

    // Listen for mouse movement
    window.addEventListener('mousemove', onMouseMove, false);
  }

  function onMouseMove(e) {
    var mouseX = 0;
    var mouseY = 0;

    // Get the mouse position
    if (e.pageX || e.pageY) {
      mouseX = e.pageX;
      mouseY = e.pageY;
    } else if (e.clientX || e.clientY) {
      mouseX = e.clientX + document.body.scrollLeft + document.documentElement.scrollLeft;
      mouseY = e.clientY + document.body.scrollTop + document.documentElement.scrollTop;
    }

    // Update the cube rotation
    rotateCube(mouseX, mouseY);
  }

  function rotateCube(posX, posY) {
    // Update the CSS transform styles
    document.getElementById("cube").style[supportedTranformProperty] =
      'rotateY(' + posX + 'deg) rotateX(' + (posY * -1) + 'deg)';
  }
</script>

Regardless of the fact that we have attempted to allow for multi-browser use of this example, it is worth opening it up in each browser to see how something like 3D transforms with heavy internal content runs. At the time of writing this, WebKit browsers were the easy choice when viewing content like this, as browsers such as Firefox and Internet Explorer render this example at a much slower and lower-quality output.

Transitions

With CSS3, we can add an effect when changing from one style to another, without using Flash animations or JavaScript:

div {
  transition: width 2s;
  -moz-transition: width 2s; /* Firefox 4 */
  -webkit-transition: width 2s; /* Safari and Chrome */
  -o-transition: width 2s; /* Opera */
}

If the duration is not specified, the transition will have no effect, because the default value is 0:

div {
  transition: width 2s, height 2s, transform 2s;
  -moz-transition: width 2s, height 2s, -moz-transform 2s;
  -webkit-transition: width 2s, height 2s, -webkit-transform 2s;
  -o-transition: width 2s, height 2s, -o-transform 2s;
}

It is worth noting that Internet Explorer currently does not have support for CSS3 transitions.
Browser compatibility

If you haven't noticed yet, the battle of browser compatibility is one of the biggest aspects of a web developer's job. Over time, many great services and applications have been created to help developers overcome these hurdles in a much simpler manner than trial-and-error techniques. Websites such as http://css3test.com, http://caniuse.com, and http://html5readiness.com are all great resources to keep on top of HTML5 specification development and browser support for all the features within.
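Beyond compatibility tables, the runtime detection technique from the cube example generalizes nicely. Here is a sketch of it factored into a reusable function (a hypothetical refactoring, not from the article's examples) that can be exercised with a plain object standing in for document.documentElement.style:

```javascript
// Return the first transform property name the environment supports,
// or null if none of the candidates exist on the style object.
function findTransformProperty(style) {
  var candidates = ['transform', 'MozTransform', 'WebkitTransform', 'msTransform', 'OTransform'];
  for (var i = 0; i < candidates.length; i++) {
    // Supported properties exist on the style object as (empty) strings.
    if (typeof style[candidates[i]] === 'string') {
      return candidates[i];
    }
  }
  return null; // no transform support at all
}

// In the browser you would call:
//   findTransformProperty(document.documentElement.style)
// Stubbed style objects to show the behavior outside a browser:
console.log(findTransformProperty({ WebkitTransform: '' })); // → WebkitTransform
console.log(findTransformProperty({}));                      // → null
```

Wrapping the check in a function like this also lets you fall back gracefully: a null result can trigger an alternate, transform-free presentation of your content.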
Packt
22 Aug 2013
7 min read
The Need for Directives

(For more resources related to this topic, see here.)

What makes a directive a directive

Angular directives have several distinguishing features, but for the sake of simplicity we'll focus on just three in this article. In contrast to most plugins or other forms of drop-in functionality, directives are declarative, data driven, and conversational.

Directives are declarative

If you've done any JavaScript development before, you've almost certainly used jQuery (or perhaps Prototype), and likely one of the thousands of plugins available for it. Perhaps you've even written your own such plugin. In either case, you probably have a decent understanding of the flow required to integrate it. They all look something like the following code:

$(document).ready(function() {
    $('#myElement').myPlugin({pluginOpts});
});

In short, we're finding the DOM element matching #myElement, then applying our jQuery plugin to it. These frameworks are built from the ground up on the principle of DOM manipulation. In contrast, Angular directives are declarative, meaning we write them into the HTML elements themselves. Declarative programming means that instead of telling an object how to behave (imperative programming), we describe what an object is. So, where in jQuery we might grab an element and apply certain properties or behaviors to it, with Angular we label that element as a type of directive and, elsewhere, maintain code that defines what properties and behaviors make up that type of object:

<html>
<body>
    <div id="myElement" my-awesome-directive></div>
</body>
</html>

At first glance, this may seem rather pedantic, merely a difference in styles, but as we begin to make our applications more complex, this approach serves to streamline many of the usual development headaches.
In a more fully developed application, our messages would likely be interactive, and in addition to growing or shrinking during the course of the user's visit, we'd want users to be able to reply to some of them or retweet them. If we were to implement this with a DOM manipulation library (such as jQuery or Prototype), that would require rebuilding the HTML with each change (assuming you want it sorted, just using .append() won't be enough), and then rebinding to each of the appropriate elements to allow the various interactions. In contrast, if we use Angular directives, this all becomes much simpler. As before, we use the ng-repeat directive to watch our list and handle the iterated display of tweets, so any changes to our scoped array will automatically be reflected within the DOM. Additionally, we can create a simple tweet directive to handle the messaging interactions, starting with the following basic definition. Don't worry right now about the specific syntax of creating a directive; for now, just take a look at the overall flow in the following code:

angular.module('myApp', [])
    .directive('tweet', ['api', function (api) {
        return function ($scope, $element, $attributes) {
            $scope.retweet = function () {
                // Each scope inherits from its parent, so we still have access
                // to the full tweet object of { author : '…', text : '…' }
                api.retweet($scope.tweet);
            };
            $scope.reply = function () {
                api.replyTo($scope.tweet);
            };
        };
    }]);

For now, just know that we're getting an instance of our Twitter API connection and passing it into the directive in the variable api, then using that to handle the replies and retweets.
Our HTML for each message now looks like the following code:

<p ng-repeat="tweet in tweets" tweet>
    <!-- ng-click allows us to bind a click event to a function on the $scope object -->
    @{{tweet.author}}: {{tweet.text}}
    <span ng-click="retweet()">RT</span> |
    <span ng-click="reply()">Reply</span>
</p>

By adding the tweet attribute to the paragraph tag, we tell Angular that this element should use the tweet directive, which gives us access to the published methods, as well as anything else we later decide to attach to the $scope object. Directives in Angular can be declared in multiple ways, including classes and comments, though attributes are the most common. Scoping within directives is simultaneously one of the most powerful and most complicated features within Angular, but for now it's enough to know that every property and function we attach to the scope is accessible to us within the HTML declarations.

Directives are data driven

Angular directives are built from the ground up on this philosophy. The scope and attribute objects accessible to each directive form the skeleton around which the rest of a directive is built, and they can be monitored for changes both within the DOM and within the rest of your JavaScript code. What this means for developers is that we no longer have to constantly poll for changes, or ensure that every data change that might have an impact elsewhere within our application is properly broadcast. Instead, the scope object handles all data changes for us, and because directives are declarative as well, that data is already connected to the elements of the view that need to update when the data changes. There's a proposal for ECMAScript 6 to support this kind of data watching natively with Object.observe(), but until that is implemented and fully supported, Angular's scope provides the much-needed intermediary.
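The data-driven behavior described above can be illustrated with a tiny framework-free sketch. The names below ($watch, $digest) deliberately echo Angular's API, but this is an assumption-laden simplification for illustration, not Angular's actual implementation:

```javascript
// Register watchers over pieces of data and run a check pass when the data
// may have changed; watchers whose value differs from the last pass fire.
function WatchList() {
  this.watchers = [];
}

WatchList.prototype.$watch = function (getValue, onChange) {
  // Remember the current value so we can detect a change later
  this.watchers.push({ getValue: getValue, onChange: onChange, last: getValue() });
};

WatchList.prototype.$digest = function () {
  this.watchers.forEach(function (w) {
    var current = w.getValue();
    if (current !== w.last) {
      w.onChange(current, w.last); // notify with (new, old)
      w.last = current;
    }
  });
};
```

The point of the sketch is the inversion it captures: application code changes the data and the watch pass propagates it to listeners, so no component has to poll or manually broadcast its own changes.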
Directives are conversational

Modular coding emphasizes the use of messages to communicate between the separate building blocks within an application. You're likely familiar with DOM events, used by many plugins to broadcast internal changes (for example, save, initialized, and so on) and to subscribe to external events (for example, click, focus, and so on). Angular directives have access to all those events as well (the $element variable you saw earlier is actually a jQuery-wrapped DOM element), but $scope also provides an additional messaging system that functions only along the scope tree. The $emit and $broadcast methods serve to send messages up and down the scope tree respectively, and, like DOM events, they allow directives to subscribe to changes or events within other parts of the application, while still remaining modular and uncoupled from the specific logic used to implement those changes. If you don't have jQuery included in your application, Angular wraps the element in jqLite, which is a lightweight wrapper that provides the same basic methods.

Additionally, when you add in the use of Angular services, directives gain an even greater vocabulary. Services, among many other things, allow you to share specific pieces of data between the different pieces of your application, such as a collection of user preferences or a utility mapping item codes to their names. Between this shared data and the messaging methods, separate directives are able to communicate fully with each other without requiring a retooling of their internal architecture.

Directives are everything you've dreamed about

OK, that might be a bit of hyperbole, but you've probably noticed by now that the benefits outlined so far are exactly in line with best practices. One of the most common criticisms of Angular is that it's relatively new (especially compared to frameworks such as Backbone and Ember). In contrast, however, I consider that to be one of its greatest assets.
Older frameworks all defined themselves largely before there was a consensus on how frontend web applications should be developed. Angular, on the other hand, has had the advantage of being defined after many of the existing best practices had been established, and in my opinion it provides the cleanest interface between an application's data and its display. As we've seen already, directives are essentially data-driven modules. They allow developers to easily create a packageable feature that declaratively attaches to an element, molds itself to fit the data at its disposal, and communicates with the other directives around it to ensure coordinated functionality without disruption of existing features.

Summary

In this article, we learned about what attributes define directives and why they're best suited for frontend development, as well as what makes them different from the JavaScript techniques and packages you've likely used before. I realize that's a bold statement, and likely one that you don't fully believe yet.

Resources for Article:

Further resources on this subject:
- Using jQuery and jQueryUI Widget Factory plugins with RequireJS [Article]
- So, what is EaselJS? [Article]
- So, what is KineticJS? [Article]
Packt
22 Aug 2013
16 min read
Catering to Your Form-related Needs

(For more resources related to this topic, see here.)

Getting your form ready with form panels

This recipe shows how to create a basic form using Sencha Touch and implement some of the related behaviors, such as how to submit the form data and how to handle errors during the submission.

Getting ready

Make sure that you have set up your development environment.

How to do it...

Carry out the following steps to create a form panel:

1. Create a ch02 folder in the same folder where we had created the ch01 folder.
2. Create and open a new file ch02_01.js and paste the following code into it:

Ext.application({
    name: 'MyApp',
    requires: ['Ext.MessageBox'],
    launch: function() {
        var form;

        //form and related fields config
        var formBase = {
            //enable vertical scrolling in case the form exceeds the page height
            scrollable: 'vertical',
            standardSubmit: false,
            submitOnAction: true,
            url: 'http://localhost/test.php',
            items: [{
                //add a fieldset
                xtype: 'fieldset',
                title: 'Personal Info',
                instructions: 'Please enter the information above.',
                //apply the common settings to all the child items of the fieldset
                defaults: {
                    required: true, //required field
                    labelAlign: 'left',
                    labelWidth: '40%'
                },
                items: [{
                    //add a text field
                    xtype: 'textfield',
                    name: 'name',
                    label: 'Name',
                    clearIcon: true, //shows the clear icon in the field when user types
                    autoCapitalize: true
                }, {
                    //add a password field
                    xtype: 'passwordfield',
                    name: 'password',
                    label: 'Password',
                    clearIcon: false
                }, {
                    xtype: 'passwordfield',
                    name: 'reenter',
                    label: 'Re-enter Password',
                    clearIcon: true
                }, {
                    //add an email field
                    xtype: 'emailfield',
                    name: 'email',
                    label: 'Email',
                    placeHolder: '[email protected]',
                    clearIcon: true
                }]
            }, {
                //items docked to the bottom of the form
                xtype: 'toolbar',
                docked: 'bottom',
                items: [{
                    text: 'Reset',
                    handler: function() {
                        form.reset(); //reset the fields
                    }
                }, {
                    text: 'Save',
                    ui: 'confirm',
                    handler: function() {
                        //submit the form data to the url
                        form.submit({
                            success: function(form, result) {
                                Ext.Msg.alert("INFO", "Form submitted!");
                            },
                            failure: function(form, result) {
                                Ext.Msg.alert("INFO", "Form submission failed!");
                            }
                        });
                    }
                }]
            }]
        };

        if (Ext.os.is.Phone) {
            formBase.fullscreen = true;
        } else { //if desktop
            Ext.apply(formBase, {
                modal: true,
                centered: true,
                hideOnMaskTap: false,
                height: 385,
                width: 480
            });
        }

        //create the form panel
        form = Ext.create('Ext.form.Panel', formBase);
        Ext.Viewport.add(form);
    }
});

3. Include the following line of code in the index.html file:

<script type="text/javascript" charset="utf-8" src="ch02/ch02_01.js"></script>

4. Deploy and access it from the browser. You will see a screen as shown in the following screenshot:

How it works...

The code creates a form panel with a fieldset inside it. The fieldset has four fields specified as part of its child items. The xtype config mentioned for each field tells the Sencha Touch component manager which class to use to instantiate them. form = Ext.create('Ext.form.Panel', formBase); creates the form and the other field components using the config defined as part of formBase, and Ext.Viewport.add(form); renders the form to the viewport, which is how it appears on the screen. url contains the URL where the form data will be posted upon submission. The form can be submitted in two ways:

- By hitting Go on the virtual keyboard, or Enter on a field, which ends up generating the action event
- By clicking on the Save button, which internally calls the submit() method on the form object

form.reset() resets the status of the form and its fields to the original state. So, if you had entered values in the fields and clicked on the Reset button, all the fields would be cleared. form.submit() posts the form data to the specified URL. The data is posted as an Ajax request using the POST method. The clearIcon config on a field tells Sencha Touch whether it should show the clear icon in the field when the user starts entering a value in it. On clicking this icon, the value in the field is cleared.

There's more...

In the preceding code, we saw how to construct a form panel, add fields to it, and handle events.
Let us see what other non-trivial things we may have to do in the project and how we can achieve them using Sencha Touch.

Standard submit

This is the old, traditional way of posting form data to the server URL. If your application needs to use a standard form submit rather than Ajax, you will have to set the standardSubmit property to true on the form panel. It is set to false by default. The following code snippet shows the usage of this property:

var formBase = {
    scroll: 'vertical',
    standardSubmit: true,
    ...

After this property is set to true on the form panel, form.submit() will load the complete page specified in the url property.

Submitting on field action

As we saw earlier, the form data automatically gets posted to the URL if the action event occurs (when the Go button or the Enter key is hit). In many applications, this default feature may not be desirable. To disable this feature, you will have to set submitOnAction to false on the form panel.

Post-submission handling

Say we posted our data to the URL. Now, the call may either fail or succeed. To handle these specific conditions and act accordingly, we will have to pass additional config options to the form's submit() method. The following code shows the enhanced version of the submit call:

form.submit({
    success: function(form, result) {
        Ext.Msg.alert("INFO", "Form submitted!");
    },
    failure: function(form, result) {
        Ext.Msg.alert("INFO", "Form submission failed!");
    }
});

In case the Ajax call (to post the form data) fails, the failure() callback function is called, and if it succeeds, the success() callback function is called. This works only if the standardSubmit property is set to false.

Reading form data

To read the values entered into a form field, the form panel provides the getValues() method, which returns an object with field names and their values.
It is important that you set the name property on your form fields; otherwise, that field's value will not appear in the object returned by the getValues() method:

handler: function() {
    console.log('INFO', form.getValues());
    //submit the form data to the url
    form.submit({
    ...
    ...

Loading data in the form fields

To set the form field values, the form panel provides the record config and two methods, setValues() and setRecord(). The setValues() method expects a config object with name-value pairs for the fields. The following code shows how to use the setValues() method:

{
    text: 'Set Data',
    handler: function() {
        form.setValues({
            name: 'Ajit Kumar',
            email: '[email protected]'
        });
    }
},
{
    text: 'Reset',
    ...
    ...

The preceding code adds a new button named Set Data; on clicking it, the form field data is populated as shown in the following screenshot. As we had passed values for the Name and Email fields, they are set:

The other method, setRecord(), expects an instance of the Ext.data.Model class. The following code shows how we can create a model and use it to populate the form fields:

,{
    text: 'Load Data',
    handler: function() {
        Ext.define('MyApp.model.User', {
            extend: 'Ext.data.Model',
            config: {
                fields: ['name', 'email']
            }
        });
        var ajit = Ext.create('MyApp.model.User', {
            name: 'Ajit Kumar',
            email: '[email protected]'
        });
        form.setRecord(ajit);
    }
},
{
    text: 'Reset',
    ...
    ...

We shall use setRecord() when our data is stored as a model, or we will construct it as a model to gain the benefits of a model (for example, loading from a remote data source, data conversion, data validation, and so on) that are not available with the JSON representation of the data. While these methods help us set the field values at runtime, the record config allows us to populate the form field values when the form panel is constructed.
The following code snippet shows how we can pass a model at the time of instantiation of the form panel:

var ajit = Ext.create('MyApp.model.User', {
    name: 'Ajit Kumar',
    email: '[email protected]'
});
var formBase = {
    scroll: 'vertical',
    standardSubmit: true,
    record: ajit,
    ...

Working with search

We will go over each of the form fields and understand how to work with them. This recipe describes the steps required to create and use a search form field.

Getting ready

Make sure that you have set up your development environment.

How to do it...

Carry out the following steps:

1. Copy ch02_01.js to ch02_02.js.
2. Open the new file ch02_02.js and replace the definition of formBase with the following code:

var formBase = {
    items: [{
        xtype: 'searchfield',
        name: 'search',
        label: 'Search'
    }]
};

3. Include ch02_02.js in place of ch02_01.js in index.html.
4. Deploy and access the application in the browser. You will see a form panel with a search field.

How it works...

A search field can be constructed using an Ext.field.Search class instance or using the xtype: 'searchfield' approach. A search form field implements the HTML5 <input> element with type="search". However, the implementation is very limited. For example, the search field in HTML5 allows us to associate a data list that it can use during the search, whereas this feature is not present in Sencha Touch. Similarly, the W3C search field defines a pattern attribute that allows us to specify a regular expression against which a user agent is meant to check the value, which is not yet supported in Sencha Touch. For more detail, you may refer to the W3C search field (http://www.w3.org/TR/html-markup/input.search.html) and the source code of the Ext.field.Search class.

There's more...

In applications, we often do not use a label for the search fields. Rather, we would like to show text, such as Search..., inside the field, which disappears when the field gains focus. Let us see how we can achieve this.
Using a placeholder

Placeholders are supported by most of the form fields in Sencha Touch through the placeholder property. The placeholder text appears in the field as long as no value has been entered in it and the field does not have focus. The following code snippet shows its typical usage:

{
    xtype: 'searchfield',
    name: 'search',
    label: 'Search',
    placeHolder: 'Search...'
}

Applying custom validation in the e-mail field

This recipe describes how to make use of the e-mail form field provided by Sencha Touch, and how to validate the value entered into it to find out whether the entered e-mail passes the validation rule or not.

Getting ready

Make sure that you have set up your development environment.

How to do it...

Carry out the following steps:

1. Copy ch02_01.js to ch02_03.js.
2. Open the new file ch02_03.js and replace the definition of formBase with the following code:

var formBase = {
    items: [{
        xtype: 'emailfield',
        name: 'email',
        label: 'Email',
        placeHolder: '[email protected]',
        clearIcon: true,
        listeners: {
            blur: function(thisTxt, eventObj) {
                var val = thisTxt.getValue();
                //validate using the pattern
                if (val.search("[a-z]+@[a-z]+[.][a-z]+") == -1)
                    Ext.Msg.alert("Error", "Invalid e-mail address!!");
                else
                    Ext.Msg.alert("Info", "Valid e-mail address!!");
            }
        }
    }]
};

3. Include ch02_03.js in place of ch02_02.js in index.html.
4. Deploy and access the application in the browser.

How it works...

The e-mail field can be constructed using an Ext.field.Email class instance or using the xtype value emailfield. The e-mail form field implements the HTML5 <input> element with type="email". However, similar to the search field, the implementation is very limited. For example, the e-mail field in HTML5 allows us to specify a regular expression pattern, which can be used to validate the value entered in the field.

Working with dates using the date picker

This recipe describes how to make use of the date picker form field provided by Sencha Touch, which allows the user to select a date.
Getting ready

Make sure that you have set up your development environment.

How to do it...

Carry out the following steps:

1. Copy ch02_01.js to ch02_04.js.
2. Open the new file ch02_04.js and replace the definition of formBase with the following code:

var formBase = {
    items: [{
        xtype: 'datepickerfield',
        name: 'date',
        label: 'Date'
    }]
};

3. Include ch02_04.js in place of ch02_03.js in index.html.
4. Deploy and access the application in the browser.

How it works...

The date picker field can be constructed using an Ext.field.DatePicker class instance or using the xtype: 'datepickerfield' approach. The date picker form field implements the HTML <select> element. When the user tries to select an entry, it shows the date picker component with slots for the month, day, and year. After selection, when the user clicks on the Done button, the field is set with the selected value.

There's more...

Additionally, there are other things that can be done, such as setting the date to the current date or a particular date, or changing the order of appearance of month, day, and year. Let us see what it takes to accomplish this.

Setting the default date to the current date

To set the default value to the current date, the value property must be set to the current date. The following code shows how to do it:

var formBase = {
    items: [{
        xtype: 'datepickerfield',
        name: 'date',
        label: 'Date',
        value: new Date(),
        ...

Setting the default date to a particular date

The default date is January 01, 1970. Let's suppose that you need to set the date to a different date, but not the current date. To do so, you will have to set the value property using the year, month, and day properties, as follows:

var formBase = {
    items: [{
        xtype: 'datepickerfield',
        name: 'date',
        label: 'Date',
        value: {year: 2011, month: 6, day: 11},
        ...

Changing the slot order

By default, the slot order is month, day, and year.
You can change it by setting the slotOrder property of the date picker's picker property, as shown in the following code:

var formBase = {
    items: [{
        xtype: 'datepickerfield',
        name: 'date',
        label: 'Date',
        picker: {
            slotOrder: ['day', 'month', 'year']
        }
    }]
};

Setting the picker date range

By default, the date range shown by the picker is from 1970 till the current year. If our application needs a different year range, we can set the yearFrom and yearTo properties of the date picker's picker property, as follows:

var formBase = {
    items: [{
        xtype: 'datepickerfield',
        name: 'date',
        label: 'Date',
        picker: {
            yearFrom: 2000,
            yearTo: 2013
        }
    }]
};

Making a field hidden

Often in an application, there is a need to hide fields that are not needed in a particular context but are required, and hence need to be shown later. In this recipe, we will see how to make a field hidden and show it conditionally.

Getting ready

Make sure that you have set up your development environment.

How to do it...

Carry out the following steps:

1. Edit ch02_04.js and modify the code, as follows, by adding the hidden property:

var formBase = {
    items: [{
        xtype: 'datepickerfield',
        id: 'datefield-id',
        name: 'date',
        hidden: true,
        label: 'Date'
    }]
};

2. Deploy and access the application in the browser.

How it works...

When a field is marked as hidden, Sencha Touch uses the DOM's hide() method on the element to hide that particular field.

There's more...

Let's see how we can programmatically show/hide a field.

Showing/hiding a field at runtime

Each component in Sencha Touch supports two methods, show() and hide(). The show() method shows the element and the hide() method hides it. To call these methods, we first have to find the reference to the component, which can be achieved either by using the object reference or by using the Ext.getCmp() method. Given a component ID, the getCmp() method returns the component.
The following code snippet demonstrates showing an element:

var cmp = Ext.getCmp('datefield-id');
cmp.show();

To hide an element, we will have to call cmp.hide().

Working with the select field

This recipe describes the use of the select form field, which allows the user to select a value from a list of choices, such as a combobox.

Getting ready

Make sure that you have set up your development environment.

How to do it...

Carry out the following steps:

1. Copy ch02_01.js to ch02_05.js.
2. Open the new file ch02_05.js and replace the definition of formBase with the following code:

var formBase = {
    items: [{
        xtype: 'selectfield',
        name: 'select',
        label: 'Select',
        placeHolder: 'Select...',
        options: [
            {text: 'First Option', value: 'first'},
            {text: 'Second Option', value: 'second'},
            {text: 'Third Option', value: 'third'}
        ]
    }]
};

3. Include ch02_05.js in place of ch02_04.js in index.html.
4. Deploy and access the application in the browser.

How it works...

The preceding code creates a select form field with three options for selection. The select field can be constructed using an Ext.field.Select class instance or using the xtype: 'selectfield' approach. The select form field implements the HTML <select> element. By default, it uses the text property to show the text for selection.

There's more...

It may not always be possible or desirable to use the text and value properties in the data to populate the selection list. In case we have a different property in place of text, how do we make sure that the selection list is populated correctly without any further conversion? Let's see how we can do this.
Using a custom display value

We shall use displayField to specify the field that will be used as the display text, as shown in the following code:

{
    xtype: 'selectfield',
    name: 'select',
    label: 'Second Select',
    placeHolder: 'Select...',
    displayField: 'desc',
    options: [
        {desc: 'First Option', value: 'first'},
        {desc: 'Second Option', value: 'second'},
        {desc: 'Third Option', value: 'third'}
    ]
}

Changing a value using slider

This recipe describes the use of the slider form field, which allows the user to change a value by mere sliding.

Getting ready

Make sure that you have set up your development environment.

How to do it...

Carry out the following steps:

1. Copy ch02_01.js to ch02_06.js.
2. Open the new file ch02_06.js and replace the definition of formBase with the following code:

var formBase = {
    items: [{
        xtype: 'sliderfield',
        name: 'height',
        label: 'Height',
        minValue: 0,
        maxValue: 100,
        increment: 10
    }]
};

3. Include ch02_06.js in place of ch02_05.js in index.html.
4. Deploy and access the application in the browser.

How it works...

The preceding code creates a slider field with 0 to 100 as the range of values, with 10 as the increment value; this means that, when a user clicks on the slider, the value will change by 10 on every click. The increment value must be a whole number.
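The increment behavior can be modeled with a small standalone function. This is an illustrative sketch; the function name and the clamping choice are assumptions, not Sencha Touch's implementation:

```javascript
// Snap a raw slider position to the nearest increment step,
// clamped to the [minValue, maxValue] range used in the recipe above.
function snapToIncrement(value, minValue, maxValue, increment) {
  var snapped = Math.round((value - minValue) / increment) * increment + minValue;
  return Math.min(maxValue, Math.max(minValue, snapped));
}
```

With the recipe's config (minValue 0, maxValue 100, increment 10), any raw position the user touches resolves to one of the eleven allowed values 0, 10, ..., 100, which is why the increment must be a whole number that divides the range sensibly.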
Packt
22 Aug 2013
13 min read
Packing Everything Together

(For more resources related to this topic, see here.)

Creating a package

When you are distributing your extensions, often the problem you are helping your customer solve cannot be achieved with a single extension; it actually requires multiple components, modules, and plugins that work together. Rather than making the user install all of these extensions manually, one by one, you can package them all together to create a single install package. Our click-to-call plugin and folio component go together nicely, so let's package them together.

Create a folder named pkg_folio_v1.0.0 on your desktop, and within it, create a folder named packages. Copy into the packages folder the latest version of com_folio and plg_content_clicktocall, for example, com_folio_v2.7.0.zip and plg_content_clicktocall_v1.2.0.zip. Now create a file named pkg_folio.xml in the root of the pkg_folio_v1.0.0 folder, and add the following code to it:

<?xml version="1.0" encoding="UTF-8" ?>
<extension type="package" version="3.0">
    <name>Folio Package</name>
    <author>Tim Plummer</author>
    <creationDate>May 2013</creationDate>
    <packagename>folio</packagename>
    <license>GNU GPL</license>
    <version>1.0.0</version>
    <url>www.packtpub.com</url>
    <packager>Tim Plummer</packager>
    <packagerurl>www.packtpub.com</packagerurl>
    <description>Single Install Package combining Click To Call plugin with Folio component</description>
    <files folder="packages">
        <file type="component" id="folio">com_folio_v2.7.0.zip</file>
        <file type="plugin" id="clicktocall" group="content">plg_content_clicktocall_v1.2.0.zip</file>
    </files>
</extension>

This looks pretty similar to the installation XML file that we created for each component; however, there are a few differences. Firstly, the extension type is package:

<extension type="package" version="3.0">

We have some new tags that help us describe what this package is and who made it.
The person creating the package may be different from the original author of the extensions:

<packagename>folio</packagename>
<packager>Tim Plummer</packager>
<packagerurl>www.packtpub.com</packagerurl>

You will notice that we are looking for our extensions in the packages folder; however, this folder could potentially have any name you like:

<files folder="packages">

For each extension, we need to say what type of extension it is, what its name is, and the file containing it:

<file type="component" id="folio">com_folio_v2.7.0.zip</file>

You can package together as many components, modules, and plugins as you like, but be aware that some servers have a quite low maximum size for uploaded files, so if you try to package too much together, you may run into problems. You might also get timeout issues if the file is too big. You'll avoid most of these problems if you keep the package file under a couple of megabytes. You can install packages via the Extension Manager in the same way you install any other Joomla! extension:

However, you will notice that the package is listed in addition to all of the individual extensions within it:

Setting up an update server

Joomla! has a built-in updater that allows you to easily update your core Joomla! version, often referred to as one-click updates (even though it usually takes a few clicks to launch). This update mechanism is also available to third-party Joomla! extensions; however, it requires you to set up an update server. You can try this out on your local development environment. To do so, you will need two Joomla! sites: http://localhost/joomla3, which will be our update server, and http://localhost/joomlatest, which will be the site on which we are going to try to update the extensions. Note that the update server does not need to be a Joomla! site; it could be any folder on a web server. Install our click-to-call plugin on the http://localhost/joomlatest site, and make sure it's enabled and working.
To enable the update manager to be able to check for updates, we need to add some code to the clicktocall.xml installation XML file under /plugins/content/clicktocall/: <?xml version="1.0" encoding="UTF-8"?> <extension version="3.0" type="plugin" group="content" method="upgrade"> <name>Content - Click To Call</name> <author>Tim Plummer</author> <creationDate>April 2013</creationDate> <copyright>Copyright (C) 2013 Packt Publishing. All rights reserved.</copyright> <license>http://www.gnu.org/licenses/gpl-3.0.html</license> <authorEmail>[email protected]</authorEmail> <authorUrl>http://packtpub.com</authorUrl> <version>1.2.0</version> <description>This plugin will replace phone numbers with click to call links. Requires Joomla 3.0 or greater. Don't forget to publish this plugin! </description> <files> <filename plugin="clicktocall">clicktocall.php</filename> <filename plugin="clicktocall">index.html</filename> </files> <languages> <language tag="en-GB">language/en-GB/en-GB.plg_content_clicktocall.ini</language> </languages> <config> <fields name="params"> <fieldset name="basic"> <field name="phoneDigits1" type="text" default="4" label="PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS1_LABEL" description="PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS1_DESC" /> <field name="phoneDigits2" type="text" default="4" label="PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS2_LABEL" description="PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS2_DESC" /> </fieldset> </fields> </config> <updateservers> <server type="extension" priority="1" name="Click To Call Plugin Updates">http://localhost/joomla3/updates/clicktocall.xml</server> </updateservers> </extension> The type can either be extension or collection; in most cases you'll be using extension, which allows you to update a single extension, as opposed to collection, which allows you to update multiple extensions via a single file: type="extension" When you have multiple update servers, you can set a different priority for each, so you can control the 
order in which the update servers are checked. If the first one is available, it won't bother checking the rest: priority="1" The name attribute describes the update server; you can put whatever value you like in here: name="Click To Call Plugin Updates" We have told the extension where it is going to check for updates, in this case http://localhost/joomla3/updates/clicktocall.xml. Generally, this should be a publicly accessible site so that users of your extension can check for updates. Note that you can specify multiple update servers for redundancy. Now on your http://localhost/joomla3 site, create a folder named updates and put the usual index.html file in it. Copy into it the latest version of your plugin, for example, plg_content_clicktocall_v1.2.1.zip. You may wish to make a minor visual change so you can see whether the update actually worked. For example, you could edit the en-GB.plg_content_clicktocall.ini language file under /language/en-GB/, then zip it all back up again.
PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS1_LABEL="Digits first part"
PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS1_DESC="How many digits in the first part of the phone number?"
PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS2_LABEL="Digits last part"
PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS2_DESC="How many digits in the second part of the phone number?"
Now create the clicktocall.xml file with the following code in your updates folder: <?xml version="1.0" encoding="utf-8"?> <updates> <update> <name>Content - Click To Call</name> <description>This plugin will replace phone numbers with click to call links. Requires Joomla 3.0 or greater. Don't forget to publish this plugin!
</description> <element>clicktocall</element> <type>plugin</type> <folder>content</folder> <client>0</client> <version>1.2.1</version> <infourl title="Click To Call Plugin 1.2.1">http://packtpub.com</infourl> <downloads> <downloadurl type="full" format="zip">http://localhost/joomla3/updates/plg_content_clicktocall_v1.2.1.zip</downloadurl> </downloads> <targetplatform name="joomla" version="3.1" /> </update> </updates> This file could be called anything you like, it does not need to be the extensionname.xml as long as it matches the name you set in your installation XML for the extension. The updates tag surrounds all the update elements. Each time you release a new version, you will need to create another update section. Also, if your extension supports both Joomla! 2.5 and Joomla! 3, you will need to have separate <update> definitions for each version. And if you want to support updates for both Joomla! 3.0 and Joomla! 3.1, you will need separate tags for each of them. The value of the name tag is shown in the Extension Manager Update view, so using the same name as your extension should avoid confusion: <name>Content - Click To Call</name> The value of the description tag is shown when you hover over the name in the update view. The value of the element tag is the installed name of the extension. This should match the value in the element column in the jos_extensions table in your database: <element>clicktocall</element> The value of the type tag describes whether this is a component, module, or a plugin: <type>plugin</type> The value of the folder tag is only required for plugins, and describes the type of plugin this is, in our case a content plugin. Depending on your plugin type, this may be system, search, editor, user, and so on. <folder>content</folder> The value of the client tag describes the client_id in the jos_extensions table, which tells Joomla! if this is a site (0) or an administrator (1) extension type. 
Plugins will always be 0, components will always be 1; however, modules could vary depending on whether it's a frontend or a backend module: <client>0</client> Plugins must have <folder> and <client> elements, otherwise the update check won't work. The value of the version tag is the version number for this release. This version number needs to be higher than the currently installed version of the extension for available updates to be shown: <version>1.2.1</version> The infourl tag is optional, and allows you to show a link to information about the update, such as release notes: <infourl title="Click To Call Plugin 1.2.1">http://packtpub.com</infourl> The downloads tag shows all of the available download locations for the update. The value of the downloadurl tag is the URL to download the extension from. This file can be located anywhere you like; it does not need to be in the updates folder on the same site. The type attribute describes whether this is a full package or an update, and the format attribute defines the package type, such as zip or tar: <downloadurl type="full" format="zip">http://localhost/joomla3/updates/plg_content_clicktocall_v1.2.1.zip</downloadurl> The targetplatform tag describes the Joomla! version this update is meant for. The value of the name attribute should always be set to joomla. If you want to target your update to a specific Joomla! version, you can use min_dev_level and max_dev_level in here, but in most cases you'd want your update to be available for all Joomla! versions in that Joomla! release. Note that min_dev_level and max_dev_level are only available in Joomla! 3.1 or higher. <targetplatform name="joomla" version="3.1" /> So, now you should have the following files in your http://localhost/joomla3/updates folder.
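At its core, the decision to offer an update comes down to comparing the <version> value advertised in the update XML against the installed version, segment by segment. The following JavaScript sketch is purely illustrative — the function name and logic are our own, not Joomla! core code:

```javascript
// Illustrative only — not Joomla! code. An update is offered when the
// version in the update XML is numerically higher than the installed one,
// comparing dot-separated segments left to right.
function isNewer(available, installed) {
  var a = available.split('.').map(Number);
  var b = installed.split('.').map(Number);
  for (var i = 0; i < Math.max(a.length, b.length); i++) {
    var x = a[i] || 0;
    var y = b[i] || 0;
    if (x !== y) {
      return x > y;
    }
  }
  return false;
}

console.log(isNewer('1.2.1', '1.2.0')); // true: 1.2.1 would be offered over 1.2.0
console.log(isNewer('1.2.0', '1.2.0')); // false: already up to date
```

This is also why a plain string comparison would not be enough: version 1.10.0 is newer than 1.9.0 even though it sorts lower alphabetically.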
clicktocall.xml
index.html
plg_content_clicktocall_v1.2.1.zip
You can make sure the XML file works by typing the full URL http://localhost/joomla3/updates/clicktocall.xml: As the update server was not defined in our extension when we installed it, we need to manually add an entry to the jos_update_sites table in our database before the updates will work. So, now go to your http://localhost/joomlatest site and log in to the backend. From the menu navigate to Extensions | Extension Manager, and then click on the Update menu on the left-hand side. Click on the Find Updates button, and you should now see the update, which you can install: Select the Content – Click To Call update and press the Update button, and you should see the successful update message: And if all went well, you should now see the visual changes that you made to your plugin. These built-in updates are pretty good, so why doesn't every extension developer use them? They work great for free extensions, but there is a flaw that prevents many extension developers from using this: there is no way to authenticate the user when they are updating. Essentially, this means that anyone who gets hold of your extension or knows the details of your update server can get ongoing free updates forever, regardless of whether they have purchased your extension or are an active subscriber. Many commercial developers have either implemented their own update solutions or don't bother using the update manager, as their customers can install new versions via the Extension Manager over the top of previous versions. Although this approach is slightly inconvenient for the end user, it makes it easier for the developer to control distribution. One such developer who has come up with his own solution to this is Nicholas K. Dionysopoulos from Akeeba, and he has kindly shared his solution, the Akeeba Release System, which you can get for free from his website and easily integrate into your own extensions.
As usual, Nicholas has excellent documentation that you can read if you are interested, but it's beyond the scope of this book to go into detail about this alternative solution (https://www.akeebabackup.com/products/akeeba-release-system.html). Summary Now you know how to package up your extensions and get them ready for distribution. You learnt how to set up an update server, so now you can easily provide your users with the latest version of your extensions. Resources for Article: Further resources on this subject: Tips and Tricks for Joomla! Multimedia [Article] Adding a Random Background Image to your Joomla! Template [Article] Showing your Google calendar on your Joomla! site using GCalendar [Article]

Highcharts

Packt
20 Aug 2013
5 min read
(For more resources related to this topic, see here.) Creating a line chart with a time axis and two Y axes We will now create the code for this chart: You start the creation of your chart by implementing the constructor of your Highcharts chart: var chart = $('#myFirstChartContainer').highcharts({}); We will now set the different sections inside the constructor. We start with the chart section. Since we'll be creating a line chart, we define the type element with the value line. Then, we implement the zoom feature by setting the zoomType element. You can set the value to x, y, or xy depending on which axes you want to be able to zoom. For our chart, we will enable zooming on the x axis: chart: {type: 'line',zoomType: 'x'}, We define the title of our chart: title: {text: 'Energy consumption linked to the temperature'}, Now, we create the x axis. We set the type to datetime because we are using time data, and we remove the title by setting the text to null. You need to set a null value in order to disable the title of the xAxis: xAxis: {type: 'datetime',title: {text: null}}, We then configure the Y axes. As defined, we add two Y axes with the titles Temperature and Energy consumed (in KWh), which we override with a minimum value of 0. We set the opposite parameter to true for the second axis in order to have the second y axis on the right side: yAxis: [{title: {text: 'Temperature'},min:0},{title: {text: 'Energy consumed (in KWh)'},opposite:true,min:0}], We will now customize the tooltip section. We use the crosshairs option in order to have a line for our tooltip that we will use to follow values of both series. Then, we set the shared value to true in order to have values of both series on the same tooltip. tooltip: {crosshairs: true,shared: true}, Further, we set the series section. For the datetime axes, you can set your series section in two different ways.
You can use the first way when your data follows a regular time interval, and the second way when your data doesn't necessarily follow a regular time interval. We will use both ways by setting the two series with two different options. The first series follows a regular interval. For this series, we set the pointInterval parameter, where we define the data interval in milliseconds. For our chart, we set an interval of one day. We set the pointStart parameter with the date of the first value. We then set the data section with our values. The tooltip section is set with the valueSuffix element, where we define the suffix to be added after the value inside our tooltip. We set our yAxis element with the axis we want to associate with our series. Because we want to attach this series to the first axis, we set the value to 0 (zero). For the second series, we will use the second way because our data does not necessarily follow regular intervals. But you can also use this way, even if your data follows a regular interval. We set our data as pairs, where the first element represents the date and the second element represents the value. We also override the tooltip section of the second series. We then set the yAxis element with the value 1 because we want to associate this series with the second axis. For your chart, you can also set your date values with a timestamp value instead of using the JavaScript function Date.UTC.
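The two data shapes describe the same points. As a quick illustration (the helper below is our own, not part of the Highcharts API), a series defined with pointStart and pointInterval can be expanded into the explicit [timestamp, value] pairs used by the second series:

```javascript
// Illustrative helper: expand pointStart/pointInterval data into the
// [timestamp, value] pair format also accepted by Highcharts series.
function expandSeries(pointStart, pointInterval, data) {
  return data.map(function (value, i) {
    return [pointStart + i * pointInterval, value];
  });
}

var oneDay = 24 * 3600 * 1000;
var pairs = expandSeries(Date.UTC(2013, 0, 1), oneDay, [17.5, 16.2, 16.1]);
// The second point falls on 2 January 2013:
console.log(pairs[1][0] === Date.UTC(2013, 0, 2)); // true
```

Either shape can be passed as the data option; the pair format simply makes gaps (such as the missing 4 January reading in our second series) explicit.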
series: [{name: 'Temperature',pointInterval: 24 * 3600 * 1000,pointStart: Date.UTC(2013, 0, 01),data: [17.5, 16.2, 16.1, 16.1, 15.9, 15.8, 16.2],tooltip: {valueSuffix: ' °C'},yAxis: 0},{name: 'Electricity consumption',data: [[Date.UTC(2013, 0, 01), 8.1],[Date.UTC(2013, 0, 02), 6.2],[Date.UTC(2013, 0, 03), 7.3],[Date.UTC(2013, 0, 05), 7.1],[Date.UTC(2013, 0, 06), 12.3],[Date.UTC(2013, 0, 07), 10.2]],tooltip: {valueSuffix: ' KWh'},yAxis: 1}] You should have this as the final code:
$(function () {
    var chart = $('#myFirstChartContainer').highcharts({
        chart: {
            type: 'line',
            zoomType: 'x'
        },
        title: {
            text: 'Energy consumption linked to the temperature'
        },
        xAxis: {
            type: 'datetime',
            title: {
                text: null
            }
        },
        yAxis: [{
            title: {
                text: 'Temperature'
            },
            min: 0
        }, {
            title: {
                text: 'Electricity consumed'
            },
            opposite: true,
            min: 0
        }],
        tooltip: {
            crosshairs: true,
            shared: true
        },
        series: [{
            name: 'Temperature',
            pointInterval: 24 * 3600 * 1000,
            pointStart: Date.UTC(2013, 0, 01),
            data: [17.5, 16.2, 16.1, 16.1, 15.9, 15.8, 16.2],
            tooltip: {
                valueSuffix: ' °C'
            },
            yAxis: 0
        }, {
            name: 'Electricity consumption',
            data: [
                [Date.UTC(2013, 0, 01), 8.1],
                [Date.UTC(2013, 0, 02), 6.2],
                [Date.UTC(2013, 0, 03), 7.3],
                [Date.UTC(2013, 0, 05), 7.1],
                [Date.UTC(2013, 0, 06), 12.3],
                [Date.UTC(2013, 0, 07), 10.2]
            ],
            tooltip: {
                valueSuffix: ' KWh'
            },
            yAxis: 1
        }]
    });
});
You should have the expected result as shown in the following screenshot: Summary In this article, we learned how to perform a task with the most important features of Highcharts. We created a line chart with a time axis and two Y axes, and realized that there is a wide variety of things that you can do with it. We also learned about the most commonly performed tasks and most commonly used features in Highcharts. Resources for Article: Further resources on this subject: Converting tables into graphs (Advanced) [Article] Line, Area, and Scatter Charts [Article] Data sources for the Charts [Article]

Creating sheet objects and starting a new list using QlikView 11

Packt
20 Aug 2013
6 min read
(For more resources related to this topic, see here.) How it works... To add the list box for a company, right-click in the blank area of the sheet, and choose New Sheet Object | List Box as shown in the following screenshot: As you can see in the drop-down menu, there are multiple types of sheet objects to choose from such as List Box, Statistics Box, Chart, Input Box, Current Selections Box, Multi Box, Table Box, Button, Text Object, Line/Arrow Object, Slider/Calendar Object, and Bookmark Object. We will only cover a few of them in the course of this article. The Help menu and extended examples that are available on the QlikView website will allow you to explore ideas beyond the scope of this article. The Help documentation for any item can be obtained by using the Help menu present on the top menu bar. Choose the List Box sheet object to add the company dimension to our analysis. The New List Box wizard has eight tabs: General, Expressions, Sort, Presentation, Number, Font, Layout, and Caption, as shown in the following screenshot: Give the new List Box the title Company. The Object ID will be system generated. We choose the Company field from the fields available in the datafile that we loaded. We can check the Show Frequency box to show frequency in percent, which will only tell us how many account lines in October were loaded for each company. In the Expressions tab, we can add formulas for analyzing the data. Here, click on Add and choose Average. Since, we only have numerical data in the Amount field, we will use the Average aggregation for the Amount field. Don't forget to click on the Paste button to move your expression into the expression checker. The expression checker will tell you if the expression format is valid or if there is a syntax problem. If you forget to move your expression into the expression checker with the Paste button, the expression will not be saved and will not appear in your application. 
The Sort tab allows you to change the Sort criteria from text to numeric or dates. We will not change the Sort criteria here. The Presentation tab allows you to adjust things such as column or row header wrap, cell borders, and background pictures. The Number tab allows us to override the default format to tell the sheet to format the data as money, percentage, or date for example. We will use this tab on our table box currently labeled Sum(Amount) to format the amount as money after we have finished creating our new company list box. The Font tab lets us choose the font that we want to use, its display size, and whether to make our font bold. The Layout tab allows us to establish and apply themes, and format the appearance of the sheet object, in this case, the list box. The Caption tab further formats the sheet object and, in the case of the list box, allows you to choose the icons that will appear in the top menu of the list box so that we can use those icons to select and clear selections in our list box. In this example, we have selected search, select all, and clear. We can see that the percentage contribution to the amount and the average amount is displayed in our list box. Now, we need to edit our straight table sheet object along with the amount. Right-click on the straight table sheet object and choose Properties from the pop-up menu. In the General tab, give the table a suitable name. In this case, use Sum of Accounts. Then move over to the Number tab and choose Money for the number format. Click on Apply to immediately apply the number format, and click on OK to close the wizard. Now our straight table sheet object has easier to read dollar amounts. 
One of the things we notice immediately in our analysis is that we are out of balance by one dollar and fifty-nine cents, as shown in the following screenshot: We can analyze our data just using the list boxes, by selecting a company from the Company list and seeing which account groups and which cost centers are included (white) and which are excluded (gray). Our selected Company shows highlighted in green: By selecting Cheyenne Holding, we can see that it is indeed a holding company and has no manufacturing groups, sales accounting groups, or cost centers. Also the company is in balance. But what about a more graphic visual analysis? To create a chart to further visualize and analyze our data, we are going to create a new sheet object. This time we are going to create a bar chart so that we can see various company contributions to administrative costs or sales by the Acct.5 field, and the account number. Just as when we created the company list box, we right-click on the sheet and choose New Sheet Object | Chart. This opens the following Chart Properties wizard for us: We follow the steps through the chart wizard by giving the chart a name, selecting the chart type, and the dimensions we want to use. Again our expression is going to be SUM(Amount), but we will use the Label option and name it Total Amount in the Expression tab. We have selected the Company and Acct.5 dimensions in the Dimension tab, and we take the defaults for the rest of the wizard tabs. When we close the wizard, the new bar chart appears on our sheet, and we can continue our analysis. In the following screenshot, we have chosen Cheyenne Manufacturing for our Company and all Sales/COS Trade to Mexico Branch as Account Groups. These two selection then show us in our straight table the cost centers that are associated with sales/COS trade to Mexico branch. 
In our bar chart, we see the individual accounts associated with sales/COS trade to Mexico branch and Cheyenne Manufacturing, along with the related amounts posted for these accounts. Summary We created more sheet objects, starting with a new list box to begin analyzing our loaded data. We also added dimensions for analysis. Resources for Article: Further resources on this subject: Meet QlikView [Article] Linking Section Access to multiple dimensions [Article] Creating the first Circos diagram [Article]
Working with remote data

Packt
20 Aug 2013
4 min read
(For more resources related to this topic, see here.) Getting ready Create a new document in your editor. How to do it... Copy the following code into your new document: <!DOCTYPE html> <html> <head> <title>Kendo UI Grid How-to</title> <link rel="stylesheet" type="text/css" href="kendo/styles/kendo.common.min.css"> <link rel="stylesheet" type="text/css" href="kendo/styles/kendo.default.min.css"> <script src = "kendo/js/jquery.min.js"></script> <script src = "kendo/js/kendo.web.min.js"></script> </head> <body> <h3 style="color:#4f90ea;">Exercise 12- Working with Remote Data</h3> <p><a href="index.html">Home</a></p> <script type="text/javascript"> $(document).ready(function () { var serviceURL = "http://gonautilus.com/kendogen/KENDO.cfc?method="; var myDataSource = new kendo.data.DataSource({ transport: { read: { url: serviceURL + "getArt", dataType: "JSONP" } }, pageSize: 20, schema: { model: { id: "ARTISTID", fields: { ARTID: { type: "number" }, ARTISTID: { type: "number" }, ARTNAME: { type: "string" }, DESCRIPTION: { type: "CLOB" }, PRICE: { type: "decimal" }, LARGEIMAGE: { type: "string" }, MEDIAID: { type: "number" }, ISSOLD: { type: "boolean" } } } } } ); $("#myGrid").kendoGrid({ dataSource: myDataSource, pageable: true, sortable: true, columns: [ { field: "ARTID", title: "Art ID"}, { field: "ARTISTID", title: "Artist ID"}, { field: "ARTNAME", title: "Art Name"}, { field: "DESCRIPTION", title: "Description"}, { field: "PRICE", title: "Price", template: '#= kendo.toString(PRICE,"c") #'}, { field: "LARGEIMAGE", title: "Large Image"}, { field: "MEDIAID", title: "Media ID"}, { field: "ISSOLD", title: "Sold"}] } ); } ); </script> <div id="myGrid"></div> </body> </html> How it works... This example shows you how to access a JSONP remote datasource. JSONP allows you to work with cross-domain remote datasources. The JSONP format is like JSON except it adds padding, which is what the "P" in JSONP stands for. 
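To make the padding concrete, here is a small sketch of what a JSONP-producing server does (the callback name and data below are made up for illustration; in practice the name is generated by jQuery and passed on the query string):

```javascript
// Illustrative sketch of server-side JSONP padding: the JSON body is
// wrapped in a call to the client-supplied callback so that the browser
// can load and execute it as an ordinary script.
function padAsJsonp(callbackName, payload) {
  return callbackName + '(' + JSON.stringify(payload) + ');';
}

var body = padAsJsonp('jQuery19104', [{ ARTID: 1, ARTNAME: 'Sunset' }]);
console.log(body); // jQuery19104([{"ARTID":1,"ARTNAME":"Sunset"}]);
```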
The padding can be seen if you look at the result of the AJAX call being made by the Kendo Grid. The server simply responds with the callback argument that was passed, wrapping the JSON in parentheses. You'll notice that we created a serviceURL variable that points to the service we are calling to return our data. On line 19, you'll see that we are calling the getArt method and specifying the value of dataType as JSONP. Everything else should look familiar. There's more... Generally, the most common format used for remote data is JavaScript Object Notation (JSON). You'll find several examples of using OData on the Kendo UI demo website. You'll also find examples of performing create, update, and delete operations on that site. Outputting JSON with ASP MVC In an ASP MVC or ASP.NET application, you'll want to set up your datasource like the following example. ASP has certain security requirements that force you to use POST instead of the default GET request when making AJAX calls. ASP also requires that you explicitly define the value of contentType as application/json when requesting JSON. By default, when you create an ASP MVC service that has a JsonResult action, ASP will nest the JSON data in an element named d: var dataSource = new kendo.data.DataSource({ transport: { read: { type: "POST", url: serviceURL, dataType: "JSON", contentType: "application/json", data: serverData }, parameterMap: function (data, operation) { return kendo.stringify(data); } }, schema: { data: "d" } }); Summary This article discussed how to access remote JSONP datasources with the Kendo UI Grid. Resources for Article: Further resources on this subject: Constructing and Evaluating Your Design Solution [Article] Data Manipulation in Silverlight 4 Data Grid [Article] Quick start – creating your first grid [Article]

Creating a new forum

Packt
19 Aug 2013
6 min read
(For more resources related to this topic, see here.) In the WordPress Administration, click on New Forum, which is a subpage of the Forums menu item on the sidebar. You will be taken to a screen that is quite similar to a WordPress post creation page, but slightly different, with a few extra areas: If you are not familiar with the WordPress post creation page, the following is a list of the page's features: The Enter Title Here box The long box on the top of the page is your forum title. This, on the forum page, will be what is clicked on, and will also provide the basis for the forum's URL Slug with some changes, as URL Slugs generally have to be letters, numbers, and dashes. So, for example, if your forum title is My Product's Support Section, your Slug will probably be my-products-support-section. When you insert the forum title, the URL Slug will be generated below. However, if you wish to change it, click on the yellow highlighted section to change the Slug, and then click on OK. The Post box Beneath the title box is the post box. This should contain your forum description. This will be shown beneath your forum's name on the forum index page. You can add rich text to this, such as bold or italicized text, but my advice is to keep this short. One or two lines of text would suffice; otherwise it could make your forum look peculiar. Forum attributes Towards the right-hand side of the screen, you should see a Forum Attributes section. bbPress allows you to set a number of different attributes for your created forum. The attributes are explained in detail as follows: Forum type: Your forum can be one of two types: "Forum" or "Category". A Category is a section of the site that you cannot post in directly; instead, forums are grouped within it. So, for example, if you have forums for "Football", "Cricket", and "Athletics", you may group them into a "Sport" category. Unless you have a large forum with a number of different areas, you shouldn't need many categories.
Normally you would begin with a few forums, but then as your forums grow, you would introduce categories. If you create a category, any forum you create must be a subforum of the category. We will talk about creating subforums later in this article. Status: Your forum's status indicates whether other users can post in the forum. If the status is "Open", any user can post in the forum. If the forum is "Closed", nobody can contribute other than Keymasters. Unless one of your forums is a "Forum Rules" forum, you would probably keep all forums as Open. Visibility: bbPress allows three types of forum visibility. These, as the names suggest, decide who gets to see the forums. The three options are as follows: Public: This type allows anybody visiting the site to see the forum and its contents. Private: This type allows users who are logged in to view and contribute to the forum, but the forum is hidden from users that are not logged in or users that are blocked. Private forums are prefixed with the word "Private". Hidden: This type allows only Moderators and Keymasters to view the forum. Most sites will probably have the majority of their forums set to Public, but have selections that are Private or Hidden. Usually, having a Hidden forum to discuss forum matters with Administrators or Moderators is a good thing. You can have a private forum as well, which could help encourage people to register on the site. Parent: You can have subforums of forums. By giving a parent to a forum, you make it a subforum. For example, if you had a "Travel" forum, you could have subforums dedicated to "Europe", "Australia", and "Asia". Again, you will probably start with just a few forums, but over time, you will probably grow your forum to include subforums. Order: The Order field helps define the order in which your forums are listed. By default, or if unspecified, the order is always alphabetical.
However, if you give a number, then the order of the forums will be determined by the Order number, from smallest to largest. It is good to put important forums at the top, and less important forums towards the bottom of the page. It's a good idea to number your orders in multiples of 10, rather than 1, 2, 3, and so on. That way, if you want to add a forum to your site that will sit between two other forums, you can add it in with a number between the two multiples of 10, thus saving time. Now that you have set up a forum, click on publish, and congratulations, you should have a forum! Editing and deleting forums Forums are a community, and like all good communities, they evolve over time depending on their users' needs. As such, over time, you may need to restructure or delete forums. Luckily, this is easily done. First, click on Forums in the sidebar of the WordPress Administration. You should see a list of all the current forums you have on your site: If you hover over a forum, two options will appear: Edit, which will allow you to edit the forum. A screen similar to the New Forum page will appear, which will allow you to make changes to your forum. The second option is Trash, which will move your forum into Trash. After a while, it will be deleted from your site. When you click on Trash, you will trash everything associated with your forum (any topics, replies, or tags will be deleted). Be careful! Summary Right now, you should have a bustling forum, ably overseen by yourself and maybe even a couple of Moderators. Remember that all I have described so far has been how to use bbPress to manage your forum, and not how to manage your forum. Each forum will have its own rules and guidelines, and you will eventually learn how to manage your bbPress forum with more and more members joining in. A general rule of thumb, though, is to set out your rules at the start of your forum, welcome change, act quickly on violations, and most importantly, treat your users with respect.
After all, without users, you will have a very quiet forum. However, bbPress is a WordPress plugin, and is itself extensible: it can take advantage of plugins and themes, both those specifically designed for bbPress and those that work with WordPress generally. Resources for Article: Further resources on this subject: Getting Started with WordPress 3 [Article] How to Create an Image Gallery in WordPress 3 [Article] Integrating phpList 2 with WordPress [Article]