
How-To Tutorials - Web Development


State of Play of BuddyPress Themes

Packt
22 Oct 2013
13 min read
(For more resources related to this topic, see here.) What is BuddyPress? BuddyPress had its first release in April 2009 and is a plugin that you use with WordPress to bring community features to your site. BuddyPress is capable of so much, from connecting and bringing together an existing community through to building new communities. A few things you can create are: A community for your town or village An intranet for a company A safe community for students of a school to interact with each other A community around a product or event A support network for people with the same illness BuddyPress has a lot of different features; you can choose which you want to use. These include groups, streams, messaging, and member profiles. BuddyPress and WordPress are open source projects released under the GPL license. You can find out more about GPL here: http://codex.wordpress.org/License. A team of developers work on the project and anyone can get involved and contribute. As you use BuddyPress, you may want to get more involved in the project itself or find out more. There are a number of ways you can do this: The main site http://buddypress.org/ and the development blog at http://bpdevel.wordpress.com. For support and information there is http://buddypress.org/support/ and http://codex.buddypress.org. If you use IRC, you can use the dedicated channels on irc.freenode.net #buddypress or #buddypress-dev. The developer meeting is every Wednesday at 19:00 UTC in #buddypress-dev. IRC (Internet Relay Chat) is a form of real-time Internet text messaging (chat). You can find out more here: http://codex.wordpress.org/IRC. What is a theme? Your site theme can be thought of as the site design, but it's more than colors and fonts. Themes work by defining templates, which are then used for each section of your site (for instance, a front page or a single blog post). In the case of a BuddyPress theme, it brings BuddyPress templates and functionality on top of the normal WordPress theme. At the heart of any BuddyPress theme are the same files a WordPress theme needs; here are some useful WordPress theme resources: Wordpress.org theme handbook: http://make.wordpress.org/docs/theme-developer-handbook/ WordPress CSS coding standards: http://make.wordpress.org/core/handbook/coding-standards/css/ Theme check plugin (make sure you're using the WordPress standards): http://wordpress.org/extend/plugins/theme-check/ There is a great section in the WordPress codex on design and layout: http://codex.wordpress.org/Blog_Design_and_Layout How BuddyPress themes used to work BuddyPress in the past needed a theme to work. This theme had several variations; the latest of these was BP-Default, which is shown in the following screenshot: This is the default theme before BuddyPress 1.7 This theme was a workhorse solution giving everything a style, making sure you could be up and running with BuddyPress on your site. For a long time, the best way was to use this theme and create a child theme to then do what you wanted for your site. A child theme inherits the functionality of the parent theme. You can add, edit, and modify in the child theme without affecting the parent. This is a great way to build on the foundation of an existing theme and still be able to do updates and not affect the parent. For more information about a child theme see: http://codex.wordpress.org/Child_Themes. The trouble with default The BuddyPress default theme allowed people without much knowledge to set up BuddyPress, but it wasn't ideal. 
Fairly quickly, everything started to look the same and the learning curve to create your own theme was steep. People made child themes that often just looked more like clones. The fundamental issue above all was that it was a plugin that needed a theme and this wasn't the right way to do things. A plugin should work as best it can in any theme. There had been work done on bbPress to make the change in the theme compatibility and the time was right for BuddyPress to follow suit. bbPress is a plugin that allows you to easily create forums in your WordPress site and also integrate into BuddyPress. You can learn more about bbPress here: http://bbpress.org. Theme compatibility Theme compatibility in simple terms means that BuddyPress works with any theme. Everything that is required to make it work is included in the plugin. You can now go and get any theme from wordpress.org or other sites and be up and running in no time. Should you want to use your own custom template, it's really easy thanks to theme compatibility. So long as you give it the right name and place it in the right location in your theme, BuddyPress when loading will use your file instead of its own. Theme compatibility is always active as this allows BuddyPress to add new features in future versions. You can see this in the following screenshot: Here you see BuddyPress 1.7 in the Twenty Twelve theme Do you still need a BuddyPress theme? Probably by now you're asking yourself if you even need a theme for BuddyPress. With theme compatibility while, you don't have to, there are many reasons why you'd want to. A custom theme allows you to tailor the experience for your community. Different communities have different needs, one size doesn't fit all. Theme compatibility is great, but there is still a need for themes that focus on community and the other features of BuddyPress. A default experience will always be default and not tailored to your site or for BuddyPress. There are many things when creating a community theme that a normal WordPress theme won't have taken into account. This is what we mean when we nowadays refer to BuddyPress themes. There is still a need for these and a need to understand how to create them. Communities A community is a place for people to belong. Creating a community isn't something you should do lightly. The best advice when creating a community is to start small and build up functionality over time. Social networking on the other hand is purely about connections. Social networking is only part of a community. Most communities have far more than just social networking. When you are creating a community you want to focus on enabling your members to make strong connections. Niche communities BuddyPress is perfect for creating niche communities. These are pockets of people united by a cause, by a hobby, or by something else. However, don't think niche means small. Niche communities can be of any size. Having a home online for a community, a place where people of the same mindset, same experiences can unite, that's powerful. Niche communities can be forces for change, a place of comfort, give people a chance to be heard, or sometimes a place just to be. Techniques A community should encourage engagement and provide paths for users to easily accomplish tasks. You may provide areas, such as for user promotion, for user generated content, set achievements for completing tasks or allow users to collect points and have leaderboards. 
The idea of points and achievements comes from something called Gamification and is a powerful technique. Gamification techniques leverage people's natural desires for competition, achievement, status, self-expression, altruism and closure —Wikipedia (http://en.wikipedia.org/wiki/Gamification) Community is something you experience offline and online. When you are creating a community it's great to look at both. Think how groups of people interact, how they get tasks done together, and what hurdles they experience. Knowing a little bit about psychology will help you create a design that works for the community. Responsive design Responsive web design (RWD) is a web design approach aimed at crafting sites to provide an optimal viewing experience—easy reading and navigation with a minimum of resizing, panning, and scrolling—across a wide range of devices (from desktop computer monitors to mobile phones) —Wikipedia When you create your theme, you need to be careful to design it not only for one device. The world is multi-device and your theme should adapt to the device it's being used on without causing any issues. There are several options open to you when looking to create a theme that works across all devices. What about adaptive design? Responsive designs worked initially a lot by using media queries in CSS to create designs that adapted to CSS media queries. From this emerged the term adaptive design. There are many definitions of what responsive and adaptive mean. Adaptive loosely means progressive enhancement and responsive web design is part of this. You could say that responsive web design is the technique and adaptive design is the movement. A media query consists of a media type and zero or more expressions that check for the conditions of particular media features: http://www.w3.org/TR/css3-mediaqueries/. Mobile first Mobile first makes sure that the site works first for the smallest screen size. As you move up, styles are added using media queries to use the extra screen real estate. This was a phrase coined by Luke Wroblewski. This is a lighter approach and often preferred as it loads just what the device needs. Do you need an app? In the WordPress world, mobile themes tend to come in the form of plugins or apps. Historically, most WordPress themes had a mobile theme and this theme showed only to those accessing a mobile device. You were presented with a default theme that looked the same regardless of the site and there was often no way to see a non-mobile experience (a point of frustration on larger mobile devices). When thinking about mobile apps or plugins, a good rule of thumb is to ask if it's bringing something extra to your site. This could be focusing on one aspect, or making something mobile so devices have access to integrate. If they are simply changing the look of your site you can deal with that in your theme. You shouldn't need an app or plugin to get your site working on a mobile device. In the wild – BuddyPress custom themes There is a whole world out there of great BuddyPress examples, so let's take a little bit of time to get inspired and see what others have done. The following list shows some of the custom themes: Cuny Academic Commons: http://commons.gc.cuny.edu/. This is run by the City University of New York and is for supporting faculty initiatives along with building community. Enterprise Nation: http://enterprisenation.com. This site is a business club encouraging connections and sharing of information. Goed en wel.nl: http://goedenwel.nl. 
This is a community for 50-65 year olds and built around five areas of interest. Shift.ms: http://shift.ms. This is a community run by and for young people with multiple sclerosis. Teen Summer Challenge: http://teensummerchallenge.org. This site is part of an annual reading program for teens run by Pierce County Library. Trainerspace: http://www.trainerspace.com. This site is a place to go to find a trainer and those trainers can also have their own website. If you want to discover more great BuddyPress sites, BP Inspire is a great showcase site at http://www.bpinspire.com/. What are the options while creating a theme? There are several options available to you when creating a BuddyPress theme: Use an existing WordPress theme with theme compatibility Use an existing BuddyPress theme Create custom CSS with another theme Create custom templates – use in child theme or existing theme Create everything from custom Later, we will take a look at each of these and how each works. WordPress themes You can get free WordPress themes from the WordPress theme repository http://wordpress.org/extend/themes/ and also buy from many great theme shops. You can also use a theme framework. A framework goes beyond a theme and has a lot of extra tools to help you build your theme rolled in. You can learn more about theme frameworks here: http://codex.wordpress.org/Theme_Frameworks. BuddyPress themes You can get both free and paid BuddyPress themes if you want something a little bit more than theme compatibility. A BuddyPress theme, as we discussed earlier, brings something extra and it may bring that to all the features or just some. It is a theme designed to take your site beyond what theme compatibility brings. Free themes If you are looking for free themes your first place should be the theme repository on WordPress.org http://wordpress.org/extend/themes/. The theme repository page will look like the following screenshot: This is the WordPress.org theme repository where you can find BuddyPress themes You can find BuddyPress themes by searching for the word buddypress. Here you can see a range of themes. A few free themes you may want to consider are: Custom Community by svenl77: http://wordpress.org/extend/themes/custom-community. Fanwood by tungdo http://wordpress.org/extend/themes/fanwood. Frisco by David Carson: http://wordpress.org/extend/themes/frisco-for-buddypress. Status by buddypressthemers: http://wordpress.org/extend/themes/status. 3oneseven: http://3oneseven.com/buddypress-themes/. Infinity Theme Engine: http://infinity.presscrew.com. Commons in a box: http://commonsinabox.org. All the power of the CUNY site (mentioned earlier) with a theme engine. Themes to buy Buying a theme is a good way to get a lot of features from custom themes without the cost or learning involved in custom development. The downside of course is that you will not have a custom look, but if your budget and time is limited this may be a good option. When you buy a theme, be careful and as like with anything online, make sure you buy from a reputable source. 
There are a number of theme shops selling BuddyPress themes, including (not an exhaustive list):

- BuddyBoss: http://www.buddyboss.com
- Mojo Themes: http://www.mojo-themes.com
- Press Crew: http://shop.presscrew.com
- Theme Loom: http://themeloom.com/themes/pure-theme/
- Theme Forest: http://themeforest.net/category/wordpress/buddypress
- Themekraft: http://themekraft.com
- WPMU DEV: http://premium.wpmudev.org

Summary

As you can see from this article, the way things are done since the release of BuddyPress 1.7 is a little different from before, thanks to theme compatibility. This is great news, and it means there is no better time than now to start developing BuddyPress themes. In this article we skimmed over a lot of important issues that anyone making a theme has to consider. These matter just as much as knowing how to build: you should keep in mind the users you are creating for and what their experience will be. We all view sites on different devices, and we all need a similar experience, not one where we're penalized for our choice of device.

Resources for Article:

Further resources on this subject:
- Customizing WordPress Settings for SEO [Article]
- The Basics of WordPress and jQuery Plugin [Article]
- Anatomy of a WordPress Plugin [Article]


The Spotfire Architecture Overview

Packt
21 Oct 2013
6 min read
(For more resources related to this topic, see here.) The companies of today face innumerable market challenges due to the ferocious competition of a globalized economy. Hence, providing excellent service and having customer loyalty are the priorities for their survival. In order to best achieve both goals and have a competitive edge, companies can resort to the information generated by their digitalized systems, their IT. All the recorded events from Human Resources (HR) to Customer Relationship Management (CRM), Billing, and so on, can be leveraged to better understand the health of a business. The purpose of this article is to present a tool that behaves as a digital event analysis enabler, the TIBCO Spotfire platform. In this article, we will list the main characteristics of this platform, while also presenting its architecture and describing its components. TIBCO Spotfire Spotfire is a visual analytics and business intelligence platform from TIBCO software. It is a part of new breed of tools created to bridge the gap between the massive amount of data that the corporations produce today and the business users who need to interpret this data in order to have the best foundation for the decisions they make. In my opinion, there is no better description of what TIBCO Spotfire delivers than the definition of visual analytics made in the publication named Illuminating the Path: The Research and Development Agenda for Visual Analytics. "Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces". –Illuminating the Path: The Research and Development Agenda for Visual Analytics, James J. Thomas and Kristin A. Cook, IEEE Computer Society Press The TIBCO Spotfire platform offers the possibility of creating very powerful (yet easy to interpret and interact) data visualizations. From real-time Business Activity Monitoring (BAM) to Big Data, data-based decision making becomes easy – the what, the why, and the how becomes evident. Spotfire definitely allowed TIBCO to establish itself in an area where until recently it had very little experience and no sought-after products. The main features of this platform are: More than 30 different data sources to choose from: Several databases (including Big Data Teradata), web services, files, and legacy applications. Big Data analysis: Spotfire delivers the power of MapReduce to regular users. Database analysis: Data visualizations can be built on top of databases using information links. There is no need to pull the analyzed data into the platform, as a live link is maintained between Spotfire and the database. Visual join: Capability of merging data from several distinct sources into a single visualization. Rule-based visualizations: The platform enables the creation and tailoring of rules, and the filtering of data. These features facilitate emphasizing of outliers and foster management by exception. It is also possible to highlight other important features, such as commonalities and anomalies. Data drill-down: For data visualizations it is possible to create one (or many) details visualization(s). This drill-down can be performed in multiple steps as drill-down of drill-down of drill-down and so on. Real-time integration with other TIBCO tools. Spotfire platform 5.x The platform is composed of several intercommunicating components, each one with its own responsibilities (with clear separation of concerns) enabling a clustered deployment. 
As this is an introductory article, we will not dive deep into all the components, but we will identify the main ones and the underlying architecture. A depiction of the platform's components is shown in the following diagram. The descriptions of each of the components in the preceding diagram are as follows:

TIBCO Spotfire Server: The Spotfire server makes a set of services available to the analytics clients (TIBCO Spotfire Professional and TIBCO Spotfire Web Player Server):
- User services: Responsible for authentication and authorization
- Deployment services: Handles the consistent upgrade of Spotfire clients
- Library services: Manages the repository of analysis files
- Information services: Persists information links to external data sources
The server component is available for several operating systems, such as Linux, Solaris, and Windows.

TIBCO Spotfire Professional: This is a client application (fat client) that focuses on the creation of data visualizations, taking advantage of all of the platform's features. As the main client application, it enables all the data manipulation functionality, such as data filters, drill-down, working online and offline (working offline allows embedding data in the visualizations for use in limited-connectivity environments), and exporting visualizations to MS PowerPoint, PDF, and HTML. It is only available for Windows environments.

TIBCO Spotfire Web Player Server: This offers users the possibility of accessing and interacting with visualizations created in TIBCO Spotfire Professional. This web application enables the use of an Internet browser as a client, allowing thin-client access where no software has to be installed on the user's machine. Please be aware that visualizations cannot be created or altered this way; they can only be accessed in a read-only mode, where all rules remain enabled and data drill-down is still available. Since it is developed in ASP.NET, this server must be deployed on a Microsoft IIS server, and so it is restricted to Microsoft Windows environments.

Server Database: This database is accessed by the TIBCO Spotfire Server for storage of server information. It should not be confused with the data stores that the platform can access to fetch data from and build visualizations. Only two vendor databases are supported for this role: Oracle Database and Microsoft SQL Server.

TIBCO Spotfire Web Player Client: These are thin clients to the Web Player Server. Several Internet browsers can be used on various operating systems (Microsoft Internet Explorer on Windows, Mozilla Firefox on Windows and Mac OS, Google Chrome on Windows and Android, and so on). TIBCO has also made an application for iPad available in iTunes. For more details on the iPad client application, please navigate to: https://itunes.apple.com/en/app/spotfire-analytics/id417436823?mt=8

Summary

In this article, we introduced the main attributes of the Spotfire platform in the scope of visual analytics, and we detailed the platform's underlying architecture.

Resources for Article:

Further resources on this subject:
- Core Data iOS: Designing a Data Model and Building Data Objects [Article]
- Database/Data Model Round-Trip Engineering with MySQL [Article]
- Drilling Back to Source Data in Dynamics GP 2013 using Dashboards [Article]


Using OpenShift

Packt
21 Oct 2013
5 min read
(For more resources related to this topic, see here.) Each of these utilize the OpenShift REST API at the backend; therefore, as a user, we could potentially orchestrate OpenShift using the API with such common command-line utilities as curl to write scripts for automation. We could also use the API to write our own custom user interface, if we had the desire. In the following sections, we will explore using each of the currently supported user experiences, all of which can be intermixed as they communicate with the backend in a uniform fashion using the REST API previously mentioned. Getting started using OpenShift As discussed previously, we will be using the OpenShift Online free hosted service for example portions. OpenShift Online has the lowest barrier of entry from a user's perspective because we will not have to deploy our own OpenShift PaaS before being able to utilize it. Since we will be using the OpenShift Online service, the very first step is going to be to visit their website and sign up for a free account via https://openshift.redhat.com/app/account/new. New Account Form Once this step is complete, we will find an e-mail in our inbox that was provided during sign up, with a subject line similar to Confirm your Red Hat OpenShift account; inside that e-mail will be a URL that needs to be followed to complete the setup and verification step. Now that we've successfully completed the sign up phase, let's move on to exploring the different ways in which we can use and interact with OpenShift. Command-line utilities Due to the advancements in modern computing and the advent of mobile devices such as tablets, smart phones, and many other devices, we are often accustomed to Graphical User Interface (GUI) over Command-Line Interface (CLI) for most of our computing needs. This trend is heavier in the realm of web applications because of the rich visual experiences that can be delivered using next generation web technologies. However, those of us who are in the development and system administration circles of the world are no strangers to the CLI, and we know that it is often the most powerful way to accomplish an array of tasks pertaining to development and administration. Much of this is a credit to powerful shell environments that have their roots in traditional UNIX environments; popular examples of these are bash and zsh. Also, in more recent years, PowerShell for the Microsoft Windows platform has aimed to provide some relatively similar CLI power. The shell, as it is referenced here, is that of a UNIX shell, which is a command interpreter that supports such features as variables, functions, pipes, I/O redirection, variable substitution, flow control, conditionals, the ability to be scripted, and more. There is also a POSIX standard for a shell that defines a standard set of features and behaviors that must be complied with, allowing for portability of complex scripts. With this inherent power at the fingertips of the person who wields the command line, the development team of the OpenShift PaaS has written a command-line utility, much in the spirit of offering powerful utilities to its users and developers. 
Before we get too deep into the details, let's quickly look at what a normal application creation and deployment requires in OpenShift, using the following commands:

    $ rhc app create myawesomewebapp ruby-1.9
    $ cd myawesomewebapp
    (Write, create, and implement code changes)
    $ git commit -a -m "wrote awesome code"
    $ git push

It will be discussed at length shortly, but for a quick rundown, the rhc app create myawesomewebapp ruby-1.9 command creates an application that runs on OpenShift using ruby-1.9 as the programming platform. Behind the scenes, it provisions space and resources and configures services for us. It also creates a git repository that is then cloned locally—in our example named myawesomewebapp—and in order to access this, we need to change directories into the git repository. That is precisely what the next command, cd myawesomewebapp, does. After committing and pushing your changes, you're live, running your web application in the cloud. While this is an extremely high-level overview and there are some prerequisites necessary, normal use of OpenShift is that easy. In the following section, we will discuss at length all the steps necessary to launch a live application in OpenShift Online using the rhc command-line utility and git.

The rhc command-line utility is written in the Ruby programming language and is distributed as a RubyGem (https://rubygems.org/). This is the recommended method of installation for Ruby modules, libraries, and utilities, due to the platform-independent nature of Ruby and the ease of distribution of gems. The rhc utility is also available through the native package management for both Fedora and Red Hat Enterprise Linux (via the EPEL repository, available at https://fedoraproject.org/wiki/EPEL) by running the yum install rubygem-rhc command.

Another noteworthy property of RubyGems is that they can be installed to a user's home directory on their local machine, allowing them to be used even in environments where systems are centrally managed by an IT department. RubyGems are installed using the gem package manager, so users of GNU/Linux package managers such as yum, apt-get, and pacman, or of Mac OS X's community homebrew (brew) package manager, will be familiar with the concept. For those unfamiliar with these concepts, a package manager tracks a piece of software, called a "package", along with its dependencies, and handles installation, updates, and removal. We took this short tangent into the topic of RubyGems before moving on to the command-line utility for OpenShift to ensure that we don't leave out any background information.

Summary

Hopefully, we can now select our preferred method of deploying on OpenShift, and developers of all backgrounds, preferences, and development platforms will feel at home working with OpenShift as a development and deployment platform.

Resources for Article:

Further resources on this subject:
- What is Oracle Public Cloud? [Article]
- Features of CloudFlare [Article]
- vCloud Networks [Article]


Integrating typeahead.js into WordPress and Ruby on Rails

Packt
17 Oct 2013
6 min read
(For more resources related to this topic, see here.)

Integrating typeahead.js into WordPress (Become an expert)

WordPress is an incredibly well-known and well-used open source blogging platform, and it is almost fully featured, except of course for the ability to have a typeahead-style lookup on your site! In this article we are going to fix that.

Getting ready

In order to create this, we are first going to need a working WordPress install. WordPress runs off a LAMP stack, so if you haven't got one of those running locally you will need to set this up. Once set up, you can download WordPress from http://wordpress.org/, extract the files, place them in your localhost, and visit http://localhost/install/. This will then guide you through the rest of the install process. Now we should be ready to get typeahead.js working with WordPress.

How to do it...

Like so many things in WordPress, when it comes to adding new functionality there is probably already a plugin, and in our case there is one made by Kyle Reicks that can be found at https://github.com/kylereicks/typeahead.js.wp.

1. Download the code and add the folder it downloads to /wp-content/plugins/.
2. Log into our administration panel at http://localhost/wp-admin/ and go to the Plugins section. You will see an option to activate our new plugin, so activate it now.
3. Once activated, under plugins you will now have access to typeahead Settings. In here you can set up what type of things you want typeahead to be used for; pick posts, tags, pages, and categories.

How it works...

This plugin hijacks the default search form that WordPress uses out of the box and adds the typeahead functionality to it. For each of the post types that you have associated with the typeahead plugin, it will create a JSON file, with each JSON file representing a different dataset and getting loaded in with prefetch.

There's more...

The plugin is a great first start, but there is plenty that could be done to improve it. For example, by editing /js/typeahead-activation.js we could change the number of values that get returned by our typeahead search:

    if(typeahead.datasets.length){
      typeahead.data = [];
      for(i = 0, arrayLength = typeahead.datasets.length; i < arrayLength; i++){
        typeahead.data[i] = {
          name: typeahead.datasets[i],
          prefetch: typeahead.dataUrl + '?data=' + typeahead.datasets[i],
          limit: 10
        };
      }
      jQuery(document).ready(function($){
        $('#searchform input[type=text], #searchform input[type=search]').typeahead(typeahead.data);
      });
    }

Integrating typeahead.js into Ruby on Rails (Become an expert)

Ruby on Rails has become one of the most popular frameworks for developing web applications, and it comes as little surprise that Rails developers would like to be able to harness the power of typeahead.js. In this recipe we will look at how you can quickly get up and running with typeahead.js in your Rails project.

Getting ready

Ruby on Rails is an open source web application framework for the Ruby language. It famously champions the idea of convention over configuration, which is one of the reasons it has been so widely adopted. Obviously, in order to do this we will need a Rails application. Setting up Ruby on Rails is an entire article in itself, but if you follow the guides on http://rubyonrails.org/, you should be able to get up and running quickly with your chosen setup. We will start from the point that both Ruby and Ruby on Rails have been installed and set up correctly.
We will also be using a Gem made by Yousef Ourabi, which has the typeahead.js functionality we need. We can find it at https://github.com/yourabi/twitter-typeahead-rails.

How to do it...

1. The first thing we will need is a Rails project, and we can create one of these by typing:

    rails new typeahead_rails

2. This will generate the basic Rails application for us, and one of the files it generates is the Gemfile, which we need to edit to include our new Gem:

    source 'https://rubygems.org'

    gem 'rails', '3.2.13'
    gem 'sqlite3'
    gem 'json'

    group :assets do
      gem 'sass-rails',   '~> 3.2.3'
      gem 'coffee-rails', '~> 3.2.1'
      gem 'uglifier', '>= 1.0.3'
    end

    gem 'jquery-rails'
    gem 'twitter-typeahead-rails'

3. With this change made, we need to reinstall our Gems:

    bundle install

4. We will now have the required files, but before we can access them we need to add a reference to them in our manifest file. We do this by editing the manifest in app/assets/javascripts and adding a reference to typeahead.js:

    //= require jquery
    //= require jquery_ujs
    //= require_tree
    //= require twitter/typeahead

5. Of course, we need a page to try this out on, so let's have Rails make us one:

    rails generate controller Pages home

6. One of the files generated by the above command will be found in app/views/pages/home.html.erb. Let's edit this now:

    <label for="friends">Pick Your Friend</label>
    <input type="text" name="friends" />
    <script>
      $('input').typeahead({
        name: 'people',
        local: ['Elaine', 'Column', 'Kirsty', 'Chris Elder']
      });
    </script>

7. Finally, we will start up a web server to be able to view what we have accomplished:

    rails s

And now if we go to localhost:3000/pages/home we should see something very much like a working typeahead on our input.

How it works...

The Gem we installed brings together the required JavaScript files that we would normally need to include manually, allowing them to be accessed from our manifest file, which will load all the mentioned JavaScript on every page.

There's more...

Of course, we don't need to use a Gem to install typeahead functionality; we could have manually copied the code into a file called typeahead.js that sat inside app/assets/javascripts/twitter/, and this would have been accessible to the manifest file too and produced the same functionality. This would mean one less dependency on a Gem, which in my opinion is always a good thing, although this isn't necessarily the Rails way, which is why I didn't lead with it.

Summary

In this article, we explained the functionality of WordPress, which is probably the biggest open source blogging platform in the world right now, and it is pretty feature complete. One thing the search doesn't have, though, is good typeahead functionality. In this article we learned how to change that by incorporating a WordPress plugin that gives us this functionality out of the box. It also discussed how Ruby on Rails is fast becoming the framework of choice among developers wanting to build web applications fast, along with the out-of-the-box benefits of using Ruby on Rails. Using Ruby gives you access to a host of excellent resources in the form of Gems. In this article we had a look at one Gem that gives us typeahead.js functionality in our Ruby on Rails project.

Resources for Article:

Further resources on this subject:
- Customizing WordPress Settings for SEO [Article]
- Getting Started with WordPress 3 [Article]
- Building tiny Web-applications in Ruby using Sinatra [Article]
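As a closing note to both recipes, it can help to see the client-side pattern they share on its own. The snippet below is a minimal, self-contained sketch of wiring typeahead.js (the 0.9-era API used throughout this article) to an input, combining one prefetched dataset and one local dataset; the #site-search selector and the /search/posts.json URL are illustrative placeholders, not part of either the WordPress plugin or the Rails gem.

```javascript
// Minimal sketch of the typeahead.js 0.9-style API used in both recipes.
// Assumes jQuery and typeahead.js are already loaded on the page;
// '#site-search' and '/search/posts.json' are hypothetical examples.
$(function () {
  $('#site-search').typeahead([
    {
      name: 'posts',                   // each dataset needs a unique name
      prefetch: '/search/posts.json',  // JSON array of suggestions, fetched once and cached
      limit: 10                        // cap the number of suggestions shown
    },
    {
      name: 'people',
      local: ['Elaine', 'Column', 'Kirsty', 'Chris Elder'] // inline suggestions, as in the Rails example
    }
  ]);
});
```

Whether the suggestions come from the WordPress plugin's generated JSON files or a Rails endpoint, the dataset definitions are the only part that changes; the call on the input stays the same.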


Getting Started with Twitter Flight

Packt
16 Oct 2013
6 min read
(For more resources related to this topic, see here.) Simplicity First and foremost, Flight is simple. Most frameworks provide layouts, data models, and utilities that tend to produce big and confusing APIs. Learning curves are steep. In contrast, you'll probably only use 10 Flight methods ever, and three of those are almost identical. All components and mixins follow the same simple format. Once you've learned one, you've learned them all. Simplicity means fast ramp-up times for new developers who should be able to come to understand individual components quickly. Efficient complexity management In most frameworks, the complexity of the code increases almost exponentially with the number of features. Dependency diagrams often look like a set of trees, each with branches and roots intermingling to create a dense thicket of connections. A simple change in one place could easily create an unforeseen error in another or a chain of failures that could easily take down the whole application. Flight applications are instead built up from reliable, reusable artifacts known as components. Each component knows nothing of the rest of the application, it simply listens for and triggers events. Components behave like cells in an organism. They have well-defined input and output, are exhaustively testable, and are loosely coupled. A component's cellular nature means that introducing more components has almost no effect on the overall complexity of the code, unlike traditional architectures. The structure remains flat, without any spaghetti code. This is particularly important in large applications. Complexity management is important in any application, but when you're dealing with hundreds or thousands of components, you need to know that they aren't going to have unforeseen knock-on effects. This flat, cellular structure also makes Flight well-suited to large projects with large or remote development teams. Each developer can work on an independent component, without first having to understand the architecture of the entire application. Reusability Flight components have well-defined interfaces and are loosely coupled, making it easy to reuse them within an application, and even across different applications. This separates Flight from other frameworks such as Backbone or AngularJS, where functionality is buried inside layers of complexity and is usually impossible to extract. Not only does this make it easier and faster to build complex applications in Flight but it also offers developers the opportunity to give back to the community. There are already a lot of useful Flight components and mixins being open sourced. Try searching for "flight-" on Bower or GitHub, or check out the list at http://flight-components.jit.su/. Twitter has already been taking advantage of this reusability factor within the company, sharing components such as Typeahead (Twitter's search autocomplete) between Twitter.com and TweetDeck, something which would have been unimaginable a year ago. Agnostic architecture Flight has agnostic architecture. For example, it doesn't matter which templating system is used, or even if one is used at all. Server-side, client-side, or plain old static HTML are all the same to Flight. Flight does not impose a data model, so any method of data storage and processing can be used behind the scenes. This gives the developer freedom to change all aspects of the stack without affecting the UI and the ability to introduce Flight to an existing application without conflict. 
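To make the small-API and event-driven claims concrete, here is a rough sketch of what a Flight component might look like, assuming the v1-era AMD style (defineComponent, defaultAttrs, after('initialize'), on, trigger, and attachTo); the component name, selectors, and event names are invented for illustration, not taken from Twitter's code.

```javascript
// A rough sketch of a Flight component (v1-era AMD style); names are illustrative.
define(['flight/lib/component'], function (defineComponent) {

  function tweetBox() {
    // default attributes can be overridden when the component is attached
    this.defaultAttrs({
      buttonSelector: '.send'
    });

    this.sendTweet = function (e) {
      // components never call each other directly; they trigger events
      this.trigger('uiSendTweet', { text: this.$node.find('textarea').val() });
    };

    this.after('initialize', function () {
      // delegate the click on the button selector to our handler
      this.on('click', { buttonSelector: this.sendTweet });
    });
  }

  return defineComponent(tweetBox);
});

// Elsewhere, the component is attached to an existing DOM node, for example:
// TweetBox.attachTo('#new-tweet');
```

Any other component interested in the tweet simply listens for uiSendTweet on the document or a shared node; neither component needs a reference to the other, which is exactly the loose coupling described above.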
Improved Performance Performance is about a lot more than how fast the code executes, or how efficient it is with memory. Time to first load is a very important factor. When a user loads a web page, the request must be processed, data must be gathered, and a response will be sent. The response is then rendered by the client. Server-side processing and data gathering is fast. Latency and interpretation makes rendering slow. One of the largest factors in response and rendering speed is the sheer amount of code being sent over the wire. The more code required to render a page, the slower the rendering will be. Most modern JavaScript frameworks use deferred loading (for example, via RequireJS) to reduce the amount of code sent in the first response. However, all this code is needed to be able to render a page, because layout and templating systems only exist on the client. Flight's architecture allows templates to be compiled and rendered on the server, so the first response is a fully-formed web page. Flight components can then be attached to existing DOM nodes and determine their state from the HTML, rather than having to request data over XMLHttpRequest (XHR) and generate the HTML themselves. Well-organized freedom Back in the good old days of JavaScript development, it was all a bit of a free for all. Everyone had their own way of doing things. Code was idiosyncratic rather than idiomatic. In a lot of ways, this was a pain, that is, it was hard to onboard new developers and still harder to keep a codebase consistent and well-organized. On the other hand, there was very little boilerplate code and it was possible to get things done without having to first read lengthy documentation on a big API. jQuery built on this ideal, reduced the amount of boilerplate code required. It made JavaScript code easier to read and write, while not imposing any particular requirements in terms of architecture. What jQuery failed to do (and was never intended to do) was provide an application-level structure. It remained all too easy for code to become a spaghetti mess of callbacks and anonymous functions. Flight solves this problem by providing much needed structure while maintaining a simple, architecture-agnostic approach. Its API empowers developers, helping them to create elegant and well-ordered code, while retaining a sense of freedom. Put simply, it's a joy to use. Summary The Flight API is small and should be familiar to any jQuery user, producing a shallow learning-curve. The atomic nature of components makes them reliable and reusable, and creates truly scalable applications, while its agnostic architecture allows developers to use any technology stack and even introduce Flight into existing applications. Resources for Article: Further resources on this subject: Integrating Twitter and YouTube with MediaWiki [Article] Integrating Twitter with Magento [Article] Drupal Web Services: Twitter and Drupal [Article]


Learning to add dependencies

Packt
15 Oct 2013
6 min read
(For more resources related to this topic, see here.)

Adding dependencies

We suddenly realize that, for our purposes, we will also need a function in our simplemath library that calculates the greatest common divisor (gcd) of two numbers. It will take some effort to write a well-tested gcd function, so why not look around to see if someone has already implemented it? Go to https://npmjs.org and search for gcd. You will get scores of results; you may find lots of node modules solving the same problem.

It is often difficult to choose between seemingly identical node modules. In such situations, check out the credentials of the developer(s) who maintain the project. Compare the number of times each module was downloaded by users. You can get this information on the package's page on npmjs at https://npmjs.org/package/<pkg-name>. You can also check out the repository where the project is hosted, or the home page of the project. You will get this information on the npmjs home page of the module; if it isn't available, this probably isn't the module you want to use. If, however, it is available, check out the number of people who have starred the project on GitHub, the number of people who have forked it, and the active developers contributing to the project. Perhaps check out and run the test cases, or dive into the source code. If you are betting heavily on a node module which isn't actively maintained by reputed developers, or which isn't well tested, you might be setting yourself up for a major rework in the future.

While searching for the gcd keyword on the npmjs website, we come to know that there is a node module named mathutils (https://npmjs.org/package/mathutils) that provides an implementation of gcd. We don't want to write our own implementation, especially after knowing that someone somewhere in the node community has already solved that problem and published the JavaScript code. Now we want to be able to reuse that code from within our library. This use case is a little contrived, and it is overkill to include an external library for a task as simple as calculating the GCD, which is, as a matter of fact, very few lines of code and popular enough to be found easily; it is used here purely for the purpose of illustration.

We can do so very easily. Again, the npm command line will help us reduce the number of steps:

    $ npm install mathutils --save

We have asked npm to install mathutils, and the --save flag saves it as a dependency in our package.json file. The mathutils library is downloaded into the node_modules folder inside our project. Our new package.json file looks like this:

    {
      "name": "simplemath",
      "version": "0.0.1",
      "description": "A simple math library",
      "main": "index.js",
      "dependencies": {
        "mathutils": "0.0.1"
      },
      "devDependencies": {},
      "scripts": {
        "test": "test"
      },
      "repository": "",
      "keywords": [
        "math",
        "mathematics",
        "simple"
      ],
      "author": "yourname <[email protected]>",
      "license": "BSD"
    }

And thus, mathutils is ready for us to use as we please. Let's proceed to make use of it in our library.

1. Add the test case. Add the following code, which tests the gcd function, to the end of the tests.js file, before console.info:

    assert.equal( simplemath.gcd(12, 8), 4 );
    console.log("GCD works correctly");

2. Glue the gcd function from mathutils to simplemath in index.js, loading mathutils at the top of the file:
    var mathutils = require("mathutils"); // to load mathutils
    var simplemath = require("./lib/constants.js");
    simplemath.sum = require("./lib/sum.js");
    simplemath.subtract = require("./lib/subtract.js");
    simplemath.multiply = require("./lib/multiply.js");
    simplemath.divide = require("./lib/divide.js");
    simplemath.gcd = mathutils.gcd; // Assign gcd
    module.exports = simplemath;

We have imported the mathutils library in our index.js and assigned the gcd function from the mathutils library to the simplemath property of the same name. Let's test it out. Since our package.json is aware of the test script, we can delegate it to npm:

    $ npm test
    …
    All tests passed successfully

Thus we have successfully added a dependency to our project.

The node_modules folder

We do not want to litter our node.js application directory with code from external libraries or packages that we want to use, so npm provides a way of keeping our application code and third-party libraries or node modules in separate directories. That is what the node_modules folder is for: code for any third-party modules goes into this folder. From the node.js documentation (http://nodejs.org/api/modules.html):

If the module identifier passed to require() is not a native module, and does not begin with '/', '../', or './', then node starts at the parent directory of the current module, and adds /node_modules, and attempts to load the module from that location. If it is not found there, then it moves to the parent directory, and so on, until the root of the tree is reached. For example, if the file at '/home/ry/projects/foo.js' called require('bar.js'), then node would look in the following locations, in this order:

    /home/ry/projects/node_modules/bar.js
    /home/ry/node_modules/bar.js
    /home/node_modules/bar.js
    /node_modules/bar.js

This allows programs to localize their dependencies, so that they do not clash.

Whenever we run the npm install command, the packages get stored in the node_modules folder inside the directory in which the command was issued. Each module might have its own set of dependencies, which are then installed inside the node_modules folder of that module. So, in effect, we obtain a dependency tree with each module having its dependencies installed in its own folder. Imagine two modules on which your code depends using different versions of a third module. Having dependencies installed in their own folders, together with the fact that require will look into the innermost node_modules folder first, affords a kind of safety that very few platforms are able to provide: each module can have its own version of the same dependency. Thus node.js tactfully avoids the dependency hell that most of its peers haven't been able to escape so far.

Summary

In this article you learned how to install node.js and npm, modularize your code into different files and folders, create your own node modules and add them to the npm registry, and configure the local npm installation to provide some of your own convenient defaults.

Resources for Article:

Further resources on this subject:
- An Overview of the Node Package Manager [Article]
- Understanding and Developing Node Modules [Article]
- Setting up Node [Article]
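As a closing aside: if you decided the dependency really was overkill here, the gcd function itself is only a few lines of JavaScript. The following is a sketch of what a home-grown drop-in might look like, using Euclid's algorithm; it is not the mathutils module's code, just an illustration of the trade-off between writing and reusing code.

```javascript
// A minimal, illustrative gcd implementation (Euclid's algorithm) --
// not the mathutils module's code, just what a local alternative could look like.
function gcd(a, b) {
  a = Math.abs(a);
  b = Math.abs(b);
  while (b !== 0) {
    var t = b;
    b = a % b;  // replace (a, b) with (b, a mod b) until the remainder is zero
    a = t;
  }
  return a;
}

// It would plug into the library the same way the dependency does:
// simplemath.gcd = gcd;        // instead of mathutils.gcd
// console.log(gcd(12, 8));     // 4, matching the assertion in tests.js
```

For anything less trivial, though, the article's point stands: a well-tested, actively maintained module is usually the better bet than rolling your own.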

Configuring Manage Out to DirectAccess Clients

Packt
14 Oct 2013
14 min read
In this article by Jordan Krause, the author of the book Microsoft DirectAccess Best Practices and Troubleshooting, we will have a look at how Manage Out is configured to DirectAccess clients. DirectAccess is obviously a wonderful technology from the user's perspective. There is literally nothing that they have to do to connect to company resources; it just happens automatically whenever they have Internet access. What isn't talked about nearly as often is the fact that DirectAccess is possibly of even greater benefit to the IT department. Because DirectAccess is so seamless and automatic, your Group Policy settings, patches, scripts, and everything that you want to use to manage and manipulate those client machines is always able to run. You no longer have to wait for the user to launch a VPN or come into the office for their computer to be secured with the latest policies. You no longer have to worry about laptops being off the network for weeks at a time, and coming back into the network after having been connected to dozens of public hotspots while someone was on a vacation with it. While many of these management functions work right out of the box with a standard DirectAccess configuration, there are some functions that will need a couple of extra steps to get them working properly. That is our topic of discussion for this article. We are going to cover the following topics: Pulls versus pushes What does Manage Out have to do with IPv6 Creating a selective ISATAP environment Setting up client-side firewall rules RDP to a DirectAccess client No ISATAP with multisite DirectAccess (For more resources related to this topic, see here.) Pulls versus pushes Often when thinking about management functions, we think of them as the software or settings that are being pushed out to the client computers. This is actually not true in many cases. A lot of management tools are initiated on the client side, and so their method of distributing these settings and patches are actually client pulls. A pull is a request that has been initiated by the client, and in this case, the server is simply responding to that request. In the DirectAccess world, this kind of request is handled very differently than an actual push, which would be any case where the internal server or resource is creating the initial outbound communication with the client, a true outbound initiation of packets. Pulls typically work just fine over DirectAccess right away. For example, Group Policy processing is initiated by the client. When a laptop decides that it's time for a Group Policy refresh, it reaches out to Active Directory and says "Hey AD, give me my latest stuff". The Domain Controllers then simply reply to that request, and the settings are pulled down successfully. This works all day, every day over DirectAccess. Pushes, on the other hand, require some special considerations. This scenario is what we commonly refer to as DirectAccess Manage Out, and this does not work by default in a stock DirectAccess implementation. What does Manage Out have to do with IPv6? IPv6 is essentially the reason why Manage Out does not work until we make some additions to your network. "But DirectAccess in Server 2012 handles all of my IPv6 to IPv4 translations so that my internal network can be all IPv4, right?" The answer to that question is yes, but those translations only work in one direction. 
For example, in our previous Group Policy processing scenario, the client computer calls out for Active Directory, and those packets traverse the DirectAccess tunnels using IPv6. When the packets get to the DirectAccess server, it sees that the Domain Controller is IPv4, so it translates those IPv6 packets into IPv4 and sends them on their merry way. Domain Controller receives the said IPv4 packets, and responds in kind back to the DirectAccess server. Since there is now an active thread and translation running on the DirectAccess server for that communication, it knows that it has to take those IPv4 packets coming back (as a response) from the Domain Controller and spin them back into IPv6 before sending across the IPsec tunnels back to the client. This all works wonderfully and there is absolutely no configuration that you need to do to accomplish this behavior. However, what if you wanted to send packets out to a DirectAccess client computer from inside the network? One of the best examples of this is a Helpdesk computer trying to RDP into a DirectAccess-connected computer. To accomplish this, the Helpdesk computer needs to have IPv6 routability to the DirectAccess client computer. Let's walk through the flow of packets to see why this is necessary. First of all, if you have been using DirectAccess for any duration of time, you might have realized by now that the client computers register themselves in DNS with their DirectAccess IP addresses when they connect. This is a normal behavior, but what may not look "normal" to you is that those records they are registering are AAAA (IPv6) records. Remember, all DirectAccess traffic across the internet is IPv6, using one of these three transition technologies to carry the packets: 6to4, Teredo, or IP-HTTPS. Therefore, when the clients connect, whatever transition tunnel is established has an IPv6 address on the adapter (you can see it inside ipconfig /all on the client), and those addresses will register themselves in DNS, assuming your DNS is configured to allow it. When the Helpdesk personnel types CLIENT1 in their RDP client software and clicks on connect, it is going to reach out to DNS and ask, "What is the IP address of CLIENT1?" One of two things is going to happen. If that Helpdesk computer is connected to an IPv4-only network, it is obviously only capable of transmitting IPv4 packets, and DNS will hand them the DirectAccess client computer's A (IPv4) record, from the last time the client was connected inside the office. Routing will fail, of course, because CLIENT1 is no longer sitting on the physical network. The following screenshot is an example of pinging a DirectAccess connected client computer from an IPv4 network: How to resolve this behavior? We need to give that Helpdesk computer some form of IPv6 connectivity on the network. If you have a real, native IPv6 network running already, you can simply tap into it. Each of your internal machines that need this outbound routing, as well as the DirectAccess server or servers, all need to be connected to this network for it to work. However, I find out in the field that almost nobody is running any kind of IPv6 on their internal networks, and they really aren't interested in starting now. This is where the Intra-Site Automatic Tunnel Addressing Protocol, more commonly referred to as ISATAP, comes into play. You can think of ISATAP as a virtual IPv6 cloud that runs on top of your existing IPv4 network. 
It enables computers inside your network, like that Helpdesk machine, to be able to establish an IPv6 connection with an ISATAP router. When this happens, that Helpdesk machine will get a new network adapter, visible via ipconfig / all, named ISATAP, and it will have an IPv6 address. Yes, this does mean that the Helpdesk computer, or any machine that needs outbound communications to the DirectAccess clients, has to be capable of talking IPv6, so this typically means that those machines must be Windows 7, Windows 8, Server 2008, or Server 2012. What if your switches and routers are not capable of IPv6? No problem. Similar to the way that 6to4, Teredo, and IP-HTTPS take IPv6 packets and wrap them inside IPv4 so they can make their way across the IPv4 internet, ISATAP also takes IPv6 packets and encapsulates them inside IPv4 before sending them across the physical network. This means that you can establish this ISATAP IPv6 network on top of your IPv4 network, without needing to make any infrastructure changes at all. So, now I need to go purchase an ISATAP router to make this happen? No, this is the best part. Your DirectAccess server is already an ISATAP router; you simply need to point those internal machines at it. Creating a selective ISATAP environment All of the Windows operating systems over the past few years have ISATAP client functionality built right in. This has been the case since Vista, I believe, but I have yet to encounter anyone using Vista in a corporate environment, so for the sake of our discussion, we are generally talking about Windows 7, Windows 8, Server 2008, and Server 2012. For any of these operating systems, out of the box all you have to do is give it somewhere to resolve the name ISATAP, and it will go ahead and set itself up with a connection to that ISATAP router. So, if you wanted to immediately enable all of your internal machines that were ISATAP capable to suddenly be ISATAP connected, all you would have to do is create a single host record in DNS named ISATAP and point it at the internal IP address of your DirectAccess server. To get that to work properly, you would also have to tweak DNS so that ISATAP was no longer part of the global query block list, but I'm not even going to detail that process because my emphasis here is that you should not set up your environment this way. Unfortunately, some of the step-by-step guides that are available on the web for setting up DirectAccess include this step. Even more unfortunately, if you have ever worked with UAG DirectAccess, you'll remember on the IP address configuration screen that the GUI actually told you to go ahead and set ISATAP up this way. Please do not create a DNS host record named ISATAP! If you have already done so, please consider this article to be a guide on your way out of danger. The primary reason why you should stay away from doing this is because Windows prefers IPv6 over IPv4. Once a computer is setup with connection to an ISATAP router, it receives an IPv6 address which registers itself in DNS, and from that point onward whenever two ISATAP machines communicate with each other, they are using IPv6 over the ISATAP tunnel. This is potentially problematic for a couple of reasons. First, all ISATAP traffic default routes through the ISATAP router for which it is configured, so your DirectAccess server is now essentially the default gateway for all of these internal computers. This can cause performance problems and even network flooding. 
The second reason is that because these packets are now IPv6, even though they are wrapped up inside IPv4, the tools you have inside the network that you might be using to monitor internal traffic are not going to be able to see this traffic, at least not in the same capacity as it would do for normal IPv4 traffic. It is in your best interests that you do not follow this global approach for implementing ISATAP, and instead take the slightly longer road and create what I call a "Selective ISATAP environment", where you have complete control over which machines are connected to the ISATAP network, and which ones are not. Many DirectAccess installs don't require ISATAP at all. Remember, this is only used for those instances where you need true outbound reach to the DirectAccess clients. I recommend installing DirectAccess without ISATAP first, and test all of your management tools. If they work without ISATAP, great! If they don't, then you can create your selective ISATAP environment. To set ourselves up for success, we need to create a simple Active Directory security group, and a GPO. The combination of these things is going to allow us to decisively grant or deny access to the ISATAP routing features on the DirectAccess server. Creating a security group and DNS record First, let's create a new security group in Active Directory. This is just a normal group like any other, typically a global or universal, whichever you prefer. This group is going to contain the computer accounts of the internal computers to which we want to give that outbound reaching capability. So, typically the computer accounts that we will eventually add into this group are Helpdesk computers, SCCM servers, maybe a management "jump box" terminal server, that kind of thing. To keep things intuitive, let's name the group DirectAccess – ISATAP computers or something similar. We will also need a DNS host record created. For obvious reasons we don't want to call this ISATAP, but perhaps something such as Contoso_ISATAP, swapping your name in, of course. This is just a regular DNS A record, and you want to point it at the internal IPv4 address of the DirectAccess server. If you are running a clustered array of DirectAccess servers that are configured for load balancing, then you will need multiple DNS records. All of the records have the same name,Contoso_ISATAP, and you point them at each internal IP address being used by the cluster. So, one gets pointed at the internal Virtual IP (VIP), and one gets pointed at each of the internal Dedicated IPs (DIP). In a two-node cluster, you will have three DNS records for Contoso_ISATAP. Creating the GPO Now go ahead and follow these steps to create a new GPO that is going to contain the ISATAP connection settings: Create a new GPO, name it something such as DirectAccess – ISATAP settings. Place the GPO wherever it makes sense to you, keeping in mind that these settings should only apply to the ISATAP group that we created. One way to manage the distribution of these settings is a strategically placed link. Probably the better way to handle distribution of these settings is to reflect the same approach that the DirectAccess GPOs themselves take. This would mean linking this new GPO at a high level, like the top of the domain, and then using security filtering inside the GPO settings so that it only applies to the group that we just created. This way you ensure that in the end the only computers to receive these settings are the ones that you add to your ISATAP group. 
I typically take the Security Filtering approach, because it closely reflects what DirectAccess itself does with GPO filtering. So, create and link the GPO at a high level, and then inside the GPO properties, go ahead and add the group (and only the group, remove everything else) to the Security Filtering section, like what is shown in the following screenshot: Then move over to the Details tab and set the GPO Status to User configuration settings disabled. Configuring the GPO Now that we have a GPO which is being applied only to our special ISATAP group that we created, let's give it some settings to apply. What we are doing with this GPO is configuring those computers which we want to be ISATAP-connected with the ISATAP server address with which they need to communicate , which is the DNS name that we created for ISATAP. First, edit the GPO and set the ISATAP Router Name by configuring the following setting: Computer Configuration | Policies | Administrative Templates | Network | TCPIP Settings | IPv6 Transition Technologies | ISATAP Router Name = Enabled (and populate your DNS record). Second, in the same location within the GPO, we want to enable your ISATAP state with the following configuration: Computer Configuration | Policies | Administrative Templates | Network | TCPIP Settings | IPv6 Transition Technologies | ISATAP State = Enabled State. Adding machines to the group All of our settings are now squared away, time to test! Start by taking a single computer account of a computer inside your network from which you want to be able to reach out to DirectAccess client computers, and add the computer account to the group that we created. Perhaps pause for a few minutes to ensure Active Directory has a chance to replicate, and then simply restart that computer. The next time it boots, it'll grab those settings from the GPO, and reach out to the DirectAccess server acting as the ISATAP router, and have an ISATAP IPv6 connection. If you generate an ipconfig /all on that internal machine, you will see the ISATAP adapter and address now listed as shown in the following screenshot: And now if you try to ping the name of a client computer that is connected via DirectAccess, you will see that it now resolves to the IPv6 record. Perfect! But wait, why are my pings timing out? Let's explore that next.
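As a quick sanity check on this setup, you can inspect the DNS records and the ISATAP address format from a short script. The following Python sketch is purely illustrative; the host name and addresses in it are made-up examples, not values that DirectAccess itself dictates. It lists every A record registered for the ISATAP router name (in a two-node load balanced array you should see three: the internal VIP plus each DIP), and it decodes the IPv4 address that an ISATAP-style IPv6 address embeds in its last 32 bits, right after the 0:5efe interface identifier:

import socket
import ipaddress

# Hypothetical name, for illustration only; substitute your own record.
ISATAP_ROUTER_NAME = "contoso_isatap.corp.contoso.com"

# Every A record registered for the ISATAP router name. A two-node
# load balanced array should return three addresses: the VIP plus both DIPs.
hostname, aliases, addresses = socket.gethostbyname_ex(ISATAP_ROUTER_NAME)
print("A records for", hostname, "->", addresses)

# An ISATAP address carries the host's IPv4 address in its last 32 bits,
# after the 0:5efe interface identifier. Decoding it shows which underlying
# IPv4 address a given ISATAP adapter is bound to.
isatap_address = ipaddress.IPv6Address("fe80::5efe:10.0.0.15")  # example address
embedded_ipv4 = ipaddress.IPv4Address(int(isatap_address) & 0xFFFFFFFF)
print("Embedded IPv4:", embedded_ipv4)  # prints 10.0.0.15

Nothing in this script changes any configuration; it only reads DNS and does some address arithmetic, so it is safe to run from any of the machines in your ISATAP group.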
Top Features You Need to Know About – Responsive Web Design

Packt
10 Oct 2013
3 min read
Responsive web design Nowadays, almost everyone has a smartphone or tablet in hand; this article prepares these individuals to adapt their portfolio to this new reality. Acknowledging that, today, there are tablets that are also phones and some laptops that are also tablets, we use an approach known as device agnostic, where instead of giving devices names, such as mobile, tablet, or desktop, we refer to them as small, medium, or large. With this approach, we can cover a vast array of gadgets from smartphones, tablets, laptops, and desktops, to the displays on refrigerators, cars, watches, and so on. Photoshop Within the pages of this article, you will find two Photoshop templates that I prepared for you. The first is small.psd, which you may use to prepare your layouts for smartphones, small tablets, and even, to a certain extent, displays on a refrigerator. The second is medium.psd, which can be used for tablets, netbooks, or even displays in cars. I used these templates to lay out all the sizes of our website (portfolio) that we will work on in this article, as you can see in the following screenshot: One of the principal elements of responsive web design is the flexible grid, and what I did with the Photoshop layout was to mimic those grids, which we will use later. With time, this will be easier and it won't be necessary to lay out every version of every page, but, for now, it is good to understand how things happen. Code Now that we have a preview of how the small version will look, it's time to code it. The first thing we will need is the fluid version of the 960.gs, which you can download from https://raw.github.com/bauhouse/fluid960gs/master/css/grid.css and save as 960_fluid.css in the css folder. After that, let's create two more files in this folder, small.css and medium.css. We will use these files to maintain the organized versions of our portfolio. Lastly, let's link the files to our HTML document as follows: <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width"> <title>Portfolio</title> <link href="css/reset.css" rel="stylesheet" type="text/css"> <link href="css/960_fluid.css" rel="stylesheet" type="text/css"> <link href="css/main.css" rel="stylesheet" type="text/css"> <link href="css/medium.css" rel="stylesheet" type="text/css"> <link href="css/small.css" rel="stylesheet" type="text/css"> </head> If you reload your browser now, you should see that the portfolio is stretching all over the browser. This occurs because the grid is now fluid. To fix the width to, at most, 960 pixels, we need to insert the following lines at the beginning of the main.css file: /* grid ================================ */ .container_12 { max-width: 960px; margin: 0 auto; } Once you reload the browser and resize the window, you will see that the display is overly stretched and broken. In order to fix this, keeping in mind the layout we did in Photoshop, we can use the small version and medium version. Summary In this article we saw how to prepare our desktop-only portfolio using Photoshop and the method used to fix the broken and overly stretched display. Resources for Article: Further resources on this subject: Web Design Principles in Inkscape Building HTML5 Pages from Scratch HTML5 Presentations - creating our initial presentation
Creating an AutoCAD command

Packt
10 Oct 2013
5 min read
Some custom AutoCAD applications are designed to run unattended, such as when a drawing loads or in reaction to some other event that occurs in your AutoCAD drawing session. But, the majority of your AutoCAD programming work will likely involve custom AutoCAD commands, whether automating a sequence of built-in AutoCAD commands, or implementing new functionality to address a business need. Commands can be simple (printing to the command window or a dialog box), or more difficult (generating a new design on-the-fly, based on data stored in an existing design). Our first custom command will be somewhat simple. We will define a command which will count the number of AutoCAD entities found in ModelSpace (the space in AutoCAD where you model your designs). Then, we will display that data in the command window. Frequently, custom commands acquire information about an object in AutoCAD (or summarize a collection of user input), and then present that information to the user, either for the purpose of reporting data or so the user can make an informed choice or selection based upon the data being presented. Using Netload to load our command class You may be wondering at this point, "How do we load and run our plugin?" I'm glad you asked! To load the plugin, enter the native AutoCAD command NETLOAD. When the dialog box appears, navigate to the DLL file, MyAcadCSharpPlugin1.dll, select it and click on OK. Our custom command will now be available in the AutoCAD session. At the command prompt, enter COUNTENTS to execute the command. Getting ready In our initial project, we have a class MyCommands, which was generated by the AutoCAD 2014 .NET Wizard. This class contains stubs for four types of AutoCAD command structures: basic command; command with pickfirst selection; a session command; and a lisp function. For this plugin, we will create a basic command, CountEnts, using the stub for the Modal command. How to do it... Let's take a look at the code we will need in order to read the AutoCAD database, count the entities in ModelSpace, identify (and count) block references, and display our findings to users: First, let's get the active AutoCAD document and the drawing database. Next, begin a new transaction. Use the using keyword, which will also take care of disposing of the transaction. Open the block table in AutoCAD. In this case, open it for read operation using the ForRead keyword. Similarly, open the block table record for ModelSpace, also for read (ForRead) (we aren't writing new entities to the drawing database at this time). We'll initialize two counters: one to count all AutoCAD entities; one to specifically count block references (also known as Inserts). Then, as we iterate through all of the entities in AutoCAD's ModelSpace, we'll tally AutoCAD entities in general, as well as block references. Having counted the total number of entities overall, as well as the total number of block references, we'll display that information to the user in a dialog box. How it works... AutoCAD is a multi-document application. We must identify the active document (the drawing that is activated) in order to read the correct database. Before reading the database we must start a transaction. In fact, we use transactions whenever we read from or write to the database. In the drawing database, we open AutoCAD's block table to read it. The block table contains the block table records ModelSpace, PaperSpace, and PaperSpace0. 
We are going to read the entities in ModelSpace so we will open that block table record for reading. We create two variables to store the tallies as we iterate through ModelSpace, keeping track of both block references and AutoCAD entities in general. A block reference is just a reference to a block. A block is a group of entities that is selectable as if it was a single entity. Blocks can be saved as drawing files (.dwg) and then inserted into other drawings. Once we have examined every entity in ModelSpace, we display the tallies (which are stored in the two count variables we created) to the user in a dialog box. Because we used the using keyword when creating the transaction, it is automatically disposed of when our command function ends. Summary The Session command, one of the four types of command stubs added to our project by the AutoCAD 2014 .NET Wizard, has application (rather than document) context. This means it is executed in the context of the entire AutoCAD session, not just within the context of the current document. This allows for some operations that are not permitted in document context, such as creating a new drawing. The other command stub, described as having pickfirst selection is executed with pre-selected AutoCAD entities. In other words, users can select (or pick) AutoCAD entities just prior to executing the command and those entities will be known to the command upon execution. Resources for Article: Further resources on this subject: Dynamically enable a control (Become an expert) [Article] Introduction to 3D Design using AutoCAD [Article] Getting Started with DraftSight [Article]
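The command described above is written against the AutoCAD .NET API in C#, and that remains the approach this article takes. Purely to make the counting logic concrete, here is a rough sketch of the same idea using AutoCAD's COM automation interface from Python. It assumes the pywin32 package is installed and an AutoCAD session is available; it is not the .NET code of the CountEnts command itself:

# Illustrative sketch via COM automation (not the .NET API used by the article).
import win32com.client

acad = win32com.client.Dispatch("AutoCAD.Application")
modelspace = acad.ActiveDocument.ModelSpace

total_entities = 0
block_references = 0
for entity in modelspace:  # walk every entity in ModelSpace
    total_entities += 1
    if entity.ObjectName == "AcDbBlockReference":  # block references (inserts)
        block_references += 1

print("ModelSpace entities:", total_entities)
print("Block references (inserts):", block_references)

One design difference worth noting: the .NET command must open the database inside a transaction before reading the block table, whereas the COM interface hides that bookkeeping from you.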
Introducing Magento extension development

Packt
10 Oct 2013
13 min read
Creating Magento extensions can be an extremely challenging and time-consuming task depending on several factors such as your knowledge of Magento internals, overall development skills, and the complexity of the extension functionality itself. Having a deep insight into Magento internals, its structure, and accompanying tips and tricks will provide you with a strong foundation for clean and unobtrusive Magento extension development. The word unobtrusive should be a constant thought throughout your entire development process. The reason is simple; given the massiveness of the Magento platform, it is way too easy to build extensions that clash with other third-party extensions. This is usually a beginner's flaw, which we will hopefully avoid once we have finished reading this article. The examples listed in this article are targeted towards Magento Community Edition 1.7.0.2. Version 1.7.0.2 is the last stable release at the time of writing this writing. Throughout this article we will be referencing our URL examples as if they are executing on the magento.loc domain. You are free to set your local Apache virtual host and host file to any domain you prefer, as long as you keep this in mind. If you're hearing about virtual host terminology for the first time, please refer to the Apache Virtual Host documentation. Here is a quick summary on each of those files and folders: .htaccess: This file is a directory-level configuration file supported by several web servers, most notably the Apache web server. It controls mod_rewrite for fancy URLs and sets configuration server variables (such as memory limit) and PHP maximum execution time. .htaccess.sample: This is basically a .htaccess template file used for creating new stores within subfolders. api.php: This is primarily used for the Magento REST API, but can be used for SOAP and XML-RPC API server functionality as well. app: This is where you will find Magento core code files for the backend and for the frontend. This folder is basically the heart of the Magento platform. Later on, we will dive into this folder for more details, given that this is the folder that you as an extension developer will spend most of your time on. cron.php: This file, when triggered via URL or via console PHP, will trigger certain Magento cron jobs logic. cron.sh: This file is a Unix shell script version of cron.php. downloader: This folder is used by the Magento Connect Manager, which is the functionality you access from the Magento administration area by navigating to System | Magento Connect | Magento Connect Manager. errors: This folder is a host for a slightly separate Magento functionality, the one that jumps in with error handling when your Magento store gets an exception during code execution. favicon.ico: This is your standard 16 x 16 px website icon. get.php: This file hosts a feature that allows core media files to be stored and served from the database. With the Database File Storage system in place, Magento would redirect requests for media files to get.php. includes: This folder is used by the Mage_Compiler extension whose functionality can be accessed via Magento administration System | Tools | Compilation. The idea behind the Magento compiler feature is that you end up with PHP system that pulls all of its classes from one folder, thus, giving it a massive performance boost. index.php: This is a main entry point to your application, the main loader file for Magento, and the file that initializes everything. 
Every request for every Magento page goes through this file. index.php.sample: This file is just a backup copy of the index.php file. js: This folder holds the core Magento JavaScript libraries, such as Prototype, scriptaculous.js, ExtJS, and a few others, some of which are from Magento itself. lib: This folder holds the core Magento PHP libraries, such as 3DSecure, Google Checkout, phpseclib, Zend, and a few others, some of which are from Magento itself. LICENSE*: These are the Magento licence files in various formats (LICENSE_AFL.txt, LICENSE.html, and LICENSE.txt). mage: This is a Magento Connect command-line tool. It allows you to add/remove channels, install and uninstall packages (extensions), and various other package related tasks. media: This folder contains all of the media files, mostly just images from various products, categories, and CMS pages. php.ini.sample: This file is a sample php.ini file for PHP CGI/FastCGI installations. Sample files are not actually used by Magento application. pkginfo: This folder contains text files that largely operate as debug files to inform us about changes when extensions are upgraded in any way. RELEASE_NOTES.txt: This file contains the release notes and changes for various Magento versions, starting from version 1.4.0.0 and later. shell: This folder contains several PHP-based shell tools, such as compiler, indexer, and logger. skin: This folder contains various CSS and JavaScript files specific for individual Magento themes. Files in this folder and its subfolder go hand in hand with files in app/design folder, as these two locations actually result in one fully featured Magento theme or package. var: This folder contains sessions, logs, reports, configuration cache, lock files for application processes, and possible various other files distributed among individual subfolders. During development, you can freely select all the subfolders and delete them, as Magento will recreate all of them on the next page request. From a standpoint of Magento extension developer, you might find yourself looking into the var/log and var/report folders every now and then. Code pools The folder code is a placeholder for what is called a codePool in Magento. Usually, there are three code pool's in Magento, that is, three subfolders: community, core, and local. The formula for your extension code location should be something like app/code/community/YourNamespace/YourModuleName/ or app/code/local/YourNamespace/YourModuleName/. There is a simple rule as to whether to chose community or local codePool: Choose the community codePool for extensions that you plan to share across projects, or possibly upload to Magento Connect Choose the local codePool for extensions that are specific for the project you are working on and won't be shared with the public For example, let's imagine that our company name is Foggyline and the extension we are building is called Happy Hour. As we wish to share our extension with the community, we can put it into a folder such as app/code/community/Foggyline/HappyHour/. The theme system In order to successfully build extensions that visually manifest themselves to the user either on the backend or frontend, we need to get familiar with the theme system. The theme system is comprised of two distributed parts: one found under the app/design folder and other under the root skin folder. Files found under the app/design folder are PHP template files and XML layout configuration files. 
Within the PHP template files you can find the mix of HTML, PHP, and some JavaScript. There is one important thing to know about Magento themes; they have a fallback mechanism, for example, if someone in the administration interface sets the configuration to use a theme called hello from the default package; and if the theme is missing, for example, the app/design/frontend/default/hello/template/catalog/product/view.phtml file in its structure, Magento will use app/design/frontend/default/default/template/catalog/product/view.phtml from the default theme; and if that file is missing as well, Magento will fall back to the base package for the app/design/frontend/base/default/template/catalog/product/view.phtml file. All your layout and view files should go under the /app/design/frontend/defaultdefault/default directory. Secondly, you should never overwrite the existing .xml layout or template .phtml file from within the /app/design/frontend/default/default directory, rather create your own. For example, imagine you are doing some product image switcher extension, and you conclude that you need to do some modifications to the app/design/frontend/default/default/template/catalog/product/view/media.phtml file. A more valid approach would be to create a proper XML layout update file with handles rewriting the media.phtml usage to let's say media_product_image_switcher.phtml. The model, resource, and collection A model represents the data for the better part, and to certain extent a business logic of your application. Models in Magento take the Object Relational Mapping (ORM) approach, thus, having the developer to strictly deal with objects while their data is then automatically persisted to database. If you are hearing about ORM for the first time, please take some time to familiarize yourself with the concept. Theoretically, you could write and execute raw SQL queries in Magento. However, doing so is not advised, especially if you plan on distributing your extensions. There are two types of models in Magento: Basic Data Model: This is a simpler model type, sort of like Active Record pattern based model. If you're hearing about Active Record for the first time, please take some time to familiarize yourself with the concept. EAV (Entity-Attribute-Value) Data Model: This is a complex model type, which enables you to dynamically create new attributes on an entity. As EAV Data Model is significantly more complex than Basic Data Model and Basic Data Model will suffice for most of the time, we will focus on Basic Data Model and everything important surrounding it. Each data model you plan to persist to database, that means models that present an entity, needs to have four files in order for it to work fully: The model file: This extends the Mage_Core_Model_Abstract class. This represents single entity, its properties (fields), and possible business logic within it. The model resource file: This extends the Mage_Core_Model_Resource_Db_Abstract class. This is your connection to database; think of it as the thing that saves your entity properties (fields) database. The model collection file: This extends the Mage_Core_Model_Resource_Db_Collection_Abstract class. This is your collection of several entities, a collection that can be filtered, sorted, and manipulated. The installation script file: In its simplest definition this is the PHP file through which you, in and object-oriented way, create your database table(s). 
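Magento implements the theme fallback described earlier in PHP, deep inside its design package model, but the order of the lookup is simple enough to sketch. The following Python snippet is only an illustration of that order, using the same example paths as before; it is not how Magento's own code is written:

import os

def resolve_template(relative_path, package="default", theme="hello",
                     design_root="app/design/frontend"):
    # Try the configured theme, then the package's default theme,
    # then the base package; the first file that exists wins.
    candidates = [
        os.path.join(design_root, package, theme, relative_path),
        os.path.join(design_root, package, "default", relative_path),
        os.path.join(design_root, "base", "default", relative_path),
    ]
    for candidate in candidates:
        if os.path.isfile(candidate):
            return candidate
    return None

print(resolve_template("template/catalog/product/view.phtml"))

Keeping this order in mind is what makes the earlier advice meaningful: because missing files fall back to default/default and then to base/default, you can add your own templates without ever overwriting the stock ones.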
The default Magento installation comes with several built in shipping methods available: Flat Rate, Table Rates, Free Shipping UPS, USPS, FedEx, DHL. For some merchants this is more than enough, for others you are free to build an additional custom Shipping extension with support for one or more shipping methods. Be careful about the terminology here. Shipping method resides within shipping extension. A single extension can define one or more shipping methods. In this article we will learn how to create our own shipping method. Shipping methods There are two, unofficially divided, types of shipping methods: Static, where shipping cost rates are based on a predefined set of rules. For example, you can create a shipping method called 5+ and make it available to the customer for selection under the checkout only if he added more than five products to the cart. Dynamic, where retrieval of shipping cost rates comes from various shipping providers. For example, you have a web service called ABC Shipping that exposes a SOAP web service API which accepts products weight, length, height, width, shipping address and returns the calculated shipping cost which you can then show to your customer. Experienced developers would probably expect one or more PHP interfaces to handle the implementation of new shipping methods. Same goes for Magento, implementing a new shipping method is done via an interface and via proper configuration. The default Magento installation comes with several built-in payment methods available: PayPal, Saved CC, Check/Money Order, Zero Subtotal Checkout, Bank Transfer Payment, Cash On Delivery payment, Purchase Order, and Authorize.Net. For some merchants this is more than enough. Various additional payment extensions can be found on Magento Connect. For those that do not yet exist, you are free to build an additional custom payment extension with support for one or more payment methods. Building a payment extension is usually a non-trivial task that requires a lot of focus. Payment methods There are several unofficially divided types of payment method implementations such as redirect payment, hosted (on-site) payment, and an embedded iframe. Two of them stand out as the most commonly used ones: Redirect payment: During the checkout, once the customer reaches the final ORDER REVIEW step, he/she clicks on the Place Order button. Magento then redirects the customer to specific payment provider website where customer is supposed to provide the credit card information and execute the actual payment. What's specific about this is that prior to redirection, Magento needs to create the order in the system and it does so by assigning this new order a Pending status. Later if customer provides the valid credit card information on the payment provider website, customer gets redirected back to Magento success page. The main concept to grasp here is that customer might just close the payment provider website and never return to your store, leaving your order indefinitely in Pending status. The great thing about this redirect type of payment method providers (gateways) is that they are relatively easy to implement in Magento. Hosted (on-site) payment: Unlike redirect payment, there is no redirection here. Everything is handled on the Magento store. During the checkout, once the customer reaches the Payment Information step, he/she is presented with a form for providing the credit card information. 
After which, when he/she clicks on the Place Order button in the last ORDER REVIEW checkout step, Magento then internally calls the appropriate payment provider web service, passing it the billing information. Depending on the web service response, Magento then internally sets the order status to either Processing or some other. For example, this payment provider web service can be a standard SOAP service with a few methods such as orderSubmit. Additionally, we don't even have to use a real payment provider, we can just make a "dummy" payment implementation like built-in Check/Money Order payment. You will often find that most of the merchants prefer this type of payment method, as they believe that redirecting the customer to third-party site might negatively affect their sale. Obviously, with this payment method there is more overhead for you as a developer to handle the implementation. On top of that there are security concerns of handling the credit card data on Magento side, in which case PCI compliance is obligatory. If this is your first time hearing about PCI compliance, please click here to learn more. This type of payment method is slightly more challenging to implement than the redirect payment method. Magento Connect Magento Connect is one of the world's largest eCommerce application marketplace where you can find various extensions to customize and enhance your Magento store. It allows Magento community members and partners to share their open source or commercial contributions for Magento with the community. You can access Magento Connect marketplace here. Publishing your extension to Magento Connect is a three-step process made of: Packaging your extension Creating an extension profile Uploading the extension package More of which we will talk later in the article. Only community members and partners have the ability to publish their contributions. Becoming a community member is simple, just register as a user on official Magento website https://www.magentocommerce.com. Member account is a requirement for further packaging and publishing of your extension. Read more: Categories and Attributes in Magento: Part 2 Integrating Twitter with Magento Magento Fundamentals for Developers
Navigation Stack - Robot Setups

Packt
10 Oct 2013
7 min read
The navigation stack in ROS In order to understand the navigation stack, you should think of it as a set of algorithms that use the sensors of the robot and the odometry, and you can control the robot using a standard message. It can move your robot without problems (for example, without crashing or getting stuck in some location, or getting lost) to another position. You would assume that this stack can be easily used with any robot. This is almost true, but it is necessary to tune some configuration files and write some nodes to use the stack. The robot must satisfy some requirements before it uses the navigation stack: The navigation stack can only handle a differential drive and holonomic-wheeled robots. The shape of the robot must be either a square or a rectangle. However, it can also do certain things with biped robots, such as robot localization, as long as the robot does not move sideways. It requires that the robot publishes information about the relationships between all the joints and sensors' position. The robot must send messages with linear and angular velocities. A planar laser must be on the robot to create the map and localization. Alternatively, you can generate something equivalent to several lasers or a sonar, or you can project the values to the ground if they are mounted in another place on the robot. The following diagram shows you how the navigation stacks are organized. You can see three groups of boxes with colors (gray and white) and dotted lines. The plain white boxes indicate those stacks that are provided by ROS, and they have all the nodes to make your robot really autonomous: In the following sections, we will see how to create the parts marked in gray in the diagram. These parts depend on the platform used; this means that it is necessary to write code to adapt the platform to be used in ROS and to be used by the navigation stack. Creating transforms The navigation stack needs to know the position of the sensors, wheels, and joints. To do that, we use the TF (which stands for Transform Frames) software library. It manages a transform tree. You could do this with mathematics, but if you have a lot of frames to calculate, it will be a bit complicated and messy. Thanks to TF, we can add more sensors and parts to the robot, and the TF will handle all the relations for us. If we put the laser 10 cm backwards and 20 cm above with regard to the origin of the coordinates of base_link, we would need to add a new frame to the transformation tree with these offsets. Once inserted and created, we could easily know the position of the laser with regard to the base_link value or the wheels. The only thing we need to do is call the TF library and get the transformation. Creating a broadcaster Let's test it with a simple code. 
Create a new file in chapter7_tutorials/src with the name tf_broadcaster.cpp, and put the following code inside it: #include <ros/ros.h> #include <tf/transform_broadcaster.h> int main(int argc, char** argv){ ros::init(argc, argv, "robot_tf_publisher"); ros::NodeHandle n; ros::Rate r(100); tf::TransformBroadcaster broadcaster; while(n.ok()){ broadcaster.sendTransform( tf::StampedTransform( tf::Transform(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.1, 0.0, 0.2)), ros::Time::now(),"base_link", "base_laser")); r.sleep(); } } Remember to add the following line in your CMakeLists.txt file to create the new executable: rosbuild_add_executable(tf_broadcaster src/tf_broadcaster.cpp) And we also create another node that will use the transform, and it will give us the position of a point of a sensor with regard to the center of base_link (our robot). Creating a listener Create a new file in chapter7_tutorials/src with the name tf_listener.cpp and input the following code: #include <ros/ros.h> #include <geometry_msgs/PointStamped.h> #include <tf/transform_listener.h> void transformPoint(const tf::TransformListener& listener){ //we'll create a point in the base_laser frame that we'd like to transform to the base_link frame geometry_msgs::PointStamped laser_point; laser_point.header.frame_id = "base_laser"; //we'll just use the most recent transform available for our simple example laser_point.header.stamp = ros::Time(); //just an arbitrary point in space laser_point.point.x = 1.0; laser_point.point.y = 2.0; laser_point.point.z = 0.0; try{ geometry_msgs::PointStamped base_point; listener.transformPoint("base_link", laser_point, base_point); ROS_INFO("base_laser: (%.2f, %.2f. %.2f) -----> base_link: (%.2f, %.2f, %.2f) at time %.2f", laser_point.point.x, laser_point.point.y, laser_point.point.z, base_point.point.x, base_point.point.y, base_point.point.z, base_point.header.stamp.toSec()); } catch(tf::TransformException& ex){ ROS_ERROR("Received an exception trying to transform a point from \"base_laser\" to \"base_link\": %s", ex.what()); } } int main(int argc, char** argv){ ros::init(argc, argv, "robot_tf_listener"); ros::NodeHandle n; tf::TransformListener listener(ros::Duration(10)); //we'll transform a point once every second ros::Timer timer = n.createTimer(ros::Duration(1.0), boost::bind(&transformPoint, boost::ref(listener))); ros::spin(); } Remember to add the line in the CMakeLists.txt file to create the executable. Compile the package and run both the nodes using the following commands: $ rosmake chapter7_tutorials $ rosrun chapter7_tutorials tf_broadcaster $ rosrun chapter7_tutorials tf_listener Then you will see the following message: [ INFO] [1368521854.336910465]: base_laser: (1.00, 2.00. 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521854.33 [ INFO] [1368521855.336347545]: base_laser: (1.00, 2.00. 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521855.33 This means that the point that you published on the node, with the position (1.00, 2.00, 0.00) relative to base_laser, has the position (1.10, 2.00, 0.20) relative to base_link. As you can see, the tf library performs all the mathematics for you to get the coordinates of a point or the position of a joint relative to another point. A transform tree defines offsets in terms of both translation and rotation between different coordinate frames. Let us see an example to help you understand this.
We are going to add another laser, say, on the back of the robot (base_link): The system had to know the position of the new laser to detect collisions, such as the one between wheels and walls. With the TF tree, this is very simple to do and maintain and is also scalable. Thanks to tf, we can add more sensors and parts, and the tf library will handle all the relations for us. All the sensors and joints must be correctly configured on tf to permit the navigation stack to move the robot without problems, and to exactly know where each one of their components is. Before starting to write the code to configure each component, keep in mind that you have the geometry of the robot specified in the URDF file. So, for this reason, it is not necessary to configure the robot again. Perhaps you do not know it, but you have been using the robot_state_publisher package to publish the transform tree of your robot. We used it for the first time; therefore, you do have the robot configured to be used with the navigation stack. Watching the transformation tree If you want to see the transformation tree of your robot, use the following command: $ roslaunch chapter7_tutorials gazebo_map_robot.launch model:= "`rospack find chapter7 _tutorials`/urdf/robot1_base_04.xacro" $ rosrun tf view_frames The resultant frame is depicted as follows: And now, if you run tf_broadcaster and run the rosrun tf view_frames command again, you will see the frame that you have created by code: $ rosrun chapter7_tutorials tf_broadcaster $ rosrun tf view_frames The resultant frame is depicted as follows:
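In addition to visualizing the tree with view_frames, you can query a single transform from a script; the tf library ships with Python bindings as well. Here is a minimal rospy sketch, assuming the tf_broadcaster node shown earlier is running, that looks up where base_laser sits relative to base_link:

#!/usr/bin/env python
import rospy
import tf

rospy.init_node("tf_lookup_example")
listener = tf.TransformListener()

# Wait for the transform published by tf_broadcaster, then print the
# translation and rotation of base_laser expressed in the base_link frame.
listener.waitForTransform("base_link", "base_laser",
                          rospy.Time(0), rospy.Duration(4.0))
(trans, rot) = listener.lookupTransform("base_link", "base_laser", rospy.Time(0))
rospy.loginfo("translation: %s rotation: %s", trans, rot)

You should see the (0.1, 0.0, 0.2) offset from the broadcaster come back as the translation, which is exactly the relationship reported by view_frames.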
Introducing SproutCore

Packt
10 Oct 2013
6 min read
(For more resources related to this topic, see here.) Understanding the SproutCore approach In the strictly technical sense, I would describe SproutCore as an open source web application development framework. As you are likely a technical person interested in web application development, this should be reassuring. And if you are interested in developing web applications, you must also already know how difficult it is to keep track of the vast number of libraries and frameworks to choose from. While it would be nice if we could say that there was one true way, and even nicer if I could say that the one true way was SproutCore; this is not the case and never will be the case. Competing ideas will always exist, especially in this area because the future of software is largely JavaScript and the web. So where does SproutCore fit ideologically within this large and growing group? To best describe it, I would ask you to picture a spectrum of all the libraries and frameworks one can use to build a web application. Towards one end are the small single-feature libraries that provide useful helper functions for use in dynamic websites. As we move across, you'll see that the libraries grow and become combined into frameworks of libraries that provide larger functions, some of which start to bridge the gap between what we may call a website and what we may call a web app. Finally, at the other end of the spectrum you'll find the full application development frameworks. These are the frameworks dedicated to writing software for the web and as you may have guessed, this is where you would find SproutCore along with very few others. First, let me take a moment to argue the position of full application development frameworks such as SproutCore. In my experience, in order to develop web software that truly rivals the native software, you need more than just a collection of parts, and you need a cohesive set of tools with strong fundamentals. I've actually toyed with calling SproutCore something more akin to a platform, rather than a framework, because it is really more than just the framework code, it's also the tools, the ideas, and the experience that come with it. On the other side of the argument, there is the idea of picking small pieces and cobbling them together to form an application. While this is a seductive idea and makes great demos, this approach quickly runs out of steam when attempting to go beyond a simple project. The problem isn't the technology, it's the realities of software development: customization is the enemy of maintainability and growth. Without a native software like structure to build on, the developers must provide more and more glue code to keep it all together and writing architecturally sound code is extremely hard. Unfortunately, under deadlines this results in difficult to maintain codebases that don't scale. In the end, the ability to execute and the ability to iterate are more important than the ability to start. Fortunately, almost all of what you need in an application is common to all applications and so there is no need to reinvent the foundations in each project. It just needs to work and work exceptionally well so that we can free up time and resources to focus on attaining the next level in the user experience. This is the SproutCore approach. SproutCore does not just include all the components you need to create a real application. 
It also includes thousands of hours of real world tested professional engineering experience on how to develop and deploy genre-changing web applications that are used by millions of people. This experience is baked into the heart of SproutCore and it's completely free to use, which I hope you find as exciting a prospect as I do! Knowing when SproutCore is the right choice As you may have noticed, I use the word "software" occasionally and I will continue to do so, because I don't want to make any false pretenses about what it is we are doing. SproutCore is about writing software for the web. If the term software feels too heavy or too involved to describe your project, then SproutCore may not be the best platform for you. A good measure of whether SproutCore is a good candidate for your project or not, is to describe the goals of your project in normal language. For example, if we were to describe a typical SproutCore application, we would use terms such as: "rich user experience" "large scale" "extremely fast" "immediate feedback" "huge amounts of data" "fluid scrolling through gigantic lists" "works on multiple browsers, even IE7" "full screen" "pixel perfect design" "offline capable" "localized in multiple languages" and perhaps the most telling descriptor of them all, "like a native app" If these terms match several of the goals for your own project, then we are definitely on the right path. Let me talk about the other important factor to consider, possibly the most important factor to consider when deciding as a business on which technology to use: developer performance. It does not matter at all what features a framework has if the time it takes or the skill required to build real applications with it becomes unmanageable. I can tell you first hand that custom code written by a star developer quickly becomes useless in the hands of the next person and all software eventually ends up in someone else's hands. However, SproutCore is built using the same web technology (HTML, JavaScript and CSS) that millions are already familiar with. This provides a simple entry point for a lot of current web developers to start from. But more importantly, SproutCore was built around the software concepts that native desktop and mobile developers have used for years, but that have barely existed in the web. These concepts include: Class-like inheritance, encapsulation, and polymorphism Model-View-Controller (MVC) structure Statecharts Key-value coding, binding, and observing Computed properties Query-able data stores Centralized event handling Responder chains Run loops While there is also a full UI library and many conveniences, the application of software development principles onto web technology is what makes SproutCore so great. When your web app becomes successful and grows exponentially, and I hope it does, then you will be thankful to have SproutCore at its root. As I often heard Charles Jolley , the creator of SproutCore, say: "SproutCore is the technology you bet the company on."
Navigation Stack – Robot Setups

Packt
09 Oct 2013
11 min read
Introduction to the navigation stacks and their powerful capabilities—clearly one of the greatest pieces of software that comes with ROS. The TF is explained in order to show how to transform from the frame of one physical element to the other; for example, the data received using a sensor or the command for the desired position of an actuator. We will see how to create a laser driver or simulate it. We will learn how the odometry is computed and published, and how Gazebo provides it. A base controller will be presented, including a detailed description of how to create one for your robot. We will see how to execute SLAM with ROS. That is, we will show you how you can build a map from the environment with your robot as it moves through it. Finally, you will be able to localize your robot in the map using the localization algorithms of the navigation stack. The navigation stack in ROS In order to understand the navigation stack, you should think of it as a set of algorithms that use the sensors of the robot and the odometry, and you can control the robot using a standard message. It can move your robot without problems (for example, without crashing or getting stuck in some location, or getting lost) to another position. You would assume that this stack can be easily used with any robot. This is almost true, but it is necessary to tune some configuration files and write some nodes to use the stack. The robot must satisfy some requirements before it uses the navigation stack: The navigation stack can only handle a differential drive and holonomic-wheeled robots. The shape of the robot must be either a square or a rectangle. However, it can also do certain things with biped robots, such as robot localization, as long as the robot does not move sideways. It requires that the robot publishes information about the relationships between all the joints and sensors' position. The robot must send messages with linear and angular velocities. A planar laser must be on the robot to create the map and localization. Alternatively, you can generate something equivalent to several lasers or a sonar, or you can project the values to the ground if they are mounted in another place on the robot. The following diagram shows you how the navigation stacks are organized. You can see three groups of boxes with colors (gray and white) and dotted lines. The plain white boxes indicate those stacks that are provided by ROS, and they have all the nodes to make your robot really autonomous: In the following sections, we will see how to create the parts marked in gray in the diagram. These parts depend on the platform used; this means that it is necessary to write code to adapt the platform to be used in ROS and to be used by the navigation stack. Creating transforms The navigation stack needs to know the position of the sensors, wheels, and joints. To do that, we use the TF (which stands for Transform Frames) software library. It manages a transform tree. You could do this with mathematics, but if you have a lot of frames to calculate, it will be a bit complicated and messy. Thanks to TF, we can add more sensors and parts to the robot, and the TF will handle all the relations for us. If we put the laser 10 cm backwards and 20 cm above with regard to the origin of the coordinates of base_link, we would need to add a new frame to the transformation tree with these offsets. Once inserted and created, we could easily know the position of the laser with regard to the base_link value or the wheels. 
The only thing we need to do is call the TF library and get the transformation. Creating a broadcaster Let's test it with a simple code. Create a new file in chapter7_tutorials/src with the name tf_broadcaster.cpp, and put the following code inside it: #include <ros/ros.h> #include <tf/transform_broadcaster.h> int main(int argc, char** argv){ ros::init(argc, argv, "robot_tf_publisher"); ros::NodeHandle n; ros::Rate r(100); tf::TransformBroadcaster broadcaster; while(n.ok()){ broadcaster.sendTransform( tf::StampedTransform( tf::Transform(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.1, 0.0, 0.2)), ros::Time::now(),"base_link", "base_laser")); r.sleep(); } } Remember to add the following line in your CMakelist.txt file to create the new executable: rosbuild_add_executable(tf_broadcaster src/tf_broadcaster.cpp) And we also create another node that will use the transform, and it will give us the position of a point of a sensor with regard to the center of base_link (our robot). Creating a listener Create a new file in chapter7_tutorials/src with the name tf_listener.cpp and input the following code: #include <ros/ros.h> #include <geometry_msgs/PointStamped.h> #include <tf/transform_listener.h> void transformPoint(const tf::TransformListener& listener){ //we'll create a point in the base_laser frame that we'd like totransform to the base_link frame geometry_msgs::PointStamped laser_point; laser_point.header.frame_id = "base_laser"; //we'll just use the most recent transform available for our simple example laser_point.header.stamp = ros::Time(); //just an arbitrary point in space laser_point.point.x = 1.0; laser_point.point.y = 2.0; laser_point.point.z = 0.0; geometry_msgs::PointStamped base_point; listener.transformPoint("base_link", laser_point, base_point); ROS_INFO("base_laser: (%.2f, %.2f. %.2f) -----> base_link: (%.2f, %.2f, %.2f) at time %.2f", laser_point.point.x, laser_point.point.y, laser_point.point.z, base_point.point.x, base_point.point.y, base_point.point.z, base_point.header.stamp.toSec()); ROS_ERROR("Received an exception trying to transform a point from \"base_laser\" to \"base_link\": %s", ex.what()); } int main(int argc, char** argv){ ros::init(argc, argv, "robot_tf_listener"); ros::NodeHandle n; tf::TransformListener listener(ros::Duration(10)); //we'll transform a point once every second ros::Timer timer = n.createTimer(ros::Duration(1.0), boost::bind(&transformPoint, boost::ref(listener))); ros::spin(); } Remember to add the line in the CMakeList.txt file to create the executable. Compile the package and run both the nodes using the following commands: $ rosmake chapter7_tutorials $ rosrun chapter7_tutorials tf_broadcaster $ rosrun chapter7_tutorials tf_listener Then you will see the following message: [ INFO] [1368521854.336910465]: base_laser: (1.00, 2.00. 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521854.33 [ INFO] [1368521855.336347545]: base_laser: (1.00, 2.00. 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521855.33 This means that the point that you published on the node, with the position (1.00, 2.00, 0.00) relative to base_laser, has the position (1.10, 2.00, 0.20) relative to base_link. As you can see, the tf library performs all the mathematics for you to get the coordinates of a point or the position of a joint relative to another point. A transform tree defines offsets in terms of both translation and rotation between different coordinate frames. Let us see an example to help you understand this. 
We are going to add another laser, say, on the back of the robot (base_link): The system had to know the position of the new laser to detect collisions, such as the one between wheels and walls. With the TF tree, this is very simple to do and maintain and is also scalable. Thanks to tf, we can add more sensors and parts, and the tf library will handle all the relations for us. All the sensors and joints must be correctly configured on tf to permit the navigation stack to move the robot without problems, and to exactly know where each one of their components is. Before starting to write the code to configure each component, keep in mind that you have the geometry of the robot specified in the URDF file. So, for this reason, it is not necessary to configure the robot again. Perhaps you do not know it, but you have been using the robot_state_publisher package to publish the transform tree of your robot. We used it for the first time; therefore, you do have the robot configured to be used with the navigation stack. Watching the transformation tree If you want to see the transformation tree of your robot, use the following command: $ roslaunch chapter7_tutorials gazebo_map_robot.launch model:="`rospack find chapter7_tutorials`/urdf/robot1_base_04.xacro"$ rosrun tf view_frames The resultant frame is depicted as follows: And now, if you run tf_broadcaster and run the rosrun tf view_frames command again, you will see the frame that you have created by code: $ rosrun chapter7_tutorials tf_broadcaster $ rosrun tf view_frames The resultant frame is depicted as follows: Publishing sensor information Your robot can have a lot of sensors to see the world; you can program a lot of nodes to take these data and do something, but the navigation stack is prepared only to use the planar laser's sensor. So, your sensor must publish the data with one of these types: sensor_msgs/LaserScan or sensor_msgs/PointCloud. We are going to use the laser located in front of the robot to navigate in Gazebo. Remember that this laser is simulated on Gazebo, and it publishes data on the base_scan/scan frame. In our case, we do not need to configure anything of our laser to use it on the navigation stack. This is because we have tf configured in the .urdf file, and the laser is publishing data with the correct type. If you use a real laser, ROS might have a driver for it. Anyway, if you are using a laser that has no driver on ROS and want to write a node to publish the data with the sensor_msgs/LaserScan sensor, you have an example template to do it, which is shown in the following section. But first, remember the structure of the message sensor_msgs/LaserScan. 
Use the following command: $ rosmsg show sensor_msgs/LaserScan std_msgs/Header header uint32 seq time stamp string frame_id float32 angle_min float32 angle_max float32 angle_increment float32 time_increment float32 scan_time float32 range_min float32 range_max float32[] rangesfloat32[] intensities Creating the laser node Now we will create a new file in chapter7_tutorials/src with the name laser.cpp and put the following code in it: #include <ros/ros.h> #include <sensor_msgs/LaserScan.h> int main(int argc, char** argv){ ros::init(argc, argv, "laser_scan_publisher"); ros::NodeHandle n; ros::Publisher scan_pub = n.advertise<sensor_msgs::LaserScan>("scan", 50); unsigned int num_readings = 100; double laser_frequency = 40; double ranges[num_readings]; double intensities[num_readings]; int count = 0; ros::Rate r(1.0); while(n.ok()){ //generate some fake data for our laser scan for(unsigned int i = 0; i < num_readings; ++i){ ranges[i] = count; intensities[i] = 100 + count; } ros::Time scan_time = ros::Time::now(); //populate the LaserScan message sensor_msgs::LaserScan scan; scan.header.stamp = scan_time; scan.header.frame_id = "base_link"; scan.angle_min = -1.57; scan.angle_max = 1.57; scan.angle_increment = 3.14 / num_readings; scan.time_increment = (1 / laser_frequency) / (num_readings); scan.range_min = 0.0; scan.range_max = 100.0; scan.ranges.resize(num_readings); scan.intensities.resize(num_readings); for(unsigned int i = 0; i < num_readings; ++i){ scan.ranges[i] = ranges[i]; scan.intensities[i] = intensities[i]; } scan_pub.publish(scan); ++count; r.sleep(); } } As you can see, we are going to create a new topic with the name scan and the message type sensor_msgs/LaserScan. You must be familiar with this message type from sensor_msgs/LaserScan. The name of the topic must be unique. When you configure the navigation stack, you will select this topic to be used for the navigation. The following command line shows how to create the topic with the correct name: ros::Publisher scan_pub = n.advertise<sensor_msgs::LaserScan>("scan", 50); It is important to publish data with header, stamp, frame_id, and many more elements because, if not, the navigation stack could fail with such data: scan.header.stamp = scan_time; scan.header.frame_id = "base_link"; Other important data on header is frame_id. It must be one of the frames created in the .urdf file and must have a frame published on the tf frame transforms. The navigation stack will use this information to know the real position of the sensor and make transforms such as the one between the data sensor and obstacles. With this template, you can use any laser although it has no driver for ROS. You only have to change the fake data with the right data from your laser. This template can also be used to create something that looks like a laser but is not. For example, you could simulate a laser using stereoscopy or using a sensor such as a sonar.
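If you would rather prototype this publisher in Python, the same fake-data node can be written with rospy. The sketch below mirrors the C++ template above, with the same topic name, frame, and fake values, and assumes a reasonably recent ROS 1 distribution where rospy.Publisher accepts queue_size:

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan

rospy.init_node("laser_scan_publisher")
scan_pub = rospy.Publisher("scan", LaserScan, queue_size=50)

num_readings = 100
laser_frequency = 40.0
count = 0
rate = rospy.Rate(1.0)

while not rospy.is_shutdown():
    scan = LaserScan()
    scan.header.stamp = rospy.Time.now()
    scan.header.frame_id = "base_link"
    scan.angle_min = -1.57
    scan.angle_max = 1.57
    scan.angle_increment = 3.14 / num_readings
    scan.time_increment = (1.0 / laser_frequency) / num_readings
    scan.range_min = 0.0
    scan.range_max = 100.0
    # Fake data, exactly like the C++ example: swap in real sensor readings here.
    scan.ranges = [float(count)] * num_readings
    scan.intensities = [100.0 + count] * num_readings
    scan_pub.publish(scan)
    count += 1
    rate.sleep()

Whichever language you use, what matters to the navigation stack is the same: a correctly stamped header, a frame_id that exists in your tf tree, and ranges that describe what the sensor actually sees.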

Creating New Characters with Morphs

Packt
09 Oct 2013
7 min read
(For more resources related to this topic, see here.)

Understanding morphs

The word morph comes from metamorphosis, which means a change of the form or nature of a thing or person into a completely different one. Good old Franz Kafka had a field day with metamorphosis when he imagined poor Gregor Samsa waking up and finding himself changed into a giant cockroach.

This concept applies to 3D modeling very well. As we are dealing with polygons, which are defined by groups of vertices, it's very easy to morph one shape into something different. All that we need to do is to move those vertices around, and the polygons will stretch and squeeze accordingly.

To get a better visualization of this process, let's bring the Basic Female figure to the scene and show it with the wireframe turned on. To do so, after you have added the Basic Female figure, click on the DrawStyle widget on the top-right portion of the 3D Viewport. From that menu, select Wire Texture Shaded. This operation changes how Studio draws the objects in the scene during preview. It doesn't change anything else about the scene. In fact, if you try to render the image at this point, the wireframe will not show up in the render.

The wireframe is a great help in working with objects because it gives us a visual representation of the structure of a model. The type of wireframe that I selected in this case is superimposed on the normal texture used with the figure. This is not the only visualization mode available. Feel free to experiment with all the options in the DrawStyle menu; most of them have their use. The most useful, in my opinion, are the Hidden Line, Lit Wireframe, Wire Shaded, and Wire Texture Shaded options.

Try the Wire Shaded option as well. It shows the wireframe with a solid gray color. This is, again, just for display purposes. It doesn't remove the texture from the figure. In fact, you can switch back to Texture Shaded to see Genesis fully textured.

Switching the view to use the simple wireframe or the shaded wireframe is a great way of speeding up your workflow. When Studio doesn't have to render the textures, the Viewport becomes more responsive and all operations take less time. If you have a slow computer, using the wireframe mode is a good way of getting a faster response time. Here are the Wire Texture Shaded and Wire Shaded styles side by side:

Now that we have the wireframe visible, the concept of morphing should be simpler to understand. If we pick any vertex in the geometry and we move it somewhere, the geometry is still the same, same number of polygons and same number of vertices, but the shape has shifted.

Here is a practical example that shows Genesis loaded in Blender. Blender is a free, fully featured, 3D modeling program. It has extremely advanced features that compete with commercial programs sold for thousands of dollars per license. You can find more information about Blender at http://www.blender.org. Be aware that Blender is a very advanced program with a rather difficult UI. In this image, I have selected a single polygon and pulled it away from the face:

In a similar way, we can use programs such as modo or ZBrush to modify the basic geometry and come up with all kinds of different shapes. For example, there are people who specialize in reproducing the faces of celebrities as morphs for DAZ V4 or Genesis. What is important to understand about morphs is that they cannot add or remove any portion of the geometry. A morph only moves things around, sometimes to extreme degrees.
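Conceptually, a morph is just a list of per-vertex offsets that get scaled by a weight and added to the base mesh, which is exactly why it can never add or remove geometry. The following toy sketch (plain JavaScript, purely illustrative; the morph names and numbers are made up, and this is not how DAZ Studio is implemented internally) shows the idea, including how several morphs can be mixed by giving each one its own weight:

// A mesh is an array of vertices; a morph is an equally sized array of offsets (deltas).
// Applying morphs moves vertices around but never changes how many there are.
function applyMorphs(baseVertices, morphs) {
  return baseVertices.map(function (vertex, i) {
    var result = { x: vertex.x, y: vertex.y, z: vertex.z };
    morphs.forEach(function (morph) {
      // morph.weight acts like a shaping slider: 0 = no influence, 1 = full influence
      result.x += morph.weight * morph.deltas[i].x;
      result.y += morph.weight * morph.deltas[i].y;
      result.z += morph.weight * morph.deltas[i].z;
    });
    return result;
  });
}

// Hypothetical example: mix two morphs on a tiny two-vertex "mesh"
var base = [{ x: 0, y: 0, z: 0 }, { x: 1, y: 0, z: 0 }];
var morphA = { weight: 1.0, deltas: [{ x: 0, y: 0.2, z: 0 }, { x: 0, y: 0.1, z: 0 }] };
var morphB = { weight: 0.5, deltas: [{ x: 0.3, y: 0, z: 0 }, { x: 0.1, y: 0, z: 0 }] };
console.log(applyMorphs(base, [morphA, morphB]));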
Morphs for Genesis or Gen4 figures can be purchased from several websites that specialize in selling content for Poser and DAZ Studio. In particular, Genesis makes it very easy to apply morphs and even to mix them together.

Combining premade morphs to create new faces

The standard installation of Genesis provides some interesting ways of changing its shape. Let's start a new Studio scene and add our old friend, the Basic Female figure. Once Genesis is in the scene, double-click on it to select it. Now let's take a look at a new tool, the Shaping tab. It should be visible in the right-hand side pane. Click on the Shaping tab; it should show a list of the shapes available. The list should be something like this:

As we can see, the Basic Female shape is unsurprisingly dialed all the way to the top. The value of each slider goes from zero (no influence) to one (full influence of the morph). Morphs are not exclusive, so, for example, you can add a bit of Body Builder (scroll the list to the bottom if you don't see it) to be used in conjunction with the Basic Female morph. This will give us a muscular woman.

This exercise also gives us an insight into the Basic Female figure that we have used up to this time. The figure is basically the raw Genesis figure with the Basic Female morph applied as a preset.

If we continue exploring the Shaping tab, we can see that the various shapes are grouped by major body section. We have morphs for the shape of the head, the components of the face, the nose, eyes, mouth, and so on. Let's click on the head of Genesis and use the Camera: Frame tool to frame the head in the view. Move the camera a bit so that the face is visible frontally. We will apply a few morphs to the head to see how it can be transformed. Here is the starting point:

Now let's click on the Head category in the Shaping tab. In there we can see a slider labeled Alien Humanoid. Move the slider until it gets to 0.83. The difference is dramatic.

Now let's click on the Eyes category. In there we find two values: Eyes Height and Eyes Width. To create an out-of-this-world creature, we need to break the rules of proportions a little bit, and that means removing the limits for a couple of parameters. Click on the gear button for the Eyes Height parameter and uncheck the Use Limits checkbox. Confirm by clicking on the Accept button. Once this is done, dial a value of 1.78 for the eyes height. The eyes should move dramatically up, toward the eyebrow.

Lastly, let's change the neck; it's much too thick for an alien. Also in this case, we will need to disable the use of limits. Click on the Neck category and disable the limits for the Neck Size parameter. Once that is done, set the neck size to -1.74.

Here is the result, side by side, of the transformation. This is quite a dramatic change for something that is done with just dials, without using a 3D modeling program. It gets even better, as we will see shortly.

Saving your morphs

If you want to save a morph to re-use it later, you can navigate to File | Save As | Shaping Preset…. To re-use a saved morph, simply select the target figure and navigate to File | Merge… to load the previously saved preset/morph.

Why is Studio using the rather confusing term Merge for loading its own files? Nobody knows for sure; it's one of those weird decisions that DAZ's developers made long ago and never changed. You can merge two different Studio scenes, but it is rather confusing to think of loading a morph or a pose preset as a scene merge.
Try to mentally replace File | Merge with File | Load. This is the meaning of that menu option.

So, what is Zepto.js?

Packt
08 Oct 2013
7 min read
(For more resources related to this topic, see here.)

One of the most influential JavaScript libraries in the last decade of web development is jQuery, a comprehensive set of functions that makes Document Object Model (DOM) selection and manipulation consistent across a range of browsers, freeing web developers from having to handle these differences themselves, as well as providing a friendlier interface to the DOM itself.

Zepto.js is self-described as an aerogel framework: a JavaScript library that attempts to offer most of the features of the jQuery API while taking up only a fraction of the size (9k versus 93k for the default, compressed current versions, Zepto.js v1.0.1 and jQuery v1.10 respectively). In addition, Zepto.js has a modular assembly, so you can make it even smaller if you don't need the functionality of the extra modules. Even the new, streamlined jQuery 2.0 weighs in at a heavyweight 84k.

But why does this matter? At first glance, the difference between the two libraries seems slight, especially in today's world where large files are normally described in terms of gigabytes and terabytes. Well, there are two good reasons why you'd prefer a smaller file size.

Firstly, even the newest mobile devices on the market today have slower connections than you'll find on most desktop machines. Also, due to the constrained memory on smartphones, mobile browsers tend to have limited caching compared to their bigger desktop cousins, so a smaller helper library means a better chance of keeping your actual JavaScript code in the cache and thus preventing your app from slowing down on the device.

Secondly, a smaller library helps with response time. Although 93k versus 9k doesn't sound like a huge difference, it means fewer network packets, and as your application code that relies on the library can't execute until the library's code is loaded, using the smaller library can shave precious milliseconds off that ever-so-important time to first page load and will make your web page or application seem more responsive to users.

Having said all that, there are a few downsides to using Zepto.js that you should be aware of before deciding to plump for it instead of jQuery. Most importantly, Zepto.js currently makes no attempt to support Internet Explorer. Its origins as a library to replace jQuery on mobile phones meant that it mainly targeted WebKit browsers, primarily on iOS. As the library has matured, it has expanded to cover Firefox, but general IE support is unlikely to happen (at the time of writing, there is a patch waiting to go into the main trunk that would enable support for IE10 and up, but anything lower than Version 10 is probably never going to be supported). In this guide, we'll show you how to include jQuery as a fallback, so that if you do decide to use Zepto.js on the browsers it supports, you can still maintain some compatibility with Internet Explorer for users running older, unsupported browsers.

The other pitfall that you need to be aware of is that Zepto.js only claims to be a jQuery-like library, not a 100 percent compatible one. In the majority of web application development this won't be an issue, but when it comes to integrating plugins and operating at the margins of the libraries, there will be some differences that you will need to know about to prevent possible errors and confusion, and we'll be showing you some of them later in this guide.
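As a quick preview of that fallback approach, one common pattern is to feature-detect a capability that Zepto.js depends on and write out a script tag for whichever library will actually work. This is only a minimal sketch; the file paths are placeholders and the exact snippet used later in this guide may differ:

<script>
  // Zepto.js relies on __proto__, which older versions of Internet Explorer lack,
  // so load Zepto.js where it is supported and fall back to jQuery elsewhere.
  document.write('<script src=' +
    ('__proto__' in {} ? 'js/zepto.min' : 'js/jquery.min') +
    '.js><\/script>');
</script>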
In terms of performance, Zepto.js is a little slower than jQuery, though this varies by browser (take a look at http://jsperf.com/zepto-vs-jquery-2013/ to see the latest benchmark results). In general, it can be up to twice as slow for repeated operations such as finding elements by class name or ID. However, on mobile devices, this is still around 50,000 operations per second. If you really require high performance from your mobile site, then you need to examine whether you can use raw JavaScript instead; the native getElementsByClassName() function is almost one hundred times faster than Zepto.js and jQuery in the preceding benchmark.

Writing plugins

Eventually, you'll want to make your own plugins. As you can imagine, they're fairly similar in construction to jQuery plugins (so they can be compatible). But what can you do with them? Well, consider them as a macro system for Zepto.js: you can do anything that you'd do in normal Zepto.js operations, but the new methods get added to the library's namespace so you can reuse them in other applications.

Here is a plugin that will take a Zepto.js collection and set all the text in it to the Helvetica font family at a user-supplied font size (in pixels for this example). Note that the size is treated as optional, so the method also works when no options are passed:

(function($){
  $.extend($.fn, {
    helveticaize: function( options ){
      options = options || {};
      $.each(this, function(){
        // always switch the font family; only set the size if one was supplied
        var css = { "font-family": "Helvetica" };
        if (options['size']) {
          css["font-size"] = options['size'] + 'px';
        }
        $(this).css(css);
      });
      return this;
    }
  })
})(Zepto || jQuery)

Then, to make all links on a page Helvetica, you can call $("a").helveticaize().

The most important part of this code is the use of the $.extend method. This adds the helveticaize property/function to the $.fn object, which contains all of the functions that Zepto.js provides. Note that you could potentially use this to redefine methods such as find(), animate(), or any other function you've seen so far. As you can imagine, this is not recommended; if you need different functionality, call $.extend and create a new function with a name like custom_find instead.

In addition, you could pass multiple new functions to $.fn with a call to $.extend, but the convention for jQuery and Zepto.js is that you provide as few functions as possible (ideally one) and offer different functionality through passed parameters (that is, through options). The reason for this is that your plugin may have to live alongside many other plugins, all of which share the same namespace in $.fn. By only setting one property, you reduce the chance of overriding a method that another plugin has defined.

In the actual definition of the method that's being added, it iterates through the objects in the collection, setting the font family and the size (if present) for all the objects in the collection. At the end, the method returns this. Why? Well, if you remember, part of the power of Zepto.js is that methods are chainable, allowing you to build up complex selectors and operations in one line. Thanks to helveticaize() returning this (which will be a collection), this newly defined method is just as chainable as all the default methods provided. This isn't a requirement of plugin methods but, where possible, you should make your plugin methods return a collection of some sort to prevent breaking a chain (and if you can't, for some reason, make sure to spell that out in your plugin's documentation).

Finally, at the end, the (Zepto || jQuery) part will immediately invoke this definition on either the Zepto object or the jQuery object. In this way, you can create plugins that work with either framework, depending on which one is present, with the caveat, of course, that your method must work in both frameworks.
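For instance, once the plugin file is loaded, the new method chains like any built-in method. Here is a quick usage sketch (the 16-pixel size and the resized class name are made-up values, purely for illustration):

// Set every paragraph to 16px Helvetica, then keep chaining collection methods
$('p').helveticaize({ size: 16 }).addClass('resized');

// Because helveticaize() returns the collection, it can sit anywhere in a chain
$('#sidebar a').addClass('nav-link').helveticaize({ size: 12 });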
Summary

In this article, we learned what Zepto.js actually is, what you can do with it, and why it's so great. We also learned how to extend Zepto.js with plugins.

Resources for Article:

Further resources on this subject:
ASP.Net Site Performance: Improving JavaScript Loading [Article]
Trapping Errors by Using Built-In Objects in JavaScript Testing [Article]
Making a Better Form using JavaScript [Article]