How-To Tutorials - Web Development


Using Mongoid

Packt
09 Dec 2013
12 min read
Introducing Origin

Origin is the gem that provides the DSL for Mongoid queries. At first glance a question may arise: if the query is finally going to be converted to a MongoDB-compliant hash anyway, why do we need a DSL at all? Origin was extracted from the Mongoid gem and put into a new gem so that there is a standard for querying. It has no dependency on any other gem and is a standalone, pure query builder. The idea was that this could be a generic DSL that can be used even without Mongoid. So, we now have a very generic and standard querying pattern.

For example, Mongoid 2.x had the criteria any_in and any_of, and no direct support for the and, or, and nor operations. In Mongoid 2.x, the only way we could fire a $or or a $and query was like this:

Author.where("$or" => {'language' => 'English', 'address.city' => 'London'})

In Mongoid 3 we have a cleaner approach:

Author.or(:language => 'English', 'address.city' => 'London')

Origin also provides good selectors directly in our models, so this is now much more readable:

Book.gte(published_at: Date.parse('2012/11/11'))

Memory maps, delayed sync, and journals

As we have seen earlier, MongoDB stores data in memory-mapped files of at most 2 GB each. After the data is loaded for the first time into the memory-mapped files, we get almost memory-like access speeds instead of much slower disk I/O. These memory-mapped files are preallocated to ensure that there is no delay from file generation while saving data.

However, to ensure that the data is not lost, it needs to be persisted to disk. This is achieved by journaling. With journaling, every database operation is also logged and flushed to disk every 100 ms. Journaling is turned on by default in the MongoDB configuration. What is journaled is not the actual data but the operation itself. This helps in better recovery (in case of any crash) and also ensures the consistency of writes. The data written to the various collections is flushed to disk every 60 seconds. This ensures that data is persisted periodically while keeping data access almost as fast as memory.

MongoDB relies on the operating system for the memory management of its memory-mapped files. This has the advantage of inheriting OS improvements as the OS improves, and the disadvantage of giving MongoDB little control over how memory is managed.

So what happens if something goes wrong (the server crashes, the database stops, or the disk is corrupted)? To ensure durability, whenever data is saved in files, the action is logged to a file in chronological order. This is the journal entry, which is also a memory-mapped file but is synced with the disk every 100 ms. Using the journal, the database can be easily recovered in case of a crash. So, in the worst-case scenario, we could potentially lose 100 ms of information. This is a fair price to pay for the benefits of using MongoDB.

Journaling makes MongoDB a very robust and durable database. It also helps us decide when to use MongoDB and when not to: 100 ms is a long time for some services, such as financial core banking or stock price updates, and MongoDB is not recommended for such applications. For most cases that do not involve the heavy multi-table transactions typical of financial applications, MongoDB can be suitable.
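The flush and journal intervals mentioned above are server-side (mongod) settings rather than Mongoid ones. As a rough illustration only, using the 2.4-era ini-style configuration format (the file path and values here are assumptions, and the defaults rarely need changing), the relevant knobs look like this:

# /etc/mongodb.conf -- illustrative path and values, ini-style options
journal = true               # write-ahead journaling (on by default on 64-bit builds)
journalCommitInterval = 100  # how often the journal is flushed to disk, in milliseconds
syncdelay = 60               # how often data files are flushed to disk, in seconds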
All these things are handled seamlessly, and we don't usually need to change anything. We can control this behavior via MongoDB's configuration, but it's usually not recommended. Let's now see how we save data using Mongoid.

Updating documents and attributes

As with the ActiveModel specification, save will update the changed attributes and return the updated object; otherwise it will return false on failure. The save! method will raise an exception on error. In both cases, if we pass validate: false as a parameter to save, it will bypass the validations.

A lesser-known persistence option is the upsert action. An upsert creates a new document if it does not find one, and overwrites the object if it does. A good reason to use upsert is in the find_and_modify action. For example, suppose we want to reserve a book in our Sodibee system, and we want to ensure that at any one point there can be only one reservation for a book. In a traditional scenario:

t1: Request-1 searches for a book which is not reserved and finds it
t2: Now, it saves the book with the reservation information
t3: Request-2 searches for a reservation for the same book and finds that the book is reserved
t4: Request-2 handles the situation with either an error or by waiting for the reservation to be freed

So far so good! However, in a concurrent model, especially for web applications, this creates problems:

t1: Request-1 searches for a book which is not reserved and finds it
t2: Request-2 searches for a reservation for the same book and also gets the book back, since it's not yet reserved
t3: Request-1 saves the book with its reservation information
t4: Request-2 now overwrites the previous update and saves the book with its reservation information

Now we have a situation where two requests both think their reservation of the book was successful, which is against our expectations. This is a typical problem that plagues most web applications. The various ways in which we can solve it are discussed in the subsequent sections.

Write concern

MongoDB helps us ensure write consistency. This means that when we write something to MongoDB, it guarantees the success of the write operation. Interestingly, this is a configurable option and is set to acknowledged by default: the write is guaranteed because we wait for an acknowledgement before returning success. In earlier versions of Mongoid, safe: true was turned off by default, which meant the success of the write operation was not guaranteed. The write concern is configured in mongoid.yml as follows:

development:
  sessions:
    default:
      hosts:
        - localhost:27017
      options:
        write:
          w: 1

The default write concern in Mongoid is configured with w: 1, which means that the success of a write operation is guaranteed. Let's see an example:

class Author
  include Mongoid::Document

  field :name, type: String

  index({name: 1}, {unique: true, background: true})
end

Indexing blocks read and write operations. Hence, it's recommended to configure indexing in the background in a Rails application. We shall now start a Rails console and see how this reacts to a duplicate key index by creating two Author objects with the same name.
irb> Author.create(name: "Gautam")
=> #<Author _id: 5143678345db7ca255000001, name: "Gautam">
irb> Author.create(name: "Gautam")
Moped::Errors::OperationFailure:
  The operation: #<Moped::Protocol::Command
    @length=83 @request_id=3 @response_to=0 @op_code=2004 @flags=[]
    @full_collection_name="sodibee_development.$cmd"
    @skip=0 @limit=-1 @selector={:getlasterror=>1, :w=>1} @fields=nil>
  failed with error 11000: "E11000 duplicate key error index:
  sodibee_development.authors.$name_1 dup key: { : "Gautam" }"

As we can see, it has raised a duplicate key error and the document is not saved. Now, let's have some fun and change the write concern to unacknowledged:

development:
  sessions:
    default:
      hosts:
        - localhost:27017
      options:
        write:
          w: 0

The write concern is now set to unacknowledged writes. That means we do not wait for the MongoDB write to eventually succeed, but assume that it will. Now let's see what happens with the same command that had failed earlier:

irb> Author.where(name: "Gautam").count
=> 1
irb> Author.create(name: "Gautam")
=> #<Author _id: 5287cba54761755624000000, name: "Gautam">
irb> Author.where(name: "Gautam").count
=> 1

There seems to be a discrepancy here. Though the Mongoid create returned successfully, the data was not saved to the database. Since we specified background: true for the name index, the document creation seemed to succeed because MongoDB had not indexed it yet, and we did not wait for acknowledgement of the write operation. So, when MongoDB tries to index the data in the background, it realizes that the index criterion is not met (since the index is unique) and deletes the document from the database. Because that happens in the background, there is no way to find out about it on the console or in our Rails application, and we end up with an inconsistent result.

So, how can we solve this problem? There are various options:

Leave the Mongoid default write concern configuration alone. By default it is w: 1 and it will raise an exception. This is the recommended approach, as prevention is better than cure!

Do not specify the background: true option. This creates indexes in the foreground. However, this approach is not recommended, as it can cause a drop in performance because index creation blocks read and write access.

Add drop_dups: true. This deletes data, so you have to be really careful when using this option.

Other options to the index command create different types of indexes:

sparse: index({twitter_name: 1}, {sparse: true}) creates a sparse index, that is, only the documents containing the indexed fields are indexed. Use this with care, as you can get incomplete results.

2d, 2dsphere: index({:location => "2dsphere"}) creates a two-dimensional spherical index.

Text index

MongoDB 2.4 introduced text indexes, which are as close to free-text search indexes as it gets. However, they do only basic text indexing; that is, they support stop words and stemming, and assign a relevance score to each search. Text indexes are still an experimental feature in MongoDB, and they are not recommended for extensive use. Use ElasticSearch, Solr (Sunspot), or ThinkingSphinx instead. The following code snippet shows how we can specify a text index with weights:

index(
  {
    "name" => 'text',
    "last_name" => 'text'
  },
  {
    weights: {
      'name' => 10,
      'last_name' => 5,
    },
    name: 'author_text_index'
  }
)

There is no direct search support in Mongoid (as yet).
So, if you want to invoke a text search, you need to hack around a little:

irb> srch = Mongoid::Contextual::TextSearch.new(Author.collection, Author.all, 'john')
=> #<Mongoid::Contextual::TextSearch
  selector: {}
  class: Author
  search: john
  filter: {}
  project: N/A
  limit: N/A
  language: default>
irb> srch.execute
=> {"queryDebugString"=>"john||||||", "language"=>"english",
 "results"=>[
   {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058345db7c843f00030b'), "name"=>"Bettye Johns"}},
   {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058345db7c843f00046d'), "name"=>"John Pagac"}},
   {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058345db7c843f000578'), "name"=>"Jeanie Johns"}},
   {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058445db7c843f0007e7')...
   {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058a45db7c843f0025f1'), "name"=>"Alford Johns"}}],
 "stats"=>{"nscanned"=>25, "nscannedObjects"=>0, "n"=>25, "nfound"=>25, "timeMicros"=>31103},
 "ok"=>1.0}

By default, text search is disabled in the MongoDB configuration. We need to turn it on by adding setParameter = textSearchEnabled=true to the MongoDB configuration file, typically /usr/local/mongo.conf.

The search returns a result with statistical data as well as the documents and their relevance scores. Interestingly, it also specifies the language. There are a few more things we can do with the search result. For example, we can see the statistical information as follows:

irb> srch.stats
=> {"nscanned"=>25, "nscannedObjects"=>0, "n"=>25, "nfound"=>25, "timeMicros"=>31103}

We can also convert the data into our Mongoid model objects by using project, as shown in the following command:

irb> srch.project(:name).to_a
=> [#<Author _id: 51fc058345db7c843f00030b, name: "Bettye Johns", last_name: nil, password: nil>,
    #<Author _id: 51fc058345db7c843f00046d, name: "John Pagac", last_name: nil, password: nil>,
    #<Author _id: 51fc058345db7c843f000578, name: "Jeanie Johns", last_name: nil, password: nil> ...

Some important things to remember are as follows:

Text indexes can be very heavy on memory. They can return the documents themselves, so the result can be large.

We can use multiple keys (or filters) along with a text search. For example, an index defined with index({'state': 1, name: 'text'}) mandates the use of state for every text search.

A search for "john doe" will result in a search for "john" or "doe" or both.

A search for "john" and "doe" will search for "john" and "doe" in a random order.

A search for "\"john doe\"", that is, with escaped quotes, will search for documents containing the exact phrase "john doe".

A lot more detail can be found at http://docs.mongodb.org/manual/tutorial/search-for-text/

Summary

This article serves as a reference for using Mongoid, with code samples and explanations that help in understanding its various features.


Touch Events

Packt
26 Nov 2013
7 min read
The why and when of touch events

Touch events come in a few different flavors. There are taps, swipes, rotations, pinches, and more complicated gestures available for you to make your users' experience more enjoyable. Each type of touch event has an established convention, which is important to know so that your users have minimal friction when using your website for the first time.

The tap is analogous to the mouse click in a traditional desktop environment, whereas the rotate, pinch, and related gestures have no equivalent. This presents an interesting problem: what can those events be used for? Pinch events are typically used to scale views (Google Maps on an iPad, for example, uses pinch events to control the map's zoom level), while rotate events generally rotate elements on screen. Swipes can be used to move content from left to right or top to bottom. Multi-finger gestures can differ according to your website's use cases, but it's still worthwhile checking whether a similar interaction, and thus an established convention, already exists.

Web applications can benefit greatly from touch events. For example, imagine a paint-style web application. It could use pinches to control zoom, rotations to rotate the canvas, and swipes to move between color options. There is almost always an intuitive way of using touch events to enhance a web application.

In order to get comfortable with touch events, it's best to get out there and start using them practically. To this end, we will be building a paint tool with a difference: instead of using the traditional mouse pointer as the selection and drawing tool, we will exclusively use touch events. Together, we can learn just how awesome touch events are and the pleasure they can afford your users. Touch events are the way of the future, and adding them to your developer tool belt will mean that you can stay at the cutting edge. While MC Hammer may not agree with these events, the rest of the world will appreciate touching your applications.

Testing while keeping your sanity (Simple)

Touch events are awesome on touch-enabled devices. Unfortunately, your development environment is generally not touch-enabled, which means testing these events can be trickier than normal. Thankfully, there are a bunch of methods to test these events on non-touch-enabled devices, so you don't have to constantly switch between environments. That said, it's still important to do the final testing on actual devices, as there can be subtle differences.

How to do it...

Learn how to do initial testing with Google Chrome.
Debug remotely with Mobile Safari and Safari.
Debug further with simulators and emulators.
Use general tools to debug your web application.

How it works...

As mentioned earlier, testing touch events generally isn't as straightforward as the normal update-reload-test flow in web development. Luckily, the traditional desktop browser can still get you some of the way when it comes to testing these events. While you are generally targeting Mobile Safari on iOS and Chrome, or Browser on Android, it's best to do development and initial testing in Chrome on the desktop. Chrome has a handy feature in its debugging tools called Emulate touch events. This allows a lot of your mouse-related interactions to trigger their touch equivalents.
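Emulated or real, these events reach your page through the same listeners. As a minimal sketch of the kind of handlers the paint tool described above would register (the element ID and the console logging are purely illustrative, not from the original recipe):

var canvas = document.getElementById('paint-canvas'); // illustrative element ID

canvas.addEventListener('touchstart', function (event) {
  event.preventDefault();               // suppress the emulated mouse events and scrolling
  var touch = event.touches[0];
  console.log('stroke starts at', touch.pageX, touch.pageY);
});

canvas.addEventListener('touchmove', function (event) {
  event.preventDefault();
  // event.touches holds one entry per finger currently on the screen;
  // two entries is the starting point for pinch and rotate math
  for (var i = 0; i < event.touches.length; i++) {
    console.log('finger', i, 'is at', event.touches[i].pageX, event.touches[i].pageY);
  }
});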
You can find this setting by opening the Web Inspector, clicking on the Settings cog, switching to the Overrides tab, and enabling Emulate touch events (if the option is grayed out, simply click on Enable at the top of the screen). Using this emulation, we are able to test many touch interactions; however, there are certain patterns (such as scaling and rotating) that aren't really possible using only emulated events and the mouse pointer. To test these correctly, we need to find the middle ground between testing on a desktop browser and testing on actual devices.

There's more...

Remember the following about testing with simulators and browsers.

Device simulators

While testing with Chrome will get you some of the way, eventually you will want to use a simulator or emulator for the target device, be it a phone, tablet, or other touch-enabled device.

Testing in Mobile Safari

The iOS Simulator, for testing Mobile Safari on iPhone, iPod, and iPad, is available only on OS X. You must first download Xcode from the App Store. Once you have Xcode, go to Xcode | Preferences | Downloads | Components, where you can download the latest iOS Simulator.

You can then open the iOS Simulator from Xcode by accessing Xcode | Open Developer Tool | iOS Simulator. Alternatively, you can make a direct alias: open the Xcode icon's context menu in the Applications folder, select Show Package Contents, and navigate to Xcode.app/Contents/Applications. Here you can create an alias and drag it anywhere using Finder.

Once you have the simulator (or an actual device), you can attach Safari's Web Inspector to it, which makes debugging very easy. First, open Safari and then launch Mobile Safari in either the iOS Simulator or on your actual device (ensure it's plugged in via its USB cable). Ensure that Web Inspector is enabled in Mobile Safari's settings via Settings | Safari | Advanced | Web Inspector. Then, in Safari, go to Develop | <your device name> | <tab name>. You can now use the Web Inspector to inspect your remote Mobile Safari instance.

Testing in Mobile Safari in the iOS Simulator allows you to use the mouse to trigger all the relevant touch events. In addition, you can hold down the Alt key to test pinch gestures.

Testing on Android's Chrome

Testing Android on the desktop is a little more complicated, due to the nature of the Android emulator. Being an emulator and not a simulator, its performance suffers (in the name of accuracy). It also isn't so straightforward to attach a debugger. You can download the Android SDK from http://developer.android.com/sdk. Inside the tools directory, you can launch the Android Emulator by setting up a new Android Virtual Device (AVD). You can then launch the browser to test the touch events. Keep in mind to use 10.0.2.2 instead of localhost to access locally running servers. To debug with a real device and Google Chrome, the official documentation can be followed at https://developers.google.com/chrome-developer-tools/docs/remote-debugging.

When there isn't a built-in debugger or inspector, you can use a generic tool such as Web Inspector Remote (Weinre). This tool is platform-agnostic, so you can use it for Mobile Safari, Android's Browser, Chrome for Android, Windows Phone, BlackBerry, and so on. It can be downloaded from http://people.apache.org/~pmuellr/weinre.
To start using Weinre, you include a JavaScript file on the page you want to test and then run the testing server. The URL of this JavaScript file is given to you when you run the weinre command from its download directory. Once Weinre is running on your desktop, you can use its Web Inspector-like interface to do remote debugging and inspect your touch events; you can send messages to console.log() and use a familiar interface for other debugging.

Summary

This article discussed touch events, the conventions around them, the limitations you run into when the environment is unfavorable, and the methods for testing these events on non-touch-enabled devices.


CodeIgniter MVC – The Power of Simplicity!

Packt
26 Nov 2013
6 min read
"Simplicity Wins Big!"

Back in the 80s there was a programming language, Ada, that many contracts required to be used. Ada was complex and hard to maintain compared to C/C++. Today Ada fades like Pascal; C/C++ is the simplicity winner in the real-time systems arena.

In telecom, there were two standards for network device management protocols in the 90s: CMIP (Common Management Information Protocol) and SNMP (Simple Network Management Protocol). Initially, all telecom requirement papers required CMIP support. Several years later, research found that it took roughly ten times the effort to develop and maintain the same system based on CMIP compared to SNMP. SNMP is the simplicity winner in the network management arena!

In VoIP, or media over IP, H.323 and SIP (Session Initiation Protocol) were competing protocols in the early 2000s. H.323 encoded its messages in a cryptic binary format, while SIP made everything textual and easy to understand in a text editor. Today almost all endpoint devices run SIP, while H.323 has become a niche protocol for the VoIP backbone. SIP is the simplicity winner in the VoIP arena!

Back in 2010 I was looking for a good PHP platform on which to develop the web application for my startup's first product, Logodial Zappix (http://zappix.com). I got a recommendation to use Drupal for this. I tried the platform and found it very heavy to manipulate and change for the exact user interaction flow and experience I had in mind. Many times I had to compromise, and the overhead of the platform was horrible. Make a Hello World app and tons of irrelevant code get into the project. Try to write free-form JavaScript and you find yourself struggling with the platform, which keeps you from the creativity of client-side JavaScript and its add-ons. I decided to look for a better platform for my needs.

Later on I heard about Zend Framework, an MVC (Model-View-Controller) framework. I tried to work with it, as it is MVC-based with a lot of OOP usage, but I found it heavy. The documentation seems great at first sight, but the more I used it, looking for vivid examples and explanations, the more I found myself in endless loops of links, lacking clear explanations and examples. The feeling was that for every matchbox-moving task I needed a semi-trailer of declarations and calls to handle it, though it was MVC-typed, which I greatly liked.

Continuing my search, I was looking for a simple but powerful MVC-based PHP framework, PHP being my favorite language for the server side. One day in early 2011 I got a note from a friend that there was a light and cool platform named CodeIgniter (CI in brief). I checked the documentation at http://ellislab.com/codeigniter/user-guide/ and was amazed by the very clean, simple, well-organized, and well-explained browsing experience. Examples? Yes, lots of clear examples, with a great community. It was so great and simple. I felt as though the platform designers had made their best effort to produce the simplest and most vivid code, reusable and in a clean OOP fashion, from the infrastructure down to the last function. I tried building a web app as a trial, loading and using helpers and libraries, and greatly loved the experience. Fast forward: today I see a matured CodeIgniter as a Lego-like playground that I know well. I've written tons of models, helpers, libraries, controllers, and views.
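To give a flavor of that simplicity, here is a minimal controller-and-view pair in the CodeIgniter 2.x style of that period (the class, file, and variable names are illustrative, not taken from the article):

application/controllers/hello.php (illustrative):

<?php
class Hello extends CI_Controller {

    public function index()
    {
        // Load a helper only where it is needed, then hand data to a view
        $this->load->helper('url');
        $data['message'] = 'Hello from CodeIgniter';
        $this->load->view('hello_view', $data);
    }
}

application/views/hello_view.php (illustrative):

<h1><?php echo $message; ?></h1>
<p><a href="<?php echo site_url('hello'); ?>">Reload this page</a></p>

With CI's default routing, requesting index.php/hello runs Hello::index() and renders the view; no configuration beyond these two files is needed.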
CodeIgniter's simplicity enables me to do things fast and keep them clear, well maintained, and expandable. Over time I've gathered the most useful helpers and libraries, Ajax server-side and browser-side solutions for reuse, and good links to useful add-ons such as the free grid plugin for CI at http://www.grocerycrud.com/, which keeps improving day by day. Today I see CodeIgniter as a mature, scalable (see the AT&T and Sprint call center web apps based on CI) champion of reusability and simplicity.

The high-level architecture of the CodeIgniter MVC has the controller(s) as the hub of the application session.

The main CI controller use cases are:

Handling requests from the web browser as HTTP URI calls, either with submitted parameters (for example, submitting a login with credentials) or without parameters (for example, home page navigation).

Handling asynchronous Ajax requests from the web client, mostly as JSON HTTP POST requests and responses.

Serving CRON job requests that create HTTP URI requests calling controller methods, similar to browser navigation, silently from the CRON PHP module.

The main CI view features:

Rendered by a controller, optionally with a set of parameters (scalars, arrays, objects).

Full open access to all the helpers, libraries, and models that the rendering controller has.

The freedom to integrate any JavaScript or third-party web client-side plugins.

The main CI helper features:

Flat function sets protected from duplication risks.

Can be loaded for use by any controller and accessed by any rendered view.

Can access any CI resource or library via the get_instance() service.

The main CI library features:

OOP classes that can extend other third-party classes (for example, see the Google Map wrapper example in the new book).

Can access any of the CI resources of other libraries and built-in services via get_instance().

Can be used by the CI project controllers and all their rendered views.

The main CI model features:

Similar to libraries, but with access to the default database, which can be expanded to multiple databases, and to any other CI resource via get_instance().

OOP classes that can extend other third-party classes (for example, see the Google Map wrapper example in the new book).

Can access any of the CI resources of other libraries and built-in services via get_instance().

CodeIgniter keeps increasing in popularity because it has a simple yet high-quality OOP core that enables creativity, reusability, and clear naming conventions, and because it is easy to extend (a user class extends a CI class), with a growing set of third-party application plugins (packages of views, models, libraries, and/or helpers). I have found CodeIgniter flexible, a great enabler of reusability, with a light infrastructure that supports developer creativity and is powered by an active global community. Day to day, CI offers code clarity, high performance, and a minimal, controllable footprint (you decide which helpers, libraries, and models to load for each controller). Above all, CI is blessed with a very fast learning curve for PHP developers and many blogs and community sites for sharing knowledge and for raising and resolving issues and changes. CodeIgniter is the simplicity winner I've found for web app MVC on the server side.

Summary

This article introduced the CodeIgniter framework as a starting point for web-based applications.


Creating Blog Content in WordPress

Packt
25 Nov 2013
18 min read
Posting on your blog

The central activity you'll be doing with your blog is adding posts. A post is like an article in a magazine; it's got a title, content, and an author (in this case you, though WordPress allows multiple authors to contribute to a blog). If a blog is like an online diary, every post is an entry in that diary. A blog post also has a lot of other information attached to it, such as a date, excerpt, tags, and categories. In this section, you will learn how to create a new post and what kind of information to attach to it.

Adding a simple post

Let's review the process of adding a simple post to your blog. Whenever you want to add content or carry out a maintenance process on your WordPress website, you have to start by logging in to the WP Admin (WordPress Administration panel) of your site. To get to the admin panel, just point your web browser to http://yoursite.com/wp-admin. Remember that if you have installed WordPress in a subfolder (for example, blog), your URL has to include the subfolder (that is, http://yoursite.com/blog/wp-admin).

When you first log in to the WP Admin, you'll be at the Dashboard. The Dashboard has a lot of information on it, so don't worry about that right now. The quickest way to get to the Add New Post page at any time is to click on + New and then the Post link in the top bar.

To add a new post to your site quickly, all you have to do is:

Type a title into the text field under Add New Post (for example, Making Lasagne).

Type the text of your post in the content box. Note that the default view is Visual, but you also have a choice of the Text view.

Click on the Publish button, which is at the far right. Note that you can choose to save a draft or preview your post as well.

Once you click on the Publish button, you have to wait while WordPress performs its magic. You'll still be on the Edit Post screen, but a message will appear telling you that your post was published, along with a View post link. If you view the front page of your site, you'll see that your new post has been added at the top (newest posts are always at the top).

Common post options

Now that we've reviewed the basics of adding a post, let's investigate some of the other options on the Add New Post and Edit Post pages. In this section we'll look at the most commonly used options, and in the next section we'll look at the more advanced options.

Categories and tags

Categories and tags are two types of information that you can add to a blog post. We use them to organize the information in your blog by topic and content (rather than just by, say, date), and to help visitors find what they are looking for on your blog.

Categories are primarily used for structural organization. They can be hierarchical, meaning a category can be a parent of another category. A relatively busy blog will probably have at least 10 categories, but probably not more than 15 or 20. Each post in such a blog is likely to have from one up to maybe four categories assigned to it. For example, a blog about food and cooking might have these categories: Cooking Adventures, In The Media, Ingredients, Opinion, Recipes Found, Recipes Invented, and Restaurants. Of course, the numbers mentioned are just suggestions; you can create and assign as many categories as you like. The way you structure your categories is entirely up to you as well.
There are no hard rules regarding this in the WordPress world, just guidelines like these.

Tags are primarily used as shorthand for describing the topics covered in a particular blog post. A relatively busy blog will have anywhere from 15 to even 100 tags in use. Each post in such a blog is likely to have 3 to 10 tags assigned to it. For example, a post on the food blog about a recipe for butternut squash soup may have these tags: soup, vegetarian, autumn, hot, and easy. Again, you can create and assign as many tags as you like.

Let's add a new post to the blog. After you give it a title and content, let's add tags and categories. To add tags, just type your list of tags into the Tags box on the right, separated by commas, and then click on the Add button. The tags you just typed in will appear below the text field with little x buttons next to them. You can click on an x button to delete a tag. Once you've used some tags in your blog, you'll be able to click on the Choose from the most used tags link in this box so that you can easily reuse tags.

Categories work a bit differently than tags. Once you get your blog going, you'll usually just check the boxes next to existing categories in the Categories box. In this case, as we don't have any existing categories, we'll have to add one or two. In the Categories box on the right, click on the + Add New Category link. Type your category into the text field, and click on the Add New Category button. Your new category will show up in the list, already checked.

If in the future you want to add a category that needs a parent category, select — Parent Category — from the pull-down menu before clicking on the Add New Category button. If you want to manage more details about your categories (move them around, rename them, assign parent categories, and assign descriptive text), you can do so on the Categories page.

Click on the Publish button, and you're done (you can instead choose to schedule a post; we'll explore that in detail in a few pages). When you look at the front page of your site, you'll see your new post at the top, your new category in the sidebar, and the tags and category (that you chose for your post) listed under the post itself.

Images in your posts

Almost every good blog post needs an image! An image gives the reader an instant idea of what the post is about, and it draws people's attention as well. WordPress makes it easy to add an image to your post, control default image sizes, make minor edits to that image, and designate a featured image for your post.

Adding an image to a post

Luckily, WordPress makes adding images to your content very easy. Let's add an image to the post we just created. You can click on Edit underneath your post on the front page of your site to get there quickly. Alternatively, go back to the WP Admin, open Posts in the main menu, and then click on the post's title.

To add an image to a post, first you'll need to have that image on your computer, or know the exact URL pointing to the image if it's already online. Before you get ready to upload an image, make sure that your image is optimized for the Web. Huge files will upload slowly and slow down the process of viewing your site. To give a good example here, I'm using a photo of my own so I don't have to worry about any copyright issues (always make sure to use only images that you have the right to use; copyright infringement online is a serious problem, to say the least).
I know it's on the desktop of my computer. Once you have a picture on your computer and know where it is, carry out the following steps to add the photo to your blog post.

Click on the Add Media button, which is right above the content box and below the title box. The box that appears allows you to do a number of different things regarding the media you want to include in your post. The most user-friendly feature here, however, is the drag-and-drop support. Just drag the image from your desktop and drop it into the center area of the page labeled Drop files anywhere to upload.

Immediately after dropping the image, the uploader bar will show the progress of the operation, and when it's done, you'll be able to do some final tuning. The fields that are important right now are Title, Alt Text, Alignment, Link To, and Size. Title is a description of the image. Alt Text is a phrase that appears instead of the image in case the file goes missing or other problems present themselves. Alignment tells the image whether to have text wrap around it and whether it should be aligned right, left, or center. Link To instructs WordPress whether or not to link the image to anything (a common choice is None). Size is the size of the image.

Once you have all of the above filled out, click on Insert into post. The box will disappear, and your image will show up in the post, right where your cursor was prior to clicking on the Add Media button, on the edit page itself (in the visual editor, that is; if you're using the text editor, the HTML code of the image will be displayed instead). Now click on the Update button, and go and look at the front page of your site again. There's your image!

Controlling default image sizes

You may be wondering about those image sizes. What if you want bigger or smaller thumbnails? Whenever you upload an image, WordPress creates three versions of that image for you. You can set the pixel dimensions of those three versions by opening Settings in the main menu and then clicking on Media. This takes you to the Media Settings page, where you can specify the size of uploaded images for:

Thumbnail size
Medium size
Large size

If you change the dimensions on this page and click on the Save Changes button, only images you upload in the future will be affected. Images you've already uploaded to the site will have had their thumbnail, medium, and large versions created using the old settings. It's a good idea to decide early on what you want your three media sizes to be, so you can set them and have them applied to all images right from the start.

Another thing about uploading images is the whole craze around HiDPI displays, also called Retina displays. Currently, WordPress is in a transitional phase with images and modern display technology; Retina-ready functionality was introduced quite recently, in WordPress 3.5. In short, if you want to make your images Retina-compatible (meaning that they look good on iPads and other devices with HiDPI screens), you should upload them at twice the dimensions you plan to display them at. For example, if you want your image to be presented as 800 pixels wide and 600 pixels high, upload it as 1,600 pixels wide and 1,200 pixels high. WordPress will manage to display it properly either way, and whoever visits your site from a modern device will see a high-definition version of the image.
In future versions, WordPress will surely provide a more managed way of handling Retina-compatible images.

Editing an uploaded image

As of WordPress 2.9, you can make minor edits to images you've uploaded. In fact, every image that has previously been uploaded to WordPress can be edited. To do this, go to the Media Library by clicking on the Media button in the main sidebar. What you'll see is a standard WordPress listing (similar to the one we saw while working with posts) presenting all media files and allowing you to edit each one. When you click on the Edit link and then the Edit Image button on the subsequent screen, you'll enter the Edit Media section. Here, you can perform a number of operations to make your image just perfect. As it turns out, WordPress does a good enough job with simple image tuning that you don't really need expensive software such as Photoshop for this. Among the possibilities you'll find cropping, rotating, and flipping vertically and horizontally.

For example, you can use your mouse to draw a crop box. On the right, in the box marked Image Crop, you'll see the pixel dimensions of your selection. Click on the Crop icon (top left), then the Thumbnail radio button (on the right), and then Save (just below your photo). You now have a new thumbnail! Of course, you can adjust any other version of your image just by making a different selection prior to hitting the Save button. Play around a little to become familiar with the details.

Designating a featured image

As of WordPress 2.9, you can designate a single image that represents your post. This is referred to as the featured image. Some themes will make use of this, and some will not. The default theme, the one we've been using, is named Twenty Thirteen, and it uses the featured image right above the post on the front page. Depending on the theme you're using, its behavior with featured images can vary, but in general, every modern theme supports them in one way or another.

To set a featured image, go to the Edit Post screen. In the sidebar you'll see a box labeled Featured Image. Just click on the Set featured image link. After doing so, you'll see a pop-up window very similar to the one we used while uploading images. Here, you can either upload a completely new image or select an existing image by clicking on it. All you have to do now is click on the Set featured image button in the bottom right corner. After completing the operation, you can see what your new image looks like on the front page. Keep in mind that WordPress uses featured images in multiple places, not only the front page, and as mentioned above, much of this behavior depends on your current theme.

Using the visual editor versus the text editor

WordPress comes with a visual editor, otherwise known as a WYSIWYG editor (pronounced wissy-wig, and standing for What You See Is What You Get). This is the default editor for typing and editing your posts. If you're comfortable with HTML, you may prefer to write and edit your posts using the text editor, which is particularly useful if you want to add special content or styling. To switch from the rich text editor to the text editor, click on the Text tab next to the Visual tab at the top of the content box. You'll see your post in all its raw HTML glory, and you'll get a new set of buttons that lets you quickly bold and italicize text, as well as add link code, image code, and so on.
You can make changes and swap back and forth between the tabs to see the result. Even though the text editor allows you to use some HTML elements, it does not offer fully fledged HTML support. For instance, using <p> tags is not necessary in the text editor, as they will be stripped by default; to create a new paragraph in the text editor, all you have to do is press the Enter key twice. That being said, the text editor is currently the only way to use HTML tables in WordPress (within posts and pages). You can place your table content inside <table><tr><td> tags and WordPress won't alter it in any way, effectively allowing you to create the exact table you want. Another thing the text editor is commonly used for is introducing custom HTML parameters in the <img /> and <a> tags, and custom CSS classes in other popular tags. Some content creators actually prefer working with the text editor rather than the visual editor because it gives them much more control and more certainty about the way their content is going to be presented on the frontend.

Lead and body

One of the many interesting publishing features WordPress has to offer is the concept of the lead and the body of the post. This may sound like a strange thing, but it's actually quite simple. When you're publishing a new post, you don't necessarily want to display its whole contents right away on the front page. A much more user-friendly approach is to display only the lead, and then display the complete post under its individual URL.

Achieving this in WordPress is very simple. All you have to do is use the Insert More Tag button available in the visual editor (or the more button in the text editor). Simply place your cursor exactly where you want to break your post (the text before the cursor will become the lead) and then click on the Insert More Tag button. An alternative way of using this tag is to switch to the text editor and input the tag manually, which is <!--more-->. Both approaches produce the same result. Clicking on the main Update button will save the changes. On the front page, most WordPress themes display such posts by presenting the lead along with a Continue reading link, and then the whole post (both the lead and the rest of the post) is displayed under the post's individual URL.

Drafts, pending articles, timestamps, and managing posts

There are four additional, simple but common, items I'd like to cover in this section: drafts, pending articles, timestamps, and managing posts.

Drafts

WordPress gives you the option to save a draft of your post so that you don't have to publish it right away but can still save your work. If you've started writing a post and want to save a draft, just click on the Save Draft button at the right (in the Publish box) instead of the Publish button. Even if you don't click on the Save Draft button, WordPress will attempt to save a draft of your post for you about once a minute. You'll see this in the area just below the content box: the text will say Saving Draft... and then show the time of the last saved draft. At this point, after a manual save or an autosave, you can leave the Edit Post page and do other things. You'll be able to access all of your draft posts from the Dashboard or from the Edit Posts page. In essence, drafts are meant to hold your work in progress: all the articles that haven't been finished yet, or haven't even been started, and everything in between.
Pending articles

Pending articles is a functionality that's a lot more helpful for multi-author blogs than for single-author blogs. In a bigger publishing structure, different individuals are responsible for different areas of the publishing process. WordPress supports such a structure by providing a way to save articles as Pending Review. In an editor-author relationship, if an editor sees a post marked as Pending Review, they know that they should have a look at it and prepare it for final publication.

That's the theory; now here's how to do it. While creating a new post, click on the Edit link right next to the Status: Draft label. You'll be presented with a drop-down menu from which you can select Pending Review; then click on the OK button. Now just click on the Save as Pending button that appears in place of the old Save Draft button, and you have a shiny new article that's pending review.

Timestamps

WordPress will also let you alter the timestamp of your post. This is useful if you are writing a post today that you wish you'd published yesterday, or if you're writing a post in advance and don't want it to show up until the right day. By default, the timestamp will be set to the moment you publish your post. To change it, find the Publish box and click on the Edit link (next to the calendar icon and Publish immediately); fields will show up with the current date and time for you to change. Change the details, click on the OK button, and then click on Publish to publish your post (or save a draft).

Managing posts

If you want to see a list of your posts so that you can easily skim and manage them, go to the WP Admin and navigate to Posts in the main menu. There are many things you can do on this page, as with every management page in the WP Admin.


Working with Data Components

Packt
22 Nov 2013
15 min read
Introducing the DataList component

The DataList component displays a collection of data in a list layout with several display types, and supports AJAX pagination. The DataList component iterates through a collection of data and renders its child components for each item.

Let us see how to use <p:dataList> to display a list of tag names as an unordered list:

<p:dataList value="#{tagController.tags}" var="tag" type="unordered" itemType="disc">
    #{tag.label}
</p:dataList>

The preceding <p:dataList> component displays tag names as an unordered list of elements marked with disc-type bullets. The valid type options are unordered, ordered, definition, and none. We can use type="unordered" to display items as an unordered collection along with various itemType options such as disc, circle, and square. By default, type is set to unordered and itemType is set to disc. We can set type="ordered" to display items as an ordered list with various itemType options such as decimal, A, a, and i, representing numbers, uppercase letters, lowercase letters, and Roman numerals respectively.

Time for action – displaying unordered and ordered data using DataList

Let us see how to display tag names as unordered and ordered lists with various itemType options.

Create <p:dataList> components to display items as unordered and ordered lists using the following code:

<h:form>
    <p:panel header="Unordered DataList">
        <h:panelGrid columns="3">
            <h:outputText value="Disc"/>
            <h:outputText value="Circle" />
            <h:outputText value="Square" />
            <p:dataList value="#{tagController.tags}" var="tag" itemType="disc">
                #{tag.label}
            </p:dataList>
            <p:dataList value="#{tagController.tags}" var="tag" itemType="circle">
                #{tag.label}
            </p:dataList>
            <p:dataList value="#{tagController.tags}" var="tag" itemType="square">
                #{tag.label}
            </p:dataList>
        </h:panelGrid>
    </p:panel>
    <p:panel header="Ordered DataList">
        <h:panelGrid columns="4">
            <h:outputText value="Number"/>
            <h:outputText value="Uppercase Letter" />
            <h:outputText value="Lowercase Letter" />
            <h:outputText value="Roman Letter" />
            <p:dataList value="#{tagController.tags}" var="tag" type="ordered">
                #{tag.label}
            </p:dataList>
            <p:dataList value="#{tagController.tags}" var="tag" type="ordered" itemType="A">
                #{tag.label}
            </p:dataList>
            <p:dataList value="#{tagController.tags}" var="tag" type="ordered" itemType="a">
                #{tag.label}
            </p:dataList>
            <p:dataList value="#{tagController.tags}" var="tag" type="ordered" itemType="i">
                #{tag.label}
            </p:dataList>
        </h:panelGrid>
    </p:panel>
</h:form>

Implement the TagController.getTags() method to return a collection of tag objects:

public class TagController {
    private List<Tag> tags = null;

    public TagController() {
        tags = loadTagsFromDB();
    }

    public List<Tag> getTags() {
        return tags;
    }
}

What just happened?

We have created DataList components to display tag names as an unordered list using type="unordered" and as an ordered list using type="ordered", with the various supported itemType values.

Using DataList with pagination support

DataList has built-in pagination support that can be enabled by setting paginator="true". When pagination is enabled, the various page navigation options are displayed using the default paginator template. We can customize the paginator template to display only the desired options.
The paginator can be customized using the paginatorTemplate option, which accepts the following keys of UI controls:

FirstPageLink
LastPageLink
PreviousPageLink
NextPageLink
PageLinks
CurrentPageReport
RowsPerPageDropdown

Note that {RowsPerPageDropdown} has its own template, and the options to display are provided via the rowsPerPageTemplate attribute (for example, rowsPerPageTemplate="5,10,15"). Also, {CurrentPageReport} has its own template, defined with the currentPageReportTemplate option. You can use the {currentPage}, {totalPages}, {totalRecords}, {startRecord}, and {endRecord} keywords within the currentPageReport template. The default is "{currentPage} of {totalPages}".

The default paginator template is "{FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink}". We can customize the paginator template to display only the desired options. For example:

{CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}

The paginator can be positioned using the paginatorPosition attribute in three different locations: top, bottom, or both (the default).

The DataList component provides the following attributes for customization:

rows: The number of rows to be displayed per page.
first: The index of the first row to be displayed. The default is 0.
paginator: Enables pagination. The default is false.
paginatorTemplate: The template of the paginator.
rowsPerPageTemplate: The template of the rowsPerPage dropdown.
currentPageReportTemplate: The template of the currentPageReport UI.
pageLinks: The maximum number of page links to display. The default value is 10.
paginatorAlwaysVisible: Defines whether the paginator should be hidden when the total data count is less than the number of rows per page. The default is true.
rowIndexVar: The name of the iterator variable referring to each row index.
varStatus: The name of the exported request-scoped variable representing the state of the iteration, the same as the varStatus attribute of <ui:repeat>.

Time for action – using DataList with pagination

Let us see how we can use the DataList component's pagination support to display five tags per page.

Create a DataList component with pagination support along with a custom paginatorTemplate:

<p:panel header="DataList Pagination">
    <p:dataList value="#{tagController.tags}" var="tag" id="tags" type="none"
                paginator="true" rows="5"
                paginatorTemplate="{CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}"
                rowsPerPageTemplate="5,10,15">
        <f:facet name="header">
            Tags
        </f:facet>
        <h:outputText value="#{tag.id} - #{tag.label}" style="margin-left:10px" />
        <br/>
    </p:dataList>
</p:panel>

What just happened?

We have created a DataList component with pagination support by setting paginator="true". We have customized the paginator template to display additional information such as CurrentPageReport and RowsPerPageDropdown. Also, we have used the rowsPerPageTemplate attribute to specify the values for RowsPerPageDropdown.
Displaying tabular data using the DataTable component

The PrimeFaces DataTable is an enhanced version of the standard JSF data table that provides various additional features, such as:

Pagination
Lazy loading
Sorting
Filtering
Row selection
Inline row/cell editing
Conditional styling
Expandable rows
Grouping and SubTable

and many more.

In our TechBuzz application, the administrator can view a list of users and enable/disable user accounts. First, let us see how we can display the list of users using a basic DataTable:

<p:dataTable id="usersTbl" var="user" value="#{adminController.users}">
    <f:facet name="header">
        List of Users
    </f:facet>
    <p:column headerText="Id">
        <h:outputText value="#{user.id}" />
    </p:column>
    <p:column headerText="Email">
        <h:outputText value="#{user.emailId}" />
    </p:column>
    <p:column headerText="FirstName">
        <h:outputText value="#{user.firstName}" />
    </p:column>
    <p:column headerText="Disabled">
        <h:outputText value="#{user.disabled}" />
    </p:column>
    <f:facet name="footer">
        Total no. of Users: #{fn:length(adminController.users)}.
    </f:facet>
</p:dataTable>

PrimeFaces 4.0 introduced the Sticky component and provides out-of-the-box support in DataTable for keeping the header visible while scrolling, using the stickyHeader attribute:

<p:dataTable var="user" value="#{adminController.users}" stickyHeader="true">
    ...
</p:dataTable>

Using pagination support

If there are a large number of users, we may want to display them in a page-by-page style. DataTable has built-in support for pagination.

Time for action – using DataTable with pagination

Let us see how we can display five users per page using pagination.

Create a DataTable component using pagination to display five records per page, using the following code:

<p:dataTable id="usersTbl" var="user" value="#{adminController.users}"
             paginator="true" rows="5"
             paginatorTemplate="{CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}"
             currentPageReportTemplate="({startRecord} - {endRecord}) of {totalRecords} Records."
             rowsPerPageTemplate="5,10,15">
    <p:column headerText="Id">
        <h:outputText value="#{user.id}" />
    </p:column>
    <p:column headerText="Email">
        <h:outputText value="#{user.emailId}" />
    </p:column>
    <p:column headerText="FirstName">
        <h:outputText value="#{user.firstName}" />
    </p:column>
    <p:column headerText="Disabled">
        <h:outputText value="#{user.disabled}" />
    </p:column>
</p:dataTable>

What just happened?

We have created a DataTable component with the pagination feature to display five rows per page. We have also customized the paginator template and provided an option to change the page size dynamically using the rowsPerPageTemplate attribute.

Using column sorting support

DataTable comes with built-in support for sorting on a single column or on multiple columns.
You can define a column as sortable using the sortBy attribute as follows: <p:column headerText="FirstName" sortBy="#{user.firstName}"> <h:outputText value="#{user.firstName}" /> </p:column> You can specify the default sort column and sort order using the sortBy and sortOrder attributes on the <p:dataTable> element: <p:dataTable id="usersTbl2" var="user" value="#{adminController.users}" sortBy="#{user.firstName}" sortOrder="descending"> </p:dataTable> The <p:dataTable> component's default sorting algorithm uses a Java comparator, you can use your own customized sort method as well: <p:column headerText="FirstName" sortBy="#{user.firstName}" sortFunction="#{adminController.sortByFirstName}"> <h:outputText value="#{user.firstName}" /> </p:column> public int sortByFirstName(Object firstName1, Object firstName2) { //return -1, 0 , 1 if firstName1 is less than, equal to or greater than firstName2 respectively return ((String)firstName1).compareToIgnoreCase(((String)firstName2)); } By default, DataTable's sortMode is set to single, to enable sorting on multiple columns set sortMode to multiple. In multicolumns' sort mode, you can click on a column while the metakey (Ctrl or command) adds the column to the order group: <p:dataTable id="usersTbl" var="user" value="#{adminController.users}" sortMode="multiple"> </p:dataTable> Using column filtering support DataTable provides support for column-level filtering as well as global filtering (on all columns) and provides an option to hold the list of filtered records. In addition to the default match mode startsWith, we can use various other match modes such as endsWith, exact, and contains. Time for action – using DataTable with filtering Let us see how we can use filters with users' DataTable. Create a DataTable component and apply column-level filters and a global filter to apply filter on all columns: <p:dataTable widgetVar="userTable" var="user" value="#{adminController.users}" filteredValue="#{adminController.filteredUsers}" emptyMessage="No Users found for the given Filters"> <f:facet name="header"> <p:outputPanel> <h:outputText value="Search all Columns:" /> <p:inputText id="globalFilter" onkeyup="userTable.filter()" style="width:150px" /> </p:outputPanel> </f:facet> <p:column headerText="Id"> <h:outputText value="#{user.id}" /> </p:column> <p:column headerText="Email" filterBy="#{user.emailId}" footerText="contains" filterMatchMode="contains"> <h:outputText value="#{user.emailId}" /> </p:column> <p:column headerText="FirstName" filterBy="#{user.firstName}" footerText="startsWith"> <h:outputText value="#{user.firstName}" /> </p:column> <p:column headerText="LastName" filterBy="#{user.lastName}" filterMatchMode="endsWith" footerText="endsWith"> <h:outputText value="#{user.lastName}" /> </p:column> <p:column headerText="Disabled" filterBy="#{user.disabled}" filterOptions="#{adminController.userStatusOptions}" filterMatchMode="exact" footerText="exact"> <h:outputText value="#{user.disabled}" /> </p:column> </p:dataTable> Initialize userStatusOptions in AdminController ManagedBean. 
@ManagedBean @ViewScoped public class AdminController { private List<User> users = null; private List<User> filteredUsers = null; private SelectItem[] userStatusOptions; public AdminController() { users = loadAllUsersFromDB(); this.userStatusOptions = new SelectItem[3]; this.userStatusOptions[0] = new SelectItem("", "Select"); this.userStatusOptions[1] = new SelectItem("true", "True"); this.userStatusOptions[2] = new SelectItem("false", "False"); } //setters and getters } What just happened? We have used various filterMatchMode instances, such as startsWith, endsWith, and contains, while applying column-level filters. We have used the filterOptions attribute to specify the predefined filter values, which is displayed as a select drop-down list. As we have specified filteredValue="#{adminController.filteredUsers}", once the filters are applied the filtered users list will be populated into the filteredUsers property. This following is the resultant screenshot: Since PrimeFaces Version 4.0, we can specify the sortBy and filterBy properties as sortBy="emailId" and filterBy="emailId" instead of sortBy="#{user.emailId}" and filterBy="#{user.emailId}". A couple of important tips It is suggested to use a scope longer than the request such as the view scope to keep the filteredValue attribute so that the filtered list is still accessible after filtering. The filter located at the header is a global one applying on all fields; this is implemented by calling the client-side API method called filter(). The important part is to specify the ID of the input text as globalFilter, which is a reserved identifier for DataTable. Selecting DataTable rows Selecting one or more rows from a table and performing operations such as editing or deleting them is a very common requirement. The DataTable component provides several ways to select a row(s). Selecting single row We can use a PrimeFaces' Command component, such as commandButton or commandLink, and bind the selected row to a server-side property using <f:setPropertyActionListener>, shown as follows: <p:dataTable id="usersTbl" var="user" value="#{adminController.users}"> <!-- Column definitions --> <p:column style="width:20px;"> <p:commandButton id="selectButton" update=":form:userDetails" icon="ui-icon-search" title="View"> <f:setPropertyActionListener value="#{user}" target="#{adminController.selectedUser}" /> </p:commandButton> </p:column> </p:dataTable> <h:panelGrid id="userDetails" columns="2" > <h:outputText value="Id:" /> <h:outputText value="#{adminController.selectedUser.id}"/> <h:outputText value="Email:" /> <h:outputText value="#{adminController.selectedUser.emailId}"/> </h:panelGrid> Selecting rows using a row click Instead of having a separate button to trigger binding of a selected row to a server-side property, PrimeFaces provides another simpler way to bind the selected row by using selectionMode, selection, and rowKey attributes. 
Also, we can use the rowSelect and rowUnselect events to update other components based on the selected row, shown as follows: <p:dataTable var="user" value="#{adminController.users}" selectionMode="single" selection="#{adminController.selectedUser}" rowKey="#{user.id}"> <p:ajax event="rowSelect" listener="#{adminController.onRowSelect}" update=":form:userDetails"/> <p:ajax event="rowUnselect" listener="#{adminController.onRowUnselect}" update=":form:userDetails"/> <!-- Column definitions --> </p:dataTable> <h:panelGrid id="userDetails" columns="2" > <h:outputText value="Id:" /> <h:outputText value="#{adminController.selectedUser.id}"/> <h:outputText value="Email:" /> <h:outputText value="#{adminController.selectedUser.emailId}"/> </h:panelGrid> Similarly, we can select multiple rows using selectionMode="multiple" and bind the selection attribute to an array or list of user objects: <p:dataTable var="user" value="#{adminController.users}" selectionMode="multiple" selection="#{adminController.selectedUsers}" rowKey="#{user.id}"> <!-- Column definitions --> </p:dataTable> rowKey should be a unique identifier from your data model and should be used by DataTable to find the selected rows. You can either define this key by using the rowKey attribute or by binding a data model that implements org.primefaces.model.SelectableDataModel. When the multiple selection mode is enabled, we need to hold the Ctrl or command key and click on the rows to select multiple rows. If we don't hold on to the Ctrl or command key and click on a row and the previous selection will be cleared with only the last clicked row selected. We can customize this behavior using the rowSelectMode attribute. If you set rowSelectMode="add", when you click on a row, it will keep the previous selection and add the current selected row even though you don't hold the Ctrl or command key. The default rowSelectMode value is new. We can disable the row selection feature by setting disabledSelection="true". Selecting rows using a radio button / checkbox Another very common scenario is having a radio button or checkbox for each row, and the user can select one or more rows and then perform actions such as edit or delete. The DataTable component provides a radio-button-based single row selection using a nested <p:column> element with selectionMode="single": <p:dataTable var="user" value="#{adminController.users}" selection="#{adminController.selectedUser}" rowKey="#{user.id}"> <p:column selectionMode="single"/> <!-- Column definitions --> </p:dataTable> The DataTable component also provides checkbox-based multiple row selection using a nested <p:column> element with selectionMode="multiple": <p:dataTable var="user" value="#{adminController.users}" selection="#{adminController.selectedUsers}" rowKey="#{user.id}"> <p:column selectionMode="multiple"/> <!-- Column definitions --> </p:dataTable> In our TechBuzz application, the administrator would like to have a facility to be able to select multiple users and disable them at one go. Let us see how we can implement this using the checkbox-based multiple rows selection.
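A possible implementation is sketched below. The selection column and the selection/rowKey attributes follow the pattern described above; the Disable Selected Users button and the disableSelectedUsers() action are illustrative assumptions and not part of the original example:

<h:form id="form">
    <p:dataTable id="usersTbl" var="user" value="#{adminController.users}"
                 selection="#{adminController.selectedUsers}" rowKey="#{user.id}">
        <p:column selectionMode="multiple" style="width:20px;" />
        <p:column headerText="Id">
            <h:outputText value="#{user.id}" />
        </p:column>
        <p:column headerText="Email">
            <h:outputText value="#{user.emailId}" />
        </p:column>
        <p:column headerText="Disabled">
            <h:outputText value="#{user.disabled}" />
        </p:column>
        <f:facet name="footer">
            <p:commandButton value="Disable Selected Users"
                             action="#{adminController.disableSelectedUsers}"
                             update="usersTbl" />
        </f:facet>
    </p:dataTable>
</h:form>

The backing bean then only needs to hold the selected rows and flip their flag:

// In AdminController: populated automatically by the checkbox selection column
private List<User> selectedUsers;

public void disableSelectedUsers() {
    for (User user : selectedUsers) {
        user.setDisabled(true);
        // persist the change through the application's service/DAO layer
    }
}

// plus the usual getter and setter for selectedUsers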

Exploring streams

Packt
22 Nov 2013
15 min read
(For more resources related to this topic, see here.) According to Bjarne Stoustrup in his book The C++ Programming Language, Third Edition: Designing and implementing a general input/output facility for a programming language is notoriously difficult... An I/O facility should be easy, convenient, and safe to use; efficient and flexible; and, above all, complete. It shouldn't surprise anyone that a design team, focused on providing efficient and easy I/O, has delivered such a facility through Node. Through a symmetrical and simple interface, which handles data buffers and stream events so that the implementer does not have to, Node's Stream module is the preferred way to manage asynchronous data streams for both internal modules and, hopefully, for the modules developers will create. A stream in Node is simply a sequence of bytes. At any time, a stream contains a buffer of bytes, and this buffer has a zero or greater length: Because each character in a stream is well defined, and because every type of digital data can be expressed in bytes, any part of a stream can be redirected, or "piped", to any other stream, different chunks of the stream can be sent do different handlers, and so on. In this way stream input and output interfaces are both flexible and predictable and can be easily coupled. Digital streams are well described using the analogy of fluids, where individual bytes (drops of water) are being pushed through a pipe. In Node, streams are objects representing data flows that can be written to and read from asynchronously. The Node philosophy is a non-blocking flow, I/O is handled via streams, and so the design of the Stream API naturally duplicates this general philosophy. In fact, there is no other way of interacting with streams except in an asynchronous, evented manner—you are prevented, by design, from blocking I/O. Five distinct base classes are exposed via the abstract Stream interface: Readable, Writable, Duplex, Transform, and PassThrough. Each base class inherits from EventEmitter, which we know of as an interface to which event listeners and emitters can be bound. As we will learn, and here will emphasize, the Stream interface is an abstract interface. An abstract interface functions as a kind of blueprint or definition, describing the features that must be built into each constructed instance of a Stream object. For example, a readable stream implementation is required to implement a public read method which delegates to the interface's internal _read method. In general, all stream implementations should follow these guidelines: As long as data exists to send, write to a stream until that operation returns false, at which point the implementation should wait for a drain event, indicating that the buffered stream data has emptied Continue to call read until a null value is received, at which point wait for a readable event prior to resuming reads Several Node I/O modules are implemented as streams. Network sockets, file readers and writers, stdin and stdout, zlib, and so on. Similarly, when implementing a readable data source, or data reader, one should implement that interface as a Stream interface. It is important to note that as of Node 0.10.0 the Stream interface changed in some fundamental ways. The Node team has done its best to implement backwards-compatible interfaces, such that (most) older programs will continue to function without modification. 
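Since each of the five base classes inherits from EventEmitter, every stream instance can have listeners bound to it before we even consider reading or writing. The relationship is easy to verify with a throwaway sketch (nothing here is from the original text; it simply constructs one instance of each class and checks it against EventEmitter):

var stream = require('stream');
var events = require('events');

// Every stream base class ultimately inherits from EventEmitter,
// which is why .on(), .once(), and .emit() are available on all of them
[
  new stream.Readable(),
  new stream.Writable(),
  new stream.Duplex(),
  new stream.Transform(),
  new stream.PassThrough()
].forEach(function(instance) {
  console.log(instance instanceof events.EventEmitter); // true, five times
});

This is also why the guidelines above are expressed in terms of events such as drain and readable: they are ordinary EventEmitter events raised by the stream machinery.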
In this article we will not spend any time discussing the specific features of this older API, focusing on the current (and future) design. The reader is encouraged to consult Node's online documentation for information on migrating older programs. Implementing readable streams Streams producing data that another process may have an interest in are normally implemented using a Readable stream. A Readable stream saves the implementer all the work of managing the read queue, handling the emitting of data events, and so on. To create a Readable stream: var stream = require('stream'); var readable = new stream.Readable({ encoding : "utf8", highWaterMark : 16000, objectMode: true }); As previously mentioned, Readable is exposed as a base class, which can be initialized through three options: encoding: Decode buffers into the specified encoding, defaulting to UTF-8. highWaterMark: Number of bytes to keep in the internal buffer before ceasing to read from the data source. The default is 16 KB. objectMode: Tell the stream to behave as a stream of objects instead of a stream of bytes, such as a stream of JSON objects instead of the bytes in a file. Default false. In the following example we create a mock Feed object whose instances will inherit the Readable stream interface. Our implementation need only implement the abstract _read method of Readable, which will push data to a consumer until there is nothing more to push, at which point it triggers the Readable stream to emit an "end" event by pushing a null value: var Feed = function(channel) { var readable = new stream.Readable({ encoding : "utf8" }); var news = [ "Big Win!", "Stocks Down!", "Actor Sad!" ]; readable._read = function() { if(news.length) { return readable.push(news.shift() + "n"); } readable.push(null); }; return readable; } Now that we have an implementation, a consumer might want to instantiate the stream and listen for stream events. Two key events are readable and end. The readable event is emitted as long as data is being pushed to the stream. It alerts the consumer to check for new data via the read method of Readable. Note again how the Readable implementation must provide a private _read method, which services the public read method exposed to the consumer API. The end event will be emitted whenever a null value is passed to the push method of our Readable implementation. Here we see a consumer using these methods to display new stream data, providing a notification when the stream has stopped sending data: var feed = new Feed(); feed.on("readable", function() { var data = feed.read(); data && process.stdout.write(data); }); feed.on("end", function() { console.log("No more news"); }); Similarly, we could implement a stream of objects through the use of the objectMode option: var readable = new stream.Readable({ objectMode : true }); var prices = [ { price : 1 }, { price : 2 } ]; ... readable.push(prices.shift()); // } { prices : 1 } // } { prices : 2 } Here we see that each read event is receiving an object, rather than a buffer or string. Finally, the read method of a Readable stream can be passed a single argument indicating the number of bytes to be read from the stream's internal buffer. For example, if it was desired that a file should be read one byte at a time, one might implement a consumer using a routine similar to: readable.push("Sequence of bytes"); ... feed.on("readable", function() { var character; while(character = feed.read(1)) { console.log(character); }; }); // } S // } e // } q // } ... 
Here it should be clear that the Readable stream's buffer was filled with a number of bytes all at once, but was read from discretely. Pushing and pulling We have seen how a Readable implementation will use push to populate the stream buffer for reading. When designing these implementations it is important to consider how volume is managed, at either end of the stream. Pushing more data into a stream than can be read can lead to complications around exceeding available space (memory). At the consumer end it is important to maintain awareness of termination events, and how to deal with pauses in the data stream. One might compare the behavior of data streams running through a network with that of water running through a hose. As with water through a hose, if a greater volume of data is being pushed into the read stream than can be efficiently drained out of the stream at the consumer end through read, a great deal of back pressure builds, causing a data backlog to begin accumulating in the stream object's buffer. Because we are dealing with strict mathematical limitations, read simply cannot be compelled to release this pressure by reading more quickly—there may be a hard limit on available memory space, or other limitation. As such, memory usage can grow dangerously high, buffers can overflow, and so forth. A stream implementation should therefore be aware of, and respond to, the response from a push operation. If the operation returns false this indicates that the implementation should cease reading from its source (and cease pushing) until the next _read request is made. In conjunction with the above, if there is no more data to push but more is expected in the future the implementation should push an empty string (""), which adds no data to the queue but does ensure a future readable event. While the most common treatment of a stream buffer is to push to it (queuing data in a line), there are occasions where one might want to place data on the front of the buffer (jumping the line). Node provides an unshift operation for these cases, which behavior is identical to push, outside of the aforementioned difference in buffer placement. Writable streams A Writable stream is responsible for accepting some value (a stream of bytes, a string) and writing that data to a destination. Streaming data into a file container is a common use case. To create a Writable stream: var stream = require('stream'); var readable = new stream.Writable({ highWaterMark : 16000, decodeStrings: true }); The Writable streams constructor can be instantiated with two options: highWaterMark: The maximum number of bytes the stream's buffer will accept prior to returning false on writes. Default is 16 KB decodeStrings: Whether to convert strings into buffers before writing. Default is true. As with Readable streams, custom Writable stream implementations must implement a _write handler, which will be passed the arguments sent to the write method of instances. One should think of a Writable stream as a data target, such as for a file you are uploading. Conceptually this is not unlike the implementation of push in a Readable stream, where one pushes data until the data source is exhausted, passing null to terminate reading. 
For example, here we write 100 bytes to stdout: var stream = require('stream'); var writable = new stream.Writable({ decodeStrings: false }); writable._write = function(chunk, encoding, callback) { console.log(chunk); callback(); } var w = writable.write(new Buffer(100)); writable.end(); console.log(w); // Will be `true` There are two key things to note here. First, our _write implementation fires the callback function immediately after writing, a callback that is always present, regardless of whether the instance write method is passed a callback directly. This call is important for indicating the status of the write attempt, whether a failure (error) or a success. Second, the call to write returned true. This indicates that the internal buffer of the Writable implementation has been emptied after executing the requested write. What if we sent a very large amount of data, enough to exceed the default size of the internal buffer? Modifying the above example, the following would return false: var w = writable.write(new Buffer(16384)); console.log(w); // Will be 'false' The reason this write returns false is that it has reached the highWaterMark option—default value of 16 KB (16 * 1024). If we changed this value to 16383, write would again return true (or one could simply increase its value). What to do when write returns false? One should certainly not continue to send data! Returning to our metaphor of water in a hose: when the stream is full, one should wait for it to drain prior to sending more data. Node's Stream implementation will emit a drain event whenever it is safe to write again. When write returns false listen for the drain event before sending more data. Putting together what we have learned, let's create a Writable stream with a highWaterMark value of 10 bytes. We will send a buffer containing more than 10 bytes (composed of A characters) to this stream, triggering a drain event, at which point we write a single Z character. It should be clear from this example that Node's Stream implementation is managing the buffer overflow of our original payload, warning the original write method of this overflow, performing a controlled depletion of the internal buffer, and notifying us when it is safe to write again: var stream = require('stream'); var writable = new stream.Writable({ highWaterMark: 10 }); writable._write = function(chunk, encoding, callback) { process.stdout.write(chunk); callback(); } writable.on("drain", function() { writable.write("Zn"); }); var buf = new Buffer(20, "utf8"); buf.fill("A"); console.log(writable.write(buf.toString())); // false The result should be a string of 20 A characters, followed by false, then followed by the character Z. The fluid data in a Readable stream can be easily redirected to a Writable stream. For example, the following code will take any data sent by a terminal (stdin is a Readable stream) and pass it to the destination Writable stream, stdout: process.stdin.pipe(process.stdout); Whenever a Writable stream is passed to a Readable stream's pipe method, a pipe event will fire. Similarly, when a Writable stream is removed as a destination for a Readable stream, the unpipe event fires. To remove a pipe, use the following: unpipe(destination stream)   Duplex streams A duplex stream is both readable and writeable. 
For instance, a TCP server created in Node exposes a socket that can be both read from and written to: var stream = require("stream"); var net = require("net"); net .createServer(function(socket) { socket.write("Go ahead and type something!"); socket.on("readable", function() { process.stdout.write(this.read()) }); }) .listen(8080); When executed, this code will create a TCP server that can be connected to via Telnet: telnet 127.0.0.1 8080 Upon connection, the connecting terminal will print out Go ahead and type something! —writing to the socket. Any text entered in the connecting terminal will be echoed to the stdout of the terminal running the TCP server (reading from the socket). This implementation of a bi-directional (duplex) communication protocol demonstrates clearly how independent processes can form the nodes of a complex and responsive application, whether communicating across a network or within the scope of a single process. The options sent when constructing a Duplex instance merge those sent to Readable and Writable streams, with no additional parameters. Indeed, this stream type simply assumes both roles, and the rules for interacting with it follow the rules for the interactive mode being used. As a Duplex stream assumes both read and write roles, any implementation is required to implement both _write and _read methods, again following the standard implementation details given for the relevant stream type. Transforming streams On occasion stream data needs to be processed, often in cases where one is writing some sort of binary protocol or other "on the fly" data transformation. A Transform stream is designed for this purpose, functioning as a Duplex stream that sits between a Readable stream and a Writable stream. A Transform stream is initialized using the same options used to initialize a typical Duplex stream. Where Transform differs from a normal Duplex stream is in its requirement that the custom implementation merely provide a _transform method, excluding the _write and _read method requirement. The _transform method will receive three arguments, first the sent buffer, an optional encoding argument, and finally a callback which _transform is expected to call when the transformation is complete: _transform = function(buffer, encoding, cb) { var transformation = "..."; this.push(transformation) cb(); } Let's imagine a program that wishes to convert ASCII (American Standard Code for Information Interchange) codes into ASCII characters, receiving input from stdin. We would simply pipe our input to a Transform stream, then piping its output to stdout: var stream = require('stream'); var converter = new stream.Transform(); converter._transform = function(num, encoding, cb) { this.push(String.fromCharCode(new Number(num)) + "n") cb(); } process.stdin.pipe(converter).pipe(process.stdout); Interacting with this program might produce an output resembling the following: 65 A 66 B 256 A 257 a   Using PassThrough streams This sort of stream is a trivial implementation of a Transform stream, which simply passes received input bytes through to an output stream. This is useful if one doesn't require any transformation of the input data, and simply wants to easily pipe a Readable stream to a Writable stream. PassThrough streams have benefits similar to JavaScript's anonymous functions, making it easy to assert minimal functionality without too much fuss. For example, it is not necessary to implement an abstract base class, as one does with for the _read method of a Readable stream. 
Consider the following use of a PassThrough stream as an event spy: var fs = require('fs'); var stream = require('stream'); var spy = new stream.PassThrough(); spy.on('end', function() { console.log("All data has been sent"); }); fs.createReadStream("./passthrough.js").pipe(spy).pipe(process.stdout); Summary As we have learned, Node's designers have succeeded in creating a simple, predictable, and convenient solution to the very difficult problem of enabling efficient I/O between disparate sources and targets. Its abstract Stream interface facilitates the instantiation of consistent readable and writable interfaces, and the extension of this interface into HTTP requests and responses, the filesystem, child processes, and other data channels makes stream programming with Node a pleasant experience. Resources for Article: Further resources on this subject: So, what is Node.js? [Article] Getting Started with Zombie.js [Article] So, what is KineticJS? [Article]

Zurb Foundation – an Overview

Packt
21 Nov 2013
7 min read
(For more resources related to this topic, see here.) Most importantly, you can apply your creativity to make the design your own. Foundation gives you the tools you need for this. Then it gets out of the way and your site becomes your own. Especially when you advance to using the Foundation's SASS variables, functions and mixins, you have the ability to make your site your own unique creation. Foundation's grid system The foundation (pun intended) of Zurb Foundation is its grid system—rows and columns—much like a spread sheet, a blank sheet of graph paper, or tables, similar to what we used to use for HTML layout. Think of it as the canvas upon which you design your website. Each cell is a content area that can be merged with other cells, beside or below it, to make larger content areas. A default installation of Foundation will be based on twelve cells in a row. A column is comprised of one or more individual cells. Lay out a website Let's put Foundation's grid system to work in an example. We'll build a basic website with a two part header, a two part content area, a sidebar, and a three part footer area. With the simple techniques we demonstrate here, you can craft mostly any layout you want. Here is the mobile view Foundation works best when you design for small devices first, so here is what we want our small device (mobile) view to look like: This is the layout we want on mobile or small devices. But we've labeled the content areas with a title that describes where we want them on a regular desktop. By doing this, we are thinking ahead and creating a view ready for the desktop as well. Here is the desktop view Since a desktop display is typically wider than a mobile display, we have more horizontal space and things that had to be presented vertically on the mobile view can be displayed horizontally on the desktop view. Here is how we want our regular desktop or laptop to display the same content areas: These are not necessarily drawn to scale. It is the layout we are interested in. The two part header went from being one above the other in the mobile view to being side-by-side in the desktop view. The header on the top went left and the bottom header went right. All these make perfect sense. However, the sidebar shifted from being above the content area in the mobile view and shifted to its right in the mobile view. That's not natural when rendering HTML. Something must have happened! The content areas, left and right, stayed the same in both the views. And that's exactly what we wanted. The three part footer got rearranged. The center footer appears to have slid down between the left and right footers. That makes sense from a design perspective but it isn't natural from an HTML rendering perspective. Foundation provides the classes to easily make all this magic happen. Here is the code Unlike the early days of mobile design where a separate website was built for mobile devices, with Foundation you build your site once, and use classes to specify how it should look on both mobile and regular displays. 
Here is the HTML code that generates the two layouts: <header class="row"> <div class="large-6 column">Header Left</div> <div class="large-6 column">Header Right</div> </header> <main class="row"> <aside class="large-3 push-9 column">Sidebar Right</aside> <section class="large-9 pull-3 columns"> <article class="row"> <div class="small-9 column">Content Left</div> <div class="small-3 column">Content Right</div> </article> </section> </main> <footer class="row"> <div class="small-6 small-centered large-4 large-uncentered push-4 column">Footer Center</div> <div class="small-6 large-4 pull-4 column">Footer Left</div> <div class="small-6 large-4 column">Footer Right</div> </footer> That's all there is to it. Replace the text we used for labels with real content and you have a design that displays on mobile and regular displays in the layouts we've shown in this article. Toss in some widgets What we've shown above is just the core of the Foundation framework. As a toolkit, it also includes numerous CSS components and JavaScript plugins. Foundation includes styles for labels, lists, and data tables. It has several navigation components including Breadcrumbs, Pagination, Side Nav, and Sub Nav. You can add regular buttons, drop-down buttons, and button groups. You can make unique content areas with Block Grids, a special variation of the underlying grid. You can add images as thumbnails, put content into panels, present your video feed using the Flex Video component, easily add pricing tables, and represent progress bars. All these components only require CSS and are the easiest to integrate. By tossing in Foundation's JavaScript plugins, you have even more capabilities. Plugins include things like Alerts, Tooltips, and Dropdowns. These can be used to pop up messages in various ways. The Section plugin is very powerful when you want to organize your content into horizontal or vertical tabs, or when you want horizontal or vertical navigation. Like most components and plugins, it understands the mobile and regular desktop views and adapts accordingly. The Top Bar plugin is a favorite for many developers. It is a multi-level fly out menu plugin. Build your menu in HTML the way Top Bar expects. Set it up with the appropriate classes and it just works. Magellan and Joyride are two plugins that you can put to work to help show your viewers where they are on a page or to help them navigate to various sections on a page. Orbit is Foundation's slide presentation plugin. You often see sliders on the home page of websites these days. Clearing is similar to Orbit except that it displays thumbnails of the images in a presentation below the main display window. A viewer clicks on a thumbnail to display the full image. Reveal is a plugin that allows you to put a link anywhere on your page and when the viewer clicks on it, a box pops up extra content, which could even be an Orbit slider, is revealed. Interchange is one of the most recent additions to Foundation's plugin factory. With it you can selectively load images depending on the target environment. This lets you optimize bandwidth between your web server and your viewer's browser. Foundation also provides a great Forms plugin. On its own it is capable. With the additional Abide plugin you have a great deal of control over form layout and editing. Summary As you can see, Foundation is very capable of laying out web page for mobile devices and regular displays. One set of code, two very different looks. And that's just the beginning. 
Foundation's CSS components and JavaScript plugins can be placed on a web page in almost any content area. With these widgets you can have much more interaction with your viewers than you otherwise would. Put Foundation to work in your website today! Resources for Article: Further resources on this subject: Quick start – using Foundation 4 components for your first website [Article] Introduction to RWD frameworks [Article] Nesting, Extend, Placeholders, and Mixins [Article]

Our First Machine Learning Method – Linear Classification

Packt
21 Nov 2013
10 min read
(For more resources related to this topic, see here.) To get a grip on the problem of machine learning in scikit-learn, we will start with a very simple machine learning problem: we will try to predict the Iris flower species using only two attributes: sepal width and sepal length. This is an instance of a classification problem, where we want to assign a label (a value taken from a discrete set) to an item according to its features. Let's first build our training dataset—a subset of the original sample, represented by the two attributes we selected and their respective target values. After importing the dataset, we will randomly select about 75 percent of the instances, and reserve the remaining ones (the evaluation dataset) for evaluation purposes (we will see later why we should always do that): >>> from sklearn.cross_validation import train_test_split >>> from sklearn import preprocessing >>> # Get dataset with only the first two attributes >>> X, y = X_iris[:, :2], y_iris >>> # Split the dataset into a training and a testing set >>> # Test set will be the 25% taken randomly >>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=33) >>> print X_train.shape, y_train.shape (112, 2) (112,) >>> # Standardize the features >>> scaler = preprocessing.StandardScaler().fit(X_train) >>> X_train = scaler.transform(X_train) >>> X_test = scaler.transform(X_test) The train_test_split function automatically builds the training and evaluation datasets, randomly selecting the samples. Why not just select the first 112 examples? This is because it could happen that the instance ordering within the sample could matter and that the first instances could be different to the last ones. In fact, if you look at the Iris datasets, the instances are ordered by their target class, and this implies that the proportion of 0 and 1 classes will be higher in the new training set, compared with that of the original dataset. We always want our training data to be a representative sample of the population they represent. The last three lines of the previous code modify the training set in a process usually called feature scaling. For each feature, calculate the average, subtract the mean value from the feature value, and divide the result by their standard deviation. After scaling, each feature will have a zero average, with a standard deviation of one. This standardization of values (which does not change their distribution, as you could verify by plotting the X values before and after scaling) is a common requirement of machine learning methods, to avoid that features with large values may weight too much on the final results. Now, let's take a look at how our training instances are distributed in the two-dimensional space generated by the learning feature. pyplot, from the matplotlib library, will help us with this: >>> import matplotlib.pyplot as plt >>> colors = ['red', 'greenyellow', 'blue'] >>> for i in xrange(len(colors)): >>> xs = X_train[:, 0][y_train == i] >>> ys = X_train[:, 1][y_train == i] >>> plt.scatter(xs, ys, c=colors[i]) >>> plt.legend(iris.target_names) >>> plt.xlabel('Sepal length') >>> plt.ylabel('Sepal width') The scatter function simply plots the first feature value (sepal width) for each instance versus its second feature value (sepal length) and uses the target class values to assign a different color for each class. This way, we can have a pretty good idea of how these attributes contribute to determine the target class. 
The following screenshot shows the resulting plot: Looking at the preceding screenshot, we can see that the separation between the red dots (corresponding to the Iris setosa) and the green and blue dots (corresponding to the two other Iris species) is quite clear, while separating the green from the blue dots seems a very difficult task, given the two features available. This is a very common scenario: one of the first questions we want to answer in a machine learning task is whether the feature set we are using is actually useful for the task we are solving, or whether we need to add new attributes or change our method. Given the available data, let's, for a moment, redefine our learning task: suppose we aim, given an Iris flower instance, to predict whether it is a setosa or not. We have converted our problem into a binary classification task (that is, we only have two possible target classes). If we look at the picture, it seems that we could draw a straight line that correctly separates both sets (perhaps with the exception of one or two dots, which could lie on the incorrect side of the line). This is exactly what our first classification method, linear classification models, tries to do: build a line (or, more generally, a hyperplane in the feature space) that best separates both target classes, and use it as a decision boundary (that is, the class membership depends on which side of the hyperplane the instance lies). To implement linear classification, we will use the SGDClassifier from scikit-learn. SGD stands for Stochastic Gradient Descent, a very popular numerical procedure to find the local minimum of a function (in this case, the loss function, which measures how far every instance is from our boundary). The algorithm will learn the coefficients of the hyperplane by minimizing the loss function. To use any method in scikit-learn, we must first create the corresponding classifier object, initialize its parameters, and train the model that best fits the training data. You will see as you advance that this procedure will be pretty much the same for what initially seem very different tasks. >>> from sklearn.linear_model import SGDClassifier >>> clf = SGDClassifier() >>> clf.fit(X_train, y_train) The SGDClassifier initialization function accepts several parameters. For the moment, we will use the default values, but keep in mind that these parameters could be very important, especially when you face more real-world tasks, where the number of instances (or even the number of attributes) could be very large. The fit function is probably the most important one in scikit-learn. It receives the training data and the training classes, and builds the classifier. Every supervised learning method in scikit-learn implements this function. What does the classifier look like in our linear model method? As we have already said, every future classification decision depends just on a hyperplane. That hyperplane is, then, our model. The coef_ attribute of the clf object (consider, for the moment, only the first row of the matrices) now holds the coefficients of the linear boundary, and the intercept_ attribute holds the point of intersection of the line with the y axis.
Let's print them: >>> print clf.coef_ [[-28.53692691 15.05517618] [ -8.93789454 -8.13185613] [ 14.02830747 -12.80739966]] >>> print clf.intercept_ [-17.62477802 -2.35658325 -9.7570213 ] Indeed, in the real plane, with these three values we can draw a line, represented by the following equation: -17.62477802 - 28.53692691 * x1 + 15.05517618 * x2 = 0 Now, given x1 and x2 (our real-valued features), we just have to compute the value of the left side of the equation: if its value is greater than zero, then the point is above the decision boundary (the red side); otherwise it will be beneath the line (the green or blue side). Our prediction algorithm will simply check this and predict the corresponding class for any new iris flower. But why does our coefficient matrix have three rows? Because we did not tell the method that we have changed our problem definition (how could we have done this?), and it is facing a three-class problem, not a binary decision problem. In this case, the classifier does the same thing we did: it converts the problem into three binary classification problems in a one-versus-all setting (it proposes three lines that separate a class from the rest). The following code draws the three decision boundaries and lets us know if they worked as expected: >>> x_min, x_max = X_train[:, 0].min() - .5, X_train[:, 0].max() + .5 >>> y_min, y_max = X_train[:, 1].min() - .5, X_train[:, 1].max() + .5 >>> xs = np.arange(x_min, x_max, 0.5) >>> fig, axes = plt.subplots(1, 3) >>> fig.set_size_inches(10, 6) >>> for i in [0, 1, 2]: >>> axes[i].set_aspect('equal') >>> axes[i].set_title('Class '+ str(i) + ' versus the rest') >>> axes[i].set_xlabel('Sepal length') >>> axes[i].set_ylabel('Sepal width') >>> axes[i].set_xlim(x_min, x_max) >>> axes[i].set_ylim(y_min, y_max) >>> plt.sca(axes[i]) >>> plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=plt.cm.prism) >>> ys = (-clf.intercept_[i] - xs * clf.coef_[i, 0]) / clf.coef_[i, 1] >>> plt.plot(xs, ys, hold=True) The first plot shows the model built for our original binary problem. It looks like the line separates the Iris setosa from the rest quite well. For the other two tasks, as we expected, there are several points that lie on the wrong side of the hyperplane. Now, the end of the story: suppose that we have a new flower with a sepal length of 4.7 and a sepal width of 3.1, and we want to predict its class. We just have to apply our brand new classifier to it (after normalizing!). The predict method takes an array of instances (in this case, with just one element) and returns a list of predicted classes: >>> print clf.predict(scaler.transform([[4.7, 3.1]])) [0] If our classifier is right, this Iris flower is a setosa. Probably, you have noticed that we are predicting a class from the three possible classes, but linear models are essentially binary: something is missing. You are right. Our prediction procedure combines the results of the three binary classifiers and selects the class in which it is most confident; in this case, the boundary line whose distance to the instance is greatest. We can check this using the classifier's decision_function method: >>> print clf.decision_function(scaler.transform([[4.7, 3.1]])) [[ 19.73905808 8.13288449 -28.63499119]] Summary In this article we included a very simple example of classification, trying to show the main steps for learning.
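As a closing check, and because the evaluation set was reserved precisely for this purpose, we can measure how the classifier behaves on data it has never seen. The sketch below reuses the names from the session above (clf, iris, and X_test, which was already standardized with scaler); the metrics calls are standard scikit-learn, but the snippet itself is not part of the original article:

>>> from sklearn import metrics
>>> y_pred = clf.predict(X_test)   # X_test was standardized earlier with scaler
>>> print metrics.accuracy_score(y_test, y_pred)
>>> print metrics.classification_report(y_test, y_pred, target_names=iris.target_names)

accuracy_score reports the fraction of correctly classified test instances, while classification_report breaks precision and recall down per class; measuring on the held-out set, rather than on the training data, is what keeps the estimate honest.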
Resources for Article: Further resources on this subject: Python Testing: Installing the Robot Framework [Article] Inheritance in Python [Article] Python 3: Object-Oriented Design [Article]

Platform as a Service

Packt
21 Nov 2013
5 min read
(For more resources related to this topic, see here.) Platform as a Service is a very interesting take on the traditional cloud computing models. While there are many (often conflicting) definitions of a PaaS, for all practical purposes, PaaS provides a complete platform and environment to build and host applications or services. Emphasis is clearly on providing an end-to-end precreated environment to develop and deploy the application that automatically scales as required. PaaS packs together all the necessary components such as an operating system, database, programming language, libraries, web or application container, and a storage or hosting option. PaaS offerings vary and their chargebacks are dependent on what is utilized by the end user. There are excellent public offerings of PaaS such as Google App Engine, Heroku, Microsoft Azure, and Amazon Elastic Beanstalk. In a private cloud offering for an enterprise, it is possible to implement a similar PaaS environment. Out of the various possibilities, we will focus on building a Database as a Service (DBaaS) infrastructure using Oracle Enterprise Manager. DBaaS is sometimes seen as a mix of PaaS or SaaS depending on the kind of service it provides. DBaaS that provides services such as a database would be leaning more towards its PaaS legacy; but if it provides a service such as Business Intelligence, it takes more of a SaaS form. Oracle Enterprise Manager enables self-service provisioning of virtualized database instances out of a common shared database instance or cluster. Oracle Database is built to be clustered, and this makes it an easy fit for a robust DBaaS platform. Setting up the PaaS infrastructure Before we go about implementing a DBaaS, we will need to make sure our common platform is up and working. We will now check how we can create a PaaS Zone. Creating a PaaS Zone Enterprise Manager groups host or Oracle VM Manager Zones into PaaS Infrastructure Zones. You will need to have at least one PaaS Zone before you can add more features into the setup. To create a PaaS Zone, make sure that you have the following: The EM_CLOUD_ADMINISTRATOR, EM_SSA_ADMINISTRATOR, and EM_SSA_USER roles created A software library To set up a PaaS Infrastructure Zone, perform the following steps: Navigate to Setup | Cloud | PaaS Infrastructure Zone. Click on Create in the PaaS Infrastructure Zone main page. Enter the necessary details for PaaS Infrastructure Zone such as Name and Description. Based on the type of members you want to add to this zone, you can select any of the following member types: Host: This option will only allow the host targets to be part of this zone. Also, make sure you provide the necessary details for the placement policy constraints defined per host. These values are used to prevent over utilization of hosts which are already being heavily used. You can set a percentage threshold for Maximum CPU Utilization and Maximum Memory Allocation. Any host exceeding this threshold will not be used for provisioning. OVM Zone: This option will allow you to add Oracle Virtual Manager Zone targets: If you select Host at this stage, you will see the following page: Click on the + button to add named credentials and make sure you click on Test Credentials button to verify the credential. These named credentials must be global and available on all the hosts in this zone. Click on the Add button to add target hosts to this zone. 
If you selected OVM Zone in the previous screen (step 1 of 4), you will be presented with the following screen: Click on the Add button to add roles that can access this PaaS Infrastructure Zone. Once you have created a PaaS Infrastructure Zone, you can proceed with setting up necessary pieces for a DBaaS. However, time and again you might want to edit or review your PaaS Infrastructure Zone. To view and manage your PaaS Infrastructure Zones, navigate to Enterprise Menu | Cloud | Middleware and Database Cloud | PaaS Infrastructure Zones. From this page you can create, edit, delete, or view more details for a PaaS Infrastructure Zone. Clicking on the PaaS infrastructure zone link will display a detailed drill-down page with quite a few details related to that zone. The page is shown as follows: This page shows a lot of very useful details about the zone. Some of them are listed as follows: General: This section shows stats for this zone and shows details such as the total number of software pools, Oracle VM zones, member types (hosts or Oracle VM Zones), and other related details. CPU and Memory: This section gives an overview of CPU and memory utilization across all servers in the zone. Issues: This section shows incidents and problems for the target. This is a handy summary to check if there are any issues that needs attention. Request Summary: This section shows the status of requests being processed currently. Software Pool Summary: This section shows the name and type of each software pool in the zone. Unallocated Servers: This section shows a list of servers that are not associated with any software pool. Members: This section shows the members of the zones and the member. Service Template Summary: Shows the service templates associated with the zone. Summary We saw in this article, how PaaS plays a vital role in the structure of a DBaaS architechture. Resources for Article: Further resources on this subject: What is Oracle Public Cloud? [Article] Features of CloudFlare [Article] Oracle Tools and Products [Article]

Foundations

Packt
20 Nov 2013
6 min read
(For more resources related to this topic, see here.) Installation If you do not have node installed, visit: http://nodejs.org/download/. There is also an installation guide on the node GitHub repository wiki if you prefer not to or cannot use an installer: https://github.com/joyent/node/wiki/Installation. Let's install Express globally: npm install -g express If you have downloaded the source code, install its dependencies by running this command: npm install Testing Express with Mocha and SuperTest Now that we have Express installed and our package.json file in place, we can begin to drive out our application with a test-first approach. We will now install two modules to assist us: mocha and supertest. Mocha is a testing framework for node; it's flexible, has good async support, and allows you to run tests in both a TDD and BDD style. It can also be used on both the client and server side. Let's install Mocha with the following command: npm install -g mocha –-save-dev SuperTest is an integration testing framework that will allow us to easily write tests against a RESTful HTTP server. Let's install SuperTest: npm install supertest –-save-dev Continuous testing with Mocha One of the great things about working with a dynamic language and one of the things that has drawn me to node is the ability to easily do Test-Driven Development and continuous testing. Simply run Mocha with the -w watch switch and Mocha will respond when changes to our codebase are made, and will automatically rerun the tests: mocha -w Extracting routes Express supports multiple options for application structure. Extracting elements of an Express application into separate files is one option; a good candidate for this is routes. Let's extract our route heartbeat into ./lib/routes/heartbeat.js; the following listing simply exports the route as a function called index: exports.index = function(req, res){ res.json(200, 'OK'); }; Let's make a change to our Express server and remove the anonymous function we pass to app.get for our route and replace it with a call to the function in the following listing. We import the route heartbeat and pass in a callback function, heartbeat.index: var express = require('express') , http = require('http') , config = require('../configuration') , heartbeat = require('../routes/heartbeat') , app = express(); app.set('port', config.get('express:port')); app.get('/heartbeat', heartbeat.index); http.createServer(app).listen(app.get('port')); module.exports = app; 404 handling middleware In order to handle a 404 Not Found response, let's add a 404 not found middleware. Let's write a test, ./test/heartbeat.js; the content type returned should be JSON and the status code expected should be 404 Not Found: describe('vision heartbeat api', function(){ describe('when requesting resource /missing', function(){ it('should respond with 404', function(done){ request(app) .get('/missing') .expect('Content-Type', /json/) .expect(404, done); }) }); }); Now, add the following middleware to ./lib/middleware/notFound.js. Here we export a function called index and call res.json, which returns a 404 status code and the message Not Found. The next parameter is not called as our 404 middleware ends the request by returning a response. 
Calling next would call the next middleware in our Express stack; we do not have any more middleware due to this, it's customary to add error middleware and 404 middleware as the last middleware in your server: exports.index = function(req, res, next){ res.json(404, 'Not Found.'); }; Now add the 404 not found middleware to ./lib/express/index.js: var express = require('express') , http = require('http') , config = require('../configuration') , heartbeat = require('../routes/heartbeat') , notFound = require('../middleware/notFound') , app = express(); app.set('port', config.get('express:port')); app.get('/heartbeat', heartbeat.index); app.use(notFound.index); http.createServer(app).listen(app.get('port')); module.exports = app; Logging middleware Express comes with a logger middleware via Connect; it's very useful for debugging an Express application. Let's add it to our Express server ./lib/express/index.js: var express = require('express') , http = require('http') , config = require('../configuration') , heartbeat = require('../routes/heartbeat') , notFound = require('../middleware/notFound') , app = express(); app.set('port', config.get('express:port')); app.use(express.logger({ immediate: true, format: 'dev' })); app.get('/heartbeat', heartbeat.index); app.use(notFound.index); http.createServer(app).listen(app.get('port')); module.exports = app; The immediateoption will write a log line on request instead of on response. The devoption provides concise output colored by the response status. The logger middleware is placed high in the Express stack in order to log all requests. Logging with Winston We will now add logging to our application using Winston; let's install Winston: npm install winston --save The 404 middleware will need to log 404 not found, so let's create a simple logger module, ./lib/logger/index.js; the details of our logger will be configured with Nconf. We import Winston and the configuration modules. We define our Logger function, which constructs and returns a file logger—winston.transports. File—that we configure using values from our config. We default the loggers maximum size to 1 MB, with a maximum of three rotating files. We instantiate the Logger function, returning it as a singleton. var winston = require('winston') , config = require('../configuration'); function Logger(){ return winston.add(winston.transports.File, { filename: config.get('logger:filename'), maxsize: 1048576, maxFiles: 3, level: config.get('logger:level') }); } module.exports = new Logger(); Let's add the Loggerconfiguration details to our config files ./config/ development.jsonand ./config/test.json: { "express": { "port": 3000 }, "logger" : { "filename": "logs/run.log", "level": "silly", } } Let's alter the ./lib/middleware/notFound.js middleware to log errors. We import our logger and log an error message via logger when a 404 Not Found response is thrown: var logger = require("../logger"); exports.index = function(req, res, next){ logger.error('Not Found'); res.json(404, 'Not Found'); }; Summary This article has shown in detail with all the commands how Node.js is installed along with Express. The testing of our Express with Mocha and SuperTest is shown in detail. The logging in into our application is shown with middleware and Winston. 
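For reference, the happy-path test that drives the /heartbeat route follows the same Mocha and SuperTest pattern as the 404 test shown earlier. The sketch below assumes the ./lib/express and ./test layout used throughout this article; the relative require paths are assumptions:

var request = require('supertest')
  , app = require('../lib/express');

describe('vision heartbeat api', function(){
  describe('when requesting resource /heartbeat', function(){
    it('should respond with 200', function(done){
      request(app)
        .get('/heartbeat')
        .expect('Content-Type', /json/)
        .expect(200, done);
    });
  });
});

Running mocha -w keeps this test, and the 404 test, re-running on every change, which is the continuous testing loop described at the start of the article.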
Resources for Article: Further resources on this subject: Spring Roo 1.1: Working with Roo-generated Web Applications [Article] Building tiny Web-applications in Ruby using Sinatra [Article] Develop PHP Web Applications with NetBeans, VirtualBox and Turnkey LAMP Appliance [Article]

Understanding Web-based Applications and Other Multimedia Forms

Packt
20 Nov 2013
5 min read
(For more resources related to this topic, see here.) However, we will not look at blogs, wikis, or social networking sites that are usually referred to as web-based reference tools. Moodle already has these, so instead we will take a look at web applications that allows the easy creation, collaboration, and sharing of multimedia elements, such as interactive floor planners, online maps, timelines, and many others applications that are very easy to use, and that support different learning styles. Usually, I use Moodle as a school operating system and web apps as its social applications, to illustrate what I believe can be a very powerful way of using Moodle and the web for learning. Designing meaningful activities in Moodle gives students the opportunity to express their creativity by using these tools, and reflecting on the produced multimedia artifacts with both peers and teacher. However, we have to keep in mind some issues of e-safety, backups, and licensing when using these online tools, usually associated with online communities. After all, we will have our students using them, and they will therefore be exposed to some risks. Creating dynamic charts using Google Drive (Spreadsheets) Assigning students in our Moodle course tasks will require them to use a tool like Google Spreadsheets to present their plans to colleagues in a visual way. Google Drive (http://drive.google.com) provides a set of online productivity tools that work on web standards and recreates a typical Office suite. We can make documents, spreadsheets, presentations, drawings, or forms. To use Google Drive, we will need a Google account. After creating our account and logging in to Google Drive, we can organize the files displayed on the right side of the screen, add them to folders, tag them, search (of course, it's Google!), collaborate (imagine a wiki spreadsheet), export to several formats (including the usual formats for Office documents from Microsoft, Open Office, or Adobe PDF), and publish these documents online. We will start by creating a new Spreadsheet to make a budget for a music studio which will be built during the music course, by navigating to CREATE | Spreadsheet. Insert a chart As in any spreadsheet application, we can add a title by double-clicking on Untitled spreadsheet and then we add some equipment and cost to the cells: After populating our table with values and selecting all of them, we should click on the Insert chart button. The Start tab will show up in the Chart Editor pop up, as shown in the following screenshot: If we click on the Charts tab, we can pick from a list of available charts. Let's pick one of the pie charts. In the Customize tab, we can add a title to the chart, and change its appearance: When everything is done, we can click on the Insert button, and the chart previewed in the Customize tab will be added to the spreadsheet. Publish If we click on the chart, a square will be displayed on the upper-right corner, and if we click on the drop-down arrow, we see a Publish chart... option, which can be used to publish the chart. When we click on this option, we will be presented with two ways of embedding the chart, the first, as an interactive chart, and the second, as an image. Both change dynamically if we change the values or the chart in Google Drive. We should use the image code to put the chart on a Moodle forum. Share, comment, and collaborate Google Drive has the options of sharing and allowing comments and changes in our spreadsheet by other people. 
On the upper-right corner of each opened document, there are two buttons for that, Comments and Share. To add collaborators to our spreadsheet, we have to click on the Share button and then add their contacts (for example, e-mail) in the Invite people: field, then click on the Share & save button, and hit Done. If a collaborator is working on the same spreadsheet, at the same time we are, we can see it below the Comments and Share buttons as shown in the following screenshot: If we click on the arrow next to 1 other viewer we can chat directly with the collaborator as we edit it collaboratively: Remember that, this can be quite useful in distance courses that have collaborative tasks assigned to groups. Creating a shared folder using Google Drive We can also use the sharing functionality to share documents with the collaborators (15 GB of space for that). In the main Google Drive page, we can create a folder by navigating to Create | Folder. We are then required to give it a name: The folder will be shown in the files and folder explorer in Google Drive: To share it with someone, we need to right-click the folder and choose the Share... option. Then, just like the process of sharing a spreadsheet we saw previously, we just need to add our collaborators' contacts (for example, e-mail) in the Invite people: field, then click on Share & save and hit Done. The invited people will receive an e-mail to add the shared folder to their Google Drive (they need a Google account for this) and it is done. Everything we add to this folder is automatically synced with everyone. This includes all the Google Drive documents, PDFs, and all the files uploaded to this folder. And it's an easy way to share multimedia projects between a group of people working on the same project.
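As a closing aside on the chart example above, the same pie chart can also be produced programmatically with Google Apps Script rather than through the spreadsheet UI. This is only a rough sketch: the sheet name, data range, anchor position, and chart title are placeholder assumptions, and scripting is not part of the Moodle workflow described in this article.

function insertBudgetPieChart() {
  // Grab the active spreadsheet and the sheet holding the budget table.
  // 'Sheet1' and the range A1:B8 are placeholder assumptions.
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Sheet1');
  var chart = sheet.newChart()
    .setChartType(Charts.ChartType.PIE)
    .addRange(sheet.getRange('A1:B8'))
    .setOption('title', 'Music studio budget')
    .setPosition(2, 4, 0, 0) // anchor the chart at row 2, column 4
    .build();
  sheet.insertChart(chart);
}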

Styling the Forms

Packt
20 Nov 2013
8 min read
(For more resources related to this topic, see here.) CSS3 for web forms CSS3 brings us infinite new possibilities and allows styling to make better web forms. CSS3 gives us a number of new ways to create an impact with our form designs, with quite a few important changes. HTML5 introduced useful new form elements such as sliders and spinners and old elements such as textbox and textarea, and we can make them look really cool with our innovation and CSS3. Using CSS3, we can turn an old and boring form into a modern, cool, and eye catching one. CSS3 is completely backwards compatible, so we will not have to change the existing form designs. Browsers have and will always support CSS2. CSS3 forms can be split up into modules. Some of the most important CSS3 modules are: Selectors (with pseudo-selectors) Backgrounds and Borders Text (with Text Effects) Fonts Gradients Styling of forms always varies with requirements and the innovation of the web designer or developer. In this article, we will look at those CSS3 properties with which we can style our forms and give them a rich and elegant look. Some of the new properties of CSS3 required vendor prefixes, which were used frequently as they helped browsers to read the code. In general, it is no longer needed to use them with CSS3 for some of the properties, such as border-radius, but they come into action when the browser doesn't interpret the code. A list of all the vendor prefixes for major browsers is given as follows: -moz-: Firefox -webkit-: WebKit browsers such as Safari and Chrome -o-: Opera -ms-: Internet Explorer Before we start styling the form, let us have a quick revision of form modules for better understanding and styling of the forms. Selectors and pseudo-selectors Selectors are a pattern used to select the elements which we want to style. A selector can contain one or more simple selectors separated by combinators. The CSS3 Selectors module introduces three new attribute selectors; they are grouped together under the heading Substring Matching Attribute Selectors. These new selectors are as follows: [att^=val]: The "begins with" selector [att$=val]: The "ends with" selector [att*=val]: The "contains" selector The first of these new selectors, which we will refer to as the "begins with" selector, allows the selection of elements where a specified attribute (for example, the href attribute of a hyperlink) begins with a specified string (for example, http://, https://, or mailto:). In the same way, the additional two new selectors, which we will refer to as the "ends with" and "contains" selectors, allow the selection of elements where a specified attribute either ends with or contains a specified string respectively. A CSS pseudo-class is just an additional keyword to selectors that tells a special state of the element to be selected. For example, :hover will apply a style when the user hovers over the element specified by the selector. Pseudo-classes, along with pseudo-elements, apply a style to an element not only in relation to the content of the document tree, but also in relation to external factors like the history of the navigator, such as :visited, and the status of its content, such as :checked, on some form elements. The new pseudo-classes are as follows: Type Details :last-child It is used to match an element that is the last child element of its parent element. :first-child It is used to match an element that is the first child element of its parent element. 
:checked It is used to match elements such as radio buttons or checkboxes which are checked. :first-of-type It is used to match the first child element of the specified element type. :last-of-type It is used to match the last child element of the specified element type. :nth-last-of-type(N) It is used to match the Nth child element from the last of the specified element type. :only-child It is used to match an element if it's the only child element of its parent. :only-of-type It is used to match an element that is the only child element of its type. :root It is used to match the element that is the root element of the document. :empty It is used to match elements that have no children. :target It is used to match the current active element that is the target of an identifier in the document's URL. :enabled It is used to match user interface elements that are enabled. :nth-child(N) It is used to match every Nth child element of the parent. :nth-of-type(N) It is used to match every Nth child  element of the parent counting from the last of the parent. :disabled It is used to match user interface elements that are disabled. :not(S) It is used to match elements that aren't matched by the specified selector. :nth-last-child(N) Within a parent element's list of child elements, it is used to match elements on the basis of their positions. Backgrounds CSS3 contains several new background attributes; and moreover, in CSS3, some changes are also made in the previous properties of the background; which allow greater control on the background element. The new background properties added are as follows. The background-clip property The background-clip property is used to determine the allowable area for the background image. If there is no background image, then this property has only visual effects such as when the border has transparent regions or partially opaque regions; otherwise, the border covers up the difference. Syntax The syntax for the background-clip property are as follows: background-clip: no-clip / border-box / padding-box / content-box; Values The values for the background-clip property is as follows: border-box: With this, the background extends to the outside edge of the border padding-box: With this, no background is drawn below the border content-box: With this, the background is painted within the content box; only the area the content covers is painted no-clip: This is the default value, same as border-box The background-origin property The background-origin property specifies the positioning of the background image or color with respect to the background-position property. This property has no effect if the background-attachment property for the background image is fixed. Syntax The following is the syntax for the background-attachment property: background-origin: border-box / padding-box / content-box; Values The values for the background-attachment property are as follows: border-box: With this, the background extends to the outside edge of the border padding-box: By using this, no background is drawn below the border content-box: With this, the background is painted within the content box The background-size property The background-size property specifies the size of the background image. If this property is not specified then the original size of the image will be displayed. 
Syntax The following is the syntax for the background-size property: background-size: length / percentage / cover / contain; Values The values for the background-size property are as follows: length: This specifies the height and width of the background image. No negative values are allowed. percentage: This specifies the height and width of the background image in terms of the percent of the parent element. cover: This specifies the background image to be as large as possible so that the background area is completely covered. contain: This specifies the image to the largest size such that its width and height can fit inside the content area. Apart from adding new properties, CSS3 has also enhanced some old background properties, which are as follows. The background-color property If the underlying layer of the background image of the element cannot be used, we can specify a fallback color in addition to specifying a background color. We can implement this by adding a forward slash before the fallback color. background-color: red / blue; The background-repeat property In CSS2 when an image is repeated at the end, the image often gets cut off. CSS3 introduced new properties with which we can fix this problem: space: By using this property between the image tiles, an equal amount of space is applied until they fill the element round: By using this property until the tiles fit the element, the image is scaled down The background-attachment property With the new possible value of local, we can now set the background to scroll when the element's content is scrolled. This comes into action with elements that can scroll. For example: body{background-image:url('example.gif');background-repeat:no-repeat;background-attachment:fixed;} CSS3 allows web designers and developers to have multiple background images, using nothing but just a simple comma-separated list. For example: background-image: url(abc.png), url(xyz.png); Summary In this article, we learned about the basics of CSS3 and the modules in which we can categorize the CSS3 for forms, such as backgrounds. Using this we can improvise the look and feel of a form. This makes a form more effective and attractive. Resources for Article: Further resources on this subject: HTML5 Canvas [Article] Building HTML5 Pages from Scratch [Article] HTML5 Presentations - creating our initial presentation [Article]
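A quick way to experiment with the substring-matching attribute selectors and pseudo-classes described in this article is to pass the same selector strings to document.querySelectorAll in the browser console. The sketch below is purely illustrative; the element and field names are made up and are not taken from the article:

// Hypothetical markup to try the selectors against, for example:
// <a href="https://example.com">, <a href="report.pdf">, <input name="user_email">
var secureLinks  = document.querySelectorAll('a[href^="https://"]');  // "begins with"
var pdfLinks     = document.querySelectorAll('a[href$=".pdf"]');      // "ends with"
var emailFields  = document.querySelectorAll('input[name*="email"]'); // "contains"
var checkedBoxes = document.querySelectorAll('input:checked');        // :checked pseudo-class
console.log(secureLinks.length, pdfLinks.length, emailFields.length, checkedBoxes.length);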

Issues and Wikis in GitLab

Packt
20 Nov 2013
6 min read
(For more resources related to this topic, see here.) Issues The built-in features for issue tracking and documentation will be very beneficial to you, especially if you're working on extensive software projects, the ones with many components, or those that need to be supported in multiple versions at once, for example, stable, testing, and unstable. In this article, we will have a closer look at the formats that are supported for issues and wiki pages (in particular, Markdown); also the elements that can be referenced from within these and how issues can be organized. Furthermore, we will go through the process of assigning issues to team members, and keeping documentation in wiki pages, which can also be edited locally. Lastly, we will see how the RSS feeds generated by GitLab can keep your team in a closer loop around the projects they work on. The metadata covered in this article may seem trivial, but many famous software projects have gained traction due to their extensive and well-written documentation, which initially was done by core developers. It enables your users to do the same with their projects, even if only internally; it opens up for a much more efficient collaboration. GitLab-flavored Markdown GitLab comes with a Markdown formatting parser that is fairly similar to GitHubs, which makes it very easy to adapt and migrate. Many standalone editors also support this format, such as Mou (http://mouapp.com/) for Mac or MarkdownPad (http://markdownpad.com/) for Windows. On Linux, editors with a split view, such as ReText (http://sourceforge.net/projects/retext/) or the more Zen-writing UberWriter (http://uberwriter.wolfvollprecht.de/) are available. For the popular Vim editor , multiple Markdown plugins too are up for grabs on a number of GitHub repositories; one of them is Vim Markdown (https://github.com/tpope/vim-markdown) by Tim Pope. Lastly, I'd like to mention that you don't need a dedicated editor for Markdown because they are plain text files. The mentioned editors simply enhance the view through syntax highlighting and preview modes. About Markdown Markdown was originally written by John Gruber, and has since evolved into various flavors. The intention of this very lightweight markup language is to have a source that is easy to edit and can be transformed into meaningful HTML to be displayed on the Web. Different variations of Markdown have made it to a majority of very successful software projects as the default language; readme files, documentation, and even blogging engines adopt it. In Markdown, text styles can be applied, links placed, and images can be inserted. If ever Markdown, by default, does not support what you are currently trying to do, you can insert plain HTML, which will not be altered by the Markdown parser. Referring to elements inside GitLab When working with source code, it can be of importance to refer to a line of code, a file, or other things, when discussing something. Because many development teams are nowadays spread throughout the world, GitLab adapts to that and makes it easy to refer and reference many things directly from comments, wiki pages, or issues. Some things like files or lines can be referenced via links, because GitLab has unique links to the branches of a repository; others are more directly accessible. 
The following items (basically, prefixed strings or IDs) can be referenced through shortcodes:

commit messages
comments
wall posts
issues
merge requests
milestones
wiki pages

To reference items, use the following shortcodes inside any field that supports Markdown or RDoc on the web interface:

@foo for team members
#123 for issues
!123 for merge requests
$123 for snippets
1234567 for commits

Issues, knowing what needs to be done

An issue is a text message of variable length, describing a bug in the code, an improvement to be made, or something else that should be done or discussed. By commenting on the issue, developers or project leaders can respond to this request or statement. The meta information attached to an issue can be very valuable to the team, because developers can be assigned to an issue, and it can be tagged or labeled with keywords that describe the content or area to which it belongs. Furthermore, you can also set the milestone in which this fix or feature should be included. In the following screenshot, you can see the interface for issues:

Creating issues

By navigating to the Issues tab of a repository in the web interface, you can easily create new issues. Their title should be brief and precise, because a more elaborate description area is available. The description area supports GitLab-flavored Markdown, as mentioned previously. Upon creation, you can choose a milestone and a user to assign the issue to, but you can also leave these fields unset, for example, to let your developers choose what they want to work on and when. Before they begin their work, they can assign the issues to themselves. In the following screenshot, you can see what the issue creation form looks like:

Working with labels or tags

Labels are tags used to organize issues by topic and severity. Creating labels is as easy as inserting them, separated by commas, into the respective field while creating an issue. Currently, in Version 5.2, certain keywords trigger a certain background color on the label. Labels like critical or bug turn red, feature turns green, and other labels are blue by default. The following screenshot shows what a list of labeled features looks like:

After the creation of a label, it will be listed under the Labels tab within the Issues page, with a link that lists all the issues that have been given the same label. Filtering by label, assigned user, or milestone is also possible from the list of issues within each project's overview.

Summary

In this article, we have had a look at the project management side of things. You can now make use of the built-in possibilities to distribute tasks across team members through issues, keep track of the work that still has to be done on them, or enable observers to point out bugs.

Resources for Article: Further resources on this subject: Using Gerrit with GitHub [Article] The architecture of JavaScriptMVC [Article] Using the OSGi Bundle Repository in OSGi and Apache Felix 3.0 [Article]

Building Responsive Image Sliders

Packt
20 Nov 2013
7 min read
(For more resources related to this topic, see here.) Responsive image sliders Opening a website and seeing an image slider in the header area is common nowadays. Image sliders display highlighted content, which are really useful, within a limited space. Although the free space is more limited when a site is viewed through mobile devices, the slider element still catches the client's attention. The difference between how much area can be used to display a highlighted content and the resource available to render it is really big if compared with desktop, where we generally do not have problems with script performance, and the interaction of each transition is performed through the use of arrow signs to switch images. When the responsive era started, the way that people normally interacted with image sliders was observed, and changes, such as the way to change each slide, were identified, based on the progressive enhancement concept. The solution was to provide a similar experience to the users of mobile devices: the adoption of gestures and touches on image slider elements for devices that accept them instead of displaying fallbacks. With the constant evolution of browsers and technologies, there are many image slider plugins with responsive characteristics. My personal favorite plugins are Elastislide, FlexSlider2, ResponsiveSlides, Slicebox, and Swiper. There are plenty available, and the only way to find one you truly like is to try them! Let's look in detail at how each of them works. Elastislide plugin Elastislide is a responsive image slider that will adapt its size and behavior in order to work on any screen size based on jQuery. This jQuery plugin handles the slider's structure, including images with percentage-based width inside, displaying it horizontally or vertically with a predefined minimum number of shown images. Elastislide is licensed under the MIT license and can be downloaded from https://github.com/codrops/Elastislide. When we are implementing an image slider, simply decreasing the container size and displaying a horizontal scrollbar will not solve the problem for small devices gracefully. The recommendation is to resize the internal items too. Elastislide fixes this resizing issue very well and defines the minimum elements we want to show instead of simply hiding those using CSS. Also, Elastislide uses a complementary and customized version of jQuery library named jQuery++. jQuery++ is another JavaScript library very useful to deal with DOM and special events. In this case, Elastislide has a custom version of jQuery++, which enables the plugin working with swipe events on touch devices. 
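A note in passing: the gestures-versus-arrows adaptation mentioned above does not have to rely on a plugin at all; a simple capability check of our own can flag touch-capable browsers so the stylesheet can swap arrow controls for swiping. This is an illustrative sketch only; the 'touch' class name is an arbitrary assumption and is not something Elastislide requires:

// Flag touch-capable browsers on the root element so CSS can, for example,
// hide arrow controls and rely on swipe gestures instead.
if ('ontouchstart' in window || (window.navigator && window.navigator.maxTouchPoints > 0)) {
  document.documentElement.className += ' touch';
}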
How to do it As we will see four different applications of this plugin for the same carousel, we will use the same HTML carousel's structure and may modify only the JavaScript before executing the plugin, specifying the parameters: <ul id="carousel" class="elastislide-list"> <li><a href="#"><img src = "image-photo.jpg" /></a></li> <li><a href="#"><img src = "image-sky.jpg" /></a></li> <li><a href="#"><img src = "image-gardem.jpg" /></a></li> <li><a href="#"><img src = "image-flower.jpg" /></a></li> <li><a href="#"><img src = "image-belt.jpg" /></a></li> <li><a href="#"><img src = "image-wall.jpg" /></a></li> <li><a href="#"><img src = "image-street.jpg" /></a></li> </ul> At the bottom of the DOM (before the </body> closing tag), we will need to include the jQuery and jQuery++ libraries (required for this solution), and then the ElastiSlide script: <script src = "http://code.jquery.com/jquery-1.9.1.min.js"></script> <script src = "js/jquerypp.custom.js"></script> <script src = "js/modernizr.custom.17475.js"></script> <script src = "js/jquery.elastislide.js"></script> Then, include the CSS stylesheet inside the <head> tag: <link rel="stylesheet" type="text/css" href="css/elastislide.css" /> Alright, now we already have the basis to show four different examples. For each example, you must add different parameters when executing the plugin script, in order to get different rendering, depending on the project need. Example 1 – minimum of three visible images (default) In this first example, we will see the default visual and behavior, and whether we will put the following code right after it, including the ElastiSlide plugin: <script type="text/javascript"> $('#carousel').elastislide(); </script> The default options that come with this solution are: Minimum three items will be shown Speed of scroll effect is 0.5 seconds Horizontal orientation Easing effect is defined as ease-in-out The carousel will start to show the first image on the list The following screenshot represents what the implementation of this code will look like. Look at the difference between its versions shown on tablets and smartphones: Example 2 – vertical with a minimum of three visible images There is an option to render the carousel vertically, just by changing one parameter. Furthermore, we may speed up the scrolling effect. Remember to include the same files used in Example 1, and then insert the following code into the DOM: <script type="text/javascript"> $('#carousel').elastislide({ orientation: 'vertical', speed: 250 }); </script> By default, three images are displayed as a minimum. But this minimum value can be modified as we will see in our next example: Example 3 – fixed wrapper with a minimum of two visible images In this example, we will define the minimum visible items in the carousel, the difference may be noticed when the carousel is viewed on small screens and the images will not reduce too much. Also, we may define the image to be shown starting from the third one. Remember to include the same fi les that were used in Example 1, and then execute the scripts informing the following parameters and positioning them after including the ElastiSlide plugin: <script> $('#carousel').elastislide({ minItems: 2, start: 2 }); </script> Example 4 – minimum of four images visible in an image gallery In the fourth example, we can see many JavaScript implementations. However, the main objective of this example is to show the possibility which this plugin provides to us. 
Through the use of plugin callback functions and private functions we may track the click and the current image, and then handle this image change on demand by creating an image gallery: <script> var current = 0; var $preview = $('#preview'); var $carouselEl = $('#carousel'); var $carouselItems = $carouselEl.children(); var carousel = $carouselEl.elastislide({ current: current, minItems: 4, onClick: function(el, pos, evt){ changeImage(el, pos); evt.preventDefault(); }, onReady: function(){ changeImage($carouselItems.eq(current), current); } }); function changeImage(el, pos) { $preview.attr('src', el.data('preview')); $carouselItems.removeClass('current-img'); el.addClass('current-img'); carousel.setCurrent(pos); } </script> For this purpose, ElastiSlide may not have big advantages if compared with other plugins because it depends on our extra development to finalize this gallery. So, let's see what the next plugin offers to solve this problem. Summary This article, explains four different image-slider plugins and their implementation. These examples are elaborative for the use of the plugins. Resources for Article: Further resources on this subject: Top Features You Need to Know About – Responsive Web Design [Article] Different strategies to make responsive websites [Article] So, what is KineticJS? [Article]
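As a small extension of the four examples above, the minItems value does not have to be hard-coded; it can be chosen at initialization time from the viewport width with window.matchMedia. The breakpoints and values below are arbitrary assumptions for illustration; only the minItems option itself comes from the plugin's documented parameters:

<script>
  // Decide how many carousel items must stay visible for the current
  // viewport; the breakpoints here are arbitrary and should be tuned
  // to the actual layout.
  var minItems = 3;
  if (window.matchMedia && window.matchMedia('(max-width: 480px)').matches) {
    minItems = 2;
  } else if (window.matchMedia && window.matchMedia('(min-width: 1200px)').matches) {
    minItems = 4;
  }
  $('#carousel').elastislide({ minItems: minItems });
</script>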

Configuring oVirt

Packt
19 Nov 2013
9 min read
(For more resources related to this topic, see here.) Configuring the NFS storage NFS storage is a fairly common type of storage that is quite easy to set up and run even without special equipment. You can take the server with large disks and create NFS directory.But despite the apparent simplicity of NFS, setting s should be done with attention to details. Make sure that the NFS directory is suitable for use; go to the procedure of connecting storage to the data center. The following options are displayed after you click on the Configure Storage dialog box in which we specify the basic storage configuration: Name and Data Center: It is used to specify a name and target of the data center for storage Domain Function/Storage Type: It is used to choose a data function and NFS type Use Host: It is used to enter the host that will make the initial connection to the storage and a host who will be in the role of SPM Export Path: It is used to enter the storage server name and path of the exported directory Advanced Parameters: It provides additional connection options, such as NFS version, number of retransmissions and timeout, that are recommended to be changed only in exceptional cases Fill in the required storage settings and click on the OK button; this will start the process of connecting storage. The following image shows the New Storage dialog box with the connecting NFS storage: Configuring the iSCSI storage This section will explain how to connect the iSCSI storage to the data center with the type of storage as iSCSI. You can skip this section if you do not use iSCSI storage. iSCSI is a technology for building SAN (Storage Area Network). A key feature of this technology is the transmission of SCSI commands over the IP networks. Thus, there is a transfer of block data via IP. By using the IP networks, data transfer can take place over long distances and through network equipment such as routers and switches. These features make the iSCSI technology good for construction of low-cost SAN. oVirt supports iSCSI and iSCSI storages that can be connected to oVirt data centers. Then begin the process of connecting the storage to the data center. After you click on the Configure Storage dialog box in which you specify the basic storage configuration, the following options are displayed: Name and Data Center: It is used to specify the name and target of the data center. Domain Function/Storage Type: It is used to specify the domain function and storage type. In this case, the data function and iSCSI type. Use Host: It is used to specify the host to which the storage (SPM) will be attached. The following options are present in the search box for iSCSI targets: Address and Port: It is used to specify the address and port of the storage server that contains the iSCSI target User Authentication: Enable this checkbox if authentication is to be used on the iSCSI target CHAP username and password: It is used to specify the username and password for authentication Click on the Discover button and oVirt Engine connects to the specified server for the searching of iSCSI targets. In the resulting list, click on the designated targets, we click on the Login button to authenticate. Upon successful completion of the authentication, the display target LUN will be displayed; check it and click on OK to start connection to the data center. New storage will automatically connect to the data center. 
If it does not, select the location from the list and click on the Attach button in the detail pane where we choose a target data center. Configuring the Fibre Channel storage If you have selected Fibre Channel when creating the data center, we should create a Fibre Channel storage domain. oVirt supports Fibre Channel storage based on multiple preconfigured Logical Unit Numbers (LUN). Skip this section if you do not use Fibre Channel equipment. Begin the process of connecting the storage to the data center. Open the Guide Me wizard and click on the Configure Storage dialog box where you specify the basic storage configuration: Name and Data Center: It is used to specify the name and data center Domain Function/Storage Type: Here we need to specify the data function and Fibre Channel type Use Host: It specifies the address of the virtualization host that will act as the SPM In the area below, the list of LUNs are displayed, enable the Add LUN checkbox on the selected LUN to use it as Fibre Channel data storage. Click on the OK button and this will start the process of connecting storage to the data centers. In the Storage tab and in the list of storages, we can see created Fibre Channel storage. In the process of connecting, its status will change and at the end new storage will be activated and connected to the data center. The connection process can also be seen in the event pane. The following screenshot shows the New Storage dialog box with Fibre Channel storage type: Configuring the GlusterFS storage GlusterFS is a distributed, parallel, and linearly scalable filesystem. GlusterFS can combine the data storage that are located on different servers into a parallel network filesystem. GlusterFS's potential is very large, so developers directed their efforts towards the implementation and support of GlusterFS in oVirt (GlusterFS documentation is available at http://www.gluster.org/community/documentation/index.php/Main_Page). oVirt 3.3 has a complete data center with the GlusterFS type of storage. Configuring the GlusterFS volume Before attempting to connect GlusterFS storage into the data center, we need to create the volume. The procedure of creating GlusterFS volume is common in all versions. Select the Volumes tab in the resource pane and click on Create Volume. In the open window, fill the volume settings: Data Center: It is used to specify the data center that will be attached to the GlusterFS storage. Volume Cluster: It is used to specify the name of the cluster that will be created. Name: It is used to specify a name for the new volume. Type: It is used to specify the type of GlusterFS volume available to choose from, there are seven types of volume that implement various strategic placements of data on the filesystem. Base types are Distribute, Replicate, and Stripe and other combination of these types: Distributed Replicate, Distributed Stripe, Striped Replicate, and Distributed Striped Replicate (additional info can be found at the link: http://gluster.org/community/documentation/index.php/GlusterFS_Concepts). Bricks: With this button, a list of bricks will be collected from the filesystem. Brick is a separate piece with which volume will be built. These bricks are distributed across the hosts. As bricks use a separate directory, it should be placed on a separate partition. Access Protocols: It defines basic access protocols that can be used to gain access to the following: Gluster: It is a native protocol access to volumes GlusterFS, enabled by default. 
NFS: It is an access protocol based on NFS. CIFS: It is an access protocol based on CIFS. Allow Access From: It allows us to enter a comma-separated IP address, hostnames, or * for all hosts that are allowed to access GlusterFS volume. Optimize for oVirt Store: Enabling this checkbox will enable extended options for created volume. The following screenshot shows the dialog box of Create Volume: Fill in the parameters, click on the Bricks button, and go to the new window to add new bricks with the following properties: Volume Type: This is used to change the previously marked type of the GlusterFS volume Server: It is used to specify a separate server that will export GlusterFS brick Brick Directory: It is used to specify the directory to use Specify the server and directory and click on Add. Depending on the type of volume, specify multiple bricks. After completing the list with bricks, click on the OK button to add volume and return to the menu. Click on the OK button to create GlusterFS volumes with the specified parameters. The following screenshot shows the Add Bricks dialog box: Now that we have GlusterFS volume, we select it from the list and click on Start. Configuring the GlusterFS storage oVirt 3.3 has support for creating data centers with the GlusterFS storage type: The GlusterFS storage type requires a preconfigured data center. A pre-created cluster should be present inside the data center. The enabled Gluster service is required. Go to the Storage section in resource pane and click on New Domain. In the dialog box that opens, fill in the details of our storage. The details are given as follows: Name and Data Center: It is used to specify the name and data center Domain Function/Storage Type: It is used to specify the data function and GlusterFS type Use Host: It is used to specify the host that will connect to the SPM Path: It is used to specify the path to the location in the format hostname:volume_name VFS Type: Leave it as glusterfs and leave Mount Option blank Click on the OK button; this will start the process of creating the repository. The created storage automatically connects to the specified data centers. If not, select the repository created in the list, and in the subtab named Data Center in the detail pane, click on the Attach button and choose our data center. After you click on OK, the process of connecting storage to the data center starts. The following screenshot shows the New Storage dialog box with the GlusterFS storage type. Summary In this article we learned how to configure NFS Storage, iSCSI Storage, FC storages, and GlusterFS Storage. Resources for Article: Further resources on this subject: Tips and Tricks on Microsoft Application Virtualization 4.6 [Article] VMware View 5 Desktop Virtualization [Article] Qmail Quickstarter: Virtualization [Article]
Read more
  • 0
  • 0
  • 2190