
How-To Tutorials - Front-End Web Development

341 Articles

Understanding Backbone

Packt
02 Sep 2013
12 min read
Backbone.js is a lightweight JavaScript framework based on the Model-View-Controller (MVC) pattern that allows developers to create single-page web applications. With Backbone, it is possible to update a web page quickly using the REST approach, with a minimal amount of data transferred between client and server.

Backbone.js is becoming more popular day by day and is being used on a large scale for web applications and IT startups; some of them are as follows:

- Groupon Now!: The team decided that their first product would be AJAX-heavy but should still be linkable and shareable. Though they were completely new to Backbone, they found that its learning curve was incredibly quick, so they were able to deliver the working product in just two weeks.
- Foursquare: This used the Backbone.js library to create model classes for the entities in foursquare (for example, venues, check-ins, and users). They found that Backbone's model classes provide a simple and lightweight mechanism to capture an object's data and state, complete with the semantics of classical inheritance.
- LinkedIn mobile: This used Backbone.js to create its next-generation HTML5 mobile web app. Backbone made it easy to keep the app modular, organized, and extensible, so it was possible to program the complexities of LinkedIn's user experience. Moreover, they use the same code base in their mobile applications for the iOS and Android platforms.
- WordPress.com: This is a SaaS version of WordPress. It uses Backbone.js models, collections, and views in its notification system, and is integrating Backbone.js into the Stats tab and other features throughout the home page.
- Airbnb: This is a community marketplace for users to list, discover, and book unique spaces around the world. Its development team has used Backbone in many of its latest products. Recently, they rebuilt their mobile website with Backbone.js and Node.js tied together with a library named Rendr.

You can visit the following link to get acquainted with other usage examples of Backbone.js: http://backbonejs.org/#examples

Backbone.js was started by Jeremy Ashkenas of DocumentCloud in 2010 and is now being used and improved by developers all over the world using Git, the distributed version control system.

In this article, we are going to provide some practical examples of how to use Backbone.js, and we will structure a design for a program named Billing Application by following the MVC and Backbone patterns. Reading this article is especially useful if you are new to developing with Backbone.js.

Designing an application with the MVC pattern

MVC is a design pattern that is widely used in user-facing software, such as web applications. It is intended for splitting data and representing it in a way that makes it convenient for user interaction.
To understand what it does, consider the following:

- Model: This contains data and provides the business logic used to run the application
- View: This presents the model to the user
- Controller: This reacts to user input by updating the model and the view

There could be some differences in the MVC implementation, but in general it conforms to the following scheme. Worldwide practice shows that use of the MVC pattern provides various benefits to the developer:

- Following the separation of concerns paradigm, which splits an application into independent parts, it is easier to modify or replace individual parts
- It achieves code reusability by rendering a model in different views, without the need to implement model functionality in each view
- It requires less training and has a quicker startup time for new developers within an organization

To have a better understanding of the MVC pattern, we are going to design a Billing Application. We will refer to this design throughout the book when we are learning specific topics. Our Billing Application will allow users to generate invoices, manage them, and send them to clients. According to worldwide practice, an invoice should contain a reference number, date, information about the buyer and seller, bank account details, a list of provided products or services, and an invoice sum. Let's have a look at the following screenshot to understand how an invoice appears:

How to do it...

Let's follow the ensuing steps to design an MVC structure for the Billing Application:

First, we write down a list of functional requirements for this application. We assume that the end user may want to be able to do the following:

- Generate an invoice
- E-mail the invoice to the buyer
- Print the invoice
- See a list of existing invoices
- Manage invoices (create, read, update, and delete)
- Update an invoice status (draft, issued, paid, and canceled)
- View a yearly income graph and other reports

To simplify the process of creating multiple invoices, the user may want to manage information about buyers and his personal details in a specific part of the application before he/she creates an invoice. So, our application should provide additional functionality to the end user, such as the following:

- The ability to see a list of buyers and use it when generating an invoice
- The ability to manage buyers (create, read, update, and delete)
- The ability to see a list of bank accounts and use it when generating an invoice
- The ability to manage his/her own bank accounts (create, read, update, and delete)
- The ability to edit personal details and use them when generating an invoice

Of course, we may want to have more functions, but this is enough for demonstrating how to design an application using the MVC pattern.

Next, we architect the application using the MVC pattern. After we have defined the features of our application, we need to understand what is more related to the model (business logic) and what is more related to the view (presentation), and split the functionality into several parts.

Then, we learn how to define models. Models present data and provide data-specific business logic. Models can be related to each other. In our case, they are as follows:

- InvoiceModel
- InvoiceItemModel
- BuyerModel
- SellerModel
- BankAccountModel

Then, we define collections of models. Our application allows users to operate on a number of models, so they need to be organized into a special iterable object named Collection.
We need the following collections:

- InvoiceCollection
- InvoiceItemCollection
- BuyerCollection
- BankAccountCollection

Next, we define views. Views present a model or a collection to the application user. A single model or collection can be rendered for use by multiple views. The views that we need in our application are as follows:

- EditInvoiceFormView
- InvoicePageView
- InvoiceListView
- PrintInvoicePageView
- EmailInvoiceFormView
- YearlyIncomeGraphView
- EditBuyerFormView
- BuyerPageView
- BuyerListView
- EditBankAccountFormView
- BankAccountPageView
- BankAccountListView
- EditSellerInfoFormView
- ViewSellerInfoPageView
- ConfirmationDialogView

Finally, we define a controller. A controller allows users to interact with an application. In MVC, each view can have a different controller that is used to do the following:

- Map a URL to a specific view
- Fetch models from a server
- Show and hide views
- Handle user input

Defining business logic with models and collections

Now, it is time to design the business logic for the Billing Application using the MVC and OOP approaches. In this recipe, we are going to define an internal structure for our application with model and collection objects. Although a model represents a single object, a collection is a set of models that can be iterated, filtered, and sorted. Relations between models and collections in the Billing Application conform to the following scheme:

How to do it...

For each model, we list its properties and then its methods:

We define the BuyerModel properties:

- id: Integer, required, unique
- name: Text, required
- address: Text, required
- phoneNumber: Text, optional

Then, we define the SellerModel properties:

- id: Integer, required, unique
- name: Text, required
- address: Text, required
- phoneNumber: Text, optional
- taxDetails: Text, required

After this, we define the BankAccountModel properties:

- id: Integer, required, unique
- beneficiary: Text, required
- beneficiaryAccount: Text, required
- bank: Text, optional
- SWIFT: Text, required
- specialInstructions: Text, optional

We define the InvoiceItemModel properties:

- id: Integer, required, unique
- deliveryDate: Date, required
- description: Text, required
- price: Decimal, required
- quantity: Decimal, required

Next, we define the InvoiceItemModel methods. We don't need to store the item amount in the model, because it always depends on the price and the quantity, so it can be calculated:

- calculateAmount: takes no arguments and returns a Decimal

Now, we define the InvoiceModel properties:

- id: Integer, required, unique
- referenceNumber: Text, required
- date: Date, required
- bankAccount: Reference, required
- items: Collection, required
- comments: Text, optional
- status: Integer, required

We define the InvoiceModel methods. The invoice amount can easily be calculated as the sum of the invoice item amounts:

- calculateAmount: takes no arguments and returns a Decimal

Finally, we define the collections. In our case, they are InvoiceCollection, InvoiceItemCollection, BuyerCollection, and BankAccountCollection. They are used to store models of the appropriate type and provide methods to add models to and remove models from the collections.

How it works...

Models in Backbone.js are implemented by extending Backbone.Model, and collections are made by extending Backbone.Collection. To implement relations between models and collections, we can use special Backbone extensions.
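To make the model and collection design concrete, here is a minimal Backbone.js sketch of InvoiceItemModel and InvoiceItemCollection. The defaults and the calculateAmount method mirror the lists above; the exact field handling is an assumption for illustration rather than the book's actual code:

    var InvoiceItemModel = Backbone.Model.extend({
      defaults: {
        deliveryDate: null,
        description: '',
        price: 0,
        quantity: 0
      },

      // The amount is not stored; it is derived from price and quantity.
      calculateAmount: function () {
        return this.get('price') * this.get('quantity');
      }
    });

    var InvoiceItemCollection = Backbone.Collection.extend({
      model: InvoiceItemModel
    });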
To learn more about object properties, methods, and OOP programming in JavaScript, you can refer to the following resource: https://developer.mozilla.org/en-US/docs/JavaScript/Introduction_to_Object-Oriented_JavaScript

Modeling an application's behavior with views and a router

Unlike traditional MVC frameworks, Backbone does not provide any distinct object that implements controller functionality. Instead, the controller responsibility is split between Backbone.Router and Backbone.View, and the following is done:

- A router handles URL changes and delegates application flow to a view. Typically, the router fetches a model from the storage asynchronously. When the model is fetched, it triggers a view update.
- A view listens to DOM events and either updates a model or navigates the application through a router.

The following diagram shows a typical workflow in a Backbone application:

How to do it...

Let's follow the ensuing steps to understand how to define basic views and a router in our application:

First, we need to create wireframes for the application. Let's draw a couple of wireframes in this recipe:

- The Edit Invoice page allows users to select a buyer, select the seller's bank account from the lists, enter the invoice's date and a reference number, and build a table of shipped products and services.
- The Preview Invoice page shows how the final invoice will be seen by a buyer. This display should render all the information we have entered in the Edit Invoice form. Buyer and seller information can be looked up in the application storage. The user has the option to either go back to the Edit display or save this invoice.

Then, we define the view objects. According to the previous wireframes, we need two main views: EditInvoiceFormView and PreviewInvoicePageView. These views operate on InvoiceModel, which refers to other objects such as BankAccountModel and InvoiceItemCollection.

Now, we split the views into subviews. For each item in the Products or Services table, we may want to recalculate the Amount field depending on what the user enters in the Price and Quantity fields. The first way to do this is to re-render the entire view when the user changes a value in the table; however, this is not efficient and takes a significant amount of computing power. We don't need to re-render the entire view if we only want to update a small part of it. It is better to split the big view into different, independent pieces (subviews) that are able to render only a specific part of the big view. In our case, we can have the following views:

As we can see, EditInvoiceItemTableView and PreviewInvoiceItemTableView render InvoiceItemCollection with the help of the additional views EditInvoiceItemView and PreviewInvoiceItemView, which render InvoiceItemModel. Such separation allows us to re-render an item inside a collection when it is changed.

Finally, we define the URL paths that will be associated with a corresponding view. In our case, we can have several URLs to show different views, for example:

- /invoice/add
- /invoice/:id/edit
- /invoice/:id/preview

Here, we assume that the Edit Invoice view can be used either for creating a new invoice or for editing an existing one. In the router implementation, we can load this view and show it on specific URLs.

How it works...

The Backbone.View object can be extended to create our own view that will render model data. In a view, we can define handlers for user actions, such as data input and keyboard or mouse events.
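As a rough illustration of the view behavior just described, the following sketch extends Backbone.View to render a single invoice item and react to input events. The class name, selectors, and handler bodies are assumptions for illustration, not code from the article:

    var EditInvoiceItemView = Backbone.View.extend({
      // Map DOM events inside this view to handler methods.
      events: {
        'change .price, .quantity': 'updateModel'
      },

      initialize: function () {
        // Re-render this row whenever its model changes.
        this.listenTo(this.model, 'change', this.render);
      },

      updateModel: function () {
        this.model.set({
          price: parseFloat(this.$('.price').val()) || 0,
          quantity: parseFloat(this.$('.quantity').val()) || 0
        });
      },

      render: function () {
        this.$('.amount').text(this.model.calculateAmount());
        return this;
      }
    });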
In the application, we can have a single Backbone.Router object that allows users to navigate through an application by changing the URL in the address bar of the browser. The router object contains a list of available URLs and callbacks. In a callback function, we can trigger the rendering of a specific view associated with a URL. If we want a user to be able to jump from one view to another, we may want him/her to either click on regular HTML links associated with a view or navigate the application programmatically.
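A minimal router sketch for the URL paths listed earlier might look like the following; the fetch-then-render flow matches the description above, while the callback body and view names are an assumed illustration:

    var AppRouter = Backbone.Router.extend({
      routes: {
        'invoice/add': 'addInvoice',
        'invoice/:id/edit': 'editInvoice',
        'invoice/:id/preview': 'previewInvoice'
      },

      // addInvoice and previewInvoice would be defined in the same way.
      editInvoice: function (id) {
        var invoice = new InvoiceModel({ id: id });
        // Fetch the model asynchronously, then trigger a view update.
        invoice.fetch({
          success: function (model) {
            new EditInvoiceFormView({ model: model }).render();
          }
        });
      }
    });

    // Start listening for URL changes.
    new AppRouter();
    Backbone.history.start();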


jQuery refresher

Packt
30 Aug 2013
6 min read
If you haven't used jQuery in a while, that's okay, we'll get you up to speed very quickly. The first thing to realize is that the Document.Ready function is extremely important when using UI. Although page loading happens incredibly fast, we always want our DOM (the HTML content) to be loaded before our UI code gets applied. Otherwise we have nothing to apply it to! We want to place our code inside the Document.Ready function, and we will be writing it the shorthand way as we did previously. Please remove the previous UI checking code from your header:

    $(function() {
      // Your code here is called only once the DOM is completely loaded
    });

Easy enough. Let's refresh on some jQuery selectors. We'll be using these a lot in our examples so we can manipulate our page. I'll write out a few DOM elements next and how you can select them. I will apply hide() to them so we know what's been selected and hidden. Feel free to place the JavaScript portion in your header script tags and the HTML elements within your <body> tags as follows.

Selecting elements by tag name:

    $('p').hide();

    <p>This is a paragraph</p>
    <p>And here is another</p>
    <p>All paragraphs will go hidden!</p>

Selecting classes:

    $('.edit').hide();

    <p>This is an intro paragraph</p>
    <p class="edit">But this will go hidden!</p>
    <p>Another paragraph</p>
    <p class="edit">This will also go hidden!</p>

Selecting IDs:

    $('#box').hide();

    <div id="box">Hide the Box</div>
    <div id="house">Just a random divider</div>

Those are the three basic selectors. We can get more advanced and use the CSS3 selectors as follows:

    $("input[type=submit]").hide();

    <form>
      <input type="text" name="name" />
      <input type="submit" />
    </form>

Lastly, you can chain your DOM tree to hide elements more specifically:

    $("table tr td.hidden").hide();

    <table>
      <tbody>
        <tr>
          <td>Data</td>
          <td class="hidden">Hide Me</td>
        </tr>
      </tbody>
    </table>

Step 3 – console.log is your best friend

I brought up that developing with the console open is very helpful. When you need to know details about a JavaScript item you have, whether it be the typeof type or the value, a friend of yours is the console.log() method. Notice that it is always in lowercase. This allows you to place things in the console rather than somewhere on your page. For example, if I were having trouble figuring out what a value was returning to me, I would simply do the following:

    function add(a, b) {
      return a + b;
    }
    var total = add(5, 20);
    console.log(total);

This will give me the result I wanted to know quickly and easily. Internet Explorer does not support console logging; it will prevent your JavaScript from running once it hits a console.log method. Make sure to comment out or remove all the console logs before releasing a live project, or else all the IE users will have a serious problem.

Step 4 – creating the slider widget

Let's get busy! Open your template file and let's create a DOM element to attach a slider widget to. And to make it more interesting, we are also going to add an additional DIV to show a text value. Here is what I placed in my <body> tag:

    <div id="slider"></div>
    <div id="text"></div>

It doesn't have to be a <div> tag, but it's a good generic block-level element to use. Next, to attach a slider element we place the following in our <script> tags (the empty ones):

    $(function() {
      var my_slider = $("#slider").slider();
    });

Refresh your page, and you will have a widget that can slide along a bar.
If you don't see a slider, first check your browser's developer tools console to see if there are any JavaScript errors. If you still don't see any, make sure you don't have a JavaScript blocker on! The reason we assign a variable to the slider is that, later on, we may want to reference its options, which you'll see next. You are not required to do this, but if you want to access the slider outside of its initial setup, you must give it a variable name.

Our widget doesn't do much now, but it feels cool to finally make something, whatever it is! Let's break down a few things we can customize. There are three categories:

- Options: These are defined in a JavaScript object ({}) and determine how you want your widget to behave when it's loaded; for example, you could set your slider to have minimum and maximum values.
- Events: These are always a function and they are triggered when a user does something to your item.
- Methods: You can use methods to destroy a widget, get and set values from outside of the widget, and even set different options from what you started with.

To play with these categories, the easiest start is to adjust the options. Let's do it by creating an empty object inside our slider:

    var my_slider = $("#slider").slider({});

Then we'll create a minimum and maximum value for our slider using the following code:

    var my_slider = $("#slider").slider({
      min: 1,
      max: 50
    });

Now our slider will accept and move along a bar with 50 values. There are many more options in the UI API located at api.jquery.com under slider. You'll find many other options we won't have time to cover, such as a step option to make the slider count every two digits, as follows:

    var my_slider = $("#slider").slider({
      min: 1,
      max: 50,
      step: 2
    });

If we want to attach this to the text field we created in the DOM, a good way to start is by assigning the minimum value to the DIV, as this way we only have to change it once:

    var min = my_slider.slider('option', 'min');
    $("#text").html(min);

Next we want to update the text value every time the slider is moved. Easy enough; this will introduce us to our first event. Let's add it:

    var my_slider = $("#slider").slider({
      min: 1,
      max: 50,
      step: 2,
      change: function(event, ui) {
        $("#text").html(ui.value);
      }
    });

Summary

This article described the basis for all widgets: creating them, and setting the options, events, and methods. That is the very simple pattern that handles everything for us.

Resources for Article:

Further resources on this subject:

- Using jQuery and jQueryUI Widget Factory plugins with RequireJS [Article]
- jQuery Animation: Tips and Tricks [Article]
- New Effects Added by jQuery UI [Article]
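The methods category mentioned above is the one piece the recipe does not demonstrate. As a small, hedged example, the following calls use the standard jQuery UI slider API to inspect and change the widget after it has been created:

    // Read the slider's current value.
    var current = my_slider.slider('value');
    console.log(current);

    // Set the value programmatically (this also fires the change
    // callback if the value actually changes).
    my_slider.slider('value', 25);

    // Adjust an option after initialization, for example the maximum.
    my_slider.slider('option', 'max', 100);

    // Remove the widget and return the element to its original state.
    my_slider.slider('destroy');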


Getting started with your first jQuery plugin

Packt
29 Aug 2013
9 min read
Getting ready

Before we dive into our development, we need to have a good idea of how our plugin is going to work. For this, we will write some simple HTML to declare a shape and a button. Each shape will be declared in the CSS, and then we will use JavaScript to toggle which shape is shown by toggling the CSS class appended to it. The aim of this recipe is to help you familiarize yourself both with jQuery plugin development and the jQuery Boilerplate template.

How to do it

Our first step is to set up our HTML. For this we need to open up our index.html file. We will need to add two elements in HTML: a shape and a wrapper to contain our shape. The button for changing the shape element will be added dynamically by our JavaScript; we will then add an event listener to it so that we can change the shape. The HTML code for this is as follows:

    <div class="shape_wrapper">
      <div class="shape">
      </div>
    </div>

This should be placed in the div tag with class="container" in our index.html file.

We then need to define each of the shapes we intend to use using CSS. In this example, we will draw a square, a circle, a triangle, and an oval, all of which can be defined using CSS. The shape we will be manipulating will be 100px * 100px. The following CSS should be placed in your main.css file:

    .shape{
      width: 100px;
      height: 100px;
      background: #ff0000;
      margin: 10px 0px;
    }

    .shape.circle{
      border-radius: 50px;
    }

    .shape.triangle{
      width: 0;
      height: 0;
      background: transparent;
      border-left: 50px solid transparent;
      border-right: 50px solid transparent;
      border-bottom: 100px solid #ff0000;
    }

    .shape.oval{
      width: 100px;
      height: 50px;
      margin: 35px 0;
      border-radius: 50px / 25px;
    }

Now it's time to get onto the JavaScript. The first step in creating the plugin is to name it; in this case we will call it shapeShift. In the jQuery Boilerplate code, we will need to set the value of the pluginName variable to equal shapeShift. This is done as:

    var pluginName = "shapeShift";

Once we have named the plugin, we can edit our main.js file to call the plugin. We will call the plugin by selecting the element using jQuery and creating an instance of our plugin by running .shapeShift() as follows:

    (function(){
      $('.shape_wrapper').shapeShift();
    }());

For now this will do nothing, but it will enable us to test our plugin once we have written the code.

To ensure the flexibility of our plugin, we will store our shapes as part of the defaults object literal, meaning that, in the future, the shapes used by the plugin can be changed without the plugin code being changed. We will also set the class name of the shape in the defaults object literal so that this can be chosen by the plugin user as well. After doing this, your defaults object should look like the following:

    defaults = {
      shapes: ["square", "circle", "triangle", "oval"],
      shapeClass: ".shape"
    };

When the .shapeShift() function is triggered, it will create an instance of our plugin and then fire the init function. For this instance of our plugin, we will store the current shape location in the array; this is done by adding it to this using this.shapeRef = 0. The reason we are storing the shape reference on this is that it attaches the reference to this instance of the plugin, so it will not be available to other instances of the same plugin on the same page. Once we have stored the shape reference, we need to apply the first shape class to the div element according to our shape.
The simplest way to do this is to use jQuery to get the shape and then use addClass to add the shape class as follows:

    $(this.element).find(this.options.shapeClass).addClass(this.options.shapes[this.shapeRef]);

The final step in our init function is to add our button to enable the user to change the shape. To do this, we simply append a button element to the shape container as follows:

    $(this.element).append('<button>Change Shape</button>');

Once we have our button element, we then need to add the behavior that changes the shape of the elements. To do this we will create a separate function called changeShape. While we are still in our init function, we can add an event handler to the button to call the changeShape function. For reasons that will become apparent shortly, we will use the event delegation form of the jQuery .on() function to do this:

    $(this.element).on('click', 'button', this.changeShape);

We now need to create our changeShape function; the first thing we will do is change this function's name to changeShape. We will then change the function declaration to accept a parameter, in this case e.

The first thing to note is that this function is called from an event listener on a DOM element, and therefore this is actually the element that has been clicked on. The function was called using event delegation; the reason for this becomes apparent here, as it allows us to find out which instance of the plugin the clicked button belongs to. We do this by using the e parameter that was passed to the function. The e parameter is the jQuery event object related to the click event that was fired. Inside it, we will find a reference to the original element that the click event was bound to, which in this case is the element that the instance of the plugin is tied to. To retrieve the instance of the plugin, we can simply use the jQuery .data() function. The instance of the plugin is stored on the element as data using the data key plugin_pluginName, so we are able to retrieve it as follows:

    var plugin = $(e.delegateTarget).data("plugin_" + pluginName);

Now that we have the plugin instance, we are able to access everything it contains. The first thing we need to do is to remove the current shape class from the shape element in the DOM. To do this, we will simply find the shape element, look up the currently displayed shape in the shapes array, and then use the jQuery removeClass function to remove that class.

The code for doing this starts with a simple jQuery selector that allows us to work with the plugin element; we do this using $(plugin.element). We then look inside the plugin element to find the actual shape. As the name of the shape class is configurable, we read it from our plugin options, so when we are finding the shape we use .find(plugin.options.shapeClass). Finally we remove the class; so that we know which shape is current, we look up the shape class from the shapes array stored in the plugin options, selecting the item indicated by plugin.shapeRef. The full command then looks as follows:

    $(plugin.element).find(plugin.options.shapeClass).removeClass(plugin.options.shapes[plugin.shapeRef]);

We then need to work out the next shape we should show; we know that the current shape reference can be found in plugin.shapeRef, so we just need to work out if we have any more shapes left in the shapes array or if we should start from the beginning.
To do this, we look at the value of plugin.shapeRef and compare it to the length of the shapes array minus 1 (we subtract 1 because arrays start at 0). If the shape reference is equal to the length of the shapes array minus 1, we know that we have reached the last shape, so we reset the plugin.shapeRef parameter to 0; otherwise, we simply increment the shapeRef parameter by 1, as shown in the snippet:

    if((plugin.shapeRef) === (plugin.options.shapes.length - 1)){
      plugin.shapeRef = 0;
    }else{
      plugin.shapeRef = plugin.shapeRef + 1;
    }

Our final step is to add the new shape class to the shape element; this can be achieved by finding the shape element and using the jQuery addClass function to add the shape from the shapes array. This is very similar to the removeClass command that we used earlier, with addClass replacing removeClass:

    $(plugin.element).find(plugin.options.shapeClass).addClass(plugin.options.shapes[plugin.shapeRef]);

At this point we should now have a working plugin; so if we fire up the browser and navigate to the index.html file, we should get a square with a button beneath it. Clicking on the button should show the next shape. If your code is working correctly, the shapes should be shown in the order: square, circle, triangle, oval, and then loop back to square.

As a final test to show that each plugin instance is tied to one element, we will add a second element to the page. This is as simple as duplicating the original shape_wrapper and creating a second one as shown:

    <div class="shape_wrapper">
      <div class="shape"></div>
    </div>

If everything is working correctly when loading the index.html page, we will have two squares, each with a button underneath, and clicking on a button will change only the shape above it.

Summary

This article explained how to create your first jQuery plugin, one that manipulates the shape of a div element. We achieved this by writing some HTML to declare a shape and a button, declaring each shape in the CSS, and then using JavaScript to toggle which shape is shown by toggling the CSS class appended to it.

Resources for Article:

Further resources on this subject:

- Using jQuery and jQueryUI Widget Factory plugins with RequireJS [Article]
- jQuery Animation: Tips and Tricks [Article]
- New Effects Added by jQuery UI [Article]
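Pulling the separate snippets together, a condensed sketch of the plugin's two functions might look like the following. The surrounding jQuery Boilerplate scaffolding (the Plugin constructor, the defaults merge, and the $.fn registration) is assumed to be the stock template code and is omitted here:

    $.extend(Plugin.prototype, {
      init: function () {
        this.shapeRef = 0;
        $(this.element)
          .find(this.options.shapeClass)
          .addClass(this.options.shapes[this.shapeRef]);
        $(this.element).append('<button>Change Shape</button>');
        // Event delegation lets changeShape find its plugin instance later.
        $(this.element).on('click', 'button', this.changeShape);
      },

      changeShape: function (e) {
        // e.delegateTarget is the element the .on() call was bound to,
        // i.e. the wrapper that owns this plugin instance.
        var plugin = $(e.delegateTarget).data('plugin_' + pluginName);
        var shape = $(plugin.element).find(plugin.options.shapeClass);

        shape.removeClass(plugin.options.shapes[plugin.shapeRef]);
        plugin.shapeRef = (plugin.shapeRef === plugin.options.shapes.length - 1)
          ? 0
          : plugin.shapeRef + 1;
        shape.addClass(plugin.options.shapes[plugin.shapeRef]);
      }
    });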


Nokogiri

Packt
27 Aug 2013
8 min read
Spoofing browser agents

When you request a web page, you send meta information along with your request in the form of headers. One of these headers, User-agent, informs the web server which web browser you are using. By default open-uri, the library we are using to scrape, will report your browser as Ruby.

There are two issues with this. First, it makes it very easy for an administrator to look through their server logs and see if someone has been scraping the server, since Ruby is not a standard web browser. Second, some web servers will deny requests that are made by a nonstandard browsing agent.

We are going to spoof our browser agent so that the server thinks we are just another Mac using Safari. An example is as follows:

    # import nokogiri to parse and open-uri to scrape
    require 'nokogiri'
    require 'open-uri'

    # this string is the browser agent for Safari running on a Mac
    browser = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/536.30.1 (KHTML, like Gecko) Version/6.0.5 Safari/536.30.1'

    # create a new Nokogiri HTML document from the scraped URL and pass in the
    # browser agent as a second parameter
    doc = Nokogiri::HTML(open('http://nytimes.com', browser))

    # you can now go along with your request as normal
    # you will show up as just another Safari user in the logs
    puts doc.at_css('h2 a').to_s

Caching

It's important to remember that every time we scrape content, we are using someone else's server's resources. While it is true that we are not using any more resources than a standard web browser request, the automated nature of our requests leaves the potential for abuse.

In the previous examples we searched for the top headline on The New York Times website. What if we took this code and put it in a loop because we always want to know the latest top headline? The code would work, but we would be launching a mini denial of service (DOS) attack on the server by hitting their page potentially thousands of times every minute. Many servers, Google being one example, have automatic blocking set up to prevent these rapid requests. They ban IP addresses that access their resources too quickly. This is known as rate limiting.

To avoid being rate limited, and in general to be a good netizen, we need to implement a caching layer. Traditionally in a large app this would be implemented with a database. That's a little out of scope for this article, so we're going to build our own caching layer with a simple TXT file. We will store the headline in the file and then check the file modification date to see if enough time has passed before checking for new headlines.

Start by creating the cache.txt file in the same directory as your code:

    $ touch cache.txt

We're now ready to craft our caching solution:

    # import nokogiri to parse and open-uri to scrape
    require 'nokogiri'
    require 'open-uri'

    # set how long in minutes until our data is expired
    # multiplied by 60 to convert to seconds
    expiration = 1 * 60

    # file to store our cache in
    cache = "cache.txt"

    # Calculate how old our cache is by subtracting its modification time
    # from the current time.
    # Time.new gets the current time
    # The mtime method gets the modification time of a file
    cache_age = Time.new - File.new(cache).mtime

    # if the cache age is greater than our expiration time
    if cache_age > expiration
      # our cache has expired
      puts "cache has expired. fetching new headline"

      # we will now use our code from the quick start to snag a new headline
      # scrape the web page
      data = open('http://nytimes.com')

      # create a Nokogiri HTML Document from our data
      doc = Nokogiri::HTML(data)

      # parse the top headline and clean it up
      headline = doc.at_css('h2 a').content.gsub(/\n/, " ").strip

      # we now need to save our new headline
      # the second File.open parameter "w" tells Ruby to overwrite the old file
      File.open(cache, "w") do |file|
        # we then simply puts our text into the file
        file.puts headline
      end

      puts "cache updated"
    else
      # we should use our cached copy
      puts "using cached copy"

      # read the cache into a string using the read method
      headline = IO.read("cache.txt")
    end

    puts "The top headline on The New York Times is ..."
    puts headline

Our cache is set to expire in one minute, so assuming it has been one minute since you created your cache.txt file, let's fire up our Ruby script:

    $ ruby cache.rb
    cache has expired. fetching new headline
    cache updated
    The top headline on The New York Times is ...
    Supreme Court Invalidates Key Part of Voting Rights Act

If we run our script again before another minute passes, it should use the cached copy:

    $ ruby cache.rb
    using cached copy
    The top headline on The New York Times is ...
    Supreme Court Invalidates Key Part of Voting Rights Act

SSL

By default, open-uri does not support scraping a page with SSL. This means any URL that starts with https will give you an error. We can get around this by adding one line below our require statements:

    # import nokogiri to parse and open-uri to scrape
    require 'nokogiri'
    require 'open-uri'

    # disable SSL checking to allow scraping
    OpenSSL::SSL::VERIFY_PEER = OpenSSL::SSL::VERIFY_NONE

Mechanize

Sometimes you need to interact with a page before you can scrape it. The most common examples are logging in or submitting a form. Nokogiri is not set up to interact with pages; Nokogiri doesn't even scrape or download the page. That duty falls on open-uri. If you need to interact with a page, there is another gem you will have to use: Mechanize.

Mechanize is created by the same team as Nokogiri and is used for automating interactions with websites. Mechanize includes a functioning copy of Nokogiri.

To get started, install the mechanize gem:

    $ gem install mechanize
    Successfully installed mechanize-2.7.1

We're going to recreate the code sample from the installation where we parsed the top Google results for "packt", except this time we are going to start by going to the Google home page and submitting the search form:

    # mechanize takes the place of Nokogiri and open-uri
    require 'mechanize'

    # create a new mechanize agent
    # think of this as launching your web browser
    agent = Mechanize.new

    # open a URL in your agent / web browser
    page = agent.get('http://google.com/')

    # the google homepage has one big search box
    # if you inspect the HTML, you will find a form with the name 'f'
    # inside of the form you will find a text input with the name 'q'
    google_form = page.form('f')

    # tell the page to set the q input inside the f form to 'packt'
    google_form.q = 'packt'

    # submit the form
    page = agent.submit(google_form)

    # loop through an array of objects matching a CSS
    # selector. mechanize uses the search method instead of
    # xpath or css. search supports xpath and css
    # you can use the search method in Nokogiri too if you like it
    page.search('h3.r').each do |link|
      # print the link text
      puts link.content
    end

Now execute the Ruby script and you should see the titles for the top results:

    $ ruby mechanize.rb
    Packt Publishing: Home
    Books
    Latest Books
    Login/register
    PacktLib
    Support
    Contact
    Packt - Wikipedia, the free encyclopedia
    Packt Open Source (PacktOpenSource) on Twitter
    Packt Publishing (packtpub) on Twitter
    Packt Publishing | LinkedIn
    Packt Publishing | Facebook

For more information refer to the site: http://mechanize.rubyforge.org/

People and places you should get to know

If you need help with Nokogiri, here are some people and places that will prove invaluable.

Official sites

- Homepage and documentation: http://nokogiri.org
- Source code: https://github.com/sparklemotion/nokogiri/

Articles and tutorials

The top five Nokogiri resources are as follows:

- Nokogiri History, Present, and Future, presentation slides from Nokogiri co-author Mike Dalessio: http://bit.ly/nokogiri-goruco-2013
- In-depth tutorial covering Ruby, Nokogiri, Sinatra, and Heroku, complete with a 90-minute behind-the-scenes screencast, written by me: http://hunterpowers.com/data-scraping-and-more-with-ruby-nokogiri-sinatra-and-heroku
- RailsCasts episode 190: Screen Scraping with Nokogiri – an excellent Nokogiri quick start video: http://railscasts.com/episodes/190-screen-scraping-with-nokogiri
- RailsCasts episode 191: Mechanize – an excellent Mechanize quick start video: http://railscasts.com/episodes/191-mechanize
- Nokogiri co-author Mike Dalessio's blog: http://blog.flavorjon.es

Community

The community sites are as follows:

- Listserve: http://groups.google.com/group/nokogiri-talk
- GitHub: https://github.com/sparklemotion/nokogiri/
- Wiki: http://github.com/sparklemotion/nokogiri/wikis
- Known issues: http://github.com/sparklemotion/nokogiri/issues
- Stack Overflow: http://stackoverflow.com/search?q=nokogiri

Twitter

Nokogiri leaders on Twitter are:

- Nokogiri co-author Mike Dalessio: @flavorjones
- Nokogiri co-author Aaron Patterson: @tenderlove
- Me: @TheHunter
- For more information on open source, follow Packt Publishing: @PacktOpenSource

Summary

Thus, we learnt about the Nokogiri open source library in this article.

Resources for Article:

Further resources on this subject:

- URL Shorteners – Designing the TinyURL Clone with Ruby [Article]
- Introducing RubyMotion and the Hello World app [Article]
- Building the Facebook Clone using Ruby [Article]


The Need for Directives

Packt
22 Aug 2013
7 min read
What makes a directive a directive

Angular directives have several distinguishing features, but for the sake of simplicity we'll focus on just three in this article. In contrast to most plugins or other forms of drop-in functionality, directives are declarative, data driven, and conversational.

Directives are declarative

If you've done any JavaScript development before, you've almost certainly used jQuery (or perhaps Prototype), and likely one of the thousands of plugins available for it. Perhaps you've even written your own such plugin. In either case, you probably have a decent understanding of the flow required to integrate it. They all look something like the following code:

    $(document).ready(function() {
      $('#myElement').myPlugin({pluginOpts});
    });

In short, we're finding the DOM element matching #myElement, then applying our jQuery plugin to it. These frameworks are built from the ground up on the principle of DOM manipulation.

In contrast, Angular directives are declarative, meaning we write them into the HTML elements themselves. Declarative programming means that instead of telling an object how to behave (imperative programming), we describe what an object is. So, where in jQuery we might grab an element and apply certain properties or behaviors to it, with Angular we label that element as a type of directive and, elsewhere, maintain code that defines what properties and behaviors make up that type of object:

    <html>
      <body>
        <div id="myElement" my-awesome-directive></div>
      </body>
    </html>

At first glance, this may seem rather pedantic, merely a difference in style, but as we begin to make our applications more complex, this approach serves to streamline many of the usual development headaches.

In a more fully developed application, our messages would likely be interactive, and in addition to growing or shrinking during the course of the user's visit, we'd want users to be able to reply to some messages or retweet them. If we were to implement this with a DOM manipulation library (such as jQuery or Prototype), that would require rebuilding the HTML with each change (assuming you want it sorted, just using .append() won't be enough), and then rebinding to each of the appropriate elements to allow the various interactions.

In contrast, if we use Angular directives, this all becomes much simpler. As before, we use the ng-repeat directive to watch our list and handle the iterated display of tweets, so any changes to our scoped array will automatically be reflected within the DOM. Additionally, we can create a simple tweet directive to handle the messaging interactions, starting with the following basic definition. Don't worry right now about the specific syntax of creating a directive; for now just take a look at the overall flow in the following code:

    angular.module('myApp', [])
      .directive('tweet', ['api', function (api) {
        return function ($scope, $element, $attributes) {
          $scope.retweet = function () {
            // Each scope inherits from its parent, so we still have access to
            // the full tweet object of { author : '…', text : '…' }
            api.retweet($scope.tweet);
          };
          $scope.reply = function () {
            api.replyTo($scope.tweet);
          };
        }
      }]);

For now just know that we're getting an instance of our Twitter API connection and passing it into the directive in the variable api, then using that to handle the replies and retweets.
Our HTML for each message now looks like the following code:

    <p ng-repeat="tweet in tweets" tweet>
      <!-- ng-click allows us to bind a click event to a function on the $scope object -->
      @{{tweet.author}}: {{tweet.text}}
      <span ng-click="retweet()">RT</span> |
      <span ng-click="reply()">Reply</span>
    </p>

By adding the tweet attribute to the paragraph tag, we tell Angular that this element should use the tweet directive, which gives us access to the published methods, as well as anything else we later decide to attach to the $scope object. Directives in Angular can be declared in multiple ways, including classes and comments, though attributes are the most common. Scoping within directives is simultaneously one of the most powerful and most complicated features within Angular, but for now it's enough to know that every property and function we attach to the scope is accessible to us within the HTML declarations.

Directives are data driven

Angular directives are built from the ground up with this philosophy. The scope and attribute objects accessible to each directive form the skeleton around which the rest of a directive is built and can be monitored for changes both within the DOM as well as the rest of your JavaScript code. What this means for developers is that we no longer have to constantly poll for changes, or ensure that every data change that might have an impact elsewhere within our application is properly broadcast. Instead, the scope object handles all data changes for us, and because directives are declarative as well, that data is already connected to the elements of the view that need to update when the data changes. There's a proposal for ECMAScript 6 to support this kind of data watching natively with Object.observe(), but until that is implemented and fully supported, Angular's scope provides the much needed intermediary.

Directives are conversational

Modular coding emphasizes the use of messages to communicate between separate building blocks within an application. You're likely familiar with DOM events, used by many plugins to broadcast internal changes (for example, save, initialized, and so on) and subscribe to external events (for example, click, focus, and so on). Angular directives have access to all those events as well (the $element variable you saw earlier is actually a jQuery-wrapped DOM element), but $scope also provides an additional messaging system that functions only along the scope tree. The $emit and $broadcast methods serve to send messages up and down the scope tree respectively and, like DOM events, allow directives to subscribe to changes or events within other parts of the application, while still remaining modular and uncoupled from the specific logic used to implement those changes. If you don't have jQuery included in your application, Angular wraps the element in jqLite, which is a lightweight wrapper that provides the same basic methods.

Additionally, when you add in the use of Angular services, directives gain an even greater vocabulary. Services, among many other things, allow you to share specific pieces of data between the different pieces of your application, such as a collection of user preferences or a utility mapping item codes to their names. Between this shared data and the messaging methods, separate directives are able to communicate fully with each other without requiring a retooling of their internal architecture.
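As a small illustration of the $emit/$broadcast vocabulary described above, a child scope can notify its ancestors and a parent can react without either side knowing the other's internals. The event names and payloads here are hypothetical, not taken from the article:

    // Inside a directive or child controller: send a message up the scope tree.
    $scope.$emit('tweet:posted', { author: 'packt', text: 'Hello Angular' });

    // Higher up the tree (a parent controller, for example): subscribe to it.
    $scope.$on('tweet:posted', function (event, tweet) {
      console.log('New tweet from', tweet.author);
    });

    // To notify every child scope instead, send a message downwards.
    $scope.$broadcast('preferences:changed', { theme: 'dark' });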
Directives are everything you've dreamed about Ok, that might be a bit of hyperbole, but you've probably noticed by now that the benefits outlined so far here are exactly in line with the best practices. One of the most common criticisms of Angular is that it's relatively new (especially compared to frameworks such as Backbone and Ember). In contrast, however, I consider that to be one of its greatest assets. Older frameworks all defined themselves largely before there was a consensus on how frontend web applications should be developed. Angular, on the other hand, has had the advantage of being defined after many of the existing best practices had been established, and in my opinion provides the cleanest interface between an application's data and its display. As we've seen already, directives are essentially data driven modules. They allow developers to easily create a packageable feature that declaratively attaches to an element, molds to fit the data at its disposal, and communicates with the other directives around it to ensure coordinated functionality without disruption of existing features. Summary In this article, we learned about what attributes define directives and why they're best suited for frontend development, as well as what makes them different from the JavaScript techniques and packages you've likely used before. I realize that's a bold statement, and likely one that you don't fully believe yet. Resources for Article: Further resources on this subject: Using jQuery and jQueryUI Widget Factory plugins with RequireJS [Article] So, what is EaselJS? [Article] So, what is KineticJS? [Article]


Creating sheet objects and starting a new list using QlikView 11

Packt
20 Aug 2013
6 min read
How it works...

To add the list box for a company, right-click in the blank area of the sheet, and choose New Sheet Object | List Box as shown in the following screenshot:

As you can see in the drop-down menu, there are multiple types of sheet objects to choose from, such as List Box, Statistics Box, Chart, Input Box, Current Selections Box, Multi Box, Table Box, Button, Text Object, Line/Arrow Object, Slider/Calendar Object, and Bookmark Object. We will only cover a few of them in the course of this article. The Help menu and extended examples that are available on the QlikView website will allow you to explore ideas beyond the scope of this article. The Help documentation for any item can be obtained by using the Help menu present on the top menu bar.

Choose the List Box sheet object to add the company dimension to our analysis. The New List Box wizard has eight tabs: General, Expressions, Sort, Presentation, Number, Font, Layout, and Caption, as shown in the following screenshot:

Give the new List Box the title Company. The Object ID will be system generated. We choose the Company field from the fields available in the data file that we loaded. We can check the Show Frequency box to show frequency in percent, which will only tell us how many account lines in October were loaded for each company.

In the Expressions tab, we can add formulas for analyzing the data. Here, click on Add and choose Average. Since we only have numerical data in the Amount field, we will use the Average aggregation for the Amount field. Don't forget to click on the Paste button to move your expression into the expression checker. The expression checker will tell you if the expression format is valid or if there is a syntax problem. If you forget to move your expression into the expression checker with the Paste button, the expression will not be saved and will not appear in your application.

The Sort tab allows you to change the Sort criteria from text to numeric or dates. We will not change the Sort criteria here. The Presentation tab allows you to adjust things such as column or row header wrap, cell borders, and background pictures. The Number tab allows us to override the default format to tell the sheet to format the data as money, percentage, or date, for example. We will use this tab on our table box currently labeled Sum(Amount) to format the amount as money after we have finished creating our new company list box. The Font tab lets us choose the font that we want to use, its display size, and whether to make our font bold. The Layout tab allows us to establish and apply themes, and format the appearance of the sheet object, in this case the list box. The Caption tab further formats the sheet object and, in the case of the list box, allows you to choose the icons that will appear in the top menu of the list box so that we can use those icons to select and clear selections in our list box. In this example, we have selected search, select all, and clear.

We can see that the percentage contribution to the amount and the average amount is displayed in our list box. Now, we need to edit our straight table sheet object along with the amount. Right-click on the straight table sheet object and choose Properties from the pop-up menu. In the General tab, give the table a suitable name; in this case, use Sum of Accounts. Then move over to the Number tab and choose Money for the number format.
Click on Apply to immediately apply the number format, and click on OK to close the wizard. Now our straight table sheet object has easier-to-read dollar amounts. One of the things we notice immediately in our analysis is that we are out of balance by one dollar and fifty-nine cents, as shown in the following screenshot:

We can analyze our data just using the list boxes, by selecting a company from the Company list and seeing which account groups and which cost centers are included (white) and which are excluded (gray). Our selected Company shows highlighted in green. By selecting Cheyenne Holding, we can see that it is indeed a holding company and has no manufacturing groups, sales accounting groups, or cost centers. Also, the company is in balance.

But what about a more graphic visual analysis? To create a chart to further visualize and analyze our data, we are going to create a new sheet object. This time we are going to create a bar chart so that we can see various company contributions to administrative costs or sales by the Acct.5 field, the account number. Just as when we created the company list box, we right-click on the sheet and choose New Sheet Object | Chart. This opens the Chart Properties wizard for us.

We follow the steps through the chart wizard by giving the chart a name, selecting the chart type, and choosing the dimensions we want to use. Again our expression is going to be SUM(Amount), but we will use the Label option and name it Total Amount in the Expression tab. We have selected the Company and Acct.5 dimensions in the Dimension tab, and we take the defaults for the rest of the wizard tabs. When we close the wizard, the new bar chart appears on our sheet, and we can continue our analysis.

In the following screenshot, we have chosen Cheyenne Manufacturing for our Company and all Sales/COS Trade to Mexico Branch as Account Groups. These two selections then show us in our straight table the cost centers that are associated with sales/COS trade to Mexico branch. In our bar chart, we see the individual accounts associated with sales/COS trade to Mexico branch and Cheyenne Manufacturing, along with the related amounts posted for these accounts.

Summary

We created more sheet objects, starting with a new list box, to begin analyzing our loaded data. We also added dimensions for analysis.

Resources for Article:

Further resources on this subject:

- Meet QlikView [Article]
- Linking Section Access to multiple dimensions [Article]
- Creating the first Circos diagram [Article]

Quick start - your first Sinatra application

Packt
14 Aug 2013
15 min read
Step 1 – creating the application

The first thing to do is set up Sinatra itself, which means creating a Gemfile. Open up a Terminal window and navigate to the directory where you're going to keep your Sinatra applications.

Create a directory called address-book using the following command:

    mkdir address-book

Move into the new directory:

    cd address-book

Create a file called Gemfile:

    source 'https://rubygems.org'
    gem 'sinatra'

Install the gems via bundler:

    bundle install

You will notice that Bundler will not just install the sinatra gem but also its dependencies. The most important dependency is Rack (http://rack.github.com/), which is a common handler layer for web servers. Rack will be receiving requests for web pages, digesting them, and then handing them off to your Sinatra application.

If you set up your Bundler configuration as indicated in the previous section, you will now have the following files:

- .bundle: This is a directory containing the local configuration for Bundler
- Gemfile: As created previously
- Gemfile.lock: This is a list of the actual versions of gems that are installed
- vendor/bundle: This directory contains the gems

You'll need to understand the Gemfile.lock file. It helps you know exactly which versions of your application's dependencies (gems) will get installed. When you run bundle install, if Bundler finds a file called Gemfile.lock, it will install exactly those gems and versions that are listed there. This means that when you deploy your application on the Internet, you can be sure of which versions are being used and that they are the same as the ones on your development machine. This fact makes debugging a lot more reliable. Without Gemfile.lock, you might spend hours trying to reproduce behavior that you're seeing on your deployed app, only to discover that it was caused by a glitch in a gem version that you haven't got on your machine.

So now we can actually create the files that make up the first version of our application. Create address-book.rb:

    require 'sinatra/base'

    class AddressBook < Sinatra::Base
      get '/' do
        'Hello World!'
      end
    end

This is the skeleton of the first part of our application. Line 1 loads Sinatra, line 3 creates our application, and line 4 says we handle requests to '/', the root path. So if our application is running on myapp.example.com, this method will handle requests to http://myapp.example.com/. Line 5 returns the string Hello World!. Remember that a Ruby block or a method without explicit use of the return keyword will return the result of its last line of code.

Create config.ru:

    $: << File.dirname(__FILE__)
    require 'address-book'

    run AddressBook.new

This file gets loaded by rackup, which is part of the Rack gem. Rackup is a tool that runs rack-based applications. It reads the configuration from config.ru and runs our application. Line 1 adds the current directory to the list of paths where Ruby looks for files to load, line 2 loads the file we just created previously, and line 4 runs the application.

Let's see if it works. In a Terminal, run the following command:

    bundle exec rackup -p 3000

Here rackup reads config.ru, loads our application, and runs it. We use the bundle exec command to ensure that only our application's gems (the ones in vendor/bundle) get used. Bundler prepares the environment so that the application only loads the gems that were installed via our Gemfile. The -p 3000 option means we want to run a web server on port 3000 while we're developing.
Open up a browser and go to http://0.0.0.0:3000; you should see something that looks like the following screenshot:

Illustration 1: The Hello World! output from the application

Logging

Have a look at the output in the Terminal window where you started the application. I got the following (line numbers are added for reference):

    1 [2013-03-03 12:30:02] INFO WEBrick 1.3.1
    2 [2013-03-03 12:30:02] INFO ruby 1.9.3 (2013-01-15) [x86_64-linux]
    3 [2013-03-03 12:30:02] INFO WEBrick::HTTPServer#start: pid=28551 port=3000
    4 127.0.0.1 - - [03/Mar/2013 12:30:06] "GET / HTTP/1.1" 200 12 0.0142
    5 127.0.0.1 - - [03/Mar/2013 12:30:06] "GET /favicon.ico HTTP/1.1" 404 445 0.0018

Like it or not, you'll be seeing a lot of logs such as this while doing web development, so it's a good idea to get used to noticing the information they contain.

Line 1 says that we are running the WEBrick web server. This is a minimal server included with Ruby—it's slow and not very powerful, so it shouldn't be used for production applications, but it will do for now for application development.

Line 2 indicates that we are running the application on Version 1.9.3 of Ruby. Make sure you don't develop with older versions, especially the 1.8 series, as they're being phased out and are missing features that we will be using in this book.

Line 3 tells us that the server started and that it is awaiting requests on port 3000, as we instructed.

Line 4 is the request itself: GET /. The number 200 means the request succeeded—it is an HTTP status code that means Success.

Line 5 is a second request created by our web browser. It's asking if the site has a favicon, an icon representing the site. We don't have one, so Sinatra responded with 404 (not found).

When you want to stop the web server, hit Ctrl + C in the Terminal window where you launched it.

Step 2 – putting the application under version control with Git

When developing software, it is very important to manage the source code with a version control system such as Git or Mercurial. Version control systems allow you to look at the development of your project; they allow you to work on the project in parallel with others and also to try out code development ideas (branches) without messing up the stable application.

Create a Git repository in this directory:

    git init

Now add the files to the repository:

    git add Gemfile Gemfile.lock address-book.rb config.ru

Then commit them:

    git commit -m "Hello World"

I assume you created a GitHub account earlier. Let's push the code up to www.github.com for safe keeping. Go to https://github.com/new and create a repo called sinatra-address-book.

Set up your local repo to send code to your GitHub account:

    git remote add origin git@github.com:YOUR_ACCOUNT/sinatra-address-book.git

Push the code:

    git push

You may need to sort out authentication if this is your first time pushing code. So if you get an error such as the following, you'll need to set up authentication on GitHub:

    Permission denied (publickey)

Go to https://github.com/settings/ssh and add the public key that you generated in the previous section.

Now you can refresh your browser, and GitHub will show you your code as follows:

Note that the code in my GitHub repository is marked with tags. If you want to follow the changes by looking at the repository, clone my repo from //github.com/joeyates/sinatra-address-book.git into a different directory and then "check out" the correct tag (indicated by a footnote) at each stage.
To see the code at this stage, type in the following command: git checkout 01_hello_world If you type in the following command, Git will tell you that you have "untracked files", for example, .bundle: git status To get rid of the warning, create a file called .gitignore inside the project and add the following content: /.bundle//vendor/bundle/ Git will no longer complain about those directories. Remember to add .gitignore to the Git repository and commit it. Let's add a README file as the page is requesting, using the following steps: Create the README.md file and insert the following text: sinatra-address-book ==================== An example program of various Sinatra functionality. Add the new file to the repo: git add README.md Commit the changes: git commit -m "Add a README explaining the application" Send the update to GitHub: git push Now that we have a README file, GitHub will stop complaining. What's more is other people may see our application and decide to build on it. The README file will give them some information about what the application does. Step 3 – deploying the application We've used GitHub to host our project, but now we're going to publish it online as a working site. In the introduction, I asked you to create a Heroku account. We're now going to use that to deploy our code. Heroku uses Git to receive code, so we'll be setting up our repository to push code to Heroku as well. Now let's create a Heroku app: heroku createCreating limitless-basin-9090... done, stack is cedarhttp://limitless-basin-9090.herokuapp.com/ | [email protected]:limitless-basin-9090.gitGit remote heroku added My Heroku app is called limitless-basin-9090. This name was randomly generated by Heroku when I created the app. When you generate an app, you will get a different, randomly generated name. My app will be available on the Web at the http://limitless-basin-9090.herokuapp.com/ address. If you deploy your app, it will be available on an address based on the name that Heroku has generated for it. Note that, on the last line, Git has been configured too. To see what has happened, use the following command: git remote show heroku* remote heroku Fetch URL: [email protected]:limitless-basin-9090.git Push URL: [email protected]:limitless-basin-9090.git HEAD branch: (unknown) Now let's deploy the application to the Internet: git push heroku master Now the application is online for all to see: The initial version of the application, running on Heroku Step 4 – page layout with Slim The page looks a bit sad. Let's set up a standard page structure and use a templating language to lay out our pages. A templating language allows us to create the HTML for our web pages in a clearer and more concise way. There are many HTML templating systems available to the Sinatra developer: erb , haml , and slim are three popular choices. We'll be using Slim (http://slim-lang.com/). Let's add the gem: Update our Gemfile: gem 'slim' Install the gem: bundle We will be keeping our page templates as .slim files. Sinatra looks for these in the views directory. Let's create the directory, our new home page, and the standard layout for all the pages in the application. 
Create the views directory: mkdir views Create views/home.slim: p address book – a Sinatra application When run via Sinatra, this will create the following HTML markup: <p>address book – a Sinatra application</p> Create views/layout.slim: doctype html html head title Sinatra Address Book body == yield Note how Slim uses indenting to indicate the structure of the web page. The most important line here is as follows: == yield This is the point in the layout where our home page's HTML markup will get inserted. The yield instruction is where our Sinatra handler gets called. The result it returns (that is, the web page) is inserted here by Slim. Finally, we need to alter address-book.rb. Add the following line at the top of the file: require 'slim' Replace the get '/' handler with the following: get '/' do slim :home end Start the local web server as we did before: bundle exec rackup -p 3000 The following is the new home page: Using the Slim Templating Engine Have a look at the source for the page. Note how the results of home.slim are inserted into layout.slim. Let's get that deployed. Add the new code to Git and then add the two new files: git add views/*.slim Also add the changes made to the other files: git add address-book.rb Gemfile Gemfile.lock Commit the changes with a comment: git commit -m "Generate HTML using Slim" Deploy to Heroku: git push heroku master Check online that everything's as expected. Step 5 – styling To give a slightly nicer look to our pages, we can use Bootstrap (http://twitter.github.io/bootstrap/); it's a CSS framework made by Twitter. Let's modify views/layout.slim. After the line that says title Sinatra Address Book, add the following code: link href="//netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/css/bootstrap-combined.min.css" rel="stylesheet"There are a few things to note about this line. Firstly, we will be using a file hosted on a Content Distribution Network (CDN ). Clearly, we need to check that the file we're including is actually what we think it is. The advantage of a CDN is that we don't need to keep a copy of the file ourselves, but if our users visit other sites using the same CDN, they'll only need to download the file once. Note also the use of // at the beginning of the link address; this is called a "protocol agnostic URL". This way of referencing the document will allow us later on to switch our application to run securely under HTTPS, without having to readjust all our links to the content. Now let's change views/home.slim to the following: div class="container" h1 address book h2 a Sinatra application We're not using Bootstrap to anywhere near its full potential here. Later on we can improve the look of the app using Bootstrap as a starting point. Remember to commit your changes and to deploy to Heroku. Step 6 – development setup As things stand, during local development we have to manually restart our local web server every time we want to see a change. Now we are going to set things up with the following steps so the application reloads after each change: Add the following block to the Gemfile: group :development do gem 'unicorn' gem 'guard' gem 'listen' gem 'rb-inotify', :require => false gem 'rb-fsevent', :require => false gem 'guard-unicorn' endThe group around these gems means they will only be installed and used in development mode and not when we deploy our application to the Web. Unicorn is a web server—it's better than WEBrick —that is used in real production environments. 
WEBrick's slowness can even become noticeable during development, while Unicorn is very fast. rb-inotify and rb-fsevent are the Linux and Mac OS X components that keep a check on your hard disk. If any of your application's files change, guard restarts the whole application, updating the changes. Finally, update your gems: bundle Now add Guardfile: guard :unicorn, :daemonize => true do `git ls-files`.each_line { |s| s.chomp!; watch s }end Add a configuration file for unicorn: mkdir config In config/unicorn.rb, add the following: listen 3000 Run the web server: guard Now if you make any changes, the web server will restart and you will get a notification via a desktop message. To see this, type in the following command: touch address-book.rb You should get a desktop notification saying that guard has restarted the application. Note that to shut guard down, you need to press Ctrl + D . Also, remember to add the new files to Git. Step 7 – testing the application We want our application to be robust. Whenever we make changes and deploy, we want to be sure that it's going to keep working. What's more, if something does not work properly, we want to be able to fix bugs so we know that they won't come back. This is where testing comes in. Tests check that our application works properly and also act as detailed documentation for it; they tell us what the application is intended for. Our tests will actually be called "specs", a term that is supposed to indicate that you write tests as specifications for what your code should do. We will be using a library called RSpec . Let's get it installed. Add the gem to the Gemfile: group :test do gem 'rack-test' gem 'rspec'end Update the gems so RSpec gets installed: bundle Create a directory for our specs: mkdir spec Create the spec/spec_helper.rb file: $: << File.expand_path('../..', __FILE__)require 'address-book'require 'rack/test'def app AddressBook.newendRSpec.configure do |config| config.include Rack::Test::Methodsend Create a directory for the integration specs: mkdir spec/integration Create a spec/integration/home_spec.rb file for testing the home page: require 'spec_helper'describe "Sinatra App" do it "should respond to GET" do get '/' expect(last_response).to be_ok expect(last_response.body).to match(/address book/) endend What we do here is call the application, asking for its home page. We check that the application answers with an HTTP status code of 200 (be_ok). Then we check for some expected content in the resulting page, that is, the address book page. Run the spec: bundle exec rspec Finished in 0.0295 seconds1 example, 0 failures Ok, so our spec is executed without any errors. There you have it. We've created a micro application, written tests for it, and deployed it to the Internet. Summary This article discussed how to perform the core tasks of Sinatra: handling a GET request and rendering a web page. Resources for Article : Further resources on this subject: URL Shorteners – Designing the TinyURL Clone with Ruby [Article] Building tiny Web-applications in Ruby using Sinatra [Article] Setting up environment for Cucumber BDD Rails [Article]  

Creating your first FreeMarker Template

Packt
26 Jul 2013
10 min read
(For more resources related to this topic, see here.) Step 1 – setting up your development directory If you haven't done so, create a directory to work in. I'm going to keep this as simple as possible, so we won't need a complicated directory structure. Everything can be done in one directory.Put the freemarker.jar in the directory. All future talk about files and running from the command-line will refer to your working directory. If you want to, you can set up a more advanced project-like set of directories. Step 2 – writing your first template This is a quick start, so let's just dive in and write the template. Open a file for editing called hello.ftl. The ftl extension is customary for FreeMarker Template Language files, but you are free to name your template files anything you want. Put this line in your file: Hello, ${name}! FreeMarker will replace the ${name} expression with the value of an element called name in the model. FreeMarker calls this an interpolation. I prefer to refer to this as "evaluating an expression", but you will encounter the term interpolation in the documentation. Everything else you have put in this initial template is static text. If name contained the value World, then this template would evaluate to: Hello, World! Step 3 – writing the Java code Templates are not scripts that can be run, so we need to write some Java code to invoke the FreeMarker engine and combine the template with a populated model. Here is that code: import java.io.*;import java.util.*;import freemarker.template.*;public class HelloFreemarker { public static void main(String[] args) throws IOException, TemplateException { Configuration cfg = new Configuration(); cfg.setObjectWrapper(new DefaultObjectWrapper()); cfg.setDirectoryForTemplateLoading(new File(".")); Map<String, Object> model = new HashMap<String, Object>(); model.put("name", "World"); Template template = cfg.getTemplate("hello.ftl"); template.process(model, new OutputStreamWriter(System.out)); }} The highlighted line says that FreeMarker should look for FTL files in the "working directory" where the program is run as a simple Java application. If you set your project up differently, or run in an IDE, you may need to change this to an absolute path. The first thing we do is create a FreeMarker freemarker.template.Configuration object. This acts as a factory for freemarker.template.Template objects. FreeMarker has its own internal object types that it uses to extract values from the model.In order to use the objects that you supply, it must wrap these in its own native types. The job of doing this is done by an object wrapper. You must provide an object wrapper. It will always be FreeMarker's own freemarker.template.DefaultObjectWrapper unless you havespecial object wrapping requirements. Finally, we set the root directory for loading templates. For the purposes of our sample code, everything is in the same directory so we just set it to ".". Setting the template directory can throw an java.lang.IOException exception in this code. We simply allow that to be thrown out of the method. Next, we create our model, which is a simple map of java.lang.String keys to java.lang.Object values. The values can be simple object types such as String or java.lang.Number, or they can be complex object types, including arrays and collections. Our needs are simple here, so we're going to map "name" to the string "World". The next step is to get a Template object. We ask the Configuration instance to load the template into a Template object. 
This can also throw an IOException. The magic finally happens when we ask the Template instance to process the model and create an output. We already have the model, but where does the output go? For this, we need an implementation of java.io.Writer. For convenience, we are going to wrap the java.io.PrintWriter in java.lang.System.out with a java.io.OutputStreamWriter and give that to the template. After compiling this program, we can run it from the command line: java -cp .;freemarker.jar HelloFreemarker For Linux or OSX, you would use a ":" instead of a ";" in the command: java -cp .:freemarker.jar HelloFreemarker The result should be that the program prints out: Hello, World! Step 4 – moving beyond strings If you plan to create simple templates populated with preformatted text, then you now know all you need to know about FreeMarker. Chances are that you will, so let's take a look at how FreeMarker handles formatting other types and complex objects. Let's try binding the "name" object in our model to some other types of objects. We can replace: model.put("name", "World"); with: model.put("name", 123456789); The output format of the program will depend on the default locale, so if you are in the United States, you will see this: Hello, 123,456,789! If your default locale was set to Germany, you would see this: Hello, 123.456.789! FreeMarker does not call toString() method on instances of Number types it employs java.text.DecimalFormat. Unless you want to pass all of your values to FreeMarker as preformatted strings, you are going to need to understand how to control the way FreeMarker converts values to text. If preformatting all of the items in your model sounds like a good idea, it isn't. Moving "view" logic into your "controller" code is a sure-fre way to make updating the appearance of your site into a painful experience. Step 5 – formatting different types In the previous section, we saw how FreeMarker will choose a default method of formatting numbers. One of the features of this method is that it employs grouping separators: a comma or a period every three digits. It may also use a comma rather than a period to denote the decimal portion of the number. This is great for humans who may expect these formatting details, but if your number is destined to be parsed by a computer, it needs to be free of grouping separators and it must use a period as a decimal point. In this case, you need a way to control how FreeMarker decides to format a number. In order to control exactly how model objects are converted to text FreeMarker provides operators called built-ins. Let's create a new template called types.ftl and put in some expressions that use built-ins to control formatting: String: ${string?html}Number: ${number?c}Boolean: ${boolean?string("+++++", "-----")}Date: ${.now?time}Complex: ${object} The value .now come is a special variable that is automatically provided by FreeMarker. It contains the date and time when the Template began processing. There are other special variables, but this is the only one you're likely to use. This template is a little more complicated than the last template. The " ?" at the end of a variable name denotes the use of a built-in. Before we explore these particular built-ins, let's see them in action. 
Create a java program, FreemarkerTypes, which populates a model with values for our new template: import java.io.*;import java.math.BigDecimal;import java.util.*;import freemarker.template.*;public class FreemarkerTypes { public static void main(String[] args) throws IOException, TemplateException { Configuration cfg = new Configuration(); cfg.setObjectWrapper(new DefaultObjectWrapper()); cfg.setDirectoryForTemplateLoading(new File(".")); Map<String, Object> model = new HashMap<String, Object>(); model.put("string", "easy & fast "); model.put("number", new BigDecimal("1234.5678")); model.put("boolean", true); model.put("object", Locale.US); Template template = cfg.getTemplate("types.ftl"); template.process(model, new OutputStreamWriter(System.out)); }} Run the FreemarkerType program the same way you ran HelloFreemarker. You will see this output: String: easy &amp; fastNumber: 1234.5678Boolean: +++++Date: 9:12:33 AMComplex: en_US Let's walk through the template and see how the built-ins affected the output. Our purpose is to get a solid foundation in the basics. We'll look at more details about how to use FreeMarker features in later articles. First we output a String modified with the html built-in. This encoded the string for HTML, turning the & into the &amp; HTML entity. You will want this applied to a lot of your expressions on HTML pages in order to ensure proper display of your text and to prevent cross-site scripting ( XSS ) attacks. The second line outputs a number with the c built-in. This tells FreeMarker that the number should be written for parsing by computers. As we saw in the previous section, FreeMarker will by default format numbers with grouping separators. It will also localize the decimal point, using a comma instead of a period. This is great when you are displaying numbers to humans, but not computers. If you want to put an ID number in a URL or a price in an XML document, you will want to use this built-in to format it. Next, we format a Boolean. It may surprise you to learn that unless you use the string built-in, FreeMarker will not format a Boolean value at all. In fact, it throws an exception. Conceptually, "true" and "false" have no universal text representation. If you use string with no arguments, the interpolation will evaluate to either "true" or "false", but this is a default you can change. Here, we have told the built-in to use a series of + characters for "true" and a series of – characters for "false". Another type which FreeMarker will not process without a built-in is java.util.Date. The main issue here is that FreeMarker doesn't know whether you want to display a date, a time, or both. By specifying the time built-in we are letting FreeMarker know that we want to display a time. The output shown previously was generated shortly past nine o'clock in the morning. Finally, we see a complex object converted to text with no built-ins. Complex objects are turned into text by calling their toString() method, so you can use string built-ins on them. Step 6 – where do we go from here? We've reached the end of the Quick start section. You've created two simple templates and worked with some of the basic features of FreeMarker. You might be wondering what are the other built-ins, or what options they offer. In the upcoming sections we'll look at these options and also ways to change the default behavior. Another issue we've glossed over is errors. 
Once you have applied some of these built-ins, you must make sure that you supply the correct types for the named model elements. We also haven't looked at what happens when a referenced model element is missing. The FreeMarker manual provides excellent reference for all of this. Rather than trying to find your way around on your own, we'll take a guided tour through the important features in the Top Features section of the article. Quick start versus slow start A key difference between the Quick start and Top Features sections is that we'll be starting with the sample output. In this article, we created templates and evaluated them to see what we would get. In a real-world project, you will get better results if you worked backwards from the desired result. In many cases, you won't have a choice. The sample output will be generated by web designers and you will be expected to produce the same HTML with dynamic content. In other cases, you will need to work from mock-ups and decide the HTML for yourself. In these cases, it is still worth creating a static sample document. These static samples will show you where you need to apply some of the techniques. Summary In this article, we discussed how to create a freemarker template. Resources for Article: Further resources on this subject: Getting Started with the Alfresco Records Management Module [Article] Installing Alfresco Software Development Kit (SDK) [Article] Apache Felix Gogo [Article]

Building Your First Zend Framework Application

Packt
26 Jul 2013
15 min read
(For more resources related to this topic, see here.) Prerequisites Before you get started with setting up your first ZF2 Project, make sure that you have the following software installed and configured in your development environment: PHP Command Line Interface Git : Git is needed to check out source code from various github.com repositories Composer : Composer is the dependency management tool used for managing PHP dependencies The following commands will be useful for installing the necessary tools to setup a ZF2 Project: To install PHP Command Line Interface: $ sudo apt-get install php5-cli To install Git: $ sudo apt-get install git To install Composer: $ curl -s https://getcomposer.org/installer | php ZendSkeletonApplication ZendSkeletonApplication provides a sample skeleton application that can be used by developers as a starting point to get started with Zend Framework 2.0. The skeleton application makes use of ZF2 MVC, including a new module system. ZendSkeletonApplication can be downloaded from GitHub (https://github.com/zendframework/ZendSkeletonApplication). Time for action – creating a Zend Framework project To set up a new Zend Framework project, we will need to download the latest version of ZendSkeletonApplication and set up a virtual host to point to the newly created Zend Framework project. The steps are given as follows: Navigate to a folder location where you want to set up the new Zend Framework project: $ cd /var/www/ Clone the ZendSkeletonApplication app from GitHub: $ git clone git://github.com/zendframework/ ZendSkeletonApplication.git CommunicationApp In some Linux configurations, necessary permissions may not be available to the current user for writing to /var/www. In such cases, you can use any folder that is writable and make necessary changes to the virtual host configuration. Install dependencies using Composer: $ cd CommunicationApp/ $ php composer.phar self-update $ php composer.phar install The following screenshot shows how Composer downloads and installs the necessary dependencies: Before adding a virtual host entry we need to set up a hostname entry in our hosts file so that the system points to the local machine whenever the new hostname is used. In Linux this can be done by adding an entry to the /etc/hosts file: $ sudo vim /etc/hosts In Windows, this file can be accessed at %SystemRoot%system32driversetchosts. Add the following line to the hosts file: 127.0.0.1 comm-app.local The final hosts file should look like the following: Our next step would be to add a virtual host entry on our web server; this can be done by creating a new virtual host's configuration file: $ sudo vim /usr/local/zend/etc/sites.d/vhost_comm-app-80.conf This new virtual host filename could be different for you depending upon the web server that you use; please check out your web server documentation for setting up new virtual hosts. For example, if you have Apache2 running on Linux, you will need to create the new virtual host file in /etc/apache2/sites-available and enable the site using the command a2ensite comm-app.local. 
Add the following configuration to the virtual host file: <VirtualHost *:80> ServerName comm-app.local DocumentRoot /var/www/CommunicationApp/public SetEnv APPLICATION_ENV "development" <Directory /var/www/CommunicationApp/public> DirectoryIndex index.php AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> If you are using a different path for checking out the ZendSkeletonApplication project make sure that you include that path for both DocumentRoot and Directory directives. After configuring the virtual host file, the web server needs to be restarted: $ sudo service zend-server restart Once the installation is completed, you should be able to open http://comm-app.local on your web browser. This should take you to the following test page : Test rewrite rules In some cases, mod_rewrite may not have been enabled in your web server by default; to check if the URL redirects are working properly, try to navigate to an invalid URL such as http://comm-app.local/12345; if you get an Apache 404 page, then the .htaccess rewrite rules are not working; they will need to be fixed, otherwise if you get a page like the following one, you can be sure of the URL working as expected. What just happened? We have successfully created a new ZF2 project by checking out ZendSkeletonApplication from GitHub and have used Composer to download the necessary dependencies including Zend Framework 2.0. We have also created a virtual host configuration that points to the project's public folder and tested the project in a web browser. Alternate installation options We have seen just one of the methods of installing ZendSkeletonApplication; there are other ways of doing this. You can use Composer to directly download the skeleton application and create the project using the following command: $ php composer.phar create-project --repositoryurl="http://packages.zendframework.com" zendframework/skeleton-application path/to/install You can also use a recursive Git clone to create the same project: $ git clone git://github.com/zendframework/ZendSkeletonApplication.git --recursive Refer to: http://framework.zend.com/downloads/skeleton-app Zend Framework 2.0 – modules In Zend Framework, a module can be defined as a unit of software that is portable and reusable and can be interconnected to other modules to construct a larger, complex application. Modules are not new in Zend Framework, but with ZF2, there is a complete overhaul in the way modules are used in Zend Framework. With ZF2, modules can be shared across various systems, and they can be repackaged and distributed with relative ease. One of the other major changes coming into ZF2 is that even the main application is now converted into a module; that is, the application module. Some of the key advantages of Zend Framework 2.0 modules are listed as follows: Self-contained, portable, reusable Dependency management Lightweight and fast Support for Phar packaging and Pyrus distribution Zend Framework 2.0 – project folder structure The folder layout of a ZF2 project is shown as follows: Folder name Description config Used for managing application configuration. data Used as a temporary storage location for storing application data including cache files, session files, logs, and indexes. module Used to manage all application code. module/Application This is the default application module that is provided with ZendSkeletonApplication. public This is the default application module that is provided with ZendSkeletonApplication. 
vendor Used to manage common libraries that are used by the application. Zend Framework is also installed in this folder. vendor/zendframework endor/zendframework Zend Framework 2.0 is installed here. Time for action – creating a module Our next activity will be about creating a new Users module in Zend Framework 2.0. The Users module will be used for managing users including user registration, authentication, and so on. We will be making use of ZendSkeletonModule provided by Zend, shown as follows: Navigate to the application's module folder: $ cd /var/www/CommunicationApp/ $ cd module/ Clone ZendSkeletonModule into a desired module name, in this case it is Users: $ git clone git://github.com/zendframework/ZendSkeletonModule.git Users After the checkout is complete, the folder structure should look like the following screenshot: Edit Module.php ; this file will be located in the Users folder under modules (CommunicationApp/module/Users/module.php) and change the namespace to Users. Replace namespace ZendSkeletonModule; with namespace Users;. The following folders can be removed because we will not be using them in our project: * Users/src/ZendSkeletonModule * Users/view/zend-skeleton-module What just happened? We have installed a skeleton module for Zend Framework; this is just an empty module, and we will need to extend this by creating custom controllers and views. In our next activity, we will focus on creating new controllers and views for this module. Creating a module using ZFTool ZFTool is a utility for managing Zend Framework applications/projects, and it can also be used for creating new modules; in order to do that, you will need to install ZFTool and use the create module command to create the module using ZFTool: $ php composer.phar require zendframework/zftool:dev-master $ cd vendor/zendframework/zftool/ $ php zf.php create module Users2 /var/www/CommunicationApp Read more about ZFTool at the following link: http://framework.zend.com/manual/2.0/en/modules/zendtool.introduction.html MVC layer The fundamental goal of any MVC Framework is to enable easier segregation of three layers of the MVC, namely, model, view, and controller. Before we get to the details of creating modules, let's quickly try to understand how these three layers work in an MVC Framework: Model : The model is a representation of data; the model also holds the business logic for various application transactions. View : The view contains the display logic that is used to display the various user interface elements in the web browser. Controller : The controller controls the application logic in any MVC application; all actions and events are handled at the controller layer. The controller layer serves as a communication interface between the model and the view by controlling the model state and also by representing the changes to the view. The controller also provides an entry point for accessing the application. In the new ZF2 MVC structure, all the models, views, and controllers are grouped by modules. Each module will have its own set of models, views, and controllers, and will share some components with other modules. Zend Framework module – folder structure The folder structure of Zend Framework 2.0 module has three vital components—the configurations, the module logic, and the views. 
The following table describes how contents in a module are organized: Folder name Description config Used for managing module configuration src Contains all module source code, including all controllers and models view Used to store all the views used in the module Time for action – creating controllers and views Now that we have created the module, our next step would be having our own controllers and views defined. In this section, we will create two simple views and will write a controller to switch between them: Navigate to the module location: $ cd /var/www/CommunicationApp/module/Users Create the folder for controllers: $ mkdir -p src/Users/Controller/ Create a new IndexController file, < ModuleName >/src/<ModuleName>/Controller/: $ cd src/Users/Controller/ $ vim IndexController.php Add the following code to the IndexController file: <?php namespace UsersController; use ZendMvcControllerAbstractActionController; use ZendViewModelViewModel; class IndexController extends AbstractActionController { public function indexAction() { $view = new ViewModel(); return $view; } public function registerAction() { $view = new ViewModel(); $view->setTemplate('users/index/new-user'); return $view; } public function loginAction() { $view = new ViewModel(); $view->setTemplate('users/index/login'); return $view; } } The preceding code will do the following actions; if the user visits the home page, the user is shown the default view; if the user arrives with an action register, the user is shown the new-user template; and if the user arrives with an action set to login, then the login template is rendered. Now that we have created the controller, we will have to create necessary views to render for each of the controller actions. Create the folder for views: $ cd /var/www/CommunicationApp/module/Users $ mkdir -p view/users/index/ Navigate to the views folder, <Module>/view/<module-name>/index: $ cd view/users/index/ Create the following view files: index login new-user For creating the view/users/index/index.phtml file, use the following code: <h1>Welcome to Users Module</h1> <a href="/users/index/login">Login</a> | <a href = "/users/index/register">New User Registration</a> For creating the view/users/index/login.phtml file, use the following code: <h2> Login </h2> <p> This page will hold the content for the login form </p> <a href="/users"><< Back to Home</a> For creating the view/users/index/new-user.phtml file, use the following code: <h2> New User Registration </h2> <p> This page will hold the content for the registration form </p> <a href="/users"><< Back to Home</a> What just happened? We have now created a new controller and views for our new Zend Framework module; the module is still not in a shape to be tested. To make the module fully functional we will need to make changes to the module's configuration, and also enable the module in the application's configuration. Zend Framework module – configuration Zend Framework 2.0 module configuration is spread across a series of files which can be found in the skeleton module. Some of the configuration files are described as follows: Module.php: The Zend Framework 2 module manager looks for the Module.php file in the module's root folder. The module manager uses the Module.php file to configure the module and invokes the getAutoloaderConfig() and getConfig() methods. 
autoload_classmap.php: The getAutoloaderConfig() method in the skeleton module loads autoload_classmap.php to include any custom overrides other than the classes loaded using the standard autoloader format. Entries can be added or removed to the autoload_classmap.php file to manage these custom overrides. config/module.config.php: The getConfig() method loads config/module.config.php; this file is used for configuring various module configuration options including routes, controllers, layouts, and various other configurations. Time for action – modifying module configuration In this section will make configuration changes to the Users module to enable it to work with the newly created controller and views using the following steps: Autoloader configuration – The default autoloader configuration provided by the ZendSkeletonModule needs to be disabled; this can be done by editing autoload_classmap.php and replacing it with the following content: <?php return array(); Module configuration – The module configuration file can be found in config/module.config.php; this file needs to be updated to reflect the new controllers and views that have been created, as follows: Controllers – The default controller mapping points to the ZendSkeletonModule; this needs to be replaced with the mapping shown in the following snippet: 'controllers' => array( 'invokables' => array( 'UsersControllerIndex' => 'UsersControllerIndexController', ), ), Views – The views for the module have to be mapped to the appropriate view location. Make sure that the view uses lowercase names separated by a hyphen (for example, ZendSkeleton will be referred to as zend-skeleton): 'view_manager' => array( 'template_path_stack' => array( 'users' => __DIR__ . '/../view', ), ), Routes – The last module configuration is to define a route for accessing this module from the browser; in this case we are defining the route as /users, which will point to the index action in the Index controller of the Users module: 'router' => array( 'routes' => array( 'users' => array( 'type' => 'Literal', 'options' => array( 'route' => '/users', 'defaults' => array( '__NAMESPACE__' => 'UsersController', 'controller' => 'Index', 'action' => 'index', ), ), After making all the configuration changes as detailed in the previous sections, the final configuration file, config/module.config.php, should look like the following: <?php return array( 'controllers' => array( 'invokables' => array( 'UsersControllerIndex' => 'UsersControllerIndexController', ), ), 'router' => array( 'routes' => array( 'users' => array( 'type' => 'Literal', 'options' => array( // Change this to something specific to your module 'route' => '/users', 'defaults' => array( //Change this value to reflect the namespace in which // the controllers for your module are found '__NAMESPACE__' => 'UsersController', 'controller' => 'Index', 'action' => 'index', ), ), 'may_terminate' => true, 'child_routes' => array( // This route is a sane default when developing a module; // as you solidify the routes for your module, however, // you may want to remove it and replace it with more // specific routes. 'default' => array( 'type' => 'Segment', 'options' => array( 'route' => '/[:controller[/:action]]', 'constraints' => array( 'controller' => '[a-zA-Z][a-zA-Z0-9_-]*', 'action' => '[a-zA-Z][a-zA-Z0-9_-]*', ), 'defaults' => array( ), ), ), ), ), ), ), 'view_manager' => array( 'template_path_stack' => array( 'users' => __DIR__ . 
'/../view', ), ), ); Application configuration – Enable the module in the application's configuration—this can be done by modifying the application's config/application.config.php file, and adding Users to the list of enabled modules: 'modules' => array( 'Application', 'Users', ), To test the module in a web browser, open http://comm-app.local/users/ in your web browser; you should be able to navigate within the module. The module home page is shown as follows: The registration page is shown as follows: What just happened? We have modified the configuration of ZendSkeletonModule to work with the new controller and views created for the Users module. Now we have a fully-functional module up and running using the new ZF module system. Have a go hero Now that we have the knowledge to create and configure own modules, your next task would be to set up a new CurrentTime module. The requirement for this module is to render the current time and date in the following format: Time: 14:00:00 GMT Date: 12-Oct-2012 Summary We have now learned about setting up a new Zend Framework project using Zend's skeleton application and module. In our next chapters, we will be focusing on further development on this module and extending it into a fully-fledged application. Resources for Article : Further resources on this subject: Magento's Architecture: Part 2 [Article] Authentication with Zend_Auth in Zend Framework 1.8 [Article] Authorization with Zend_Acl in Zend Framework 1.8 [Article]

Scaffolding with the command-line tool

Packt
25 Jul 2013
4 min read
(For more resources related to this topic, see here.)

CakePHP comes packaged with the Cake command-line tool, which provides a number of code generation tools for creating models, controllers, views, data fixtures, and more, all on the fly. Please note that this is great for prototyping, but is non-ideal for a production environment.

On your command line, from the cake-starter folder, type the following:

    cd app
    Console/cake bake

You will see something similar to the following:

    > Console/cake bake
    Welcome to CakePHP v2.2.3 Console
    ---------------------------------------------------------------
    App : app
    Path: /path/to/app/
    ---------------------------------------------------------------
    Interactive Bake Shell
    ---------------------------------------------------------------
    [D]atabase Configuration
    [M]odel
    [V]iew
    [C]ontroller
    [P]roject
    [F]ixture
    [T]est case
    [Q]uit
    What would you like to Bake? (D/M/V/C/P/F/T/Q)
    >

As you can see, there's a lot to be done with this tool. Note that there are other commands beside bake, such as schema, which will be our main focus in this article.

Creating the schema definition

Inside the app/Config/Schema folder, create a file called glossary.php. Insert the following code into this file:

    <?php
    /**
     * This schema provides the definitions for the core tables in the glossary app.
     *
     * @var $glossary_terms - The main terms/definition table for the app
     * @var $categories - The categories table
     * @var $terms_categories - The lookup table, no model will be created.
     *
     * @author mhenderson
     */
    class GlossarySchema extends CakeSchema {

        public $glossaryterms = array(
            'id' => array('type' => 'integer', 'null' => false, 'key' => 'primary'),
            'title' => array('type' => 'string', 'null' => false, 'length' => 100),
            'definition' => array('type' => 'string', 'null' => false, 'length' => 512)
        );

        public $categories = array(
            'id' => array('type' => 'integer', 'null' => false, 'key' => 'primary'),
            'name' => array('type' => 'string', 'null' => false, 'length' => 100),
            'definition' => array('type' => 'string', 'null' => false, 'length' => 512)
        );

        public $glossaryterms_categories = array(
            'id' => array('type' => 'integer', 'null' => false, 'key' => 'primary'),
            'glossaryterm_id' => array('type' => 'integer', 'null' => false),
            'category_id' => array('type' => 'string', 'null' => false)
        );
    }

This class definition represents three tables: glossaryterms, categories, and a lookup table to facilitate the relationship between the two tables. Each variable in the class represents a table, and the array keys inside of the variable represent the fields in the table. As you can see, the first two tables match up with our earlier architecture description.

Creating the database schema

On the command line, assuming you haven't moved to any other folders, type the following command:

    Console/cake schema create glossary

You should then see the following responses. When prompted, type y once to drop the tables, and again to create them.

    Welcome to CakePHP v2.2.3 Console
    ---------------------------------------------------------------
    App : app
    Path: /path/to/app
    ---------------------------------------------------------------
    Cake Schema Shell
    ---------------------------------------------------------------
    The following table(s) will be dropped.
    glossaryterms
    categories
    glossaryterms_categories
    Are you sure you want to drop the table(s)? (y/n)
    [n] > y
    Dropping table(s).
    glossaryterms updated.
    categories updated.
    glossaryterms_categories updated.
    The following table(s) will be created.
    glossaryterms
    categories
    glossaryterms_categories
    Are you sure you want to create the table(s)? (y/n)
    [y] > y
    Creating table(s).
    glossaryterms updated.
    categories updated.
    glossaryterms_categories updated.
    End create.

If you look at your database now, you will notice that the three tables have been created. We can also make modifications to the glossary.php file and run the cake schema command again to update it.

If you want to try something a little more daring, you can use the migrations plugin found at https://github.com/CakeDC/migrations. This plugin allows you to save "snapshots" of your schema to be recalled later, and also allows you to write custom scripts to migrate "up" to a certain snapshot version, or migrate "down" in the event of an emergency or a mistake.

Summary

In this article, we saw how to use the schema tool to define the application's tables and create them in the database.

Resources for Article:
Further resources on this subject:
Create a Quick Application in CakePHP: Part 1 [Article]
Working with Simple Associations using CakePHP [Article]
Creating and Consuming Web Services in CakePHP 1.3 [Article]

Implementing a Log-in screen using Ext JS

Packt
18 Jul 2013
31 min read
In this article Loiane Groner, author of Mastering Ext JS, talks about developing a login page for an application using Ext JS. It is very common to have a login page for an application, which we can use to control access to the system by identifying and authenticating the user through the credentials presented by him/her. Once the user is logged in, we can track the actions performed by the user. We can also restrain access to some features and screens of the system that we do not want a particular user, or even a specific group of users, to have access to.

In this article, we will cover:

Creating the login page
Handling the login page on the server
Adding the Caps Lock warning message in the Password field
Submitting the form by pressing the Enter key
Encrypting the password before sending to the server

(For more resources related to this topic, see here.)

The Login screen

The Login window will be the first view we are going to implement in this project. We are going to build it step by step and it will have the following capabilities:

User will enter the username and password to log in
Client-side validation (username and password required to log in)
Submit the Login form by pressing Enter
Encrypt the password before sending to the server
Password Caps Lock warning (similar to Windows OS)
Multilingual capability

Except for the multilingual capability, we will implement all the other features throughout this topic. So at the end of the implementation, we will have a Login window that looks like the following:

So let's get started!

Creating the Login screen

Under the app/view directory, we will create a new file named Login.js. In this file, we will implement all the code that the user is going to see on the screen. Inside the Login.js file, we will implement the following code:

    Ext.define('Packt.view.Login', {    // #1
        extend: 'Ext.window.Window',    // #2
        alias: 'widget.login',          // #3
        autoShow: true,                 // #4
        height: 170,                    // #5
        width: 360,                     // #6
        layout: {
            type: 'fit'                 // #7
        },
        iconCls: 'key',                 // #8
        title: "Login",                 // #9
        closeAction: 'hide',            // #10
        closable: false                 // #11
    });

On the first line (#1) we have the definition of the class. To define a class we use Ext.define, followed by parentheses (()), and inside the parentheses we first declare the name of the class, followed by a comma (",") and curly brackets ({}), and at the end a semicolon. All the configurations and properties (#2 to #11) go inside the curly brackets.

We also need to pay attention to the name of the class. This is the formula suggested by Sencha in Ext JS MVC projects: App Namespace + package name + name of the JS file. We defined the namespace as Packt (configuration name inside the app.js file). We are creating a View for this project, so we will create the JS file under the view package/directory. And then, the name of the file we created is Login.js; therefore, we will lose the .js part and use only Login as the name of the View. Putting it all together, we have Packt.view.Login and this will be the name of our class.

Then, we are saying that the Login class will extend from the Window class (#2), because we want it to be displayed inside a window, and not on any other component.

We are also assigning this class an alias (#3). The alias for a class that extends from a component always starts with widget., followed by the alias we want to assign. The naming convention for an alias is lowercase. It is also important to remember that the alias must be unique in an application.
In this case we want to assign login as alias to this class so later we can instantiate this same class using its alias (that is the same as xtype). For example, we can instantiate the Login class using four different options: Using the complete name of the class, which is the most used one: Ext.create('Packt.view.Login'); Using the alias in the Ext.create method: Ext.create('widget.login'); Using the Ext.widget, which is a shorthand way of using Ext.ClassManager.instantiateByAlias: Ext.widget('login'); Using the xtype as an item of another component: items: [ { xtype: 'login' } ] In this book we will use the first, third, and fourth options most of the time. Then we have autoShow configured to true (#4). What happens with the window is that instantiating the component is not enough for displaying it. When we instantiate the window we will have its reference, but it will not be displayed on the screen. If we want it to be displayed we need to call the method show() manually. Another option is to have the autoShow configuration set to true. This way the window will be automatically displayed when we instantiate it. We also have height (#5) and width (#6) of the window. We set the layout as fit (#7) because we want to add a form inside this window that will contain the username and password fields. And using the fit layout the form will occupy all the body space of the window. Remember that when using the fit layout we can only have one item as a child component. We are setting an iconCls (#8) property to the window; this way we will have an icon of a key in the header of the window. We can also give a title for the window (#9), and in this case we chose Login. Following is the declaration of the key style used by the iconCls property: .key { background-image:url('../icons/key.png') !important; } All the styles we will create to use as iconCls have a format like the preceding one. And at last we have the closeAction (#10) and closable (#11) configurations. The closeAction configuration will tell if we want to destroy the window when we close it. In this case, we do not want to destroy it; we only want to hide it. The closable configuration tells if we want to display the X icon on the top-right corner of the window. As this is a Login window, we do not want to give this option for the user. If you would like to, you can also add the resizable and draggable options as false. This will prevent the user to drag the Login window around and also to resize it. So far, this will be the output we have. A single window with an icon at the top-left corner with a title Login : The next step is to add the form with the username and password fields. We are going to add the following code to the Login class: items: [ { xtype: 'form', // #12 frame: false, // #13 bodyPadding: 15, // #14 defaults: { // #15 xtype: 'textfield', // #16 anchor: '100%', // #17 labelWidth: 60 // #18 }, items: [ { name: 'user', fieldLabel: "User" }, { inputType: 'password', // #19 name: 'password', fieldLabel: "Password" } ] } ] As we are using the fit layout, we can only declare one child item in this class. So we are going to add a form (#12) and to make the form to look prettier, we are going to remove the frame property (#13) and also add padding to the form body (#14). The form's frame property is by default set to false. But by default, there is a blue border that appears if we to do not explicitly add this property set to false. As we are going to add two fields to the form, we probably want to avoid repeating some code. 
That is why we are going to declare some field configurations inside the defaults configuration of the form (#15); this way the configuration we declare inside defaults will be applied to all items of the form, and we will need to declare only the configurations we want to customize. As we are going to declare two fields, both of them will be of type textfield. The default layout of the form is the anchor layout, so we do not need to make this declaration explicit. However, we want both fields can occupy all the horizontal available space of the body of the form. That is why we are declaring anchor as 100% (#17). By default, the width attribute of the label of the TextField class is 100 pixels. It is too much space for a label User and Password, so we are going to decrease this value to 60 pixels (#18). And finally, we have the user text field and the password text field. The configuration name is what we are going to use to identify each field when we submit the form to the server. But there is only one detail missing: when the user types the password into the field the system cannot display its value, we need to mask it somehow. That is why inputType is 'password' (#19) for the password field, as we want to display bullets instead of the original value, and the user will not be able to see the password value. Now we have improved our Login window a little more. This is the output so far: Client-side validations The field component in Ext JS provides some client-side validation capability. This can save time and also bandwidth (the system will only make a server request when it is sure the information has passed the basic validation). It also helps to point out to the user where they have gone wrong in filling out the form. Of course, it is also good to validate the information again on the server side for security reasons, but for now we will focus on the validations we can apply to the form of our Login window. Let's brainstorm some validations we can apply to the username and password fields: The username and password must be mandatory—how are going to authenticate the user without a username and password? The user can only enter alphanumeric characters (A-Z, a-z, and 0-9) in both the fields. The user can only type between 3 and 25 chars in the username field. The user can only type between 3 and 15 chars in the password field. So let's add into the code the ones that are common to both fields: allowBlank: false, // #20 vtype: 'alphanum', // #21 minLength: 3, // #22 msgTarget: 'under' // #23 We are going to add the preceding configurations inside the defaults configuration of the form, as they all apply to both the fields we have. First, both need to be mandatory (#20), we can only allow to enter alphanumeric characters (#21) and the minimum number of characters the user needs to input is three (#22). Then, a last common configuration is that we want to display any validation error message under the field (#23). And the only validation customized for each field is that we can enter a maximum of 25 characters in the User field: name: 'user', fieldLabel: "User", maxLength : 25 And a maximum of 15 characters in the Password field: inputType: 'password', name: 'password', fieldLabel: "Password", maxLength : 15 After we apply the client validations, we will have the following output in case the user went wrong in filling out the Login window: If you do not like it, we can change the place where the error message appears. We just need to change the msgTarget value. 
The available options are: title, under, side, and none. We can also show the error message as a tooltip (qtip) or display it in a specific target (the inner HTML of a specific component).

Creating custom VTypes

Many systems have a special format for passwords. Let's say we need the password to have at least one digit (0-9), one lowercase letter, one uppercase letter, one special character (@, #, $, %, and so on), and a length between 6 and 20 characters. We can create a regular expression to validate the password being entered into the app, and we can wrap that validation in a custom VType. Creating a custom VType is simple. For our case, we can create a custom VType called customPass:

Ext.apply(Ext.form.field.VTypes, {
    customPass: function(val, field) {
        return /^((?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[@#$%]).{6,20})/.test(val);
    },
    customPassText: 'Not a valid password. The password must contain one digit, one lowercase letter, one uppercase letter, one special symbol (@, #, $, %) and be between 6 and 20 characters long.'
});

customPass is the name of our custom VType, and we declare a function that tests the entered value against our regular expression. customPassText is the message that will be displayed to the user in case an incorrect password format is entered. The preceding code can be added anywhere in the code: inside the init function of a controller, inside the launch function of app.js, or even in a separate JavaScript file (recommended) where you can keep all your custom VTypes. To use it, we simply need to add vtype: 'customPass' to our Password field. To learn more about regular expressions, please visit http://www.regular-expressions.info/.

Adding the toolbar with buttons

So far we have created the Login window, which contains a form with two fields that are already being validated as well. The only thing missing is to add the two buttons: Cancel and Submit. We are going to add the buttons as items of a toolbar, and the toolbar will be added to the form as a docked item. Docked items can be docked to the top, right, left, or bottom of a panel (both the form and window components are subclasses of panel). In this case we will dock the toolbar to the bottom of the form. Add the following code right after the items configuration of the form:

dockedItems: [
    {
        xtype: 'toolbar',
        dock: 'bottom',
        items: [
            {
                xtype: 'tbfill'    // #24
            },
            {
                xtype: 'button',   // #25
                itemId: 'cancel',
                iconCls: 'cancel',
                text: 'Cancel'
            },
            {
                xtype: 'button',   // #26
                itemId: 'submit',
                formBind: true,    // #27
                iconCls: 'key-go',
                text: "Submit"
            }
        ]
    }
]

If we look back at the screenshot of the Login screen presented at the beginning of this article, we will notice that there is a component for the translation/multilingual capability, followed by a space and then the Cancel and Submit buttons. As we do not have the multilingual component yet, we can only implement the two buttons, but they need to be at the right end of the form and we need to leave that space. That is why we first add a toolbar fill component (#24), which instructs the toolbar's layout to begin using the right-justified button container. Then we add the Cancel button (#25) and the Submit button (#26). We add icons to both buttons (iconCls) and, because we will need a way to identify the buttons later when we implement the controller class, we assign an itemId to each of them.
We already have the client validations, but even with them in place the user can still click on the Submit button while the form contains invalid values, and we want to avoid this behavior. That is why we are binding the Submit button to the form (#27); this way the button will only be enabled if the form has no errors from the client validation. In the following screenshot, we can see the current output of the Login form (after we added the toolbar) and also verify the behavior of the Submit button:

Running the code

To execute the code we have created so far, we need to make a few changes in the app.js file. First, we need to declare the views we are using (only one in this case). Also, as we are going to instantiate the Login class using its xtype, we need to declare this class in the requires declaration:

requires: [
    'Packt.view.Login'
],

views: [
    'Login'
],

The last change is inside the launch function. Now we only need to replace the console.log message with the Login instance (#1):

splashscreen.next().fadeOut({
    duration: 1000,
    remove: true,
    listeners: {
        afteranimate: function(el, startTime, eOpts) {
            Ext.widget('login'); // #1
        }
    }
});

Now the app.js is OK and we can execute what we have implemented so far!

Using itemId versus id: Ext.getCmp is bad!

Before we create the controller, we need some knowledge about Ext.ComponentQuery selectors. In this topic we will discuss a subject that helps to understand why we took some decisions while creating the Login window and why we are going to take some other decisions in the controller topic. Whenever we can, we will always try to use the itemId configuration instead of id to uniquely identify a component. And here comes the question: why? When using id, we need to make sure that the id is unique and that no other component of the application has the same id. Now imagine the situation where you are working with other developers on the same team and it is a big application. How can you make sure that the id is going to be unique? It is a pretty difficult task to achieve. Components created with an id may be accessed globally using Ext.getCmp, which is a shorthand reference for Ext.ComponentManager.get. To mention one example, when using Ext.getCmp to retrieve a component by its id, it is going to return the last component declared with the given id. If the id is not unique, it can return a component that you are not expecting, and this can lead to an error in the application. Do not panic! There is an elegant solution, which is using itemId instead of id. The itemId can be used as an alternative way to get a reference to a component. The itemId is an index to the container's internal MixedCollection, and that is why the itemId is scoped locally to the container. This is the biggest advantage of the itemId. For example, we can have a class named MyWindow1, extending from window, containing a button with the item ID submit. Then we can have another class named MyWindow2, also extending from window, also with a button with the item ID submit. Having two item IDs with the same value is not an issue. We only need to be careful when we use Ext.ComponentQuery to retrieve the component we want. For example, suppose we have a Login window whose alias is login and a Registration window whose alias is registration, and both windows have a Save button whose itemId is save. If we simply use Ext.ComponentQuery.query('button#save'), the result will be an array with two results.
However, if we narrow down the selector even more, let's say we want the Login window's Save button and not the Registration window's Save button, we need to use Ext.ComponentQuery.query('login button#save'), and the result will be a single item, which is exactly what we expect. You will notice that we will not use Ext.getCmp in the code of our project, because it is not a good practice (especially for Ext JS 4) and also because we can use itemId and Ext.ComponentQuery instead. We will understand Ext.ComponentQuery better during the next topic.

Creating the login controller

We have created the view for the Login screen so far. As we are following the MVC architecture, we are not implementing the user interaction in the View class. If we click on the buttons of the Login class, nothing will happen, because we have not yet implemented this logic. We are going to implement this logic now in the controller class. Under the app/controller directory, we will create a new file named Login.js. In this file we will implement all the code related to the event management of the Login screen. Inside the Login.js file we will implement the following code, which is only the base of the controller class we are going to implement:

Ext.define('Packt.controller.Login', { // #1
    extend: 'Ext.app.Controller',      // #2

    views: [
        'Login' // #3
    ],

    init: function(application) {      // #4
        this.control({                 // #5
        });
    }
});

As usual, on the first line of the class we have its name (#1). Following the same formula we used for view/Login.js, we have Packt (app namespace) + controller (name of the package) + Login (name of the file), resulting in Packt.controller.Login. Note that the controller JS file (controller/Login.js) has the same name as view/Login.js, but that is OK because they are in different packages. It is good to use similar names for the views, models, stores, and controllers, because it makes the project easier to maintain later. For example, let's say that after the project is in production we need to add a new button to the Login screen. With only this information (and a little bit of MVC concept knowledge) we know we will need to add the button code to the view/Login.js file and listen to any events that might be fired by this button in controller/Login.js. Easier maintainability is also a great benefit of using the MVC architecture. The controller classes need to extend from Ext.app.Controller (#2), so we will always use this parent class for our controllers. Then we have the views declaration (#3), which is where we declare all the views that this controller cares about. In this case, we only have the Login view so far; we will add more views later in this article. Next, we have the init method declaration (#4). The init method is called before the application boots, before the launch function of Ext.application (app.js). The controller also loads the views, models, and stores declared inside its class. Then we have the control method configured (#5). This is where we listen to all the events we want the controller to react to. As we are handling the events fired by the Login window and its child components, this will be our scope in this controller.

Adding the controller to app.js

Now that we have the base of the login controller, we need to add it to the app.js file.
We can remove this code, since the controller will now be responsible for loading the view/Login.js file for us:

requires: [
    'Packt.view.Login'
],

views: [
    'Login'
],

And add the controllers declaration:

controllers: [
    'Login'
],

As our project is only starting, declaring the views in the controller classes will help us keep the code more organized, as we do not need to declare all the application's views in the app.js file.

Listening to the button click event

Our next step is to start listening to the Login window events. First, we are going to listen to the Submit and Cancel buttons. We already know that we are going to add the listeners inside the this.control declaration. The format we need to use is the following:

'Ext.ComponentQuery selector': {
    eventWeWantToListenTo: functionOrMethodWeWantToExecute
}

First, we need to pass the selector that is going to be used by the Ext.ComponentQuery class to find the component. Then we list the event that we want to listen to. And then we declare the function to be executed when that event is fired, or the name of the controller method to be executed when the event is fired. In our case, we are going to declare a method, for code organization purposes. Now let's focus on finding the correct selector for the Submit and Cancel buttons. According to the Ext.ComponentQuery API documentation, we can retrieve components by using their xtype (if you are already familiar with jQuery, you will notice that Ext.ComponentQuery selectors behave very similarly to jQuery selectors). Well, we are trying to retrieve two buttons, and their xtype is button, so we can try the selector button. But before we start coding, let's make sure this is the correct selector, so we do not have to keep changing the code while trying to figure out the right one. There is one very useful tip we can try: open the browser console (command editor), type the following command, and click on Run:

Ext.ComponentQuery.query('button');

As we can see in the screenshot, it returned an array of the buttons found by the selector we used, and the array contains six buttons; that is too many, and it is not what we want. We want to narrow it down to the Submit and Cancel buttons. Let's draw a path of the Login window using the xtypes of the components we used: we have a Login window (xtype: login or window), inside the window we have a form (xtype: form), inside the form we have a toolbar (xtype: toolbar), and inside the toolbar we have two buttons (xtype: button). Therefore, we have login-form-toolbar-button. However, if we use login-form-button we will have the same result, because we do not have any other buttons inside the form. So we can try the following command:

Ext.ComponentQuery.query('login form button');

Let's try this last selector on the command editor: now the result is an array of two buttons, and these are the buttons we are looking for! There is still one detail missing: if we use the login form button selector, it will listen to the click event (which is the event we want to listen to) of both buttons. When we click on the Cancel button one thing should happen (reset the form), and when we click on the Submit button another thing should happen (submit the form to the server to validate the login). So we still want to narrow down the selector even more, until we have one selector that returns the Cancel button and another selector that returns the Submit button.
Going back to the view/Login code, notice that we declared a configuration named itemId on both buttons. We can use these itemId configurations to identify the buttons in a unique way. According to the Ext.ComponentQuery API docs, we can use # as a prefix for itemId. So let's try the following command on the command editor to get the Submit button reference:

Ext.ComponentQuery.query('login form button#submit');

The output will be only one button, as we expect. Now let's try the following command to retrieve the Cancel button reference:

Ext.ComponentQuery.query('login form button#cancel');

The output will again be only one button, as we expect. So now we have the selectors we were looking for! The console command editor is a great tool, and using it can save us a lot of time when trying to find the exact selector we want, instead of coding, testing, finding out it is not the selector we want, coding again, testing again, and so on. Could we use only button#submit or button#cancel as selectors? Yes, we could use a shorter selector, and it would work perfectly for now. However, as the application grows and we declare many more classes and buttons, the event would be fired for every button that has the itemId submit or cancel, and this could lead to an error in the application. We always need to remember that itemId is scoped locally to the container. By using login form button as the selector, we make sure that the event comes from a button of the Login window. So let's implement the code inside the controller class:

init: function(application) {
    this.control({
        "login form button#submit": {       // #1
            click: this.onButtonClickSubmit // #2
        },
        "login form button#cancel": {       // #3
            click: this.onButtonClickCancel // #4
        }
    });
},

onButtonClickSubmit: function(button, e, options) {
    console.log('login submit'); // #5
},

onButtonClickCancel: function(button, e, options) {
    console.log('login cancel'); // #6
}

In the preceding code, we first have the listener for the Submit button (#1); on the following line we say that we want to listen to the click event, and that when the click event of the Submit button is fired, the onButtonClickSubmit method should be executed (#2). Then we have the same for the Cancel button: the listener for the Cancel button (#3), and the declaration that when its click event is fired, the onButtonClickCancel method should be executed (#4). Next, we have the declarations of the onButtonClickSubmit and onButtonClickCancel methods. For now, we only output a message on the console to make sure our code is working: login submit (#5) in case the user clicks on the Submit button, and login cancel (#6) in case the user clicks on the Cancel button. But how do we know which parameters the event handler can receive? You can find the answer in the documentation. If we take a look at the click event in the documentation, this is what we will find: it is exactly what we declared. For all other event listeners, we will go to the docs, see which parameters the event accepts, and then list them as parameters in our code. This is a very good practice: we should always list all the arguments from the docs, even if we are only interested in the first one. This way we always know that we have the full collection of parameters available, which comes in very handy when we are maintaining the application. Let's go ahead and try it.
Click on the Cancel button and then on the Submit button. This should be the output:

Cancel button listener implementation

Let's remove the console.log messages and add the code we actually want the methods to execute. First, let's work on the onButtonClickCancel method. When this method is executed, we want it to reset the form. So this is the logic sequence we want to program:

Get the Login form reference.
Call the getForm method, which is going to return the form basic class.
Call the reset method to reset the form.

The form basic class provides input field management, validation, submission, and form loading services. The Ext.form.Panel class (xtype: form) works as the container, and it is automatically hooked up with an instance of Ext.form.Basic. That is why we need to get the form basic reference to call the reset method. If we take a look at the parameters we have available in the onButtonClickCancel method, we have button, e, and options, and none of them provides the form reference. So what can we do about it? We can use the up method of the Button class (inherited from the AbstractComponent class). With this method, we can use a selector to retrieve the form. The up method navigates up the component hierarchy, searching for an ancestor container that matches the passed selector. As the button is inside a toolbar that is inside the form we are looking for, button.up('form') will retrieve exactly what we want: Ext JS looks at the first ancestor in the hierarchy of the button and finds a toolbar, which is not what we are looking for, so it goes up again and finds a form, which is what we are looking for. So this is the code we are going to implement inside the onButtonClickCancel method:

button.up('form').getForm().reset();

Some people like to implement the toolbar inside the window instead of the form. No problem at all; it is only a matter of how you like to implement it. In this case, if the toolbar that contains the Submit button is inside the Window class, we can use:

button.up('window').down('form').getForm().reset();

And we will have the same result!

Submit button listener implementation

Now we need to implement the onButtonClickSubmit method. Inside this method, we want to program the logic to send the username and password values to the server so that the user can be authenticated. We can implement this in two ways: the first one is to use the submit method provided by the form basic class, and the second one is to use an Ajax call to submit the values to the server. Either way we will achieve what we want. However, there is one detail we need to know before making this decision: when using the submit method of the form basic class, we are not able to encrypt the password before sending it to the server, and if we take a look at the parameters sent to the server, the password will be plain text, which is not good. An Ajax request sends the same plain values by default; however, with it we can encrypt the password value before sending it to the server. So the second option seems better, and that is the one we will implement.
So to summarize, following are the steps we need to perform in this method:

Get the Login form reference.
Get the Login window reference (so that we can close it once the user has been authenticated).
Get the username and password values from the form.
Encrypt the password.
Send the login information to the server.
Handle the server response: if the user is authenticated, display the application; if not, display an error message.

First, let's get the references that we need:

var formPanel = button.up('form'),
    login = button.up('login'),
    user = formPanel.down('textfield[name=user]').getValue(),
    pass = formPanel.down('textfield[name=password]').getValue();

To get the form reference, we can use the button.up('form') code that we already used in the onButtonClickCancel method; to get the Login window reference we can do the same thing, only changing the selector to login or window. Then, to get the values from the User and Password fields, we can use the down method, but this time the search scope starts from the form reference. For the selector we use the textfield xtype, and to make sure we retrieve the text field we want, we could create an itemId attribute, but there is no need for it: we can use the name attribute, since the user and password fields have different names and they are unique within the Login window. To use attributes within a selector, we must wrap them in brackets. The next step is to submit the values to the server:

if (formPanel.getForm().isValid()) {
    Ext.Ajax.request({
        url: 'php/login.php',
        params: {
            user: user,
            password: pass
        }
    });
}

If we try to run this code, the application will send the request to the server, but we will get an error as the response, because we do not have the login.php page implemented yet. That is OK, because we are interested in other details right now. With Firebug or Chrome Developer Tools enabled, open the Net tab and filter by the XHR requests. Make sure to enter a username and password (any valid value, so that we can click on the Submit button). This will be the output:

We still do not have the password encrypted; the original value is still being displayed, and this is not good. We need to encrypt the password. Under the app directory, we will create a new folder named util, where we are going to create all the utility classes. We will also create a new file named MD5.js; therefore, we will have a new class named Packt.util.MD5. This class contains a static method called encode, which encodes the given value using the MD5 algorithm. To understand more about the MD5 algorithm, go to http://en.wikipedia.org/wiki/MD5. As Packt.util.MD5 is big, we will not list its code here, but you can download the source code of this book from http://www.packtpub.com/mastering-ext-javascript/book or get the latest version at https://github.com/loiane/masteringextjs. If you would like to make it even more secure, you can also use SSL and ask for a random salt string from the server, salt the password, and hash it. You can learn more about this at the following URLs: http://en.wikipedia.org/wiki/Transport_Layer_Security and http://en.wikipedia.org/wiki/Salt_(cryptography). A static method does not require an instance of the class to be called. In Ext JS, we can declare static attributes and methods inside the statics configuration. As the encode method of the Packt.util.MD5 class is static, we can call it like Packt.util.MD5.encode(value);.
So before the Ext.Ajax.request call, we will add the following code:

pass = Packt.util.MD5.encode(pass);

We must not forget to add the Packt.util.MD5 class to the controller's requires declaration (the requires declaration goes right after the extend declaration):

requires: [
    'Packt.util.MD5'
],

Now, if we try to run the code again and check the XHR requests on the Net tab, we will have the following output: The password is encrypted and it is much safer now.
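The last step listed earlier, handling the server response, is not implemented in this part of the article. As a rough idea of where the code is heading, the following sketch adds success and failure callbacks to the Ext.Ajax.request call shown above. The JSON shape ({"success": true, "msg": "..."}) returned by login.php and the commented-out main viewport class are assumptions for illustration only, not the book's actual implementation:

if (formPanel.getForm().isValid()) {
    pass = Packt.util.MD5.encode(pass);

    Ext.Ajax.request({
        url: 'php/login.php',
        params: {
            user: user,
            password: pass
        },
        success: function (response) {
            // Assumes the server answers with JSON such as {"success": true}
            var result = Ext.decode(response.responseText);
            if (result.success) {
                login.close(); // closeAction is 'hide', so the window is only hidden
                // Ext.create('Packt.view.MyApp'); // hypothetical main viewport
            } else {
                Ext.Msg.alert('Login failed', result.msg || 'Invalid user or password.');
            }
        },
        failure: function () {
            Ext.Msg.alert('Error', 'Could not reach the server. Please try again.');
        }
    });
}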

So, what is Play?

Packt
14 Jun 2013
11 min read
(For more resources related to this topic, see here.) Quick start – Creating your first Play application Now that we have a working Play installation in place, we will see how easy it is to create and run a new application with just a few keystrokes. Besides walking through the structure of our Play application, we will also look at what we can do with the command-line interface of Play and how fast modifications of our application are made visible. Finally, we will take a look at the setup of integrated development environments ( IDEs ). Step 1 – Creating a new Play application So, let's create our first Play application. In fact, we create two applications, because Play comes with the APIs for Java and Scala, the sample accompanying us in this book is implemented twice, each in one separate language. Please note that it is generally possible to use both languages in one project. Following the DRY principle, we will show code only once if it is the same for the Java and the Scala application. In such cases we will use the play-starter-scala project. First, we create the Java application. Open a command line and change to a directory where you want to place the project contents. Run the play script with the new command followed by the application name (which is used as the directory name for our project): $ play new play-starter-java We are asked to provide two additional information: The application name, for display purposes. Just press the Enter key here to use the same name we passed to the play script. You can change the name later by editing the appName variable in play-starter-java/project/Build.scala. The template we want to use for the application. Here we choose 2 for Java. Repeat these steps for our Scala application, but now choose 1 for the Scala template. Please note the difference in the application name: $ play new play-starter-scala The following screenshot shows the output of the play new command: On our way through the next sections, we will build an ongoing example step-by-step. We will see Java and Scala code side-by-side, so create both projects if you want to find out more about the difference between Java and Scala based Play applications. Structure of a Play application Physically, a Play application consists of a series of folders containing source code, configuration files, and web page resources. The play new command creates the standardized directory structure for these files: /path/to/play-starter-scala└app source code| └controllers http request processors| └views templates for html files└conf configuration files└project sbt project definition└public folder containing static assets| └images images| └javascripts javascript files| └stylesheets css style sheets└test source code of test cases During development, Play generates several other directories, which can be ignored, especially when using a version control system: /path/to/play-starter-scala└dist releases in .zip format└logs log files└project THIS FOLDER IS NEEDED| └project but this...| └target ...and this can be ignored└target generated sources and binaries There are more folders that can be found in a Play application depending on the IDE we use. In particular, a Play project has optional folders on more involved topics we do not discuss in this book. Please refer to the Play documentation for more details. The app/ folder The app/ folder contains the source code of our application. 
According to the MVC architectural pattern, we have three separate components in the form of the following directories: app/models/: This directory is not generated by default, but it is very likely present in a Play application. It contains the business logic of the application, for example, querying or calculating data. app/views/: In this directory we find the view templates. Play's view templates are basically HTML files with dynamic parts. app/controllers/: This controllers contain the application specific logic, for example, processing HTTP requests and error handling. The default directory (or package) names, models, views, and controllers, can be changed if needed. The conf/ directory The conf/ directory is the place where the application's configuration files are placed. There are two main configuration files: application.conf: This file contains standard configuration parameters routes – This file defines the HTTP interface of the application The application.conf file is the best place to add more configuration options if needed for our application. Configuration files for third-party libraries should also be put in the conf/ directory or an appropriate sub-directory of conf/. The project/ folder Play builds applications with the Simple Build Tool ( SBT ). The project/ folder contains the SBT build definitions: Build.scala: This is the application's build script executed by SBT build.properties: This definition contains properties such as the SBT version plugins.sbt: This definition contains the SBT plugins used by the project The public/ folder Static web resources are placed in the public/ folder. Play offers standard sub-directories for images, CSS stylesheets, and JavaScript files. Use these directories to keep your Play applications consistent. Create additional sub-directories of public/ for third-party libraries for a clear resource management and to avoid file name clashes. The test/ folder Finally, the test/ folder contains unit tests or functional tests. This code is not distributed with a release of our application. Step 2 – Using the Play console Play provides a command-line interface (CLI), the so-called Play console. It is based on the SBT and provides several commands to manage our application's development cycle. Starting our application To enter the Play console, open a shell, change to the root directory of one of our Play projects, and run the play script. $ cd /path/to/play-starter-scala$ play On the Play console, type run to run our application in development (DEV) mode. [play-starter-scala] $ run Use ~run instead of run to enable automatic compilation of file changes. This gives us an additional performance boost when accessing our application during development and it is recommended by the author. All console commands can be called directly on the command line by running play <command>. Multiple arguments have to be denoted in quotation marks, for example, play "~run 9001" A web server is started by Play, which will listen for HTTP requests on localhost:9000 by default. Now open a web browser and go to this location. The page displayed by the web browser is the default implementation of a new Play application. To return to our shell, type the keys Ctrl + D to stop the web server and get back to the Play console. 
Play console commands Besides run, we typically use the following console commands during development:

clean: This command deletes cached files, generated sources, and compiled classes
compile: This command compiles the current application
test: This command executes unit tests and functional tests

We get a list of available commands by typing help play in the Play console. A release of an application is started with the start command in production (PROD) mode. In contrast to the DEV mode, no internal state is displayed in the case of an error. There are also commands of the play script, available only on the command line:

clean-all: This command deletes all generated directories, including the logs.
debug: This command runs the Play console in debug mode, listening on the JPDA port 9999. Setting the environment variable JPDA_PORT changes the port.
stop: This command stops an application that is running in production mode.

Closing the console We exit the Play console and get back to the command line with the exit command or by simply pressing Ctrl + D.

Step 3 – Modifying our application We now come to the part that we love the most as impatient developers: the rapid development turnaround cycles. In the following sections, we will make some changes to the given code of our new application and see them become visible.

Fast turnaround – change your code and hit reload! First we have to ensure that our applications are running. In the root of each of our Java and Scala projects, we start the Play console. We start our Play applications in parallel on two different ports to compare them side-by-side, with the commands ~run and ~run 9001. We go to the browser and load both locations, localhost:9000 and localhost:9001. Then we open the default controller, app/controllers/Application.java and app/controllers/Application.scala respectively, which we created at application creation, in a text editor of our choice, and change the message to be displayed, first in the Java code:

public class Application extends Controller {
    public static Result index() {
        return ok(index.render("Look ma! No restart!"));
    }
}

and then in the Scala code:

object Application extends Controller {
    def index = Action {
        Ok(views.html.index("Look ma! No restart!"))
    }
}

Finally, we reload our web pages and immediately see the changes: That's it. We don't have to restart our server or re-deploy our application. The code changes take effect by simply reloading the page.

Step 4 – Setting up your preferred IDE Play takes care of automatically compiling modifications we make to our source code. That is why we don't need a full-blown IDE to develop Play applications; we can use a simple text editor instead. However, using an IDE has many advantages, such as code completion, refactoring assistance, and debugging capabilities. It also makes it very easy to navigate through the code. Therefore, Play has built-in project generation support for two of the most popular IDEs: IntelliJ IDEA and Eclipse. IntelliJ IDEA The free edition, IntelliJ IDEA Community, can be used to develop Play projects. However, the commercial release, IntelliJ IDEA Ultimate, includes Play 2.0 support for Java and Scala.
Currently, it offers the most sophisticated features compared to other IDEs.More information can be found here: http://www.jetbrains.com/idea and also here: http://confluence.jetbrains.com/display/IntelliJIDEA/Play+Framework+2.0 We generate the required IntelliJ IDEA project files by typing the idea command on the Play console or by running it on the command line: $ play idea We can also download the available source JAR files by running idea with-source=true on the console or on the command line: $ play "idea with-source=true" After that, the project can be imported into IntelliJ IDEA. Make sure you have the IDE plugins Scala, SBT , and Play 2 (if available) installed. The project files have to be regenerated by running play idea every time the classpath changes, for example, when adding or changing project dependencies. IntelliJ IDEA will recognize the changes and reloads the project automatically. The generated files should not be checked into a version control system, as they are specific to the current environment. Eclipse Eclipse is also supported by Play. The Eclipse Classic edition is fine, which can be downloaded here: http://www.eclipse.org/downloads. It is recommended to install the Scala IDE plugin, which comes up with great features for Scala developers and can be downloaded here: http://scala-ide.org. You need to download Version 2.1.0 (milestone) or higher to get Scala 2.10 support for Play 2.1. A Play 2 plugin exists also for Eclipse, but it is in a very early stage. It will be available in a future release of the Scala IDE. More information can be found here: https://github.com/scala-ide/scala-ide-play2/wiki The best way to edit Play templates with Eclipse currently is by associating HTML files with the Scala Script Editor. You get this editor by installing the Scala Worksheet plugin, which is bundled with the Scala IDE. We generate the required Eclipse project files by typing the eclipse command on the Play console or by running it on the command line: $ play eclipse Analogous to the previous code, we can also download available source JAR files by running eclipse with-source=true on the console or on the command line: $ play "eclipse with-source=true" Also, don't check in generated project files for a version control system or regenerate project files if dependencies change. Eclipse (Juno) is recognizing the changed project files automatically. Other IDEs Other IDEs are not supported by Play out of the box. There are a couple of plugins, which can be configured manually. For more information on this topic, please consult the Play documentation. Summary We saw how easy it is to create and run a new application with just a few keystrokes. Besides walking through the structure of our Play application, we also looked at what we can do with the command-line interface of Play and how fast modifications of our application are made visible. Finally, we looked at the setup of integrated development environments ( IDEs ). Resources for Article : Further resources on this subject: Play! Framework 2 – Dealing with Content [Article] Play Framework: Data Validation Using Controllers [Article] Play Framework: Binding and Validating Objects and Rendering JSON Output [Article]

So, what is KineticJS?

Packt
14 Jun 2013
3 min read
(For more resources related to this topic, see here.) With KineticJS you can draw shapes on the stage and manipulate them using the following elements: Move Rotate Animate Even if your application has thousands of figures, the animation will run smoothly and with a high enough FPS. The items are organized into layers, of which you can have as many as you want. Shapes can also be organized into groups. KineticJS allows unlimited nesting of shapes and groups. Scenes, layers, groups, and figures are virtual nodes, similar to DOM nodes in HTML. Any node can be styled or transformed. There are several predefined shapes, such as rectangles, circles, images, text, lines, polygons, stars, and so on. You can also create custom drawing functions in order to create custom shapes. For each object you can assign different event handlers (touch or mouse). You can also apply filter or animation to the shapes. Of course, you can implement all the necessary HTML5 Canvas functionality without KineticJS, but you have to spend a lot more time, and not necessarily get the same level of performance. The creators of KineticJS put all their love and faith into a brighter future of HTML5 interactivity. The main advantage of the library is high performance, which is achieved by creating two canvas renderers – a scene renderer and a hit graph renderer. One renderer is what you see, and the second is a special hidden canvas that's used for high-performance event detection. A huge advantage of KineticJS is that it is an extension to HTML5 Canvas, and thus is perfectly suited for developing applications for mobile platforms. High performance can hide all the flaws of the canvas in iOS, Android, and other platforms. It is a known fact that the iOS platform does not support Adobe Flash. In this case, KineticJS is a good Flash alternative for iOS devices. You can wrap up your KineticJS application with Cordova/PhoneGap and use it as an offline application, or publish to the App store. In short, the following are the main advantages of KineticJS: Speed Scalability Extensibility Flexibility Familiarity with API (for developers with the knowledge of HTML, CSS, JS, and jQuery) If you are an active innovator and indomitable web developer, this library is for you. Summary In this article, we walked through the basics and main advantages KineticJS. Resources for Article : Further resources on this subject: HTML5 Presentations - creating our initial presentation [Article] Removing Unnecessary jQuery Loads [Article] Using JavaScript Effects with Joomla! [Article]
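The article above stays at the conceptual level. For readers who want to see what the stage, layer, and shape structure described here looks like in code, the following is a minimal sketch; the container ID, sizes, and colors are illustrative assumptions rather than values taken from the article:

// Minimal sketch of the stage -> layer -> shape hierarchy described above.
// Assumes a <div id="container"></div> exists on the page and KineticJS is loaded.
var stage = new Kinetic.Stage({
    container: 'container',
    width: 400,
    height: 300
});

var layer = new Kinetic.Layer();

var circle = new Kinetic.Circle({
    x: stage.getWidth() / 2,
    y: stage.getHeight() / 2,
    radius: 50,
    fill: 'green',
    stroke: 'black',
    strokeWidth: 2,
    draggable: true // nodes can be moved, transformed, and animated
});

// Event handlers can be attached to individual shapes
circle.on('click', function () {
    console.log('circle clicked');
});

layer.add(circle);
stage.add(layer);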

Creating Your Own Theme

Packt
13 Jun 2013
5 min read
(For more resources related to this topic, see here.) Starting with a new layout Before we start creating a concrete5 theme we need a layout. In this article, we're going to use a simple layout without any pictures to keep the code as short as possible—it's about concrete5, not about HTML and CSS. If you don't have the time for an exercise, you can use your own layout. With good knowledge about the basic technologies of concrete5, you should be able to amend the instructions in this article to match your own layout. If you don't feel very comfortable working with PHP you should probably use the printed HTML code in this article. Here's a screenshot of what our site is going to look like once we've finished our theme: While this layout isn't very pretty, it has an easy structure; navigation on top and a big content area where we can insert any kind of block we want. In case you're using your own layout, try to use one with a simple structure; navigation on top or on the left with one big place for the content, and try to avoid Flash. The HTML code Let's have a look at the HTML code: <!DOCTYPE html><html lang="en"><head><title>concrete5 Theme</title><meta http-equiv="Content-Type" content="text/html;charset=utf-8" /><style type="text/css" media="screen">@import "main.css";</style></head><body><div id="wrapper"><div id="page"><div id="header_line_top"></div><div id="header"><ul class="nav-dropdown"><li><a href="#">Home</a></li><li><a href="#">Test</a></li><li><a href="#">About</a></li></ul></div><div id="header_line_bottom"></div><div id="content"><p>Paragraph 1</p><p>Paragraph 2</p><p>Paragraph 3</p></div><div id="footer_line_top"></div><div id="footer"></div><div id="footer_line_bottom"></div></div></div></body></html> There are three highlighted lines in the preceding code: The CSS import: This is to keep the layout instructions separated from the HTML elements; we've got all the CSS rules in a different file named main.css. This is also how almost all concrete5 themes are built. The header block contains the navigation. As we're going to apply some styles to it, make sure it has its own ID. Using an ID also improves the performance when using CSS and JavaScript to access an element, as an ID is unique. The same applies to the content block. Make sure it has a unique ID. Most web technologies we use nowadays are standardized in one way or another. Currently, the most important organization is W3C. They also offer tools to validate your code. Checking your code is never a bad idea. Navigate to http://validator.w3.org/ and enter the address of the website you want to check or in this case, as your website isn't accessible by the public, click on Validate by Direct Input and paste the HTML code to see if there are any mistakes. While it should be fairly easy to produce valid HTML code, things are a bit tricky with CSS. Due to some old browser bugs, you're often forced to use invalid CSS rules. There's often a way to rebuild the layout to avoid some invalid rules but often this isn't the case—you won't be doomed if something isn't 100 percent valid but you're on the safer side if it is. CSS rules As mentioned earlier, all CSS rules are placed in a file named main.css. 
Let's have a look at all CSS rules you have to put in our CSS file: /* global HTML tag rules */html, body, div, pre, form, fieldset, input, h1, h2, h3, h4, h5, h6,p, textarea, ul, ol, li, dl, dt, dd, blockquote, th, td {margin: 0;padding: 0;}p {margin: 5px 0px 15px 0px;}html {height: 100%;}body {background-color: #989898;height: 100%;}/* layout rules */#wrapper {margin: 0 auto;width: 980px;text-align: left;padding-top: 35px;}#page {background: #FFFFFF;float: left;width: 960px;padding: 5px;-moz-box-shadow: 0 0 15px black;-webkit-box-shadow: 0 0 15px black;box-shadow: 0 0 15pxblack;border-radius: 10px;}/* header */#header {background: #262626;border-radius: 10px 10px 0px 0px;height: 75px;}#header_line_top {background: #262626;height: 0px;}#header_line_bottom {background: #e64116;height: 3px;}/* content */#content {min-height: 300px;padding: 30px;color: #1E1E1E;font-family: verdana, helvetica, arial;font-size: 13px;line-height: 22px;}/* footer */#footer {background: #262626;height: 75px;border-radius: 0px 0px 10px 10px;}#footer_line_top {background: #e64116;height: 3px;}#footer_line_bottom {background: #262626;height: 0px;}/* header navigation */#header ul{margin: 0px;padding: 20px;}#header ul li {float: left;list-style-type: none;}#header ul li a {margin-right: 20px;display: block;padding: 6px 15px 6px 15px;color: #ccc;text-decoration: none;font-family: verdana, helvetica, arial;}#header ul li a:hover {color: white;}

Preparing your website to use Gridster

Packt
22 May 2013
4 min read
(For more resources related to this topic, see here.) Getting ready There are only two things needed to get Gridster installed in your source code. You first need to download jQuery if you don't already have it, and then download the latest version of Gridster. After that, you will use plain HTML code to include both libraries in your web page. For most casual users, adding the latest version of jQuery will suffice. There are also nightly builds available, but these won't be discussed here, as they are not necessary and the latest version should be able to do everything we need. How to do it... Start by visiting jQuery's website, http://jquery.com/download/, and download the library to a location you will remember. We won't be debugging our jQuery code in the examples given in this book, so downloading the production version will be fine. We should now head over to Gridster's website, http://gridster.net/#download/, and download the minified versions of both the gridster.js and gridster.css files. These are the files we will use throughout this entire book, so make sure they are kept safe and accessible. A suggestion would be to create a directory structure to make it easier to refer to the files. I will be using the following structure for the examples given here: Under the directory recipe1, create a new text file called index.html. This file should initially contain the following code:

<!DOCTYPE html>
<html>
<head>
    <script src="../scripts/jquery-1.8.3.min.js"></script>
    <script src="../scripts/jquery.gridster.min.js"></script>
    <link href="../styles/jquery.gridster.min.css" rel="stylesheet" />
    <title>Recipe 1</title>
</head>
<body>
    Hello Gridster!
</body>
</html>

You can download the example code files for all Packt books that you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

By double-clicking on this file, you should be presented with a screen that looks like the following screenshot: You can check that everything has loaded up correctly by pressing the F12 key on your browser (Chrome or Firefox) and checking that all files have been correctly loaded without errors, as shown in the following screenshot: There's more... Instead of downloading the files into your project, you can simply load them via CDN-hosted copies of the files, as follows:

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>

In that case, the files wouldn't be coming from your website, but from the CDN itself. This is a good practice when trying to improve performance, as big web servers tend to host files on multiple locations and use very aggressive caching techniques to make sure the files are served quickly. Gridster also offers files in the same way from their website, as you will find in their download section. So, for example, you could link directly to their minified file as follows:

<script src="https://raw.github.com/ducksboard/gridster.js/master/dist/jquery.gridster.min.js"></script>

Summary In this recipe we described all the necessary steps to get Gridster up and running on your website, and also demonstrated how to include any of the dependencies needed by the library. Resources for Article : Further resources on this subject: Getting Started with jQuery [Article] jQuery Animation: Tips and Tricks [Article] Tips and Tricks for Working with jQuery and WordPress [Article]
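The recipe above stops at verifying that the files load. If you also want a quick way to confirm that Gridster is usable on the page, a minimal initialization along the following lines can serve as a smoke test. The markup assumed in the comment and the option values are illustrative only; they are not part of this recipe's code:

// Quick smoke test (not part of this recipe). Assumes index.html contains:
// <div class="gridster"><ul>
//   <li data-row="1" data-col="1" data-sizex="1" data-sizey="1">Widget</li>
// </ul></div>
$(function () {
    $('.gridster ul').gridster({
        widget_margins: [10, 10],          // horizontal/vertical gap between widgets
        widget_base_dimensions: [140, 140] // base width/height of one grid cell
    });
});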