
How-To Tutorials - Web Development

1797 Articles

Using a LINQ query in LINQPad

Packt
05 Sep 2013
3 min read
(For more resources related to this topic, see here.)

The standard version

We are going to implement a simple scenario: given a deck of 52 cards, we want to pick a random number of cards and then take out all of the hearts. From this stack of hearts, we will discard the first two, take the next five cards (if possible), and order them by their face value for display. You can try it in a C# program query in LINQPad:

```csharp
public static Random random = new Random();

void Main()
{
    var deck = CreateDeck();
    var randomCount = random.Next(52);
    var hearts = new Card[randomCount];
    var j = 0;
    // take all hearts out
    for (var i = 0; i < randomCount; i++)
    {
        if (deck[i].Suit == "Hearts")
        {
            hearts[j++] = deck[i];
        }
    }
    // resize the array to avoid null references
    Array.Resize(ref hearts, j);
    // check that we have at least 2 cards. If not, stop
    if (hearts.Length <= 2) return;
    // check how many cards we can take
    var count = hearts.Length - 2;
    // the most we need to take is 5
    if (count > 5)
    {
        count = 5;
    }
    // take the cards
    var finalDeck = new Card[count];
    Array.Copy(hearts, 2, finalDeck, 0, count);
    // now order the cards
    Array.Sort(finalDeck, new CardComparer());
    // display the result
    finalDeck.Dump();
}

public class Card
{
    public string Suit { get; set; }
    public int FaceValue { get; set; }
}

// Create the card deck
public Card[] CreateDeck()
{
    var suits = new[] { "Spades", "Clubs", "Hearts", "Diamonds" };
    var deck = new Card[52];
    for (var i = 0; i < 52; i++)
    {
        deck[i] = new Card
        {
            Suit = suits[i / 13],
            FaceValue = i - (13 * (i / 13)) + 1
        };
    }
    // randomly shuffle the deck
    for (var i = deck.Length - 1; i > 0; i--)
    {
        var j = random.Next(i + 1);
        var tmp = deck[j];
        deck[j] = deck[i];
        deck[i] = tmp;
    }
    return deck;
}

// CardComparer compares two cards by their face value
public class CardComparer : Comparer<Card>
{
    public override int Compare(Card x, Card y)
    {
        return x.FaceValue.CompareTo(y.FaceValue);
    }
}
```

Even if we leave the CreateDeck() method aside, we had to perform quite a few operations to produce the expected result (your values might differ, since we are using random cards). The output is as follows:

Depending on the data, LINQPad will add contextual information. For example, in this sample it will add a bottom row with the sum of all the values (here, only FaceValue). Also, if you click on the horizontal graph button, you will get a visual representation of your data, as shown in the following screenshot. This information is not always relevant, but it can help you explore your data.

Summary

In this article we saw how LINQ queries can be used in LINQPad, which makes the most of LINQ's powerful query capabilities.

Resources for Article:

Further resources on this subject:
- Displaying SQL Server Data using a Linq Data Source [Article]
- Binding MS Chart Control to LINQ Data Source Control [Article]
- LINQ to Objects [Article]
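The excerpt above is cut off before the article's actual LINQ version appears. As a reconstruction (not the article's original listing, so the details may differ), the whole imperative pipeline collapses into a single query that you could run in the same LINQPad program:

```csharp
var finalDeck = deck
    .Take(randomCount)                 // the randomly chosen prefix of the deck
    .Where(c => c.Suit == "Hearts")    // keep only the hearts
    .Skip(2)                           // discard the first two
    .Take(5)                           // take up to the next five
    .OrderBy(c => c.FaceValue);        // order by face value
finalDeck.Dump();
```

Note that Skip(2) on a sequence of two or fewer hearts simply yields an empty result, so the imperative version's explicit length check is not needed here.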


So, what is Node.js?

Packt
04 Sep 2013
2 min read
Node.js is an open source platform that allows you to build fast and scalable network applications using JavaScript. Node.js is built on top of V8, a modern JavaScript virtual machine that powers Google's Chrome web browser. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Node.js can handle multiple concurrent network connections with little overhead, making it ideal for data-intensive, real-time applications.

With Node.js, you can build many kinds of networked applications. For instance, you can use it to build a web application service, an HTTP proxy, a DNS server, an SMTP server, an IRC server, and basically any kind of process that is network intensive. You program Node.js using JavaScript, the language that powers the Web. JavaScript is a powerful language that, when mastered, makes writing networked, event-driven applications fun and easy.

Node.js also embraces streams, which help services withstand precarious network conditions and misbehaving clients. For instance, mobile clients are notorious for high-latency network connections, which can put a big burden on servers by keeping many connections and outstanding requests open. By handling data as streams, you can use Node.js to control incoming and outgoing flows and enable your service to survive.

Also, Node.js makes it easy for you to use third-party open source modules. With Node Package Manager (NPM), you can easily install, manage, and use any of the many modules in a big and growing repository. NPM also lets you manage the modules your application depends on in an isolated way, so that different applications installed on the same machine can depend on different versions of the same module without causing a conflict. Given the way it's designed, NPM even allows different versions of the same module to coexist in the same application.
Summary

In this article, we learned that Node.js uses an event-driven, non-blocking I/O model and can handle multiple concurrent network connections with little overhead.

Resources for Article:

Further resources on this subject:
- So, what is KineticJS? [Article]
- Cross-browser-distributed testing [Article]
- Accessing and using the RDF data in Stanbol [Article]


Managing Adobe Connect Meeting Room

Packt
04 Sep 2013
6 min read
The Meeting Information page

In order to get to the Meeting Information page, you will first need to navigate to the Meeting List page by following these steps:

1. Log in to the Connect application.
2. Click on the Meetings tab in the Home Page main menu.

When you access the Meetings page, the My Meetings link is opened by default and the view is set on the Meeting List tab. You will find the meeting listed on this page as shown in the following screenshot.

By clicking on the Cookbook Meeting option in the Name column (marked with a red outline), you will be presented with the Meeting Information page. In the section titled Meeting Information, you can examine various pieces of information about the selected meeting. On this page, you can review the Name, Summary, Start Time, Duration, Number of users in room (those currently present in the meeting room), URL, Language (selected), and Access rights of the meeting. The two most important fields are marked with a red outline in the previous screenshot: the first is the link to the meeting URL and the second is the Enter Meeting Room button. You can join the selected meeting room by clicking on either of these two options.

In the upper portion of this page, you will notice the navigation bar with the following links:

- Meeting Information
- Edit Information
- Edit Participants
- Invitations
- Uploaded Content
- Recordings
- Reports

Selecting any of these links opens the page associated with it. The main focus of this article is the functionality of these pages. Since we have explained the Meeting Information page, we can proceed to the Edit Information page.

The Edit Information page

The Edit Information page is very similar to the Enter Meeting Information page. We will briefly cover the meeting settings that you can edit on this page. These settings are:

- Name
- Summary
- Start time
- Duration
- Language
- Access
- Audio conference settings

Any changes made on this page are preserved by clicking on the Save button at the very bottom of the page. Changes will not affect participants who are already logged in to the room, except changes to the Audio Conference settings. Next to the Save button, you will find the Cancel button. Any changes made on the Edit Information page that are not yet saved will be reverted by clicking on the Cancel button.

The Edit Participants page

After the Edit Information page, it's time to access the next page by clicking on the Edit Participants link in the navigation bar. This link will take you to the Select Participants page. In addition to the features already described, we will introduce a couple more functionalities that will help you add participants, change their roles, or remove them from the meeting.

Example 1 – changing roles

In this example, we will change the role of the Administrators group from participant to presenter by using the Search button. This feature is a great help when a large number of Connect users are already added as meeting participants. In order to do so, follow these steps:

1. In the Current Participants For Cookbook Meeting table on the right-hand side, click on the Search button located in the lower-left corner of the table. When you click on the Search button, a text field for instant search is displayed.
2. In the text field, enter the name of the Administrators group or part of the group name (the auto-complete function should recognize the name of the group).
3. When the group is shown in the table, select it.
4. Click on the Set User Role button.
5. Select a new role for this group in the menu. For the purpose of this example, we will select the Presenter role.

By completing this action, you will grant Presenter privileges in the Cookbook Meeting room to all the administrators, as shown in the following screenshot.

Example 2 – removing a user

In this example, we will show you how to remove a specific user from the selected meeting. For the purpose of this exercise, we will remove the Administrators group from the Participants list. In order to complete this action, follow the given steps:

1. Select Administrators in the Current Participants For Cookbook Meeting table.
2. Click on the Remove button.

Now all the members of this group will be excluded from the meeting, and Administrators should no longer be present in the list.

Example 3 – adding a specific user

This example demonstrates how to add a specific user from any group. For example, we will add a user from the Authors group to the Current Participants list.

1. In the Available Users and Groups table, double-click on the Authors group. This changes the user interface of the table and lists all the users that belong to the Authors group. Note that the table header is now changed to Authors.
2. Select a specific user and click on the Add button. This adds the selected user from the Authors group to the Current Participants For Cookbook Meeting table.

One thing we would like to mention here is the ability to make multiple selections in both the Available Users and Groups and Current Participants For Cookbook Meeting tables. To select multiple users and groups, hold down the Ctrl or Shift key on the keyboard while clicking.

By demonstrating these examples, we reviewed the Edit Participants link functionalities.

Summary

In this article, we learned how to edit the different settings of already existing meetings. We covered the following topics:

- The Meeting Information page
- The Edit Information page
- The Edit Participants page

Resources for Article:

Further resources on this subject:
- Top features you'll want to know about [Article]
- Remotely Preview and test mobile web pages on actual devices with Adobe Edge Inspect [Article]
- Exporting SAP BusinessObjects Dashboards into Different Environments [Article]


Rapid Development

Packt
04 Sep 2013
7 min read
Concept of reusability

The concept of reusability has its roots in the production process. Typically, most of us go about creating e-learning using a process similar to what is shown in the following screenshot. It works well both for large teams and for the one-man band, except that in the latter case you become a specialist in all the stages of production. That's a heavy load. It's hard to be good at all things, and it demands that you constantly stretch and improve your skills and find ways to increase the efficiency of what you do.

Reusability in Storyline is about leveraging the formatting, look and feel, and interactions you create so that you can repurpose your work and speed up production. Not every project will be an original one-off; in fact most won't, so the concept is to approach development with a plan to repurpose 80 percent of the media, quizzes, interactions, and designs you create. As you do this, you begin to establish processes, templates, and libraries that can be used to rapidly assemble base courses. With a little tweaking and some minor customization, you'll have a new, original course in no time. Your client doesn't need to know that 80 percent was made from reusable elements with just 20 percent created as original, unique components, but you'll know the difference in terms of time and effort.

Leveraging existing assets

So how can you leverage existing assets with Storyline? The first things you'll want to look at are the courses you've built with other authoring programs, such as PowerPoint, Quizmaker, Engage, Captivate, Flash, and Camtasia. If there are design themes, elements, or interactions within these courses that you might want to use in future Storyline courses, you should focus your efforts on importing what you can and adjusting it within Storyline to create a new version of the asset that can be reused in future Storyline courses. If reworking the asset is too complex, or if you don't expect to reuse it in multiple courses, then using Storyline's web object feature to embed the interaction without reworking it in any way may be the better approach. In both cases, you'll save time by reusing content you've already put a lot of time into developing.

Importing external content

Here are the steps to bring external content into Storyline:

1. From the Articulate Startup screen, or by choosing the Insert tab and then New Slide within a project, select the Import option.
2. There are options to import PowerPoint, Quizmaker, and Storyline. All of these will display the slides within the file to be imported. You can pick and choose which slides to import into a new or the current scene in Storyline. The Engage option displays the entire interaction, which can be imported into a single slide in the current or a new scene.
3. Click on Import to complete the process.

Considerations when importing

Keep the following points in mind when importing:

- PowerPoint and Quizmaker files can be imported directly into Storyline. Once imported, you can edit the content as you would any other Storyline slide. Master slides come along with the import, making it simple to reuse previous designs. Note that 64-bit PowerPoint is not supported, and you must have an installed, activated version of Quizmaker for the import to work.
- The PowerPoint to Storyline conversion is not one-to-one. You can expect some alignment issues with slide objects because PowerPoint uses points while Storyline uses pixels. There are 2.66 pixels for each point, which is why you'll need to tweak the imported slides just a bit. The same goes for Quizmaker, though the reason is slightly different: Quizmaker is 686 x 424 pixels in size, whereas Storyline is 720 x 540 by default.
- Engage files can be imported into Storyline and are completely functional, but they cannot be edited within Storyline. Though the option to import Engage appears on the Import screen, what Storyline is really doing is creating a web object to contain the Engage interaction. Once it is imported into a new scene, clicking on the Engage interaction displays an Options menu where you can make minor adjustments to the behavior of the interaction as well as Preview and Edit it in Engage. You can also resize and position the interaction just as you would any web object. Remember that though web objects work in iPad and HTML5 outputs, Engage content is Flash, so it will not play back on an iPad or in an HTML5 browser. As with Quizmaker, you'll need an installed, activated version of Engage for the import to work.
- Flash, Captivate, and Camtasia files cannot be imported into Storyline and cannot be edited within Storyline. You can, however, use web objects or the Insert Flash option to embed these projects into Storyline. In both cases, the embedded elements appear seamless to the learner while retaining full functionality.

Build once, and reuse many times

Quizzing is at the heart of many e-learning courses, where the quiz questions often need to be randomized or even reused in different sections of a single course (that is, the same questions for a pre- and post-test). The concept of building once and reusing many times works well with several aspects of Storyline. We'll start with quizzing and a feature called Question Banks.

Question Banks

A Question Bank offers a way to pool, reuse, and randomize questions within a project. Slides in a question bank are housed within the project file but are not visible until placed into the story. Question Banks can include groups of quiz slides and regular slides (for instance, you might include a regular slide to provide instructions for the quiz, or a post-quiz summary). When you want to include questions from a Question Bank, you just need to insert a new Quizzing slide and then choose Draw from Bank. You can then select one or more questions to include and randomize them if desired.

Follow along…

In this exercise we will remove three questions from a scene and move them into a question bank. This will allow you to draw one or more of those questions at any point in the project where the quiz questions are needed, as follows:

1. From the Home tab, choose Question Banks, and then Create Question Bank. Title this Identity Theft Questions.
2. Notice that a new tab has opened in Normal View. The Question Bank appears in this tab.
3. Click on the Import link and navigate to question slides 2, 3, and 4. From the Import drop-down menu at the top, select the option to move questions into the question bank.
4. Click on the Story View tab and notice that the three slides containing the quiz questions are no longer in the story. Click back on the Identity Theft tab and notice that they are located here. The questions will not become part of the story until the next step, when you draw them from the bank.
5. In Story View, click once on slide 1 to select it, and then from the Home tab choose Question Banks and New Draw from Question Bank.
6. From the Question Bank drop-down menu, select Identity Theft Questions. All questions will be selected by default and will be randomized after being placed into the story. This means that the learner will need to answer three questions before continuing on to the next slide in the story.
7. Click on Insert. The Question Bank draw has been inserted as slide 2. To see how this works, Preview the scene.
8. Save as Exercise 11 – Identity Theft Quiz.

There are multiple ways to get back to the questions that are in a question bank: you can select the tab the questions are located in (in this case, Identity Theft), view the question bank slide in Normal View, or choose Question Banks from the Home tab and navigate to the name of the question bank you'd like to edit.


So, what is Apache Wicket?

Packt
04 Sep 2013
7 min read
Wicket is a component-based Java web framework that uses just Java and HTML. Here, you will see the main advantages of using Apache Wicket in your projects.

Using Wicket, you will not have mutant HTML pages. Most Java web frameworks require the insertion of special syntax into the HTML code, making it more difficult for web designers. Wicket, on the other hand, adopts HTML templates by using a namespace that follows the XHTML standard. It consists of an id attribute in the Wicket namespace (wicket:id). You won't need scripts to generate messy HTML code. Using Wicket, the code is clearer, and refactoring and navigating within the code is easier. Moreover, you can use any HTML editor to edit the HTML files, and web designers can work in the presentation layer with little knowledge of Wicket, without worrying about business rules and other developer concerns.

The advantages for developers are as follows:

- All code is written in Java
- No XML configuration files
- POJO-centric programming
- No back-button problems (that is, unexpected and undesirable results on clicking the browser's Back button)
- Ease of creating bookmarkable pages
- Great compile-time and runtime problem diagnosis
- Easy testability of components

Another interesting thing is that concepts such as generics and anonymous subclasses are widely used in Wicket, leveraging the Java programming language to the maximum.

Wicket is based on components. A component is an object that interacts with other components and encapsulates a set of functionalities. Each component should be reusable, replaceable, extensible, encapsulated, and independent, without a specific context. Wicket was designed with all of these principles in mind; the most remarkable of them is reusability. Developers can create custom reusable components in a straightforward way. For instance, you could create a custom component called SearchPanel (by extending the Panel class, which is also a component) and use it in all your other Wicket projects.

Wicket has many other interesting features. Wicket also aims to make the interaction of the stateful server-side Java programming language with the stateless HTTP protocol more natural. Wicket's code is safe by default; for instance, it does not encode state in URLs. Wicket is also efficient (for example, it is possible to tune page-state replication) and scalable (Wicket applications can easily work on a cluster). Last, but not least, Wicket has support for frameworks like EJB and Spring.

Installation

In a few easy steps, you can build a Wicket "Hello World" application.

Step 1 – what do I need?

Before you start to use Apache Wicket 6, you will need to check that you have all of the required elements:

- Wicket is a Java framework, so you need the Java virtual machine (at least Version 6) installed on your machine.
- Apache Maven is required. Maven is a tool for building and managing Java projects. Its main purpose is to make the development process easier and more structured. More information on how to install and configure Maven can be found at http://maven.apache.org.
- The examples of this book use the Eclipse IDE Juno version, but you can also use other versions or other IDEs, such as NetBeans. If you are using another version, check the link for installing the plugins for your version; the remaining steps will be the same. For other IDEs, you will need to follow a tutorial to install equivalent plugins, or do without them.

Step 2 – installing the m2eclipse plugin

The steps for installing the m2eclipse plugin are as follows:

1. Go to Help | Install New Software.
2. Click on Add, type m2eclipse in the Name field, and copy and paste the link https://repository.sonatype.org/content/repositories/forge-sites/m2e/1.3.0/N/LATEST into the Location field.
3. Check all options and click on Next.
4. Conclude the installation of the m2eclipse plugin by accepting all agreements and clicking on Finish.

Step 3 – creating a new Maven application

The steps for creating a new Maven application are as follows:

1. Go to File | New | Project. Then go to Maven | Maven Project.
2. Click on Next and type wicket in the next form.
3. Choose the wicket-archetype-quickstart Maven archetype and click on Next.
4. Fill in the next form according to the following screenshot and click on Finish.

Step 4 – coding the "Hello World" program

In this step, we will build the famous "Hello World" program. The separation of concerns between HTML and Java code will be clear. In this example, and in most cases, each HTML file has a corresponding Java class (with the same name). First, we will analyse the HTML template code. The content of the HomePage.html file must be replaced by the following code:

```html
<!DOCTYPE html>
<html>
<body>
    <span wicket:id="helloWorldMessage">Test</span>
</body>
</html>
```

It is simple HTML code with the Wicket template wicket:id="helloWorldMessage". It indicates that, in the Java code related to this page, a method will replace the message Test with another message. Now, let's edit the corresponding Java class, that is, HomePage:

```java
package com.packtpub.wicket.hello_world;

import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.basic.Label;

public class HomePage extends WebPage
{
    public HomePage()
    {
        add(new Label("helloWorldMessage", "Hello world!!!"));
    }
}
```

The class HomePage extends WebPage; that is, it inherits some of the WebPage class's methods and attributes and becomes a WebPage subtype. One of these inherited methods is add(), which can take a Label object as a parameter. A Label object is built by passing two parameters: an identifier and a string. The method add() is called in the HomePage class's constructor and will replace the message at wicket:id="helloWorldMessage" with Hello world!!!. The resulting HTML code will be as shown in the following code snippet:

```html
<!DOCTYPE html>
<html>
<body>
    <span>Hello world!!!</span>
</body>
</html>
```

Step 5 – compile and run!

The steps to compile and run the project are as follows:

1. To compile, right-click on the project and go to Run As | Maven install. Verify that the compilation was successful. If not, Wicket provides good error messages, so you can try to fix what is wrong.
2. To run the project, right-click on the class Start and go to Run As | Java Application. The class Start will run an embedded Jetty instance that will serve the application. Verify that the server has started without any problems.
3. Open a web browser and enter http://localhost:8080 in the address field. If you have changed the port, enter http://localhost:<port>. The browser should show Hello world!!!.
4. The most common problem that can occur is that port 8080 is already in use. In this case, you can go into the Start class (found at src/test/java) and set another port by replacing 8080 in connector.setPort(8080) (line 21) with another number (for example, 9999).
5. To stop the server, either click on the console and press any key, or click on the red square on the console, which indicates termination.

And that's it! By this point, you should have a working Wicket "Hello World" application and are free to play around and discover more about it.

Summary

This article described how to create a simple "Hello World" application using Apache Wicket 6.

Resources for Article:

Further resources on this subject:
- Tips for Deploying Sakai [Article]
- OSGi life cycle [Article]
- Apache Wicket: Displaying Data Using DataTable [Article]
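The reusability principle behind components like the hypothetical SearchPanel mentioned earlier can be sketched without any Wicket dependency at all. The following plain-Java sketch (all names are made up for illustration; real Wicket components extend classes like Panel and WebPage) shows the core idea: a component encapsulates its own rendering, and pages are just compositions of reusable components.

```java
// Plain-Java sketch of the component idea (no Wicket dependency;
// names here are hypothetical). A component renders itself, and a
// page is simply a component composed of other components.
import java.util.ArrayList;
import java.util.List;

interface Component {
    String render();
}

// A reusable "panel": encapsulated and independent of any particular page.
class SearchPanel implements Component {
    private final String placeholder;

    SearchPanel(String placeholder) {
        this.placeholder = placeholder;
    }

    public String render() {
        return "<input type='search' placeholder='" + placeholder + "'/>";
    }
}

// A page composes child components and delegates rendering to them.
class Page implements Component {
    private final List<Component> children = new ArrayList<>();

    Page add(Component c) {
        children.add(c);
        return this;
    }

    public String render() {
        StringBuilder sb = new StringBuilder("<body>");
        for (Component c : children) {
            sb.append(c.render());
        }
        return sb.append("</body>").toString();
    }
}

public class ComponentSketch {
    public static void main(String[] args) {
        // The same SearchPanel can be dropped into any page.
        String html = new Page().add(new SearchPanel("Find products")).render();
        System.out.println(html);
        // prints: <body><input type='search' placeholder='Find products'/></body>
    }
}
```

In real Wicket the markup lives in HTML templates rather than Java strings, but the composition model is the same: build a component once, then add() it wherever it is needed.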


Oracle ADF Essentials – Adding Business Logic

Packt
03 Sep 2013
19 min read
(For more resources related to this topic, see here.) Adding logic to business components by default, a business component does not have an explicit Java class. When you want to add Java logic, however, you generate the relevant Java class from the Java tab of the business component. On the Java tab, you also decide which of your methods are to be made available to other objects by choosing to implement a Client Interface . Methods that implement a client interface show up in the Data Control palette and can be called from outside the object. Logic in entity objects Remember that entity objects are closest to your database tables –– most often, you will have one entity object for every table in the database. This makes the entity object a good place to put data logic that must be always executed. If you place, for example, validation logic in an entity object, it will be applied no matter which view object attempts to change data. In the database or in an entity object? Much of the business logic you can place in an entity object can also be placed in the database using database triggers. If other systems are accessing your database tables, business logic should go into the database as much as possible. Overriding accessors To use Java in entity objects, you open an entity object and select the Java tab. When you click on the pencil icon, the Select Java Options dialog opens as shown in the following screenshot: In this dialog, you can select to generate Accessors (the setXxx() and getXxx() methods for all the attributes) as well as Data Manipulation Methods (the doDML() method; there is more on this later). When you click on OK , the entity object class is generated for you. You can open it by clicking on the hyperlink or you can find it in the Application Navigator panel as a new node under the entity object. If you look inside this file, you will find: Your class should start with an import section that contains a statement that imports your EntityImpl class. 
If you have set up your framework extension classes correctly this could be import com.adfessentials.adf.framework.EntityImpl. You will have to click on the plus sign in the left margin to expand the import section. The Structure panel in the bottom-left shows an overview of the class including all the methods it contains. You will see a lot of setter and getter methods like getFirstName() and setFirstName() as shown in the following screenshot: There is a doDML() method described later. If you were to decide, for example, that last name should always be stored in upper case, you could change the setLastName() method to: public void setLastName(String value) { setAttributeInternal(LASTNAME, value.toUpperCase()); } Working with database triggers If you decide to keep some of your business logic in database triggers, your triggers might change the values that get passed from the entity object. Because the entity object caches values to save database work, you need to make sure that the entity object stays in sync with the database even if a trigger changes a value. You do this by using the Refresh on Update property. To find this property, select the Attributes subtab on the left and then select the attribute that might get changed. At the bottom of the screen, you see various settings for the attribute with the Refresh settings in the top-right of the Details tab as shown in the following screenshot: Check the Refresh on Update property checkbox if a database trigger might change the attribute value. This makes the ADF framework requery the database after an update has been issued. Refresh on Insert doesn't work if you are using MySQL and your primary key is generated with AUTO_INCREMENT or set by a trigger. ADF doesn't know the primary key and therefore cannot find the newly inserted row after inserting it. 
It does work if you are running against an Oracle database, because Oracle SQL syntax has a special RETURNING construct that allows the entity object to get the newly created primary key back.

Overriding doDML()

Next, after the setters and getters, the doDML() method is the one that most often gets overridden. This method is called whenever an entity object wants to execute a Data Manipulation Language (DML) statement like INSERT, UPDATE, or DELETE. This offers you a way to add additional processing; for example, checking that the account balance is zero before allowing a customer to be deleted. In this case, you would add logic to check the account balance and, if the deletion is allowed, call super.doDML() to invoke normal processing. Another example would be to implement logical delete (records only change state and are not actually deleted from the table). In this case, you would override doDML() as follows:

@Override
protected void doDML(int operation, TransactionEvent e) {
    if (operation == DML_DELETE) {
        operation = DML_UPDATE;
    }
    super.doDML(operation, e);
}

As is probably obvious from the code, this simply replaces a DELETE operation with an UPDATE before it calls the doDML() method of its superclass (your framework extension EntityImpl, which passes the task on to the Oracle-supplied EntityImpl class). Of course, you also need to change the state of the entity object row, for example, in the remove() method. You can find fully-functional examples of this approach on various blogs, for example at http://myadfnotebook.blogspot.dk/2012/02/updating-flag-when-deleting-entity-in.html. You also have the option of completely replacing normal doDML() method processing by simply not calling super.doDML(). This could be the case if you want all your data modifications to go via a database procedure –– for example, to insert an actor, you would have to call insertActor with first name and last name.
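The control flow of the logical-delete override can be simulated without the ADF runtime. In this sketch, the Base class and the DML_* constants are stand-ins for ADF's EntityImpl and its constants; only the operation-mapping logic mirrors the override shown above:

```java
public class LogicalDeleteSketch {
    static final int DML_INSERT = 0;
    static final int DML_UPDATE = 1;
    static final int DML_DELETE = 2;

    // Stand-in for the framework's EntityImpl: records which DML
    // statement would have been issued against the database.
    static class Base {
        String lastOperation;

        protected void doDML(int operation) {
            lastOperation = operation == DML_UPDATE ? "UPDATE"
                          : operation == DML_DELETE ? "DELETE"
                          : "INSERT";
        }
    }

    // Logical delete: a DELETE request is rewritten to an UPDATE
    // before normal processing runs, so the row is flagged, not removed.
    static class LogicalDeleteRow extends Base {
        @Override
        protected void doDML(int operation) {
            if (operation == DML_DELETE) {
                operation = DML_UPDATE;
            }
            super.doDML(operation);
        }
    }

    static String simulateDelete() {
        LogicalDeleteRow row = new LogicalDeleteRow();
        row.doDML(DML_DELETE);
        return row.lastOperation;
    }

    public static void main(String[] args) {
        System.out.println(simulateDelete()); // prints UPDATE
    }
}
```

In real ADF code the superclass call reaches the database; here it only records the operation, which is enough to see that a delete request ends up as an update.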
In this case, you would write something like:

@Override
protected void doDML(int operation, TransactionEvent e) {
    CallableStatement cstmt = null;
    if (operation == DML_INSERT) {
        String insStmt = "{call insertActor (?,?)}";
        cstmt = getDBTransaction().createCallableStatement(insStmt, 0);
        try {
            cstmt.setString(1, getFirstName());
            cstmt.setString(2, getLastName());
            cstmt.execute();
        } catch (Exception ex) {
            …
        } finally {
            …
        }
    }
}

If the operation is insert, the above code uses the current transaction (via the getDBTransaction() method) to create a CallableStatement with the string insertActor(?,?). Next, it binds the two parameters (indicated by the question marks in the statement string) to the values for first name and last name (by calling the getter methods for these two attributes). Finally, the code block finishes with a normal catch clause to handle SQL errors and a finally clause to close open objects. Again, fully working examples are available in the documentation and on the Internet in various blog posts. Normally, you would implement this kind of override in the framework extension EntityImpl class, with additional logic to allow the framework extension class to recognize which specific entity object the operation applies to and which database procedure to call.

Data validation

With the techniques you have just seen, you can implement every kind of business logic your requirements call for. One requirement, however, is so common that it has been built right into the ADF framework: data validation.

Declarative validation

The simplest kind of validation is where you compare one individual attribute to a limit, a range, or a number of fixed values. For this kind of validation, no code is necessary at all. You simply select the Business Rules subtab in the entity object, select an attribute, and click on the green plus sign to add a validation rule.
The Add Validation Rule dialog appears as shown in the following screenshot:

You have a number of options for Rule Type –– depending on your choice here, the Rule Definition tab changes to allow you to define the parameters for the rule. On the Failure Handling tab, you can define whether the validation is an error (that must be corrected) or a warning (that the user can override), and you define a message text as shown in the following screenshot:

You can even define variable message tokens by using curly brackets { } in your message text. If you do so, a token will automatically be added to the Token Message Expressions section of the dialog, where you can assign it any value using Expression Language. Click on the Help button in the dialog for more information on this. If your application might ever conceivably be needed in a different language, use the looking glass icon to define a resource string stored in a separate resource bundle. This allows your application to have multiple resource bundles, one for each different user interface language. There is also a Validation Execution tab that allows you to specify under which condition your rule should be applied. This can be useful if your logic is complex and resource intensive. If you do not enter anything here, your rule is always executed.

Regular expression validation

One of the especially powerful declarative validations is the Regular Expression validation. A regular expression is a very compact notation that can define the format of a string –– this is very useful for checking e-mail addresses, phone numbers, and so on. To use this, set Rule Type to Regular Expression as shown in the following screenshot:

JDeveloper offers you a few predefined regular expressions, for example, the validation for e-mails as shown in the preceding screenshot.
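The check such a rule performs can be reproduced in plain Java with java.util.regex. The pattern below is a common simplified e-mail expression chosen for illustration; it is not necessarily the exact expression JDeveloper ships:

```java
import java.util.regex.Pattern;

public class EmailRegexSketch {
    // A simplified e-mail pattern for illustration only; real-world
    // e-mail validation rules (and JDeveloper's predefined expression)
    // may be stricter or looser.
    private static final Pattern EMAIL =
        Pattern.compile("[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}");

    public static boolean isValidEmail(String value) {
        return value != null && EMAIL.matcher(value).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidEmail("jdoe@example.com")); // true
        System.out.println(isValidEmail("not-an-email"));     // false
    }
}
```

A declarative Regular Expression rule does essentially this match for you and raises the configured error or warning when matches() would return false.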
Even though you can find lots of predefined regular expressions on the Internet, someone from your team should understand the basics of regular expression syntax so you can create the exact expression you need.

Groovy scripts

You can also set Rule Type to Script to get a free-format box where you can write a Groovy expression. Groovy is a scripting language for the Java platform that works well together with Java –– see http://groovy.codehaus.org/ for more information on Groovy. Oracle has published a white paper on Groovy in ADF (http://www.oracle.com/technetwork/developer-tools/jdev/introduction-to-groovy-128837.pdf), and there is also information on Groovy in the JDeveloper help.

Method validation

If none of these methods for data validation fits your needs, you can of course always revert to writing code. To do this, set Rule Type to Method and provide an error message. If you leave the Create a Select Method checkbox checked when you click on OK, JDeveloper will automatically create a method with the right signature and add it to the Java class for the entity object. The autogenerated validation method for Length (in the Film entity object) would look as follows:

/**
 * Validation method for Length.
 */
public boolean validateLength(Integer length) {
    return true;
}

It is your task to fill in the logic and return either true (if validation is OK) or false (if the data value does not meet the requirements). If validation fails, ADF will automatically display the message you defined for this validation rule.

Logic in view objects

View objects represent the dataset you need for a specific part of the application –– typically a specific screen or part of a screen. You can create Java objects for either an entire view object (an XxxImpl.java class, where Xxx is the name of your view object) or for a specific row (an XxxRowImpl.java class).
A view object class contains methods to work with the entire dataset that the view object represents –– for example, methods to apply view criteria or re-execute the underlying database query. The view row class contains methods to work with an individual record of data –– mainly methods to set and get attribute values for one specific record.

Overriding accessors

Like for entity objects, you can override the accessors (setters and getters) for view objects. To do this, you use the Java subtab in the view object and click on the pencil icon next to Java Classes to generate Java. You can select to generate a view row class including accessors to ask JDeveloper to create a view row implementation class as shown in the following screenshot:

This will create an XxxRowImpl class (for example, RentalVORowImpl) with setter and getter methods for all attributes. The code will look something like the following code snippet:

…
public class RentalVORowImpl extends ViewRowImpl {
    …
    /**
     * This is the default constructor (do not remove).
     */
    public RentalVORowImpl() {
    }
    …
    /**
     * Gets the attribute value for title using the alias name Title.
     * @return the title
     */
    public String getTitle() {
        return (String) getAttributeInternal(TITLE);
    }

    /**
     * Sets <code>value</code> as attribute value for title using
     * the alias name Title.
     * @param value value to set the title
     */
    public void setTitle(String value) {
        setAttributeInternal(TITLE, value);
    }
    …
}

You can change all of these to manipulate data before it is delivered to the entity object or to return a processed version of an attribute value. To use such attributes, you can write code in the implementation class to determine which value to return. You can also use Groovy expressions to determine values for transient attributes. This is done on the Value subtab for the attribute by setting Value Type to Expression and filling in the Value field with a Groovy expression.
See the Oracle white paper on Groovy in ADF (http://www.oracle.com/technetwork/developer-tools/jdev/introduction-to-groovy-128837.pdf) or the JDeveloper help.

Change view criteria

Another example of coding in a view object is to dynamically change which view criteria are applied to the view object. It is possible to define many view criteria on a view object –– when you add a view object instance to an application module, you decide which of the available view criteria to apply to that specific view object instance. However, you can also programmatically change which view criteria are applied to a view object. This can be useful if you want to have buttons to control which subset of data to display –– in the example application, you could imagine a button to "show only overdue rentals" that would apply an extra view criterion to a rental view object. Because the view criteria apply to the whole dataset, view criteria methods go into the view object, not the view row object. You generate a Java class for the view object from the Java Options dialog in the same way as you generate Java for the view row object. In the Java Options dialog, select the option to generate the view object class as shown in the following screenshot:

A simple example of programmatically applying view criteria would be a method to apply an already defined view criterion called OverdueCriterion to a view object. This would look like this in the view object class:

public void showOnlyOverdue() {
    ViewCriteria vc = getViewCriteria("OverdueCriterion");
    applyViewCriteria(vc);
    executeQuery();
}

View criteria often have bind variables –– for example, you could have a view criterion called OverdueByDaysCriterion that uses a bind variable OverdueDayLimit. When you generate Java for the view object, the default option of Include bind variable accessors (shown in the preceding screenshot) will create a setOverdueDayLimit() method if you have an OverdueDayLimit bind variable.
A method in the view object to which we apply this criterion might look like the following code snippet:

public void showOnlyOverdueByDays(int days) {
    ViewCriteria vc = getViewCriteria("OverdueByDaysCriterion");
    setOverdueDayLimit(days);
    applyViewCriteria(vc);
    executeQuery();
}

If you want to call these methods from the user interface, you must select create a client interface for them (on the Java subtab in the view object). This will make your method available in the Data Control palette, ready to be dragged onto a page and dropped as a button. When you change the view criteria and execute the query, only the content of the view object changes –– the screen does not automatically repaint itself. In order to ensure that the screen refreshes, you need to set the PartialTriggers property of the data table to point to the ID of the button that changes the view criteria. For more on partial page rendering, see the Oracle Fusion Middleware Web User Interface Developer's Guide for Oracle Application Development Framework (http://docs.oracle.com/cd/E37975_01/web.111240/e16181/af_ppr.htm).

Logic in application modules

You've now seen how to add logic to both entity objects and view objects. However, you can also add custom logic to application modules. An application module is the place where logic that does not belong to a specific view object goes –– for example, calls to stored procedures that involve data from multiple view objects. To generate a Java class for an application module, you navigate to the Java subtab in the application module and select the pencil icon next to the Java Classes heading. Typically, you create Java only for the application module class and not for the application module definition. You can also add your own logic here that gets called from the user interface, or you can override the existing methods in the application module.
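Stepping back to the view criteria methods shown above: conceptually, applying a criterion and calling executeQuery() narrows the row set the view object exposes. That behavior can be sketched with plain Java, where the Rental record, the view object stand-in, and the overdue-by-days criterion are all hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Stand-in for a view object: applying a "view criterion" filters the
// row set the next time the query is executed.
public class RentalViewSketch {
    record Rental(String title, int daysOverdue) {}

    private final List<Rental> allRows;          // simulated database
    private Predicate<Rental> criterion = r -> true;
    private List<Rental> rows = new ArrayList<>();

    RentalViewSketch(List<Rental> allRows) {
        this.allRows = allRows;
    }

    // Analogous to setOverdueDayLimit() plus applyViewCriteria().
    void applyOverdueByDaysCriterion(int overdueDayLimit) {
        criterion = r -> r.daysOverdue() > overdueDayLimit;
    }

    // Analogous to executeQuery(): re-evaluates the row set.
    void executeQuery() {
        rows = allRows.stream().filter(criterion).toList();
    }

    List<Rental> getRows() {
        return rows;
    }

    public static void main(String[] args) {
        RentalViewSketch vo = new RentalViewSketch(List.of(
            new Rental("Alien", 10), new Rental("Brazil", 0)));
        vo.applyOverdueByDaysCriterion(5);
        vo.executeQuery();
        System.out.println(vo.getRows().size()); // 1 (only "Alien")
    }
}
```

In real ADF the filtering happens in the database via the generated WHERE clause, not in memory; the sketch only shows why the query must be re-executed after the criterion changes.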
A typical method to override is prepareSession(), which gets called before the application module establishes a connection to the database –– if you need to, for example, call stored procedures or do other kinds of initialization before accessing the database, an application module method is a good place to do so. Remember that you need to define your own methods as client methods on the Java tab of the application module for the method to be available to be called from elsewhere in the application. Because the application module handles the transaction, it also contains methods such as beforeCommit(), beforeRollback(), afterCommit(), afterRollback(), and so on. The doDML() method on any entity object that is part of the transaction is executed before any of the application module's methods.

Adding logic to the user interface

Logic in the user interface is implemented in the form of managed beans. These are Java classes that are registered with the task flow and automatically instantiated by the ADF framework. ADF operates with various memory scopes –– you have to decide on a scope when you define a managed bean.

Adding a bean method to a button

The simplest way to add logic to the user interface is to drop a button (af:commandButton) onto a page or page fragment and then double-click on it. This brings up the Bind Action Property dialog as shown in the following screenshot:

If you leave Method Binding selected and click on New, the Create Managed Bean dialog appears as shown in the following screenshot:

In this dialog, you can give your bean a name, provide a class name (typically the same as the bean name), and select a scope. The backingBean scope is a good scope for logic that is only used for one action when the user clicks on the button and which does not need to store any state for later. Leaving the Generate Class If It Does Not Exist checkbox checked asks JDeveloper to create the class for you.
When you click on OK, JDeveloper will automatically suggest a method for you in the Method dropdown (based on the ID of the button you double-clicked on). In the Method field, provide a more useful name and click on OK to add the new class and open it in the editor. You will see a method with your chosen name, as shown in the following code snippet:

public String rentDvd() {
    // Add event code here...
    return null;
}

Obviously, you place your code inside this method. If you accidentally left the default method name and ended up with something like cb5_action(), you can right-click on the method name and navigate to Refactor | Rename to give it a more descriptive name. Note that JDeveloper automatically sets the Action property for your button matching the scope, bean name, and method name. This might be something like #{backingBeanScope.RentalBean.rentDvd}.

Adding a bean to a task flow

Your beans should always be part of a task flow. If you're not adding logic to a button, or you just want more control over the process, you can also create a backing bean class first and then add it to the task flow. A bean class is a regular Java class created by navigating to File | New | Java Class. When you have created the class, you open the task flow where you want to use it and select the Overview tab. On the Managed Beans subtab, you can use the green plus to add your bean. Simply give it a name, point to the class you created, and select a memory scope.

Accessing UI components from beans

In a managed bean, you often want to refer to various user interface elements. This is done by mapping each element to a property in the bean. For example, if you have an af:inputText component that you want to refer to in a bean, you create a private variable of type RichInputText in the bean (with setter and getter methods) and set the Binding property (under the Advanced heading) to point to that bean variable using Expression Language.
When creating a page or page fragment, you have the option (on the Managed Bean tab) to automatically have JDeveloper create corresponding attributes for you. The Managed Bean tab is shown in the following screenshot:

Leave it on the default setting of Do Not Automatically Expose UI Components in a Managed Bean. If you select one of the options to automatically expose UI elements, your bean will acquire a lot of attributes that you don't need, which will make your code unnecessarily complex and slow. However, while learning ADF, you might want to try this out to see how the bean attributes and the Binding property work together. If you do activate this setting, it applies to every page and fragment you create until you explicitly deselect this option.

Summary

In this article, you have seen some examples of how to add Java code to your application to implement the specific business logic your application needs. There are many, many more places and ways to add logic –– as you work with ADF, you will continually come across new business requirements that force you to figure out how to add code to your application in new ways. Fortunately, there are other books, websites, online tutorials, and training that you can use to add to your ADF skill set –– refer to http://www.adfessentials.com for a starting point.

Resources for Article:

Further resources on this subject: Oracle Tools and Products [Article], Managing Oracle Business Intelligence [Article], Oracle Integration and Consolidation Products [Article]
Configuring payment models (Intermediate)

Packt
03 Sep 2013
4 min read
(For more resources related to this topic, see here.)

How to do it...

Let's learn how to integrate PayPal Website Payments Standard into our store in test mode.

Integrating PayPal Website Payments Standard into our store (test mode)

We will start by activating PayPal Payments Standard using the Payments section under the Extensions menu, and then we will edit the settings for it. The next step is to fill in the needed information for testing the PayPal system. For test purposes, we will choose the option for Sandbox Mode as Yes. We will now open a developer account to create test accounts on PayPal. Let's browse to http://developer.paypal.com and sign up for a developer account. Following is a screenshot of the screen that is displayed after we sign up and log in to the account. Click on the Create a preconfigured account link. The next screen will propose an account name with which your account will be created. Now we only need to add funds to the Account Balance field to create the account. Remember that it is a test account, so we can give any virtual amount of funds we want. Now we have a test PayPal account that can be used for our test purchases.

Let's go to our shop's user interface, add a product to the shopping cart, and proceed to the Checkout page. Let's log in with the test account we have just created. The following screenshot shows a successful test order:

Integrating PayPal Website Payments Pro into our store (live mode)

We need to get the API information first. Let's log in to our PayPal account. Click on the User Profile link, and then click on the Update link next to API access in the My selling tools section. The next step is to click on the Request API Credentials link. Choose the option that says Request API signature. This will help us get the API information. The next step is to activate Payment Pro on the OpenCart administration interface using the Payments section under the Extensions menu.
We need to edit the details and enter the API information. Let's not forget to select No for Test Mode, which means that this will be a live system. Choose Enabled for the Status field.

How it works...

Now let's learn how the PayPal Standard and Pro models work and how they differ from each other.

PayPal Website Payments Standard

PayPal Standard is the easiest payment model to integrate into our store. All we need is a PayPal account, and a bank account to withdraw the money from. PayPal Standard has no monthly costs or setup fee. However, the company charges a small percentage on each transaction. Please go to https://www.paypal.com/webapps/mpp/merchant for merchant service details. The activation of the Standard method is very straightforward. We only need to provide our e-mail address and then set Transaction Method to Sale, Sandbox Mode to No, and Status to Enabled on the administration panel. There is a difference from the test payments that we have made: customers can also pay with their credit cards instantly, even without a PayPal account. This makes PayPal a very powerful and popular solution. If you are afraid of charging your real PayPal account, there is a good way to test your real payment environment: create a dummy product with the price of $0.01 and complete the purchase with this tiny amount.

PayPal Website Payments Pro

This service can be used to charge credit cards using PayPal services in the background. The customers will not need to leave the store at all; the transaction will be completed at the shop itself. Many big e-commerce websites operate this way. Currently, the Pro service is only available for merchant accounts located in the US, UK, and Canada.

Summary

This article explained the different PayPal integrations, discussing how to integrate the PayPal Website Payments Standard and Pro methods into a simple store.
Resources for Article: Further resources on this subject: Upgrading OpenCart [Article] Setting Payment Model in OpenCart [Article] OpenCart: Layout Structure [Article]

Quickstart – Creating an application

Packt
03 Sep 2013
7 min read
(For more resources related to this topic, see here.)

Step 1 – Planning the workflow

When you write a real application, you should start with the requirements for its functionality. For the blog example, these are described in the Getting Started: Requirements Analysis section at the very beginning of the tutorial. The direct URL is http://www.yiiframework.com/doc/blog/1.1/en/start.requirements. After you have written down all the desired features, you basically start implementing them one by one. Of course, in serious software development there are a lot of gotchas involved, but overall it's the same. The blog example is a database-driven application, so we need to prepare a database schema beforehand. Here's what they came up with for the blog demo. This image is a verbatim copy from the blog example demo. Note that there are two links missing: the posts table has a tags field, which is the raw storage area for tags and is not a foreign key to the tags table, and the author field in comment should really be a foreign key to the user table. Anyway, we'll not cover the actual database generation and suggest you can do it yourself. The blog tutorial at the Yii website has all the relevant instructions addressed to total newbies. Next in this article we will see how easy it is with Yii to get a working user interface by which one will be able to manipulate our database.

Step 2 – Linking to the database from your app

Once you design and physically create the database in some database management system like MySQL or maybe SQLite, you are ready to configure your app to point to this database. The skeleton app generated by the ./yiic webapp command needs to be configured to point to this database. To do this, you need to set a db component in the main config file located at protected/config/main.php. There is a section that contains an array of components. Below is the setup for a MySQL database located at the same server as the web application itself.
You will find a commented-out template for this already present when you generate your app.

/protected/config/main.php

'components'=>array(
    /* other components */
    'db'=>array(
        'connectionString' => 'mysql:host=localhost;dbname=DB_NAME',
        'emulatePrepare' => true,
        'username' => YOUR_USERNAME,
        'password' => YOUR_PASSWORD,
        'charset' => 'utf8',
    ),
    /* other components */
),

This is a default component having the class CDbConnection, and it is used by all of the ActiveRecord design patterns we will create later. As with all application components, all configuration parameters correspond to the public properties of the component's class, so you can check the API documentation for details. By the way, you really want to understand more about the main application config. Read about it in the Definitive Guide to Yii at the official website, at Fundamentals | Application | Application Configuration. The direct URL is http://www.yiiframework.com/doc/guide/1.1/en/basics.application#application-configuration. Just remember that all configuration parameters are just properties of the CWebApplication object, which you can read about in the API documentation; the direct URL is http://www.yiiframework.com/doc/api/1.1/CWebApplication.

Step 3 – Generating code automatically

Now that we have our app linked up to a fully built database, we can start using one of Yii's greatest features: automatic code generation. To get started, there are two types of code generation that are necessary: generating model classes based on the tables in your database, and running the CRUD generator, which takes a model and sets up a corresponding controller and set of views for basic listing, creating, viewing, updating, and deleting from the table.

Console way

There are two ways to go about automatic code generation. Originally, there was only the yiic tool used earlier to create the skeleton app.
For the automatic code generation features, you would use the yiic shell index.php command, which would bring up a command-line interface where you could run subcommands for modeling and scaffolding.

$ /usr/local/yii/framework/yiic shell index.php
Yii Interactive Tool v1.1 (based on Yii v1.1.13)
Please type 'help' for help. Type 'exit' to quit.
>> model Post tbl_post
   generate models/Post.php
   unchanged fixtures/tbl_post.php
   generate unit/PostTest.php
The following model classes are successfully generated:
    Post
If you have a 'db' database connection, you can test these models now with:
    $model=Post::model()->find(); print_r($model);
>> crud Post
   generate PostController.php
   generate PostTest.php
mkdir /var/www/app/protected/views/post
   generate create.php
   generate update.php
   generate index.php
   generate view.php

As you can see, this is a quick and easy way to perform the model and crud actions. The model command produces just two files: one for your actual model class and one for unit tests. The crud command creates your controller and view files.

Gii

Console tools may be the preferred option for some, but for developers who like to use graphical tools, there is now a solution for this, called Gii. To use Gii, it is necessary to turn it on in the main config file: protected/config/main.php. You will find the template for it already present, but it is commented out by default. Simply uncomment it, set your password, and decide from what hosts it may be accessed. The configuration looks like this:

'gii'=>array(
    'class'=>'system.gii.GiiModule',
    'password'=>'giiPassword',
    // If removed, Gii defaults to localhost only. Edit carefully to taste.
    'ipFilters'=>array('127.0.0.1','::1'),
    // For development purposes, a wildcard will allow access from anywhere.
    // 'ipFilters'=>array('*'),
),

Once Gii is configured, it can be accessed by navigating to the app URL with ?r=gii after it. For example, http://www.example.com/index.php?r=gii.
It will begin with a prompt asking for the password set in the config file. Once entered, it will display a list of generators. If the database is not set in the config file, you will see an error when you attempt to use one. The first, most basic generator in Gii is the model generator. It asks for a table name from the database and a name to be used for the PHP class. Note that we can specify a table name prefix, which will be ignored when generating the model class name. For instance, the blog demo's user table is tbl_user, where tbl_ is the prefix. This feature exists to support some setups, especially common in shared hosting environments, where a single database holds tables for several distinct applications. In such an environment, it's a common practice to prefix something to the names of tables to avoid naming conflicts and to easily find the tables relevant to a specific application. As these prefixes don't mean anything in the application itself, Gii offers a way to automatically ignore them. Model class names are constructed from the remaining table names by the obvious rules: underscores are converted to uppercasing the next letter, and the first letter of the class name is uppercased as well. The first step in getting your application off the ground is to generate models for all the entity tables in your database. Things like bridge tables will not need models, as they simply relate two entities to one another, rather than actually being a distinct thing. Bridge tables are used for generating relations between models, expressed in the relations method of the model class. For the blog demo, the basic models are User, Post, Comment, Tag, and Lookup. The second phase of scaffolding is to generate the CRUD code for each of these models. This will create a controller and a series of view templates. The controller (for example, PostController) will handle routing to actions related to the given model.
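The naming rules described above (strip the configured prefix, uppercase the letter after each underscore, uppercase the first letter) are easy to express in code. This is an illustrative Java sketch of the conversion, not Gii's actual implementation, which is written in PHP:

```java
public class ModelNameSketch {
    // Applies Gii's documented naming rules: strip the table prefix,
    // uppercase the letter following each underscore, and uppercase
    // the first letter of the resulting class name.
    public static String classNameFor(String tableName, String prefix) {
        String name = tableName.startsWith(prefix)
            ? tableName.substring(prefix.length())
            : tableName;
        StringBuilder sb = new StringBuilder();
        boolean upperNext = true; // first letter is uppercased
        for (char c : name.toCharArray()) {
            if (c == '_') {
                upperNext = true;   // skip the underscore itself
                continue;
            }
            sb.append(upperNext ? Character.toUpperCase(c) : c);
            upperNext = false;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(classNameFor("tbl_user", "tbl_"));     // User
        System.out.println(classNameFor("tbl_post_tag", "tbl_")); // PostTag
    }
}
```

So tbl_user becomes the model class User, and a hypothetical tbl_post_tag would become PostTag.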
The view files represent everything needed to list and view entities, as well as the forms needed to create and update individual entities.

Summary

In this article we created an application by following a series of steps: planning the workflow, linking to the database from your app, and generating code automatically.

Resources for Article:

Further resources on this subject: Database, Active Record, and Model Tricks [Article], Building multipage forms (Intermediate) [Article], Creating a Recent Comments Widget in Agile [Article]

Getting on the IBus

Packt
03 Sep 2013
10 min read
(For more resources related to this topic, see here.) Why NServiceBus? Before diving in, we should take a moment to consider why NServiceBus might be a tool worth adding to your repertoire. If you're eager to get started, feel free to skip this section and come back later. So what is NServiceBus? It's a powerful, extensible framework that will help you to leverage the principles of Service-oriented architecture ( SOA ) to create distributed systems that are more reliable, more extensible, more scalable, and easier to update. That's all well and good, but if you're just picking up this book for the first time, why should you care? What problems does it solve? How will it make your life better? Ask yourself whether any of the following situations describe you: My code updates values in several tables in a transaction, which acquires locks on those tables, so it frequently runs into deadlocks under load. I've optimized all the queries that I can. The transaction keeps the database consistent but the user gets an ugly exception and has to retry what they were doing, which doesn't make them very happy. Our order processing system sometimes fails on the third of three database calls. The transaction rolls back and we log the error, but we're losing money because the end user doesn't know if their order went through or not, and they're not willing to retry for fear of being double charged, so we're losing business to our competitor. We built a system to process images for our clients. It worked fine for a while but now we've become a victim of our own success. We designed it to be multithreaded (which was no small feat!) but we already maxed out the original server it was running on, and at the rate we're adding clients it's only a matter of time until we max out this one too. We need to scale it out to run on multiple servers but have no idea how to do it. 
- We have a solution that integrates with a third-party web service, but when we call the web service we also need to update data in a local database. Sometimes the web service times out, so our database transaction rolls back, but sometimes the web service call does actually complete at the remote end, so now our local data and our third-party provider's data are out of sync.
- We're sending emails as part of a complex business process. It is designed to be retried in the event of a failure, but now customers are complaining that they're receiving duplicate emails, sometimes dozens of them. A failure occurs after the email is sent, the process is retried, and the email is sent over and over until the failure no longer occurs.
- I have a long-running process that gets kicked off from a web application. The website sits on an interstitial page while the backend process runs, similar to what you would see on a travel site when you search for plane tickets. This process is difficult to set up and fairly brittle. Sometimes the backend process fails to start and the web page just spins forever.
- We added latitude and longitude to our customer database, but now it is a nightmare to keep that information up-to-date. When a customer's address changes, there is nothing to make sure the location information is also recalculated. There are dozens of procedures that update the customer address, and not all of them are under our department's control.

If any of these situations has you nodding your head in agreement, I invite you to read on. NServiceBus will help you make multiple transactional updates utilizing the principle of eventual consistency so that you do not encounter deadlocks. It will ensure that valuable customer order data is not lost in the deep dark depths of a multi-megabyte log file. By the end of the book, you'll be able to build systems that can easily scale out, as well as up.
You'll be able to reliably perform non-transactional tasks such as calling web services and sending emails. You will be able to easily start up long-running processes in an application server layer, leaving your web application free to process incoming requests, and you'll be able to unravel your spaghetti codebases into a logical system of commands, events, and handlers that will enable you to more easily add new features and version the existing ones.

You could try to do all of this on your own by rolling your own messaging infrastructure and carefully applying the principles of service-oriented architecture, but that would be really time consuming. NServiceBus is the easiest way to solve the aforementioned problems without having to expend too much effort to get it right, allowing you to put your focus on your business concerns, where it belongs. So if you're ready, let's get started creating an NServiceBus solution.

Getting the code

We will be covering a lot of information very quickly in this article, so if you see something that doesn't immediately make sense, don't panic! Once we have the basic example in place, we will loop back and explain some of the finer points more completely.

There are two main ways to get the NServiceBus code integrated with your project: by downloading the Windows Installer package, and via NuGet. I recommend you use Windows Installer the first time to ensure that your machine is set up properly to run NServiceBus, and then use NuGet to actually include the assemblies in your project. Windows Installer automates quite a bit of setup for you, all of which can be controlled through the advanced installation options:

- NServiceBus binaries, tools, and sample code are installed.
- The NServiceBus Management Service is installed to enable integration with ServiceInsight.
- Microsoft Message Queuing (MSMQ) is installed on your system if it isn't already.
MSMQ provides the durable, transactional messaging that is at the core of NServiceBus.

- The Distributed Transaction Coordinator (DTC) is configured on your system. This will allow you to receive MSMQ messages and coordinate data access within a transactional context.
- RavenDB is installed, which provides the default persistence mechanism for NServiceBus subscriptions, timeouts, and saga data.
- NServiceBus performance counters are added to help you monitor NServiceBus performance.

Download the installer from http://particular.net/downloads and install it on your machine. After the install is complete, everything will be accessible from your Start Menu. Navigate to All Programs | Particular Software | NServiceBus as shown in the following screenshot:

The install package includes several samples that cover all the basics as well as several advanced features. The Video Store sample is a good starting point. Multiple versions of it are available for the different message transports that are supported by NServiceBus. If you don't know which one to use, take a look at VideoStore.Msmq. I encourage you to work through all of the samples, but for now we are going to roll our own solution by pulling in the NServiceBus NuGet packages.

NServiceBus NuGet packages

Once your computer has been prepared for the first time, the most direct way to include NServiceBus within an application is to use the NuGet packages. There are four core NServiceBus NuGet packages:

- NServiceBus.Interfaces: This package contains only interfaces and abstractions, but no actual code or logic. This is the package that we will use for message assemblies.
- NServiceBus: This package contains the core assembly with most of the code that drives NServiceBus except for the hosting capability. This is the package we will reference when we host NServiceBus within our own process, such as in a web application.
- NServiceBus.Host: This package contains the service host executable.
With the host we can run an NServiceBus service endpoint from the command line during development, and then install it as a Windows service for production use.

- NServiceBus.Testing: This package contains a framework for unit testing NServiceBus endpoints and sagas.

The NuGet packages will also attempt to verify that your system is properly prepared through PowerShell cmdlets that ship as part of the package. However, if you are not running Visual Studio as an Administrator, this can be problematic, as the tasks they perform sometimes require elevated privileges. For this reason it's best to run Windows Installer before getting started.

Creating a message assembly

The first step to creating an NServiceBus system is to create a messages assembly. Messages in NServiceBus are simply plain old C# classes. Like the WSDL document of a web service, your message classes form a contract by which services communicate with each other.

For this example, let's pretend we're creating a website like many on the Internet, where users can join and become a member. We will construct our project so that the user is created in a backend service and not in the main code of the website. Follow these steps to create your solution:

- In Visual Studio, create a new class library project. Name the project UserService.Messages and the solution simply Example. This first project will be your messages assembly.
- Delete the Class1.cs file that came with the class project.
- From the NuGet Package Manager Console, run this command to install the NServiceBus.Interfaces package, which will add the reference to NServiceBus.dll:

PM> Install-Package NServiceBus.Interfaces -ProjectName UserService.Messages

- Add a new folder to the project called Commands.
- Add a new class to the Commands folder called CreateNewUserCmd.cs.
- Add using NServiceBus; to the using block of the class file. It is very helpful to do this first so that you can see all of the options available with IntelliSense.
- Mark the class as public and implement ICommand. This is a marker interface, so there is nothing you need to implement.
- Add public properties for EmailAddress and Name.

When you're done, your class should look like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NServiceBus;

namespace UserService.Messages.Commands
{
    public class CreateNewUserCmd : ICommand
    {
        public string EmailAddress { get; set; }
        public string Name { get; set; }
    }
}

Congratulations! You've created a message! This will form the communication contract between the message sender and receiver. Unfortunately, we don't have enough to run yet, so let's keep moving.

Creating a service endpoint

Now we're going to create a service endpoint that will handle our command message.

- Add a new class library project to your solution. Name the project UserService.
- Delete the Class1.cs file that came with the class project.
- From the NuGet Package Manager Console window, run this command to install the NServiceBus.Host package:

PM> Install-Package NServiceBus.Host -ProjectName UserService

- Take a look at what the host package has added to your class library. Don't worry; we'll cover this in more detail later:
  - References to NServiceBus.Host.exe, NServiceBus.Core.dll, and NServiceBus.dll
  - An App.config file
  - A class named EndpointConfig.cs
- In the service project, add a reference to the UserService.Messages project you created before.
- Right-click on the project file and click on Properties, then in the property pages, navigate to the Debug tab and enter NServiceBus.Lite under Command line arguments. This tells NServiceBus not to run the service in production mode while we're just testing. This may seem obvious, but this is part of the NServiceBus promise to be safe by default, meaning you won't be able to mess up when you go to install your service in production.

Packt
03 Sep 2013
5 min read

Introducing the Ember.JS framework

(For more resources related to this topic, see here.)

Introduction to Ember.js

Ember.js is a frontend MVC JavaScript framework that runs in the browser. It is for developers who are looking to build ambitious, large web applications that rival native applications. Ember.js was created from concepts introduced by native application frameworks, such as Cocoa.

Ember.js helps you to create great experiences for the user. It will help you to organize all the direct interactions a user may perform on your website. A common use case for Ember.js is when you believe your JavaScript code will become complex; when the code base becomes complex, problems with maintaining and refactoring it will arise.

MVC stands for model-view-controller. This kind of structure makes it easy to modify or refactor any part of your code, and it also allows you to adhere to the Don't Repeat Yourself (DRY) principle. The model is responsible for notifying associated views and controllers when there has been a change in the state of the application. The controller sends CRUD requests to the model to notify it of a change in state. It can also send requests to the view to change how the view is representing the current state of the model. The view then receives information from the model to create a graphical rendering. If you are still unclear on how the three parts interact with each other, the following is a simple diagram illustrating this:

Ember.js decouples the problematic areas of your frontend, enabling you to focus on one area at a time without worrying about affecting other parts of your application.
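The model-to-view notification flow described above can be sketched in a few lines of framework-free JavaScript. This is a simplified illustration of the general MVC idea, not actual Ember.js code; the names Model, subscribe, and set are ours:

```javascript
// A minimal model that notifies observers when its state changes,
// mirroring the MVC flow described above (illustrative only).
function Model(data) {
  this.data = data;
  this.observers = [];
}
Model.prototype.subscribe = function (observer) {
  this.observers.push(observer);
};
Model.prototype.set = function (key, value) {
  this.data[key] = value;
  // Notify every associated view/controller of the state change
  this.observers.forEach(function (observer) {
    observer(this.data);
  }, this);
};

// A "view" here is just a render function fed by the model
var user = new Model({ name: 'Alice' });
var rendered = '';
user.subscribe(function (data) {
  rendered = 'Hello, ' + data.name;
});

user.set('name', 'Bob');
console.log(rendered); // → "Hello, Bob"
```

In Ember.js, bindings and computed properties automate this subscription wiring for you, which is what makes its templates auto-updating.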
To give you an example of some of these areas of Ember.js, take a look at the following list:

- Navigation: Ember's router takes care of your application's navigation
- Auto-updating templates: Ember view expressions are binding-aware, meaning they will update automatically if the underlying data ever changes
- Data handling: Each object you create will be an Ember object, thus inheriting all Ember.Object methods
- Asynchronous behavior: Bindings and computed properties within Ember help manage asynchronous behavior

Ember.js is more of a framework than a library. Ember.js expects you to build a good portion of your frontend around its methodologies and architecture, creating a solid application architecture once you are finished with it. This is the main difference between Ember and a framework like Angular.js. Angular allows itself to be incorporated into an existing application, whereas an Ember application has to be planned out with its specific architecture in mind. Backbone.js would be another example of a library that can easily be inserted into existing JavaScript projects.

Ember.js is a great framework for handling complex interactions performed by users in your application. You may have been led to believe that Ember.js is a difficult framework to learn, but this is false. The only difficulty for developers lies in understanding the concepts that Ember.js tries to implement.

How to set up Ember.js

The js folder contains a subfolder named libs and the app.js file. libs is for storing any external libraries that you will want to include in your application. app.js is the JavaScript file that contains your Ember application structure. index.html is a basic HTML index file that will display information in the user's browser. We will be using this file as the index page of the sample application that we will be creating. We create a namespace called MovieTracker where we can access any necessary Ember.js components.
Initialize() will instantiate all the controllers currently available within the namespace. After that is done, it injects all the controllers onto a router. We then set ApplicationController as the rendering context of our views. Your application must have an ApplicationController; otherwise your application will not be capable of rendering dynamic templates.

The router in Ember is a subclass of the Ember StateManager. The Ember StateManager tracks the current active state and triggers callbacks when states have changed. This router will help you match the URL to an application state, and it detects the browser URL at application load time. The router is responsible for updating the URL as the application's state changes. When Ember parses the URL to determine the state, it attempts to find the Ember.Route that matches this state. Our router must contain root and index. You can think of root as a general container for routes: it is a set of routes.

An Ember view is responsible for structuring the page through the view's associated template. The view is also responsible for registering and responding to user events. The ApplicationView we are creating is required for any Ember application. The view we created is associated with our ApplicationController as well. The templateName variable is the name we use in our index.html file. The templateName variable can be changed to anything you wish.

Creating an Ember Object

An object or a model is a way to manage data in a structured way. In other words, objects are a way of representing persistent states in your application. In Ember.js, almost every object is derived from the Ember.Object class. Since most objects will be derived from the same base object, they will end up sharing properties with each other. This allows the observation of, and binding to, properties of other objects.
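The idea that objects derived from a common base share behavior can be illustrated with plain JavaScript prototypes. This is only an analogy, not Ember code: Ember.Object's extend/create API builds on the same prototypal principle but adds observers and bindings on top. The extend helper and the sample movie data below are ours:

```javascript
// Plain-JS analogy for deriving objects from a shared base
// (Ember.Object.extend works on a similar prototypal principle).
function extend(base, props) {
  var derived = Object.create(base);
  Object.keys(props).forEach(function (key) {
    derived[key] = props[key];
  });
  return derived;
}

var base = {
  describe: function () {
    return this.title + ' (' + this.year + ')';
  }
};

// Both derived objects share the describe method from the base
var movie = extend(base, { title: 'Metropolis', year: 1927 });
var other = extend(base, { title: 'Some Other Film', year: 1984 });

console.log(movie.describe()); // → "Metropolis (1927)"
console.log(other.describe()); // → "Some Other Film (1984)"
```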
Packt
02 Sep 2013
12 min read

Understanding Backbone

(For more resources related to this topic, see here.)

Backbone.js is a lightweight JavaScript framework that is based on the Model-View-Controller (MVC) pattern and allows developers to create single-page web applications. With Backbone, it is possible to update a web page quickly using the REST approach with a minimal amount of data transferred between a client and a server.

Backbone.js is becoming more popular day by day and is being used on a large scale for web applications and IT startups; some of them are as follows:

- Groupon Now!: The team decided that their first product would be AJAX-heavy but should still be linkable and shareable. Though they were completely new to Backbone, they found that its learning curve was incredibly quick, so they were able to deliver the working product in just two weeks.
- Foursquare: This used the Backbone.js library to create model classes for the entities in foursquare (for example, venues, check-ins, and users). They found that Backbone's model classes provide a simple and lightweight mechanism to capture an object's data and state, complete with the semantics of classical inheritance.
- LinkedIn mobile: This used Backbone.js to create its next-generation HTML5 mobile web app. Backbone made it easy to keep the app modular, organized, and extensible, so it was possible to program the complexities of LinkedIn's user experience. Moreover, they are using the same code base in their mobile applications for iOS and Android platforms.
- WordPress.com: This is a SaaS version of WordPress and uses Backbone.js models, collections, and views in its notification system, and is integrating Backbone.js into the Stats tab and into other features throughout the home page.
- Airbnb: This is a community marketplace for users to list, discover, and book unique spaces around the world. Its development team has used Backbone in many of its latest products.
Recently, they rebuilt their mobile website with Backbone.js and Node.js tied together with a library named Rendr.

You can visit the following link to get acquainted with other usage examples of Backbone.js: http://backbonejs.org/#examples

Backbone.js was started by Jeremy Ashkenas from DocumentCloud in 2010 and is now being used and improved by lots of developers all over the world using Git, the distributed version control system. In this article, we are going to provide some practical examples of how to use Backbone.js, and we will structure a design for a program named Billing Application by following the MVC and Backbone patterns. Reading this article is especially useful if you are new to developing with Backbone.js.

Designing an application with the MVC pattern

MVC is a design pattern that is widely used in user-facing software, such as web applications. It is intended for splitting data and representing it in a way that makes it convenient for user interaction. To understand how it works, consider the following:

- Model: This contains data and provides business logic used to run the application
- View: This presents the model to the user
- Controller: This reacts to user input by updating the model and the view

There could be some differences in the MVC implementation, but in general it conforms to the following scheme:

Worldwide practice shows that the use of the MVC pattern provides various benefits to the developer:

- Following the separation of concerns paradigm, which splits an application into independent parts, it is easier to modify or replace
- It achieves code reusability by rendering a model in different views without the need to implement model functionality in each view
- It requires less training and has a quicker startup time for new developers within an organization

To have a better understanding of the MVC pattern, we are going to design a Billing Application. We will refer to this design throughout the book when we are learning specific topics.
Our Billing Application will allow users to generate invoices, manage them, and send them to clients. According to worldwide practice, the invoice should contain a reference number, date, information about the buyer and seller, bank account details, a list of provided products or services, and an invoice sum. Let's have a look at the following screenshot to understand how an invoice appears:

How to do it...

Let's follow the ensuing steps to design an MVC structure for the Billing Application:

First, let's write down a list of functional requirements for this application. We assume that the end user may want to be able to do the following:

- Generate an invoice
- E-mail the invoice to the buyer
- Print the invoice
- See a list of existing invoices
- Manage invoices (create, read, update, and delete)
- Update an invoice status (draft, issued, paid, and canceled)
- View a yearly income graph and other reports

To simplify the process of creating multiple invoices, the user may want to manage information about buyers and his personal details in a specific part of the application before he/she creates an invoice. So, our application should provide additional functionalities to the end user, such as the following:

- The ability to see a list of buyers and use it when generating an invoice
- The ability to manage buyers (create, read, update, and delete)
- The ability to see a list of bank accounts and use it when generating an invoice
- The ability to manage his/her own bank accounts (create, read, update, and delete)
- The ability to edit personal details and use them when generating an invoice

Of course, we may want to have more functions, but this is enough for demonstrating how to design an application using the MVC pattern.

Next, we architect the application using the MVC pattern. After we have defined the features of our application, we need to understand what is more related to the model (business logic) and what is more related to the view (presentation).
Let's split the functionality into several parts.

Then, we learn how to define models. Models present data and provide data-specific business logic. Models can be related to each other. In our case, they are as follows:

- InvoiceModel
- InvoiceItemModel
- BuyerModel
- SellerModel
- BankAccountModel

Then, we will define collections of models. Our application allows users to operate on a number of models, so they need to be organized into a special iterable object named Collection. We need the following collections:

- InvoiceCollection
- InvoiceItemCollection
- BuyerCollection
- BankAccountCollection

Next, we define views. Views present a model or a collection to the application user. A single model or collection can be rendered to be used by multiple views. The views that we need in our application are as follows:

- EditInvoiceFormView
- InvoicePageView
- InvoiceListView
- PrintInvoicePageView
- EmailInvoiceFormView
- YearlyIncomeGraphView
- EditBuyerFormView
- BuyerPageView
- BuyerListView
- EditBankAccountFormView
- BankAccountPageView
- BankAccountListView
- EditSellerInfoFormView
- ViewSellectInfoPageView
- ConfirmationDialogView

Finally, we define a controller. A controller allows users to interact with an application. In MVC, each view can have a different controller that is used to do the following:

- Map a URL to a specific view
- Fetch models from a server
- Show and hide views
- Handle user input

Defining business logic with models and collections

Now, it is time to design the business logic for the Billing Application using the MVC and OOP approaches. In this recipe, we are going to define an internal structure for our application with model and collection objects. While a model represents a single object, a collection is a set of models that can be iterated, filtered, and sorted. Relations between models and collections in the Billing Application conform to the following scheme:

How to do it...
For each model, we are going to create two tables: one for properties and another for methods.

We define BuyerModel properties:

Name         | Type    | Required | Unique
id           | Integer | Yes      | Yes
name         | Text    | Yes      |
address      | Text    | Yes      |
phoneNumber  | Text    | No       |

Then, we define SellerModel properties:

Name         | Type    | Required | Unique
id           | Integer | Yes      | Yes
name         | Text    | Yes      |
address      | Text    | Yes      |
phoneNumber  | Text    | No       |
taxDetails   | Text    | Yes      |

After this, we define BankAccountModel properties:

Name                 | Type    | Required | Unique
id                   | Integer | Yes      | Yes
beneficiary          | Text    | Yes      |
beneficiaryAccount   | Text    | Yes      |
bank                 | Text    | No       |
SWIFT                | Text    | Yes      |
specialInstructions  | Text    | No       |

We define InvoiceItemModel properties:

Name          | Type    | Required | Unique
id            | Integer | Yes      | Yes
deliveryDate  | Date    | Yes      |
description   | Text    | Yes      |
price         | Decimal | Yes      |
quantity      | Decimal | Yes      |

Next, we define InvoiceItemModel methods. We don't need to store the item amount in the model, because it always depends on the price and the quantity, so it can be calculated:

Name             | Arguments | Return Type
calculateAmount  | -         | Decimal

Now, we define InvoiceModel properties:

Name             | Type       | Required | Unique
id               | Integer    | Yes      | Yes
referenceNumber  | Text       | Yes      |
date             | Date       | Yes      |
bankAccount      | Reference  | Yes      |
items            | Collection | Yes      |
comments         | Text       | No       |
status           | Integer    | Yes      |

We define InvoiceModel methods. The invoice amount can easily be calculated as the sum of the invoice item amounts:

Name             | Arguments | Return Type
calculateAmount  | -         | Decimal

Finally, we define collections. In our case, they are InvoiceCollection, InvoiceItemCollection, BuyerCollection, and BankAccountCollection. They are used to store models of an appropriate type and provide some methods to add/remove models to/from the collections.

How it works...

Models in Backbone.js are implemented by extending Backbone.Model, and collections are made by extending Backbone.Collection. To implement relations between models and collections, we can use special Backbone extensions.
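The calculateAmount logic from the tables can be sketched in framework-free JavaScript: the item amount is price times quantity, and the invoice amount is the sum of its item amounts. The function names mirror calculateAmount from the tables, but this is an illustration with made-up sample data, not actual Backbone.Model code:

```javascript
// Illustration of the calculateAmount logic from the model tables above
// (plain JavaScript, not actual Backbone.Model code).
function calculateItemAmount(item) {
  // The amount is derived from price and quantity, so it is not stored
  return item.price * item.quantity;
}

function calculateInvoiceAmount(invoice) {
  // The invoice amount is the sum of its item amounts
  return invoice.items.reduce(function (sum, item) {
    return sum + calculateItemAmount(item);
  }, 0);
}

var invoice = {
  referenceNumber: 'INV-001', // hypothetical sample data
  items: [
    { description: 'Consulting', price: 100, quantity: 2 },
    { description: 'Hosting', price: 25, quantity: 4 }
  ]
};

console.log(calculateInvoiceAmount(invoice)); // → 300
```

In a real Backbone model, these would be instance methods reading attributes via this.get('price') and this.get('quantity').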
To learn more about object properties, methods, and OOP programming in JavaScript, you can refer to the following resource: https://developer.mozilla.org/en-US/docs/JavaScript/Introduction_to_Object-Oriented_JavaScript

Modeling an application's behavior with views and a router

Unlike traditional MVC frameworks, Backbone does not provide any distinct object that implements controller functionality. Instead, the controller is diffused between Backbone.Router and Backbone.View, and the following is done:

- A router handles URL changes and delegates application flow to a view. Typically, the router fetches a model from the storage asynchronously. When the model is fetched, it triggers a view update.
- A view listens to DOM events and either updates a model or navigates the application through a router.

The following diagram shows a typical workflow in a Backbone application:

How to do it...

Let's follow the ensuing steps to understand how to define basic views and a router in our application:

First, we need to create wireframes for the application. Let's draw a couple of wireframes in this recipe:

- The Edit Invoice page allows users to select a buyer, to select the seller's bank account from the lists, to enter the invoice's date and a reference number, and to build a table of shipped products and services.
- The Preview Invoice page shows how the final invoice will be seen by a buyer. This display should render all the information we have entered in the Edit Invoice form. Buyer and seller information can be looked up in the application storage. The user has the option to either go back to the Edit display or save this invoice.

Then, we will define the view objects. According to the previous wireframes, we need to have two main views: EditInvoiceFormView and PreviewInvoicePageView. These views will operate with InvoiceModel; it refers to other objects, such as BankAccountModel and InvoiceItemCollection.

Now, we will split the views into subviews.
For each item in the Products or Services table, we may want to recalculate the Amount field depending on what the user enters in the Price and Quantity fields. The first way to do this is to re-render the entire view when the user changes a value in the table; however, this is not efficient, as it takes a significant amount of computing power. We don't need to re-render the entire view if we want to update only a small part of it. It is better to split the big view into different, independent pieces (subviews) that are able to render only a specific part of the big view. In our case, we can have the following views:

As we can see, EditInvoiceItemTableView and PreviewInvoiceItemTableView render InvoiceItemCollection with the help of the additional views EditInvoiceItemView and PreviewInvoiceItemView, which render InvoiceItemModel. Such separation allows us to re-render an item inside a collection when it is changed.

Finally, we will define URL paths that will be associated with a corresponding view. In our case, we can have several URLs to show different views, for example:

/invoice/add
/invoice/:id/edit
/invoice/:id/preview

Here, we assume that the Edit Invoice view can be used for either creating a new invoice or editing an existing one. In the router implementation, we can load this view and show it on specific URLs.

How it works...

The Backbone.View object can be extended to create our own view that will render model data. In a view, we can define handlers for user actions, such as data input and keyboard or mouse events.

In the application, we can have a single Backbone.Router object that allows users to navigate through an application by changing the URL in the address bar of the browser. The router object contains a list of available URLs and callbacks. In a callback function, we can trigger the rendering of a specific view associated with a URL.
If we want a user to be able to jump from one view to another, we may want him/her to either click on regular HTML links associated with a view or navigate the application programmatically.
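The URL-to-view mapping described above can be sketched without Backbone. The following is a toy route matcher under our own names (matchRoute, and a hypothetical route table reusing the view names from this recipe); Backbone.Router provides this pattern matching, plus browser history integration, for real:

```javascript
// Toy route matcher illustrating how paths like "/invoice/:id/edit"
// map to view callbacks (Backbone.Router does this for real).
function matchRoute(routes, path) {
  var segments = path.split('/');
  for (var pattern in routes) {
    var parts = pattern.split('/');
    if (parts.length !== segments.length) continue;
    var params = {};
    var matched = parts.every(function (part, i) {
      if (part.charAt(0) === ':') {  // ":id" captures a URL parameter
        params[part.slice(1)] = segments[i];
        return true;
      }
      return part === segments[i];
    });
    if (matched) return { view: routes[pattern], params: params };
  }
  return null; // no route matched
}

var routes = {
  '/invoice/add': 'EditInvoiceFormView',
  '/invoice/:id/edit': 'EditInvoiceFormView',
  '/invoice/:id/preview': 'PreviewInvoicePageView'
};

console.log(matchRoute(routes, '/invoice/42/preview'));
// → { view: 'PreviewInvoicePageView', params: { id: '42' } }
```

Note how both '/invoice/add' and '/invoice/:id/edit' can point at the same Edit Invoice view, matching the assumption above that one view serves both creation and editing.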

Packt
02 Sep 2013
24 min read

Communicating with Servers

(For more resources related to this topic, see here.)

Creating an HTTP GET request to fetch JSON

One of the basic means of retrieving information from the server is HTTP GET. In a RESTful design, this type of method should be used only for reading data, so GET calls should never change server state. Now, this may not be true for every possible case; for example, if we have a view counter on a certain resource, is that a real change? Well, if we follow the definition literally then yes, it is a change, but it is far from significant enough to be taken into account.

Opening a web page in a browser does a GET request, but we often want a scripted way of retrieving data. This is usually done to achieve Asynchronous JavaScript and XML (AJAX), allowing reloading of data without doing a complete page reload. Despite the name, the use of XML is not required, and these days JSON is the format of choice. A combination of JavaScript and the XMLHttpRequest object provides a method for exchanging data asynchronously, and in this recipe we are going to see how to read JSON from the server using plain JavaScript and jQuery.

Why use plain JavaScript rather than jQuery directly? We strongly believe that jQuery simplifies the DOM API, but it is not always available to us, and additionally, we need to know the underlying code behind asynchronous data transfer in order to fully grasp how applications work.

Getting ready

The server will be implemented using Node.js. In this example, for simplicity, we will use restify (http://mcavage.github.io/node-restify/), a Node.js module for the creation of correct REST web services.

How to do it...

Let's perform the following steps. In order to include restify in our project, in the root directory of our server-side scripts, use the following command:

npm install restify

After adding the dependency, we can proceed to creating the server code.
We create a server.js file that will be run by Node.js, and at the beginning of it we add restify:

var restify = require('restify');

With this restify object, we can now create a server object and add handlers for GET methods:

var server = restify.createServer();
server.get('hi', respond);
server.get('hi/:index', respond);

The get handlers do a callback to a function called respond, so we can now define this function, which will return the JSON data. We will create a sample JavaScript array called hello, and if the request carries an index parameter, it was routed from the 'hi/:index' handler:

function respond(req, res, next) {
  console.log("Got HTTP " + req.method + " on " + req.url + " responding");
  addHeaders(req, res);
  var hello = [{
    'id': '0',
    'hello': 'world'
  }, {
    'id': '1',
    'say': 'what'
  }];
  if (req.params.index) {
    var found = hello[req.params.index];
    if (found) {
      res.send(found);
    } else {
      res.status(404);
      res.send();
    }
  } else {
    res.send(hello);
  }
  return next();
}

The addHeaders function that we call at the beginning of the handler adds headers to enable access to the resources served from a different domain or a different server port:

function addHeaders(req, res) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "X-Requested-With");
}

The definition of these headers and what they mean will be discussed later on in the article. For now, let's just say they enable access to the resources from a browser using AJAX.
At the end, we add a block of code that will set the server to listen on port 8080: server.listen(8080, function() { console.log('%s listening at %s', server.name, server.url); }); To start the server from the command line, we type the following command: node server.js If everything went as it should, we will get a message in the log: restify listening at http://0.0.0.0:8080 We can then test it by accessing the URL we defined directly from the browser: http://localhost:8080/hi Now we can proceed with the client-side HTML and JavaScript. We will implement two ways of reading data from the server, one using the standard XMLHttpRequest and the other using jQuery.get(). Note that not all features are fully compatible with all browsers. We create a simple page with two div elements, one with the ID data and another with the ID say. These elements will be used as placeholders to load data from the server into them: Hello <div id="data">loading</div> <hr/> Say <div id="say">No</div> <script src = "http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script> <script src = "example.js"></script> <script src = "exampleJQuery.js"></script> In the example.js file, we define a function called getData that will create an AJAX call to a given url and do a callback if the request completed successfully: function getData(url, onSuccess) { var request = new XMLHttpRequest(); request.open("GET", url); request.onload = function() { if (request.status === 200) { console.log(request); onSuccess(request.response); } }; request.send(null); } After that, we can call the function directly, but in order to demonstrate that the call happens after the page is loaded, we will call it after a timeout of three seconds: setTimeout( function() { getData( 'http://localhost:8080/hi', function(response){ console.log('finished getting data'); var div = document.getElementById('data'); var data = JSON.parse(response); div.innerHTML = data[0].hello; }) }, 3000); The jQuery version is a lot
cleaner, as the complexity that comes with the standard DOM API and the event handling is reduced substantially: (function(){ $.getJSON('http://localhost:8080/hi/1', function(data) { $('#say').text(data.say); }); }()) How it works... At the beginning, we installed the dependency using npm install restify; this is sufficient to have it working, but in order to define dependencies in a more expressive way, npm has a way of specifying them. We can add a file called package.json, a packaging format that is mainly used for publishing details of Node.js applications. In our case, we can define package.json with the following code: { "name" : "ch8-tip1-http-get-example", "description" : "example on http get", "dependencies" : ["restify"], "author" : "Mite Mitreski", "main" : "html5dasc", "version" : "0.0.1" } If we have a file like this, npm will automatically handle the installation of dependencies after calling npm install from the command line in the directory where the package.json file is placed. Restify has simple routing where functions are mapped to appropriate methods for a given URL. The HTTP GET request for '/hi' is mapped with server.get('hi', theCallback), where theCallback is executed, and a response should be returned. When we have a parameterized resource, for example in 'hi/:index', the value associated with :index will be available under req.params. For example, in a request to '/hi/john', to access the john value we simply use req.params.index. Additionally, the value for index will automatically get URL-decoded before it is passed to our handler. One other notable part of the request handlers in restify is the next() function that we called at the end. In our case, it mostly does not make much sense, but in general, we are responsible for calling it if we want the next handler function in the chain to be called. For exceptional circumstances, there is also an option to call next() with an error object, triggering custom responses.
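The parameterized routing just described can be pictured as simple pattern matching. The following sketch is purely illustrative (matchRoute is a made-up helper, not restify's actual implementation), but it shows how a pattern like 'hi/:index' yields a params object with URL-decoded values:

```javascript
// Illustrative sketch of how 'hi/:index' produces req.params.
// matchRoute is a hypothetical helper, not part of restify.
function matchRoute(pattern, path) {
  var patternParts = pattern.split('/');
  var pathParts = path.split('/');
  if (patternParts.length !== pathParts.length) {
    return null; // no match
  }
  var params = {};
  for (var i = 0; i < patternParts.length; i++) {
    if (patternParts[i].charAt(0) === ':') {
      // Named segment: capture it, URL-decoded as restify does.
      params[patternParts[i].slice(1)] = decodeURIComponent(pathParts[i]);
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // literal segment mismatch
    }
  }
  return params;
}
```

So a request to 'hi/john' against the pattern 'hi/:index' gives params.index === 'john', which is exactly what req.params.index exposes in the handler.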
When it comes to the client-side code, XMLHttpRequest is the mechanism behind the async calls, and on calling request.open("GET", url, true) with the last parameter value as true, we get a truly asynchronous execution. Now you might be wondering why this parameter is here; isn't the call already done after loading the page? That is true, the call is done after loading the page, but if, for example, the parameter was set to false, the execution of the request would be a blocking method, or to put it in layman's terms, the script would pause until we get a response. This might look like a small detail, but it can have a huge impact on performance. The jQuery part is pretty straightforward; there is a function that accepts a URL value of the resource, the data handler function, and a success function that gets called after successfully getting a response: jQuery.getJSON( url [, data ] [, success(data, textStatus, jqXHR) ] ) When we open index.htm, the server should log something like the following: Got HTTP GET on /hi/1 responding Got HTTP GET on /hi responding Here one is from the jQuery request and the other from the plain JavaScript one. There's more... XMLHttpRequest Level 2 is one of the newer improvements being added to browsers; although not part of HTML5, it is still a significant change. There are several features with the Level 2 changes, mostly to enable working with files and data streams, but there is one simplification we already used. Earlier we would have to use onreadystatechange and go through all of the states, and if the readyState was 4, which is equal to DONE, we could read the data: var xhr = new XMLHttpRequest(); xhr.open('GET', 'someurl', true); xhr.onreadystatechange = function(e) { if (this.readyState == 4 && this.status == 200) { // response is loaded } } In a Level 2 request, however, we can use request.onload = function() {} directly without checking states.
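The numeric readyState values checked in the Level 1 pattern above have names defined by the XMLHttpRequest specification. A small sketch mapping numbers to names makes the magic constant 4 readable:

```javascript
// The readyState values defined by the XMLHttpRequest specification,
// indexed by their numeric value.
var READY_STATES = ['UNSENT', 'OPENED', 'HEADERS_RECEIVED', 'LOADING', 'DONE'];

function readyStateName(state) {
  return READY_STATES[state] || 'UNKNOWN';
}

// The Level 1 pattern checks for DONE explicitly:
function isDone(state) {
  return readyStateName(state) === 'DONE'; // state === 4
}
```

With onload in Level 2, the browser only invokes our handler once the request reaches DONE, so the check disappears from our code.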
The possible states are 0 (UNSENT), 1 (OPENED), 2 (HEADERS_RECEIVED), 3 (LOADING), and 4 (DONE). One other thing to note is that XMLHttpRequest Level 2 is supported in all major browsers and IE 10; the older XMLHttpRequest has a different way of instantiation on older versions of IE (older than IE 7), where we can access it through an ActiveX object via new ActiveXObject("Msxml2.XMLHTTP.6.0");. Creating a request with custom headers The HTTP headers are a part of the request object being sent to the server. Many of them give information about the client's user agent setup and configuration, as that is sometimes the basis of decisions about the resources being fetched from the server. Several of them, such as Etag, Expires, and If-Modified-Since, are closely related to caching, while others such as DNT, which stands for "Do Not Track" (http://www.w3.org/2011/tracking-protection/drafts/tracking-dnt.html), can be quite controversial. In this recipe, we will take a look at a way of using the custom X-Myapp header in our server and client-side code. Getting ready The server will be implemented using Node.js. In this example, again for simplicity, we will use restify (http://mcavage.github.io/node-restify/). Also, monitoring the console in your browser and server is crucial in order to understand what happens in the background. How to do it... We can start by defining the dependencies for the server side in the package.json file: { "name" : "ch8-tip2-custom-headers", "dependencies" : ["restify"], "main" : "html5dasc", "version" : "0.0.1" } After that, we can call npm install from the command line, which will automatically retrieve restify and place it in a node_modules folder created in the root directory of the project.
After this part, we can proceed to creating the server-side code in a server.js file, where we set the server to listen on port 8080 and add a route handler for 'hi' and for every other path when the request method is HTTP OPTIONS: var restify = require('restify'); var server = restify.createServer(); server.get('hi', addHeaders, respond); server.opts(/.*/, addHeaders, function (req, res, next) { console.log("Got HTTP " + req.method + " on " + req.url + " with headers"); res.send(200); return next(); }); server.listen(8080, function() { console.log('%s listening at %s', server.name, server.url); }); In most cases, the documentation should be enough when we write applications built on restify, but sometimes it is a good idea to take a look at the source code as well. It can be found at https://github.com/mcavage/node-restify/. One thing to notice is that we can have multiple chained handlers; in this case, we have addHeaders before the others. In order for every handler to be propagated, next() should be called: function addHeaders(req, res, next) { res.setHeader("Access-Control-Allow-Origin", "*"); res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With, X-Myapp'); res.setHeader('Access-Control-Allow-Methods', 'GET, OPTIONS'); res.setHeader('Access-Control-Expose-Headers', 'X-Myapp, X-Requested-With'); return next(); }; The addHeaders function adds access control options in order to enable cross-origin resource sharing. Cross-origin resource sharing (CORS) defines a way in which the browser and server can interact to determine whether the request should be allowed. It allows more freedom than purely same-origin requests, but is more secure than simply allowing all cross-origin requests.
After this, we can create the handler function that will return a JSON response with the headers the server received and a hello world kind of object: function respond(req, res, next) { console.log("Got HTTP " + req.method + " on " + req.url + " with headers"); console.log("Request: ", req.headers); var hello = [{ 'id':'0', 'hello': 'world', 'headers': req.headers }]; res.send(hello); console.log('Response: ', res.headers()); return next(); } We additionally log the request and response headers to the server console log in order to see what happens in the background. For the client-side code, we need a plain "vanilla" JavaScript approach and a jQuery method, so in order to do that, we include example.js and exampleJquery.js as well as a few div elements that we will use for displaying data retrieved from the server: Hi <div id="data">loading</div> <hr/> Headers list from the request: <div id="headers"></div> <hr/> Data from jQuery: <div id="dataRecieved">loading</div> <script src = "http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script> <script src = "example.js"></script> <script src = "exampleJQuery.js"></script> A simple way to add the headers is to call setRequestHeader on an XMLHttpRequest object after the call to open(): function getData(url, onSuccess) { var request = new XMLHttpRequest(); request.open("GET", url, true); request.setRequestHeader("X-Myapp","super"); request.setRequestHeader("X-Myapp","awesome"); request.onload = function() { if (request.status === 200) { onSuccess(request.response); } }; request.send(null); } The XMLHttpRequest object automatically sets headers such as "Content-Length", "Referer", and "User-Agent", and does not allow you to change them using JavaScript. A more complete list of headers and the reasoning behind this can be found in the W3C documentation at http://www.w3.org/TR/XMLHttpRequest/#the-setrequestheader%28%29-method.
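The restriction just mentioned can be sketched as a lookup against the list of protected header names. Note that the list below is an illustrative subset only; the authoritative, complete list is in the W3C/WHATWG specification linked above:

```javascript
// Partial sketch: a few of the header names that the XMLHttpRequest
// specification forbids setRequestHeader() from overriding.
// This is an illustrative subset, not the full list from the spec.
var FORBIDDEN_HEADERS = [
  'accept-charset', 'accept-encoding', 'connection', 'content-length',
  'cookie', 'host', 'referer', 'user-agent'
];

function isForbiddenHeader(name) {
  // Header names are case-insensitive, so compare in lowercase.
  return FORBIDDEN_HEADERS.indexOf(name.toLowerCase()) !== -1;
}
```

A custom header such as X-Myapp is not on the list, which is why our getData function is allowed to set it.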
To print out the results, we add a function that will add each of the header keys and values to an unordered list: getData( 'http://localhost:8080/hi', function(response){ console.log('finished getting data'); var data = JSON.parse(response); document.getElementById('data').innerHTML = data[0].hello; var headers = data[0].headers, headersList = "<ul>"; for(var key in headers){ headersList += '<li><b>' + key + '</b>: ' + headers[key] +'</li>'; }; headersList += "</ul>"; document.getElementById('headers').innerHTML = headersList; }); When this gets executed, a list of all the request headers should be displayed on the page, and our custom x-myapp should be shown: host: localhost:8080 connection: keep-alive origin: http://localhost:8000 x-myapp: super, awesome user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.27 (KHTML, like Gecko) Chrome/26.0.1386.0 Safari/537.27 The jQuery approach is far simpler; we can use the beforeSend hook to call a function that will set the 'x-myapp' header. When we receive the response, we write it to the element with the ID dataRecieved: $.ajax({ url: 'http://localhost:8080/hi', beforeSend: function (xhr) { xhr.setRequestHeader('x-myapp', 'this was easy'); }, success: function (data) { $('#dataRecieved').text(data[0].headers['x-myapp']); } }); The output from the jQuery example will be the data contained in the x-myapp header: Data from jQuery: this was easy How it works... You may have noticed that on the server side we added a route with a handler for the HTTP OPTIONS method, but we never explicitly made a call there. If we take a look at the server log, there should be something like the following output: Got HTTP OPTIONS on /hi with headers Got HTTP GET on /hi with headers This happens because the browser first issues a preflight request, which in a way is the browser's question of whether or not there is permission to make the "real" request. Once the permission has been received, the original GET request happens.
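The decision of whether a preflight is needed can be sketched as a simple rule over the method and the custom headers. This is a deliberately simplified model (real CORS rules also consider content types and other cases), with made-up function names:

```javascript
// Simplified sketch of when a browser sends a preflight OPTIONS request.
// Real CORS rules have more cases (e.g. certain Content-Type values);
// this only captures methods and custom request headers.
var SIMPLE_METHODS = ['GET', 'HEAD', 'POST'];
var SIMPLE_HEADERS = ['accept', 'accept-language', 'content-language'];

function needsPreflight(method, headerNames) {
  if (SIMPLE_METHODS.indexOf(method) === -1) {
    return true; // e.g. DELETE always triggers a preflight
  }
  // Any header outside the safelist triggers a preflight.
  return headerNames.some(function (h) {
    return SIMPLE_HEADERS.indexOf(h.toLowerCase()) === -1;
  });
}
```

In our recipe, the GET carries the custom x-myapp header, which is why the OPTIONS request shows up in the server log before the GET.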
If the OPTIONS response is cached, the browser will not issue any extra preflight calls for subsequent requests. The setRequestHeader function of XMLHttpRequest actually appends each value to a comma-separated list of values. As we called the function two times, the value for the header is as follows: 'x-myapp': 'super, awesome' There's more... For most use cases, we do not need custom headers to be part of our logic, but there are plenty of APIs that make good use of them. For example, many server-side technologies add the X-Powered-By header that contains some meta information, such as JBoss 6 or PHP/5.3.0. Another example is Google Cloud Storage, where among other headers there are x-goog-meta-prefixed headers such as x-goog-meta-project-name and x-goog-meta-project-manager. Versioning your API We do not always have the best solution while doing the first implementation. The API can be extended up to a certain point, but afterwards needs to undergo some structural changes. We might already have users that depend on the current version, so we need a way to serve different representation versions of the same resource. Once a module has users, the API cannot be changed at our own will. One way to resolve this issue is to use so-called URL versioning, where we simply add a prefix. For example, if the old URL was http://example.com/rest/employees, the new one could be http://example.com/rest/v1/employees, or under a subdomain it could be http://v1.example.com/rest/employee. This approach only works if you have direct control over all the servers and clients. Otherwise, you need to have a way of handling fallback to older versions. In this recipe, we are going to implement so-called "semantic versioning", http://semver.org/, using HTTP headers to specify accepted versions. Getting ready The server will be implemented using Node.js.
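The append behaviour of setRequestHeader can be sketched with a tiny header bag. The names here (makeHeaderBag, set, get) are illustrative, not any real browser API:

```javascript
// Sketch of setRequestHeader's append behaviour: repeated calls for the
// same header merge the values into a comma-separated list.
// makeHeaderBag is a hypothetical helper for illustration only.
function makeHeaderBag() {
  var headers = {};
  return {
    set: function (name, value) {
      var key = name.toLowerCase(); // header names are case-insensitive
      headers[key] = key in headers ? headers[key] + ', ' + value : value;
    },
    get: function (name) {
      return headers[name.toLowerCase()];
    }
  };
}
```

Calling set('X-Myapp', 'super') and then set('X-Myapp', 'awesome') yields 'super, awesome', which is exactly what the server printed for x-myapp.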
In this example, we will use restify (http://mcavage.github.io/node-restify/) for the server-side logic to monitor the requests to understand what is sent. How to do it... Let's perform the following steps. We need to define the dependencies first, and after installing restify, we can proceed to the creation of the server code. The main difference with the previous examples is the definition of the "Accept-version" header. restify has built-in handling for this header using versioned routes . After creating the server object, we can set which methods will get called for what version: server.get({ path: "hi", version: '2.1.1'}, addHeaders, helloV2, logReqRes); server.get({ path: "hi", version: '1.1.1'}, addHeaders, helloV1, logReqRes); We also need the handler for the HTTP OPTIONS, as we are using cross-origin resource sharing and the browser needs to do the additional request in order to get permissions: server.opts(/.*/, addHeaders, logReqRes, function (req, res, next) { res.send(200); return next(); }); The handlers for Version 1 and Version 2 will return different objects in order for us to easily notice the difference between the API calls. In the general case, the resource should be the same, but can have different structural changes. 
For Version 1, we can have the following: function helloV1(req, res, next) { var hello = [{ 'id':'0', 'hello': 'grumpy old data', 'headers': req.headers }]; res.send(hello); return next(); } As for Version 2, we have the following: function helloV2(req, res, next) { var hello = [{ 'id':'0', 'awesome-new-feature':{ 'hello': 'awesomeness' }, 'headers': req.headers }]; res.send(hello); return next(); } One other thing we must do is add the CORS headers in order to enable the accept-version header, so in the route we included the addHeaders function, which should look something like the following: function addHeaders(req, res, next) { res.setHeader("Access-Control-Allow-Origin", "*"); res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With, accept-version'); res.setHeader('Access-Control-Allow-Methods', 'GET, OPTIONS'); res.setHeader('Access-Control-Expose-Headers', 'X-Requested-With, accept-version'); return next(); }; Note that you should not forget to call next() in order to call the next function in the route chain.
For simplicity, we will only implement the client side in jQuery, so we create a simple HTML document, where we include the necessary JavaScript dependencies: Old api: <div id="data">loading</div> <hr/> New one: <div id="dataNew"> </div> <hr/> <script src = "http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script> <script src = "exampleJQuery.js"></script> In the exampleJQuery.js file, we do two AJAX calls to our REST API, one set to use Version 1 and the other to use Version 2: $.ajax({ url: 'http://localhost:8080/hi', type: 'GET', dataType: 'json', success: function (data) { $('#data').text(data[0].hello); }, beforeSend: function (xhr) { xhr.setRequestHeader('accept-version', '~1'); } }); $.ajax({ url: 'http://localhost:8080/hi', type: 'GET', dataType: 'json', success: function (data) { $('#dataNew').text(data[0]['awesome-new-feature'].hello); }, beforeSend: function (xhr) { xhr.setRequestHeader('accept-version', '~2'); } }); Notice that the accept-version header contains the values ~1 and ~2. These designate that all the semantic versions such as 1.1.0, 1.1.1, and 1.2.1 will get matched by ~1, and similarly for ~2. At the end, we should get an output like the following text: Old api:grumpy old data New one:awesomeness How it works... Versioned routes are a built-in feature of restify that work through the use of the accept-version header. In our example, we used versions ~1 and ~2, but what happens if we don't specify a version? restify will make the choice for us, as the request will be treated in the same manner as if the client had sent a * version. The first defined matching route in our code will be used.
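The '~N' matching described above can be sketched in a few lines. This is a deliberately minimal model, not restify's actual semver implementation (real semver ranges are richer, with minor-version tildes, comparators, and so on):

```javascript
// Minimal sketch of matching a '~N' accept-version against a concrete
// semantic version: '~1' accepts any 1.x.y release. Real semver ranges
// (as used by restify) are considerably richer than this.
function matchesTildeMajor(range, version) {
  if (range.charAt(0) !== '~') {
    return range === version; // exact match only
  }
  var major = range.slice(1);
  return version.split('.')[0] === major;
}
```

Under this model '~1' matches 1.1.0, 1.1.1, and 1.2.1 but not 2.0.0, mirroring the behaviour our two AJAX calls rely on.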
There is also an option to set up the routes to match multiple versions by adding a list of versions for a certain handler: server.get({path: 'hi', version: ['1.1.0', '1.1.1', '1.2.1']}, sendOld); The reason why this type of versioning is very suitable for constantly growing applications is that as the API changes, the client can stick with their version of the API without any additional effort or changes in client-side development, meaning that we don't have to update the client application. On the other hand, if the client is sure that their application will work on newer API versions, they can simply change the request headers. There's more... Versioning can also be implemented by using custom content types prefixed with vnd, for example, application/vnd.mycompany.user-v1. An example of this is Google Earth's content type for KML, where it is defined as application/vnd.google-earth.kml+xml. Notice that the content type can be in two parts; we could have application/vnd.mycompany-v1+json, where the second part is the format of the response. Fetching JSON data with JSONP JSONP, or JSON with padding, is a mechanism for making cross-domain requests by taking advantage of the <script> tag. AJAX transport is done by simply setting the src attribute on a script element or adding the element itself if not present. The browser will do an HTTP request to download the URL specified, and that is not subject to the same-origin policy, meaning that we can use it to get data from servers that are not under our control. In this recipe, we will create a simple JSONP request, and a simple server to back that up. Getting ready We will make a simplified implementation of the server we used in previous examples, so we need Node.js and restify (http://mcavage.github.io/node-restify/) installed, either via a definition in package.json or a simple install. How to do it...
First, we will create a simple route handler that will return a JSON object: function respond(req, res, next) { console.log("Got HTTP " + req.method + " on " + req.url + " responding"); var hello = [{ 'id':'0', 'what': 'hi there stranger' }]; res.send(hello); return next(); } We could roll our own version that wraps the response into a JavaScript function with a given name, but in order to enable JSONP when using restify, we can simply enable the bundled plugin. This is done by specifying which plugin to use: var server = restify.createServer(); server.use(restify.jsonp()); server.get('hi', respond); After this, we just set the server to listen on port 8080: server.listen(8080, function() { console.log('%s listening at %s', server.name, server.url); }); The built-in plugin checks the request string for parameters called callback or jsonp, and if one is found, the result will be JSONP with the function name set to the value passed in that parameter. For example, in our case, if we open the browser at http://localhost:8080/hi, we get the following: [{"id":"0","what":"hi there stranger"}] If we access the same URL with the callback or jsonp parameter set, such as http://localhost:8080/hi?callback=great, we should receive the same data wrapped with that function name: great([{"id":"0","what":"hi there stranger"}]); This is where the P in JSONP, which stands for padding, comes into the picture.
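Conceptually, the padding step the plugin performs is a one-liner. The sketch below shows the idea only (jsonpWrap is a made-up name; restify's plugin also handles escaping and content-type concerns that are omitted here):

```javascript
// Conceptual sketch of what a JSONP plugin does when a callback
// parameter is present: wrap ("pad") the serialized JSON in a call
// to the named function. jsonpWrap is illustrative, not restify's API.
function jsonpWrap(data, callbackName) {
  var json = JSON.stringify(data);
  return callbackName ? callbackName + '(' + json + ');' : json;
}
```

With callbackName set to 'great', the body becomes a script the browser can execute, which is exactly why including it via a script tag invokes our function with the data.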
So, what we need to do next is create an HTML file where we will show the data from the server and include two scripts, one for the pure JavaScript approach and another for the jQuery way: <b>Hello far away server: </b> <div id="data">loading</div> <hr/> <div id="oneMoreTime">...</div> <script src = "http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script> <script src = "example.js"></script> <script src = "exampleJQuery.js"></script> We can proceed with the creation of example.js, where we create two functions; one will create a script element and set the value of src to http://localhost:8080/hi?callback=cool.run, and the other will serve as the callback upon receiving the data: var cool = (function(){ var module = {}; module.run = function(data){ document.getElementById('data').innerHTML = data[0].what; } module.addElement = function (){ var script = document.createElement('script'); script.src = 'http://localhost:8080/hi?callback=cool.run' document.getElementById('data').appendChild(script); return true; } return module; }()); Afterwards, we only need to call the function that adds the element: cool.addElement(); This should read the data from the server and show a result similar to the following: Hello far away server: hi there stranger We can call addElement directly on the cool object because the surrounding function is self-executing, so the module is already constructed. The jQuery example is a lot simpler; we can set the dataType to JSONP and everything else is the same as any other AJAX call, at least from the API point of view: $.ajax({ type : "GET", dataType : "jsonp", url : 'http://localhost:8080/hi', success: function(obj){ $('#oneMoreTime').text(obj[0].what); } }); We can now use the standard success callback to handle the data received from the server, and we don't have to specify the callback parameter in the request. jQuery will automatically append a callback parameter to the URL and delegate the call to the success callback. How it works...
The first large leap we are making here is trusting the source of the data, as the result from the server is evaluated as script once it is downloaded. There have been some efforts to define a safer JSONP at http://json-p.org/, but they are far from widespread. The download itself is an HTTP GET request, adding another major limitation to usability. Hypermedia as the Engine of Application State (HATEOAS), among other things, defines the use of HTTP methods for the create, update, and delete operations, making JSONP very unsuitable for those use cases. Another interesting point is how jQuery delegates the call to the success callback. In order to achieve this, a unique function name is created and sent in the callback parameter, for example: /hi?callback=jQuery182031846177391707897_1359599143721&_=1359599143727 This function later does a callback to the appropriate handler of jQuery.ajax. There's more... With jQuery, we can also use a custom function name if the server parameter that should handle jsonp is not called callback. This is done using the following config: jsonp: false, jsonpCallback: "myCallback" Since JSONP does not use XMLHttpRequest, we should not expect the XMLHttpRequest-related callbacks and properties of a regular AJAX call to be available or filled in with such a call. It is a very common mistake to expect just that. More on this can be found in the jQuery documentation at http://api.jquery.com/category/ajax/.
So, what is Markdown?

Packt
02 Sep 2013
3 min read
(For more resources related to this topic, see here.) Markdown is a lightweight markup language that simplifies the workflow of web writers. It was created in 2004 by John Gruber with contributions and feedback from Aaron Swartz. Markdown was described by John Gruber as: "A text-to-HTML conversion tool for web writers. Markdown allows you to write using an easy-to-read, easy-to-write plain text format, then convert it to structurally valid XHTML (or HTML)." Markdown is two different things: A simple syntax to create documents in plain text A software tool written in Perl that converts the plain text formatting to HTML Markdown's formatting syntax was designed with simplicity and readability as a design goal. We add rich formatting to plain text without considering that we are writing using a markup language. The main features of Markdown Markdown is: Easy to use: Markdown has an extremely simple syntax that you can learn quickly Fast: Writing is much faster than with HTML, we can dramatically reduce the time we spend crafting HTML tags Clean: We can clearly read and write documents that are always translated into HTML without mistakes or errors Flexible: It is suitable for many things such as writing on the Internet, e-mails, creating presentations Portable: Documents are just plain text; we can edit Markdown with any basic text editor in any operating system Made for writers: Writers can focus on distraction-free writing Here, we can see a quick comparison of the same document between HTML and Markdown. This is the final result that we achieve in both cases: The following code is written in HTML: <h1>Markdown</h1><p>This a <strong>simple</strong> example of Markdown.</p><h2>Features:</h2><ul><li>Simple</li><li>Fast</li><li>Portable</li></ul><p>Check the <a href="http://daringfireball.net/projects/markdown/">official website</a>.</p> The following code is an equivalent document written in Markdown: # Markdown This a **simple** example of Markdown. 
## Features: - Simple - Fast - Portable Check the [official website]. [official website]:http://daringfireball.net/projects/markdown/ Summary In this article, we learned the basics of Markdown and got to know its features. We also saw how convenient Markdown is, thus proving the fact that it's made for writers. Resources for Article: Further resources on this subject: Generating Reports in Notebooks in RStudio [Article] Database, Active Record, and Model Tricks [Article] Formatting and Enhancing Your Moodle Materials: Part 1 [Article]
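The text-to-HTML conversion shown in the comparison above can be pictured with a toy converter. This sketch handles only single-line headings, bold text, and inline links; the real Markdown tool (written in Perl) handles far more, including the reference-style links used in the example:

```javascript
// Toy sketch of Markdown-to-HTML conversion for a single line,
// covering only headings, **bold**, and inline [text](url) links.
// Not a real Markdown implementation.
function miniMarkdown(line) {
  var html = line
    .replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>')
    .replace(/\[(.+?)\]\((.+?)\)/g, '<a href="$2">$1</a>');
  var heading = html.match(/^(#{1,6})\s+(.*)$/);
  if (heading) {
    var level = heading[1].length; // number of leading # characters
    return '<h' + level + '>' + heading[2] + '</h' + level + '>';
  }
  return '<p>' + html + '</p>';
}
```

Even this tiny sketch shows why the format is pleasant: the plain-text source stays readable while the HTML tags are generated mechanically.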
Handling sessions and users

Packt
30 Aug 2013
4 min read
(For more resources related to this topic, see here.) Getting ready We will work from the app.py file in the sched directory and the models.py file. How to do it... Flask provides a session object, which behaves like a Python dictionary, and persists automatically across requests. You can, in your Flask application code: from flask import session # ... in a request ... session['spam'] = 'eggs' # ... in another request ... spam = session.get('spam') # 'eggs' Flask-Login provides a simple means to track a user in Flask's session. Update requirements.txt: Flask Flask-Login Flask-Script Flask-SQLAlchemy WTForms Then: $ pip install -r requirements.txt We can then load Flask-Login into sched's request handling, in app.py: from flask.ext.login import LoginManager, current_user from flask.ext.login import login_user, logout_user from sched.models import User # Use Flask-Login to track current user in Flask's session. login_manager = LoginManager() login_manager.setup_app(app) login_manager.login_view = 'login' @login_manager.user_loader def load_user(user_id): """Flask-Login hook to load a User instance from ID.""" return db.session.query(User).get(user_id) Flask-Login requires four methods on the User object, inside class User in models.py: def get_id(self): return str(self.id) def is_active(self): return True def is_anonymous(self): return False def is_authenticated(self): return True Flask-Login provides a UserMixin (flask.ext.login.UserMixin) if you prefer to use its default implementation. We then provide routes to log the user in when authenticated and log out.
In app.py: @app.route('/login/', methods=['GET', 'POST']) def login(): if current_user.is_authenticated(): return redirect(url_for('appointment_list')) form = LoginForm(request.form) error = None if request.method == 'POST' and form.validate(): email = form.username.data.lower().strip() password = form.password.data.lower().strip() user, authenticated = User.authenticate(db.session.query, email, password) if authenticated: login_user(user) return redirect(url_for('appointment_list')) else: error = 'Incorrect username or password.' return render_template('user/login.html', form=form, error=error) @app.route('/logout/') def logout(): logout_user() return redirect(url_for('login')) We then decorate every view function that requires a valid user, in app.py: from flask.ext.login import login_required @app.route('/appointments/') @login_required def appointment_list(): # ... How it works... On login_user, Flask-Login gets the user object's ID from User.get_id and stores it in Flask's session. Flask-Login then sets a before_request handler to load the user instance into the current_user object, using the load_user hook we provide. The logout_user function then removes the relevant bits from the session. If no user is logged in, then current_user will provide an anonymous user object, which results in current_user.is_anonymous() returning True and current_user.is_authenticated() returning False, which allows application and template code to base logic on whether the user is valid. (Flask-Login puts current_user into all template contexts.) You can use User.is_active to make user accounts invalid without actually deleting them, by returning False as appropriate. View functions decorated with login_required will redirect the user to the login view if the current user is not authenticated, without calling the decorated function. There's more... Flask's session supports display of messages and protection against request forgery.
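Conceptually, login_required just checks the current user before letting the view run. The following is a minimal plain-Python sketch of that idea, not Flask-Login's actual implementation: a dict stands in for current_user, and a marker string stands in for the HTTP redirect to the login view:

```python
from functools import wraps

# Conceptual sketch of a login_required decorator. The real Flask-Login
# version reads current_user from the request context and issues an HTTP
# redirect to login_manager.login_view; here a dict and a marker string
# stand in for both, purely for illustration.
def login_required(view):
    @wraps(view)
    def wrapped(current_user, *args, **kwargs):
        if not current_user.get('authenticated'):
            return 'redirect:/login/'  # stand-in for redirect(url_for('login'))
        return view(current_user, *args, **kwargs)
    return wrapped

@login_required
def appointment_list(current_user):
    return 'appointments for %s' % current_user['name']
```

The decorated view never executes for an anonymous user, which is exactly the guarantee the real decorator gives our appointment_list route.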
Flashing messages

When you want to quickly display a simple message indicating a successful operation or a failure, you can use Flask's flash messaging, which loads the message into the session until it is retrieved. In application code, inside request handling code:

from flask import flash

flash('Successfully did that thing.', 'success')

In template code, where you can use the 'success' category for conditional display:

{% for cat, m in get_flashed_messages(with_categories=true) %}
<div class="alert">{{ m }}</div>
{% endfor %}

Cross-site request forgery protection

Malicious web code will attempt to forge data-altering requests for other web services. To protect against forgery, you can load a randomized token into the session and into the HTML form, and reject the request when the two do not match. This is provided by the Flask-SeaSurf extension (pythonhosted.org/Flask-SeaSurf/) and by the Flask-WTF extension, which integrates WTForms (pythonhosted.org/Flask-WTF/).

Summary

This article explained how to keep users logged in across requests after authentication. It showed how Flask provides a session object, which behaves like a Python dictionary and persists automatically across requests, and how to integrate Flask-Login into application code. We also got acquainted with flashing messages and cross-site request forgery protection.

Resources for Article:

Further resources on this subject:

Python Testing: Installing the Robot Framework [Article]
Getting Started with Spring Python [Article]
Creating Skeleton Apps with Coily in Spring Python [Article]
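The forgery protection pattern described in this article (a random token stored in the session and echoed back by the form) can be sketched in plain Python. This is only a sketch of the mechanics; in practice, use Flask-SeaSurf or Flask-WTF rather than rolling your own.

```python
import hmac
import secrets

session = {}  # stand-in for Flask's session dict

def issue_token():
    """Generate a random token and remember it in the session."""
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token  # rendered into a hidden <input> in the HTML form

def check_token(submitted):
    """Reject the request unless the submitted token matches the session's."""
    expected = session.get("csrf_token", "")
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, submitted)

form_token = issue_token()
print(check_token(form_token))    # True: legitimate form submission
print(check_token("forged0000"))  # False: cross-site forgery attempt
```

A forged cross-site request fails because the attacker's page cannot read the victim's session, so it cannot know which token to submit.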
Packt
30 Aug 2013
6 min read

jQuery refresher

(For more resources related to this topic, see here.)

If you haven't used jQuery in a while, that's okay; we'll get you up to speed very quickly. The first thing to realize is that the Document.Ready function is extremely important when using UI. Although page loading happens incredibly fast, we always want our DOM (the HTML content) to be loaded before our UI code gets applied. Otherwise we have nothing to apply it to! We want to place our code inside the Document.Ready function, and we will be writing it the shorthand way, as we did previously. Please remove the previous UI checking code from your header:

$(function() {
    // Your code here is called only once the DOM is completely loaded
});

Easy enough. Let's refresh on some jQuery selectors. We'll be using these a lot in our examples so we can manipulate our page. I'll list a few DOM elements next and show how you can select them. I will apply hide() to them so we know what's been selected and hidden. Feel free to place the JavaScript portion in your header script tags and the HTML elements within your <body> tags as follows:

Selecting elements (by tag name) as follows:

$('p').hide();

<p>This is a paragraph</p>
<p>And here is another</p>
<p>All paragraphs will go hidden!</p>

Selecting classes as follows:

$('.edit').hide();

<p>This is an intro paragraph</p>
<p class="edit">But this will go hidden!</p>
<p>Another paragraph</p>
<p class="edit">This will also go hidden!</p>

Selecting IDs as follows:

$('#box').hide();

<div id="box">Hide the Box</div>
<div id="house">Just a random divider</div>

Those are the three basic selectors.
We can get more advanced and use CSS3 selectors as follows:

$("input[type=submit]").hide();

<form>
<input type="text" name="name" />
<input type="submit" />
</form>

Lastly, you can chain your DOM tree to hide elements more specifically:

$("table tr td.hidden").hide();

<table>
  <tbody>
    <tr>
      <td>Data</td>
      <td class="hidden">Hide Me</td>
    </tr>
  </tbody>
</table>

Step 3 – console.log is your best friend

I mentioned earlier that developing with the console open is very helpful. When you need to know details about a JavaScript item you have, whether that's its typeof type or its value, a friend of yours is the console.log() method. Notice that it is always in lowercase. It allows you to place things in the console rather than somewhere on your page. For example, if I were having trouble figuring out what a value was returning to me, I would simply do the following:

function add(a, b) {
    return a + b;
}

var total = add(5, 20);
console.log(total);

This will give me the result I wanted to know quickly and easily. Internet Explorer does not support console logging; it will prevent your JavaScript from running once it hits a console.log method. Make sure to comment out or remove all the console logs before releasing a live project, or else all the IE users will have a serious problem.

Step 4 – creating the slider widget

Let's get busy! Open your template file and create a DOM element to attach a slider widget to. To make it more interesting, we are also going to add an additional DIV to show a text value. Here is what I placed in my <body> tag:

<div id="slider"></div>
<div id="text"></div>

It doesn't have to be a <div> tag, but it's a good generic block-level element to use. Next, to attach a slider element, we place the following in our (empty) <script> tags:

$(function() {
    var my_slider = $("#slider").slider();
});

Refresh your page, and you will have a widget that can slide along a bar.
If you don't see a slider, first check your browser's development tools console to see if there are any JavaScript errors. If there are no errors and you still don't see it, make sure you don't have a JavaScript blocker on!

The reason we assign a variable to the slider is that, later on, we may want to reference its options, which you'll see next. You are not required to do this, but if you want to access the slider outside of its initial setup, you must give it a variable name. Our widget doesn't do much now, but it feels cool to finally make something, whatever it is! Let's break down a few things we can customize. There are three categories:

Options: These are defined in a JavaScript object ({}) and determine how you want your widget to behave when it's loaded. For example, you could set your slider to have minimum and maximum values.

Events: These are always a function, and they are triggered when a user does something to your item.

Methods: You can use methods to destroy a widget, get and set values from outside of the widget, and even set different options from what you started with.

To play with these categories, the easiest start is to adjust the options. Let's do it by creating an empty object inside our slider:

var my_slider = $("#slider").slider({});

Then we'll create a minimum and maximum value for our slider using the following code:

var my_slider = $("#slider").slider({
    min: 1,
    max: 50
});

Now our slider will accept and move along a bar with 50 values. There are many more options in the UI API, located at api.jquery.com under slider.
You'll find many other options we won't have time to cover, such as a step option to make the slider count every two digits, as follows:

var my_slider = $("#slider").slider({
    min: 1,
    max: 50,
    step: 2
});

If we want to attach this to the text field we created in the DOM, a good way to start is by assigning the minimum value to the DIV; this way we only have to change it once:

var min = my_slider.slider('option', 'min');
$("#text").html(min);

Next, we want to update the text value every time the slider is moved. Easy enough; this will introduce us to our first event. Let's add it:

var my_slider = $("#slider").slider({
    min: 1,
    max: 50,
    step: 2,
    change: function(event, ui) {
        $("#text").html(ui.value);
    }
});

Summary

This article described the basis for all widgets: creating them, and setting their options, events, and methods. That is the very simple pattern that handles everything for us.

Resources for Article:

Further resources on this subject:

Using jQuery and jQueryUI Widget Factory plugins with RequireJS [Article]
jQuery Animation: Tips and Tricks [Article]
New Effects Added by jQuery UI [Article]