
How-To Tutorials - Server-Side Web Development


Quickstart – Creating an application

Packt
03 Sep 2013
7 min read
(For more resources related to this topic, see here.)

Step 1 – Planning the workflow

When you write a real application, you should start with the requirements for the application's functionality. For the blog example, these are described in the Getting Started: Requirements Analysis section at the very beginning of the tutorial; the direct URL is http://www.yiiframework.com/doc/blog/1.1/en/start.requirements. After you have written down all the desired features, you implement them one by one. Of course, serious software development involves plenty of gotchas, but the overall approach is the same.

The blog example is a database-driven application, so we need to prepare a database schema beforehand. Here is what they came up with for the blog demo. This image is a verbatim copy from the blog example demo. Note that two links are missing: the tags field in the posts table stores raw tag text and is not a foreign key to the tags table, and the author field in the comment table should really be a foreign key to the user table. We will not cover the actual database generation here and suggest you do it yourself; the blog tutorial on the Yii website has all the relevant instructions, addressed to total newbies. Later in this article we will see how easy it is, with Yii, to get a working user interface for manipulating our database.

Step 2 – Linking to the database from your app

Once you have designed and physically created the database in some database management system, such as MySQL or SQLite, you are ready to point your app to it. The skeleton app generated by the ./yiic webapp command needs to be configured with a db component in the main config file, located at protected/config/main.php. There is a section that contains an array of components. Below is the setup for a MySQL database located on the same server as the web application itself. You will find a commented-out template for this already present when you generate your app.

// protected/config/main.php
'components' => array(
    /* other components */
    'db' => array(
        'connectionString' => 'mysql:host=localhost;dbname=DB_NAME',
        'emulatePrepare' => true,
        'username' => YOUR_USERNAME,
        'password' => YOUR_PASSWORD,
        'charset' => 'utf8',
    ),
    /* other components */
),

This is a default component with the class CDbConnection, and it is used by all of the ActiveRecord classes we will create later. As with all application components, each configuration parameter corresponds to a public property of the component's class, so you can check the API documentation for details. By the way, you really want to understand more about the main application config. Read about it in the Definitive Guide to Yii on the official website, under Fundamentals | Application | Application Configuration; the direct URL is http://www.yiiframework.com/doc/guide/1.1/en/basics.application#application-configuration. Just remember that all configuration parameters are simply properties of the CWebApplication object, which you can read about in the API documentation; the direct URL is http://www.yiiframework.com/doc/api/1.1/CWebApplication.
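Before generating any code, it can help to sanity-check that the new db component actually connects. A minimal sketch: Yii::app()->db and the command builder are standard Yii 1.1 APIs, but the tbl_post table name is only an illustration, not something the config above creates:

// Run from any throwaway controller action once the app is configured.
// Yii::app()->db returns the CDbConnection defined in protected/config/main.php.
$connection = Yii::app()->db;
$row = $connection
    ->createCommand('SELECT COUNT(*) AS n FROM tbl_post') // hypothetical table
    ->queryRow();
echo 'Connected; tbl_post has ' . $row['n'] . ' rows.';

If the connection string, username, or password is wrong, this throws a database exception instead of printing the count, which points you straight at the misconfigured parameter.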
Step 3 – Generating code automatically

Now that we have our app linked to a fully built database, we can start using one of Yii's greatest features: automatic code generation. To get started, two types of code generation are necessary: generate model classes based on the tables in your database, and run the CRUD generator, which takes a model and sets up a corresponding controller and a set of views for basic listing, creating, viewing, updating, and deleting from the table.

Console way

There are two ways to go about automatic code generation. Originally, there was only the yiic tool used earlier to create the skeleton app. For the code generation features, you would run the yiic shell index.php command, which brings up a command-line interface where you can run subcommands for modeling and scaffolding.

$ /usr/local/yii/framework/yiic shell index.php
Yii Interactive Tool v1.1 (based on Yii v1.1.13)
Please type 'help' for help. Type 'exit' to quit.
>> model Post tbl_post
   generate models/Post.php
   unchanged fixtures/tbl_post.php
   generate unit/PostTest.php
The following model classes are successfully generated:
    Post
If you have a 'db' database connection, you can test these models now with:
    $model=Post::model()->find();
    print_r($model);
>> crud Post
   generate PostController.php
   generate PostTest.php
mkdir /var/www/app/protected/views/post
   generate create.php
   generate update.php
   generate index.php
   generate view.php

As you can see, this is a quick and easy way to perform the model and crud actions. The model command produces just two files: one for your actual model class and one for unit tests. The crud command creates your controller and view files.

Gii

Console tools may be the preferred option for some, but for developers who like to use graphical tools, there is now a solution called Gii. To use Gii, you need to turn it on in the main config file, protected/config/main.php. You will find the template for it already present, but it is commented out by default. Simply uncomment it, set your password, and decide from which hosts it may be accessed. The configuration looks like this:

'gii' => array(
    'class' => 'system.gii.GiiModule',
    'password' => 'giiPassword',
    // If removed, Gii defaults to localhost only.
    // Edit carefully to taste.
    'ipFilters' => array('127.0.0.1', '::1'),
    // For development purposes,
    // a wildcard will allow access from anywhere.
    // 'ipFilters' => array('*'),
),

Once Gii is configured, it can be accessed by navigating to the app URL with ?r=gii appended, for example, http://www.example.com/index.php?r=gii. It begins with a prompt asking for the password set in the config file. Once the password is entered, Gii displays a list of generators. If the database is not set in the config file, you will see an error when you attempt to use one.

The first and most basic generator in Gii is the model generator. It asks for a table name from the database and a name to be used for the PHP class. Note that we can specify a table name prefix, which will be ignored when generating the model class name. For instance, the blog demo's user table is tbl_user, where tbl_ is the prefix. This feature exists to support setups, especially common in shared hosting environments, where a single database holds tables for several distinct applications. In such an environment, it is common practice to prefix table names to avoid naming conflicts and to easily find the tables relevant to a specific application. As these prefixes don't mean anything in the application itself, Gii offers a way to ignore them automatically.
Model class names are constructed from the remaining table name by obvious rules: an underscore uppercases the next letter, and the first letter of the class name is uppercased as well. The first step in getting your application off the ground is to generate models for all the entity tables in your database. Things like bridge tables will not need models, as they simply relate two entities to one another rather than being distinct things themselves. Bridge tables are used for generating relations between models, expressed in the relations method of a model class. For the blog demo, the basic models are User, Post, Comment, Tag, and Lookup.

The second phase of scaffolding is to generate the CRUD code for each of these models. This creates a controller and a series of view templates. The controller (for example, PostController) handles routing to actions related to the given model. The view files represent everything needed to list and view entities, as well as the forms needed to create and update individual entities. A sketch of what a generated model class typically looks like follows.
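For orientation, a Yii 1.1 model generated for tbl_post has roughly the following shape. This is a trimmed sketch, not the exact generated file: Gii also emits attribute labels, validation rules, and search helpers, and the relation names and foreign key columns below are assumptions that depend on your schema:

class Post extends CActiveRecord
{
    // Returns the static model of the specified AR class.
    public static function model($className = __CLASS__)
    {
        return parent::model($className);
    }

    // The table this model maps to; the tbl_ prefix stays here
    // even though it was stripped from the class name.
    public function tableName()
    {
        return 'tbl_post';
    }

    // Relations derived from foreign keys (names and columns are illustrative).
    public function relations()
    {
        return array(
            'author' => array(self::BELONGS_TO, 'User', 'author_id'),
            'comments' => array(self::HAS_MANY, 'Comment', 'post_id'),
        );
    }
}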
Summary

In this article we created an application by following a series of steps: planning the workflow, linking the app to the database, and generating code automatically.

Resources for Article:
Further resources on this subject:
Database, Active Record, and Model Tricks [Article]
Building multipage forms (Intermediate) [Article]
Creating a Recent Comments Widget in Agile [Article]


So, what is Markdown?

Packt
02 Sep 2013
3 min read
(For more resources related to this topic, see here.)

Markdown is a lightweight markup language that simplifies the workflow of web writers. It was created in 2004 by John Gruber, with contributions and feedback from Aaron Swartz. Gruber describes Markdown as: "A text-to-HTML conversion tool for web writers. Markdown allows you to write using an easy-to-read, easy-to-write plain text format, then convert it to structurally valid XHTML (or HTML)."

Markdown is two different things: a simple syntax for creating documents in plain text, and a software tool, written in Perl, that converts that plain text formatting to HTML. Markdown's formatting syntax was designed with simplicity and readability as its goals. We add rich formatting to plain text without feeling that we are writing in a markup language.

The main features of Markdown

Markdown is:
Easy to use: it has an extremely simple syntax that you can learn quickly.
Fast: writing is much faster than with HTML; we can dramatically reduce the time we spend crafting HTML tags.
Clean: we can clearly read and write documents that always translate into HTML without mistakes or errors.
Flexible: it is suitable for many things, such as writing on the Internet, e-mails, and creating presentations.
Portable: documents are just plain text; we can edit Markdown with any basic text editor on any operating system.
Made for writers: writers can focus on distraction-free writing.

Here is a quick comparison of the same document in HTML and in Markdown; the final result we achieve is identical in both cases. The following code is written in HTML:

<h1>Markdown</h1>
<p>This is a <strong>simple</strong> example of Markdown.</p>
<h2>Features:</h2>
<ul>
<li>Simple</li>
<li>Fast</li>
<li>Portable</li>
</ul>
<p>Check the <a href="http://daringfireball.net/projects/markdown/">official website</a>.</p>

The following code is an equivalent document written in Markdown:

# Markdown

This is a **simple** example of Markdown.

## Features:

- Simple
- Fast
- Portable

Check the [official website].

[official website]: http://daringfireball.net/projects/markdown/
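To see the conversion tool side in action, the original Markdown.pl script from the official website can be run from the command line. A minimal sketch; the input and output file names here are just illustrations:

$ perl Markdown.pl example.text > example.html

The script reads the plain text file, converts the Markdown formatting, and writes the resulting HTML to standard output, which we redirect into a file here.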
Summary

In this article, we learned the basics of Markdown and got to know its features. We also saw how convenient Markdown is for writers.

Resources for Article:
Further resources on this subject:
Generating Reports in Notebooks in RStudio [Article]
Database, Active Record, and Model Tricks [Article]
Formatting and Enhancing Your Moodle Materials: Part 1 [Article]


Handling sessions and users

Packt
30 Aug 2013
4 min read
(For more resources related to this topic, see here.)

Getting ready

We will work from the app.py file in the sched directory and from the models.py file.

How to do it...

Flask provides a session object, which behaves like a Python dictionary and persists automatically across requests. In your Flask application code you can do the following:

from flask import session
# ... in a request ...
session['spam'] = 'eggs'
# ... in another request ...
spam = session.get('spam')  # 'eggs'

Flask-Login provides a simple means to track a user in Flask's session. Update requirements.txt:

Flask
Flask-Login
Flask-Script
Flask-SQLAlchemy
WTForms

Then:

$ pip install -r requirements.txt

We can then load Flask-Login into sched's request handling, in app.py:

from flask.ext.login import LoginManager, current_user
from flask.ext.login import login_user, logout_user
from sched.models import User

# Use Flask-Login to track the current user in Flask's session.
login_manager = LoginManager()
login_manager.setup_app(app)
login_manager.login_view = 'login'

@login_manager.user_loader
def load_user(user_id):
    """Flask-Login hook to load a User instance from ID."""
    return db.session.query(User).get(user_id)

Flask-Login requires four methods on the User object, inside class User in models.py:

def get_id(self):
    return str(self.id)

def is_active(self):
    return True

def is_anonymous(self):
    return False

def is_authenticated(self):
    return True

Flask-Login provides a UserMixin (flask.ext.login.UserMixin) if you prefer to use its default implementation. We then provide routes to log the user in when authenticated and to log out. In app.py:

@app.route('/login/', methods=['GET', 'POST'])
def login():
    if current_user.is_authenticated():
        return redirect(url_for('appointment_list'))
    form = LoginForm(request.form)
    error = None
    if request.method == 'POST' and form.validate():
        email = form.username.data.lower().strip()
        password = form.password.data.lower().strip()
        user, authenticated = User.authenticate(db.session.query,
                                                email, password)
        if authenticated:
            login_user(user)
            return redirect(url_for('appointment_list'))
        else:
            error = 'Incorrect username or password.'
    return render_template('user/login.html', form=form, error=error)

@app.route('/logout/')
def logout():
    logout_user()
    return redirect(url_for('login'))

We then decorate every view function that requires a valid user, in app.py:

from flask.ext.login import login_required

@app.route('/appointments/')
@login_required
def appointment_list():
    # ...

How it works...

On login_user, Flask-Login gets the user object's ID from User.get_id and stores it in Flask's session. Flask-Login then sets a before_request handler to load the user instance into the current_user object, using the load_user hook we provide. The logout_user function then removes the relevant bits from the session. If no user is logged in, current_user provides an anonymous user object, which results in current_user.is_anonymous() returning True and current_user.is_authenticated() returning False; this allows application and template code to base logic on whether the user is valid. (Flask-Login puts current_user into all template contexts.) You can use User.is_active to make user accounts invalid without actually deleting them, by returning False as appropriate. View functions decorated with login_required redirect the user to the login view if the current user is not authenticated, without calling the decorated function.
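The login view above references a LoginForm that is not shown in this excerpt. A minimal WTForms sketch matching the form.username and form.password fields used in the view; the field labels and validators are assumptions, not the book's exact code:

from wtforms import Form, PasswordField, TextField, validators

class LoginForm(Form):
    # Names must match form.username / form.password in the login view.
    username = TextField('Email', [validators.Required()])
    password = PasswordField('Password', [validators.Required()])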
There's more...

Flask's session also supports the display of messages and protection against request forgery.

Flashing messages

When you want to quickly display a simple message to indicate a successful operation or a failure, you can use Flask's flash messaging, which loads the message into the session until it is retrieved. In application code, inside request handling code:

from flask import flash

flash('Successfully did that thing.', 'success')

In template code, where you can use the 'success' category for conditional display:

{% for cat, m in get_flashed_messages(with_categories=true) %}
<div class="alert">{{ m }}</div>
{% endfor %}

Cross-site request forgery protection

Malicious web code will attempt to forge data-altering requests for other web services. To protect against forgery, you can load a randomized token into the session and into the HTML form, and reject the request when the two do not match. This is provided by the Flask-SeaSurf extension (pythonhosted.org/Flask-SeaSurf/) and by the Flask-WTF extension, which integrates WTForms (pythonhosted.org/Flask-WTF/).
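As a rough illustration of the token approach, here is how Flask-SeaSurf is typically wired in; this is a sketch based on the extension's documented usage, so confirm against its docs before relying on it:

from flask import Flask
from flask.ext.seasurf import SeaSurf

app = Flask(__name__)
app.secret_key = 'change-me'  # the session (and CSRF token) needs a secret key

# Wrapping the app makes SeaSurf put a token in the session and reject
# data-altering requests (POST, PUT, DELETE) whose token is missing or wrong.
csrf = SeaSurf(app)

In templates, the token is then rendered into each form as a hidden field, for example with {{ csrf_token() }}.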
Summary

This article explained how to keep users logged in for ongoing requests after authentication. It showed how Flask provides a session object, which behaves like a Python dictionary and persists automatically across requests, and we got acquainted with flashing messages and cross-site request forgery protection.

Resources for Article:
Further resources on this subject:
Python Testing: Installing the Robot Framework [Article]
Getting Started with Spring Python [Article]
Creating Skeleton Apps with Coily in Spring Python [Article]


Creating a Camel project (Simple)

Packt
27 Aug 2013
8 min read
(For more resources related to this topic, see here.)

Getting ready

For the examples in this article, we are going to use Apache Camel version 2.11 (http://camel.apache.org/) and Apache Maven version 2.2.1 or newer (http://maven.apache.org/) as the build tool. Both of these projects can be downloaded for free from their websites. The complete source code for all the examples in this article is available on GitHub in the https://github.com/bibryam/camel-message-routing-examples repository. It contains Camel routes in Spring XML and Java DSL with accompanying unit tests. The source code for this tutorial is located under the project camel-message-routing-examples/creating-camel-project.

How to do it...

In a new Maven project, add the following Camel dependency to the pom.xml:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-core</artifactId>
    <version>${camel-version}</version>
</dependency>

With this dependency in place, creating our first route requires only a couple of lines of Java code:

public class MoveFileRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("file://source")
            .to("log://org.apache.camel.howto?showAll=true")
            .to("file://target");
    }
}

Once the route is defined, the next step is to add it to CamelContext, which is the actual routing engine, and run it as a standalone Java application:

public class Main {
    public static void main(String[] args) throws Exception {
        CamelContext camelContext = new DefaultCamelContext();
        camelContext.addRoutes(new MoveFileRoute());
        camelContext.start();
        Thread.sleep(10000);
        camelContext.stop();
    }
}

That's all it takes to create our first Camel application. Now we can run it using a Java IDE, or from the command line with Maven via mvn exec:java.
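For mvn exec:java to know what to run, the project's pom.xml needs the exec-maven-plugin pointed at the main class. A minimal sketch; the plugin version and the com.example package are assumptions for illustration, so adjust them to your project:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>1.2.1</version>
    <configuration>
        <!-- Fully qualified name of the class with the main() method. -->
        <mainClass>com.example.Main</mainClass>
    </configuration>
</plugin>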
How it works...

Camel has a modular architecture; its core (the camel-core dependency) contains all the functionality needed to run a Camel application: DSLs for various languages, the routing engine, implementations of EIPs, a number of data converters, and the core components. This is the only dependency needed to run this application. Then there are optional technology-specific connector dependencies (called components) such as JMS, SOAP, JDBC, and Twitter, which are not needed for this example, as the file and log components we used are part of camel-core.

Camel routes are created using a Domain Specific Language (DSL) specifically tailored for application integration. Camel DSLs are high-level languages that allow us to easily create routes, combining various processing steps and EIPs without going into low-level implementation details. In the Java DSL, we create a route by extending RouteBuilder and overriding the configure method. A route represents a chain of processing steps applied to a message based on some rules. The route has a beginning, defined by the from endpoint, and one or more processing steps, commonly called "Processors" (which implement the Processor interface).

Most of these ideas and concepts originate from the Pipes and Filters pattern in the book Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf. The book provides an extensive list of patterns, also available at http://www.enterpriseintegrationpatterns.com, the majority of which are implemented by Camel. With the Pipes and Filters pattern, a large processing task is divided into a sequence of smaller, independent processing steps (Filters) that are connected by channels (Pipes). Each filter processes messages received from the inbound channel and publishes the result to the outbound channel. In our route, the processing steps are reading the file using a polling consumer, logging it, and writing the file to the target folder, all of them piped together by Camel in the sequence specified in the DSL. We can visualize the individual steps in the application with the following diagram:

A route has exactly one input, called a consumer and identified by the keyword from. A consumer receives messages from producers or external systems, wraps them in a Camel-specific format called Exchange, and starts routing them. There are two types of consumers: a polling consumer that fetches messages periodically (for example, reading files from a folder), and an event-driven consumer that listens for events and gets activated when a message arrives (for example, an HTTP server). All the other processor nodes in the route are either a type of integration pattern or producers used for sending messages to various endpoints. Producers are identified by the keyword to, and they are capable of converting exchanges and delivering them to other channels using the underlying transport mechanism. In our example, the log producer logs the files using the log4j API, whereas the file producer writes them to the target folder.

The route alone is not enough to have a running application; it is only a template that defines the processing steps. The engine that runs and manages the routes is called the Camel Context. A high-level view of CamelContext looks like the following diagram:

CamelContext is a dynamic multithreaded route container, responsible for managing all aspects of the routing: route lifecycle, message conversions, configuration, error handling, monitoring, and so on. When CamelContext is started, it starts the components and endpoints, and activates the routes. The routes are kept running until CamelContext is stopped, at which point it performs a graceful shutdown, giving time for all in-flight messages to complete processing. CamelContext is dynamic: it allows us to start and stop routes, add new routes, or remove running routes at runtime. In our example, after adding the MoveFileRoute, we start CamelContext and let it copy files for 10 seconds, and then the application terminates. If we check the target folder, we should see files copied from the source folder.

There's more...

Camel applications can run as standalone applications or can be embedded in other containers such as Spring or Apache Karaf. To make development and deployment to various environments easy, Camel provides a number of DSLs, including Spring XML, Blueprint XML, Groovy, and Scala. Next, we will have a look at the Spring XML DSL.

Using Spring XML DSL

Java and Spring XML are the two most popular DSLs in Camel. Both provide access to all Camel features, and the choice is mostly a matter of taste. The Java DSL is more flexible and requires fewer lines of code, but can easily become complicated and harder to understand with the use of anonymous inner classes and other Java constructs. The Spring XML DSL, on the other hand, is easier to read and maintain, but it is more verbose, and testing it requires a little more effort. My rule of thumb is to use the Spring XML DSL only when Camel is going to be part of a Spring application (to benefit from other Spring features available in Camel), or when the routing logic has to be easily understood by many people.
For the routing examples in this article, we are going to show a mixture of Java and Spring XML DSL, but the source code accompanying this article has all the examples in both DSLs. In order to use Spring, we also need the following dependency in our projects:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-spring</artifactId>
    <version>${camel-version}</version>
</dependency>

The same application for copying files, written in the Spring XML DSL, looks like the following:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://camel.apache.org/schema/spring
           http://camel.apache.org/schema/spring/camel-spring.xsd">
    <camelContext xmlns="http://camel.apache.org/schema/spring">
        <route>
            <from uri="file://source"/>
            <to uri="log://org.apache.camel.howto?showAll=true"/>
            <to uri="file://target"/>
        </route>
    </camelContext>
</beans>

Notice that this is a standard Spring XML file with an additional camelContext element containing the route. We can launch the Spring application as part of a web application, an OSGi bundle, or as a standalone application:

public static void main(String[] args) throws Exception {
    AbstractApplicationContext springContext = new ClassPathXmlApplicationContext(
            "META-INF/spring/move-file-context.xml");
    springContext.start();
    Thread.sleep(10000);
    springContext.stop();
}

When the Spring container starts, it instantiates a CamelContext, starts it, and adds the routes, with no other code required. That is the complete application written in the Spring XML DSL. More information about Spring support in Apache Camel can be found at http://camel.apache.org/spring.html.

Summary

This article provided a high-level overview of the Camel architecture and demonstrated how to create a simple message-driven application.

Resources for Article:
Further resources on this subject:
Binding Web Services in ESB—Web Services Gateway [Article]
Drools Integration Modules: Spring Framework and Apache Camel [Article]
Routing to an external ActiveMQ broker [Article]


IRC-style chat with TCP server and event bus

Packt
27 Aug 2013
6 min read
(For more resources related to this topic, see here.)

Step 1 – fresh start

In a new folder called, for example, 1_PubSub_Chat, let's open our editor of choice and create a file called pubsub_chat.js. Also, make sure that you have a terminal window open and have moved into the newly created project directory.

Step 2 – creating the TCP server

TCP servers are called net servers in Vert.x. Creating and using a net server is really similar to HTTP servers:

var vertx = require('vertx');            /* 1 */
var netServer = vertx.createNetServer(); /* 2 */
netServer.listen(1234);                  /* 3 */

First, obtain the vertx bridge object to access the framework features (1). Then, ask Vert.x to create a TCP server, called NetServer in Vert.x (2). Finally, actually start the server by telling it to listen on TCP port 1234 (3).

Let's test whether this works. This time we need another terminal to run the telnet command:

$ telnet localhost 1234

The terminal should now be connected and waiting to send/receive characters. If you get "connection refused" errors, make sure the server is running.

Step 3 – adding a connect handler

Now we need to place a block of code to be executed as soon as a client connects. Define a handler function; it will be called every time a client connects to the server:

var vertx = require('vertx')
var server = vertx.createNetServer().connectHandler(function(socket) {
  // Composing a client address string
  addr = socket.remoteAddress();
  addr = addr.ipaddress + addr.port;
  socket.write('Welcome to the chat ' + addr + '!');
}).listen(1234)

A NetServer connect handler accepts the socket object as a parameter; this object is our gateway to reading from, writing to, or closing the connection to a client. We use the socket object to write a greeting to newly connected clients. If we test this as in Step 2, we see that the server now welcomes us with a message containing an identifier of the client: its origin host and origin port.

Step 4 – adding a data handler

We just learned how to execute a block of code at the moment a client connects. Now we are interested in doing something else whenever we receive new data from a client connection. The socket object we used in the previous step for writing data back to the client accepts a handler function too: the data handler. Let's add one. The data handler function added to the socket object is going to be called every time the client sends a new string of data:

var vertx = require('vertx')
var server = vertx.createNetServer().connectHandler(function(socket) {
  // Composing a client address string
  addr = socket.remoteAddress();
  addr = addr.ipaddress + addr.port;
  socket.write('Welcome to the chat ' + addr + '!');
  socket.dataHandler(function(data) {
    var now = new Date();
    now = now.getHours() + ':' + now.getMinutes() + ':' + now.getSeconds();
    var msg = now + ' <' + addr + '> ' + data;
    socket.write(msg);
  })
}).listen(1234)

We react to the new-data event by writing the same data back to the socket, plus a prefix. What we have now is a sort of echo server, which returns the same message to the sender with a prefix string.

Step 5 – adding the event bus magic

The base requirement of a chat server is that every time a client sends a message, the rest of the connected clients should receive it. We will use the event bus, the messaging service provided by the framework, to publish received messages to a broadcast address.
Each client will subscribe to the address upon connection and receive other clients' messages from there:

var vertx = require('vertx')
var server = vertx.createNetServer().connectHandler(function(socket) {
  // Composing a client address string
  addr = socket.remoteAddress();
  addr = addr.ipaddress + addr.port;
  socket.write('Welcome to the chat ' + addr + '!');
  vertx.eventBus.registerHandler('broadcast_address', function(event) {
    socket.write(event);
  });
  socket.dataHandler(function(data) {
    var now = new Date();
    now = now.getHours() + ':' + now.getMinutes() + ':' + now.getSeconds();
    var msg = now + ' <' + addr + '> ' + data;
    vertx.eventBus.publish('broadcast_address', msg);
  })
}).listen(1234)

As soon as a client connects, it listens on the event bus for new data published to the address broadcast_address. When a client sends a string of characters to the server, this data is published to the broadcast address, triggering a handler function that writes the string to every client's socket. The chat server is now complete! To try it out, just open three terminals:

Terminal 1: $ vertx run pubsub_chat.js
Terminal 2: $ telnet localhost 1234
Terminal 3: $ telnet localhost 1234

Now we have a server and two clients running and connected. Type something in terminal 2 or 3 and see the message being broadcast to both the other windows:

$ telnet localhost 1234
Trying ::1...
Connected to localhost.
Escape character is '^]'.
Hello from terminal two!
13:6:56 <0:0:0:0:0:0:0:155991> Hello from terminal two!
13:7:24 <0:0:0:0:0:0:0:155992> Hi there, here's terminal three!
13:7:56 <0:0:0:0:0:0:0:155992> Great weather today!

Step 6 – organizing a more complex project

Since Vert.x is a polyglot platform, we can choose to write an application (or a part of it) in any of the many supported languages. The granularity of the language choice is at the verticle level. It's important to give a good architecture to a non-trivial project from the beginning. Follow this list of generic principles to avoid performance bottlenecks or the need for massive refactoring in the future (a sketch of a startup verticle follows the list):

Wrap synchronous libraries or legacy code inside a worker verticle (or a module). This keeps blocking code away from the event loop threads.
Divide the problem into isolated domains and write a verticle to handle each of them (for example, a database persistor verticle, web server verticle, authenticator verticle, and cache manager verticle).
Use a startup verticle. This will be the single entry point to the application. Its responsibilities will be to: validate the configuration file; programmatically deploy other verticles in the correct order; decide how many instances of a verticle to create (the decision might depend on the environment, for example, the number of available processors); and register periodic tasks.
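As a rough sketch of what such a startup verticle could look like in JavaScript; the vertx/container module path and the deployVerticle call are assumptions based on the Vert.x 2.x JavaScript API, and the verticle file names and instance counts are invented for illustration:

var container = require('vertx/container');

// Deploy the isolated-domain verticles in order; the config object read
// by the startup verticle is passed through to each of them.
var config = container.config;
container.deployVerticle('persistor.js', config);     // one instance
container.deployVerticle('web_server.js', config, 2); // two instances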
Summary

In this article, we learned step by step how to create an IRC-style chat on top of a TCP server, interconnect the server with its clients using the event bus, and enable communication between them.

Resources for Article:
Further resources on this subject:
Getting Started with Zombie.js [Article]
Building a Chat Application [Article]
Accessing and using the RDF data in Stanbol [Article]


Publishing the project for mobile

Packt
26 Aug 2013
5 min read
(For more resources related to this topic, see here.)

Standard HTML5 publishing

You will first publish your project using the standard HTML5 publishing options. Open the HDSB_publish.cptx file and click on the publish icon situated right next to the preview icon on the main toolbar. Alternatively, you can go to the File | Publish menu item. The Publish dialog contains all of the publishing options of Captivate 7, as shown in the following screenshot. The six icons in the left column of the dialog, marked as 1, represent the main publishing formats supported by Captivate. The area in the center, marked as 2, displays the options pertaining to the selected format. Take some time to click on each of the six icons of the left column one by one; while doing so, take a close look at the right area of the dialog to see how the set of available options changes based on the selected format. When done, return to the SWF/HTML5 format, which is the first icon at the top of the left column.

Type hdStreet_standard in the Project Title field. Click on the Browse button associated with the Folder field and choose the /published folder of your exercises as the publish location. In the Output Format Options section, make sure that the HTML5 checkbox is the only one selected. If necessary, adjust the other options so that the Publish dialog looks like the previous screenshot. When ready, click on the Publish button at the bottom-right corner of the dialog box to trigger the actual publishing process. This process can take some time, depending on the size of the project to publish and on the overall performance of your computer. When done, Captivate displays a message acknowledging the successful completion of the publishing process and asking whether you want to view the output. Click on No to close both the message and the Publish dialog, and make sure you save the file before proceeding. Publishing your project to HTML5 is that easy! We will also use Windows Explorer (Windows) or Finder (Mac) to take a closer look at the generated files.

Examining the HTML5 output

By publishing the project to HTML5, Captivate has generated a whole set of HTML, CSS, and JavaScript files. Use Windows Explorer (Windows) or Finder (Mac) to go to the /published/hdStreet_standard folder of your exercises. Note that Captivate has created a subfolder in the /published folder we specified as the publish destination, and that the name of that subfolder is what we typed in the Project Title field of the Publish dialog. The /published/hdStreet_standard folder should contain the index.html file and five subfolders, as illustrated by the following screenshot:

The index.html file is the main HTML file; it is the file to open in a modern web browser in order to view the e-learning content. The /ar folder contains the audio assets of the project, including the voice-over narrations and the mouse-click sound in .mp3 format. Every HTML5 Captivate project includes the same /assets folder; it contains the standard images, CSS, and JavaScript files used to power the objects and features that can be included in a project. The web developers reading these lines will probably recognize some of these files; for example, the jQuery library is included in the /assets/js folder. The /dr folder contains the images that are specific to this project. These images include the slide backgrounds in .png format, the mouse pointers, and the various states of the buttons used in this project.
Finally, the /vr folder contains the video assets. These include the video we inserted on slide 2, as well as all of the full-motion recording slides of the project. All of these files and folders are necessary for your HTML5 project to work as expected. In other words, you need to upload all of these files and folders to the web server (or to the LMS) to make the project available to your students. Never try to delete, rename, or move any of these files! Double-click on the index.html file to open the project in the default web browser and make sure everything works as expected. When done, close the web browser and return to Captivate. This concludes our overview of the standard HTML5 publishing feature of Captivate 7.

Testing the HTML5 content

Producing content for mobile devices raises the issue of testing the content in a situation as close as possible to reality. Most of the time, you'll test the HTML5 output of Captivate only on the mobile device you own, or even worse, in the desktop version of an HTML5 web browser. If you are a Mac user, I've written a blog post on how to test Captivate HTML5 content on iOS devices without even owning such a device, at http://www.dbr-training.eu/index.cfm/blog/test-your-html5-elearning-on-an-ios-device-without-an-ios-device/.

Summary

You learned about the publishing step of the typical Captivate production workflow: how to publish your project using the standard HTML5 publishing options, and how to use Windows Explorer (Windows) or Finder (Mac) to take a closer look at the generated files. By publishing the project to HTML5, Captivate generates a whole set of HTML, CSS, and JavaScript files.

Resources for Article:
Further resources on this subject:
Top features you'll want to know about [Article]
Remotely Preview and test mobile web pages on actual devices with Adobe Edge Inspect [Article]
An Introduction to Flash Builder 4-Network Monitor [Article]

Highcharts

Packt
20 Aug 2013
5 min read
(For more resources related to this topic, see here.)

Creating a line chart with a time axis and two Y axes

We will now create the code for this chart. You start the creation of your chart by implementing the constructor of your Highcharts chart:

var chart = $('#myFirstChartContainer').highcharts({});

We will now set the different sections inside the constructor. We start with the chart section. Since we'll be creating a line chart, we define the type element with the value line. Then we enable the zoom feature by setting the zoomType element; you can set the value to x, y, or xy, depending on which axes you want to be able to zoom. For our chart, we allow zooming on the x axis:

chart: {
    type: 'line',
    zoomType: 'x'
},

We define the title of our chart:

title: {
    text: 'Energy consumption linked to the temperature'
},

Now we create the x axis. We set the type to datetime because we are using time data, and we remove the title by setting text to null; you need to set a null value in order to disable the title of the xAxis:

xAxis: {
    type: 'datetime',
    title: {
        text: null
    }
},

We then configure the Y axes. We add two Y axes with the titles Temperature and Energy consumed (in KWh), each with a minimum value of 0. We set the opposite parameter to true for the second axis in order to place it on the right-hand side:

yAxis: [{
    title: {
        text: 'Temperature'
    },
    min: 0
}, {
    title: {
        text: 'Energy consumed (in KWh)'
    },
    opposite: true,
    min: 0
}],

We will now customize the tooltip section. We use the crosshairs option in order to have a line for our tooltip that we can use to follow the values of both series, and we set shared to true in order to show the values of both series in the same tooltip:

tooltip: {
    crosshairs: true,
    shared: true
},

Further, we set the series section. For datetime axes, you can define a series in two different ways: the first when your data follows a regular time interval, and the second when it does not necessarily do so. We will use both ways, configuring the two series with different options.

The first series follows a regular interval. For this series, we set the pointInterval parameter, which defines the data interval in milliseconds; for our chart, we set an interval of one day. We set the pointStart parameter to the date of the first value, and then set the data section with our values. The tooltip section is set with the valueSuffix element, which defines the suffix to be appended to the value inside our tooltip. We set the yAxis element to the axis we want to associate with the series; because we want to bind this series to the first axis, we set the value to 0 (zero).

For the second series, we will use the second way, because our data does not necessarily follow a regular interval (you can also use this way even if your data follows a regular interval). We set our data in pairs, where the first element represents the date and the second element represents the value. We also override the tooltip section of the second series, and we set the yAxis element to 1 because we want to associate this series with the second axis. For your own chart, you can also set your date values with a timestamp value instead of using the JavaScript function Date.UTC.
series: [{
    name: 'Temperature',
    pointInterval: 24 * 3600 * 1000,
    pointStart: Date.UTC(2013, 0, 01),
    data: [17.5, 16.2, 16.1, 16.1, 15.9, 15.8, 16.2],
    tooltip: {
        valueSuffix: ' °C'
    },
    yAxis: 0
}, {
    name: 'Electricity consumption',
    data: [
        [Date.UTC(2013, 0, 01), 8.1],
        [Date.UTC(2013, 0, 02), 6.2],
        [Date.UTC(2013, 0, 03), 7.3],
        [Date.UTC(2013, 0, 05), 7.1],
        [Date.UTC(2013, 0, 06), 12.3],
        [Date.UTC(2013, 0, 07), 10.2]
    ],
    tooltip: {
        valueSuffix: ' KWh'
    },
    yAxis: 1
}]

You should have this as the final code:

$(function () {
    var chart = $('#myFirstChartContainer').highcharts({
        chart: {
            type: 'line',
            zoomType: 'x'
        },
        title: {
            text: 'Energy consumption linked to the temperature'
        },
        xAxis: {
            type: 'datetime',
            title: {
                text: null
            }
        },
        yAxis: [{
            title: {
                text: 'Temperature'
            },
            min: 0
        }, {
            title: {
                text: 'Electricity consumed'
            },
            opposite: true,
            min: 0
        }],
        tooltip: {
            crosshairs: true,
            shared: true
        },
        series: [{
            name: 'Temperature',
            pointInterval: 24 * 3600 * 1000,
            pointStart: Date.UTC(2013, 0, 01),
            data: [17.5, 16.2, 16.1, 16.1, 15.9, 15.8, 16.2],
            tooltip: {
                valueSuffix: ' °C'
            },
            yAxis: 0
        }, {
            name: 'Electricity consumption',
            data: [
                [Date.UTC(2013, 0, 01), 8.1],
                [Date.UTC(2013, 0, 02), 6.2],
                [Date.UTC(2013, 0, 03), 7.3],
                [Date.UTC(2013, 0, 05), 7.1],
                [Date.UTC(2013, 0, 06), 12.3],
                [Date.UTC(2013, 0, 07), 10.2]
            ],
            tooltip: {
                valueSuffix: ' KWh'
            },
            yAxis: 1
        }]
    });
});

You should have the expected result as shown in the following screenshot:
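For the constructor call to work, the page also needs the container element and the script includes. A minimal host page sketch; the file paths for jQuery and Highcharts are assumptions, so adjust them to your setup:

<div id="myFirstChartContainer" style="width: 800px; height: 400px;"></div>
<script src="js/jquery.min.js"></script>
<script src="js/highcharts.js"></script>
<!-- the chart code from this article runs after both includes, -->
<!-- for example in a script tag at the end of the body -->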
Summary

In this article, we worked with some of the most important features of Highcharts. We created a line chart with a time axis and two Y axes, and saw that there is a wide variety of things you can do with it.

Resources for Article:
Further resources on this subject:
Converting tables into graphs (Advanced) [Article]
Line, Area, and Scatter Charts [Article]
Data sources for the Charts [Article]


Working with remote data

Packt
20 Aug 2013
4 min read
(For more resources related to this topic, see here.)

Getting ready

Create a new document in your editor.

How to do it...

Copy the following code into your new document:

<!DOCTYPE html>
<html>
<head>
    <title>Kendo UI Grid How-to</title>
    <link rel="stylesheet" type="text/css" href="kendo/styles/kendo.common.min.css">
    <link rel="stylesheet" type="text/css" href="kendo/styles/kendo.default.min.css">
    <script src="kendo/js/jquery.min.js"></script>
    <script src="kendo/js/kendo.web.min.js"></script>
</head>
<body>
    <h3 style="color:#4f90ea;">Exercise 12 - Working with Remote Data</h3>
    <p><a href="index.html">Home</a></p>
    <script type="text/javascript">
        $(document).ready(function () {
            var serviceURL = "http://gonautilus.com/kendogen/KENDO.cfc?method=";
            var myDataSource = new kendo.data.DataSource({
                transport: {
                    read: {
                        url: serviceURL + "getArt",
                        dataType: "JSONP"
                    }
                },
                pageSize: 20,
                schema: {
                    model: {
                        id: "ARTISTID",
                        fields: {
                            ARTID: { type: "number" },
                            ARTISTID: { type: "number" },
                            ARTNAME: { type: "string" },
                            DESCRIPTION: { type: "CLOB" },
                            PRICE: { type: "decimal" },
                            LARGEIMAGE: { type: "string" },
                            MEDIAID: { type: "number" },
                            ISSOLD: { type: "boolean" }
                        }
                    }
                }
            });
            $("#myGrid").kendoGrid({
                dataSource: myDataSource,
                pageable: true,
                sortable: true,
                columns: [
                    { field: "ARTID", title: "Art ID" },
                    { field: "ARTISTID", title: "Artist ID" },
                    { field: "ARTNAME", title: "Art Name" },
                    { field: "DESCRIPTION", title: "Description" },
                    { field: "PRICE", title: "Price", template: '#= kendo.toString(PRICE,"c") #' },
                    { field: "LARGEIMAGE", title: "Large Image" },
                    { field: "MEDIAID", title: "Media ID" },
                    { field: "ISSOLD", title: "Sold" }
                ]
            });
        });
    </script>
    <div id="myGrid"></div>
</body>
</html>

How it works...

This example shows you how to access a JSONP remote datasource. JSONP allows you to work with cross-domain remote datasources. The JSONP format is like JSON, except that it adds padding, which is what the "P" in JSONP stands for. The padding can be seen if you look at the result of the AJAX call made by the Kendo Grid: the service simply responds with the callback argument that was passed and wraps the JSON in parentheses. You'll notice that we created a serviceURL variable that points to the service we are calling to return our data, and that in the read configuration we are calling the getArt method and specifying the value of dataType as JSONP. Everything else should look familiar.
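To make the padding concrete, here is roughly what a JSONP exchange looks like; the callback name and the response body below are invented for illustration, while the real values come from the grid's AJAX call and the service:

// Request URL generated by the DataSource (simplified):
//   ...KENDO.cfc?method=getArt&callback=jQuery123_456

// Response: the JSON payload wrapped in a call to that callback.
jQuery123_456([
    { "ARTID": 1, "ARTISTID": 7, "ARTNAME": "Sunrise", "PRICE": 120.0, "ISSOLD": false }
]);

Because the response is executable script rather than raw JSON, it can be loaded with a script tag, which is what lets JSONP cross domain boundaries.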
There's more...

Generally, the most common format used for remote data is JavaScript Object Notation (JSON). You'll find several examples of using OData on the Kendo UI demo website, as well as examples of performing create, update, and delete operations.

Outputting JSON with ASP MVC

In an ASP MVC or ASP.NET application, you'll want to set up your datasource like the following example. ASP has certain security requirements that force you to use POST instead of the default GET request when making AJAX calls, and it requires that you explicitly define the value of contentType as application/json when requesting JSON. By default, when you create an ASP MVC service with a JsonResult action, ASP will nest the JSON data in an element named d:

var dataSource = new kendo.data.DataSource({
    transport: {
        read: {
            type: "POST",
            url: serviceURL,
            dataType: "JSON",
            contentType: "application/json",
            data: serverData
        },
        parameterMap: function (data, operation) {
            return kendo.stringify(data);
        }
    },
    schema: {
        data: "d"
    }
});

Summary

This article discussed how to bind a Kendo UI Grid to remote data, using a JSONP datasource and an ASP MVC JSON setup as examples.

Resources for Article:
Further resources on this subject:
Constructing and Evaluating Your Design Solution [Article]
Data Manipulation in Silverlight 4 Data Grid [Article]
Quick start – creating your first grid [Article]


Selecting Elements

Packt
16 Aug 2013
17 min read
(For more resources related to this topic, see here.)

Understanding the DOM

One of the most powerful aspects of jQuery is its ability to make selecting elements in the DOM easy. The DOM serves as the interface between JavaScript and a web page; it provides a representation of the source HTML as a network of objects rather than as plain text. This network takes the form of a family tree of elements on the page. When we refer to the relationships that elements have with one another, we use the same terminology that we use when referring to family relationships: parents, children, and so on. A simple example can help us understand how the family tree metaphor applies to a document:

<html>
<head>
    <title>the title</title>
</head>
<body>
    <div>
        <p>This is a paragraph.</p>
        <p>This is another paragraph.</p>
        <p>This is yet another paragraph.</p>
    </div>
</body>
</html>

Here, <html> is the ancestor of all the other elements; in other words, all the other elements are descendants of <html>. The <head> and <body> elements are not only descendants, but children of <html> as well. Likewise, in addition to being the ancestor of <head> and <body>, <html> is also their parent. The <p> elements are children (and descendants) of <div>, descendants of <body> and <html>, and siblings of each other. To help visualize the family tree structure of the DOM, we can use a number of software tools, such as the Firebug plugin for Firefox or the Web Inspector in Safari or Chrome. With this tree of elements at our disposal, we'll be able to use jQuery to efficiently locate any set of elements on the page. Our tools to achieve this are jQuery selectors and traversal methods.

Using the $() function

The resulting set of elements from jQuery's selectors and methods is always represented by a jQuery object. Such a jQuery object is very easy to work with when we want to actually do something with the things that we find on a page. We can easily bind events to these objects and add slick effects to them, as well as chain multiple modifications or effects together. Note that jQuery objects are different from regular DOM elements or node lists, and as such do not necessarily provide the same methods and properties for some tasks. In order to create a new jQuery object, we use the $() function. This function typically accepts a CSS selector as its sole parameter and serves as a factory returning a new jQuery object pointing to the corresponding elements on the page. Just about anything that can be used in a stylesheet can also be passed as a string to this function, allowing us to apply jQuery methods to the matched set of elements.

Making jQuery play well with other JavaScript libraries

In jQuery, the dollar sign ($) is simply an alias for jQuery. Because a $() function is very common in JavaScript libraries, conflicts could arise if more than one of these libraries were being used on a given page. We can avoid such conflicts by replacing every instance of $ with jQuery in our custom jQuery code.
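jQuery also ships with a helper for exactly this situation: jQuery.noConflict() releases the $ alias back to whichever library defined it first. A minimal sketch using this standard API:

// Hand $ back to the other library; keep using the jQuery name ourselves.
jQuery.noConflict();

jQuery(document).ready(function($) {
    // The ready callback receives jQuery as its argument,
    // so $ safely refers to jQuery inside this function.
    $('p').addClass('ready');
});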
The three primary building blocks of selectors are tag name, ID, and class. They can be used either on their own or in combination with others. The following simple examples illustrate how these three selectors appear in code:

Tag name: CSS p { }, jQuery $('p'). This selects all paragraphs in the document.
ID: CSS #some-id { }, jQuery $('#some-id'). This selects the single element in the document that has an ID of some-id.
Class: CSS .some-class { }, jQuery $('.some-class'). This selects all elements in the document that have a class of some-class.

When we call methods of a jQuery object, the elements referred to by the selector we passed to $() are looped through automatically and implicitly. Therefore, we can usually avoid explicit iteration, such as a for loop, that is so often required in DOM scripting. Now that we have covered the basics, we're ready to start exploring some more powerful uses of selectors.

CSS selectors

The jQuery library supports nearly all the selectors included in CSS specifications 1 through 3, as outlined on the World Wide Web Consortium's site: http://www.w3.org/Style/CSS/specs. This support allows developers to enhance their websites without worrying about which browsers might not understand more advanced selectors, as long as the browsers have JavaScript enabled.

Progressive enhancement

Responsible jQuery developers should always apply the concepts of progressive enhancement and graceful degradation to their code, ensuring that a page will render as accurately, even if not as beautifully, with JavaScript disabled as it does with JavaScript turned on. We will continue to explore these concepts throughout the article. More information on progressive enhancement can be found at http://en.wikipedia.org/wiki/Progressive_enhancement.

To begin learning how jQuery works with CSS selectors, we'll use a structure that appears on many websites, often for navigation: the nested unordered list:

<ul id="selected-plays">
  <li>Comedies
    <ul>
      <li><a href="/asyoulikeit/">As You Like It</a></li>
      <li>All's Well That Ends Well</li>
      <li>A Midsummer Night's Dream</li>
      <li>Twelfth Night</li>
    </ul>
  </li>
  <li>Tragedies
    <ul>
      <li><a href="hamlet.pdf">Hamlet</a></li>
      <li>Macbeth</li>
      <li>Romeo and Juliet</li>
    </ul>
  </li>
  <li>Histories
    <ul>
      <li>Henry IV (<a href="mailto:[email protected]">email</a>)
        <ul>
          <li>Part I</li>
          <li>Part II</li>
        </ul>
      </li>
      <li><a href="http://www.shakespeare.co.uk/henryv.htm">Henry V</a></li>
      <li>Richard II</li>
    </ul>
  </li>
</ul>

Notice that the first <ul> has an ID of selected-plays, but none of the <li> tags have a class associated with them. Without any styles applied, the nested list appears as we would expect it to: a set of bulleted items arranged vertically and indented according to their level.

Styling list-item levels

Let's suppose that we want the top-level items, and only the top-level items (Comedies, Tragedies, and Histories), to be arranged horizontally. We can start by defining a horizontal class in the stylesheet:

.horizontal {
  float: left;
  list-style: none;
  margin: 10px;
}

The horizontal class floats the element to the left-hand side of the one following it, removes the bullet from it if it's a list item, and adds a 10-pixel margin on all sides of it. Rather than attaching the horizontal class directly in our HTML, we'll add it dynamically to the top-level list items only, to demonstrate jQuery's use of selectors:

$(document).ready(function() {
  $('#selected-plays > li').addClass('horizontal');
});

Listing 2.1

We begin the jQuery code by calling $(document).ready(), which runs the function passed to it once the DOM has been loaded, but not before. The second line uses the child combinator (>) to add the horizontal class to the top-level items only. In effect, the selector inside the $() function is saying: find each list item (li) that is a child (>) of the element with an ID of selected-plays (#selected-plays).
With the class now applied, the rules defined for that class in the stylesheet take effect, which in this case means that the list items are arranged horizontally rather than vertically. Styling all the other items, those that are not in the top level, can be done in a number of ways. Since we have already applied the horizontal class to the top-level items, one way to select all sub-level items is to use a negation pseudo-class to identify all list items that do not have a class of horizontal. Note the addition of the third line of code:

$(document).ready(function() {
  $('#selected-plays > li').addClass('horizontal');
  $('#selected-plays li:not(.horizontal)').addClass('sub-level');
});

Listing 2.2

This time we are selecting every list item (<li>) that is a descendant of the element with an ID of selected-plays (#selected-plays) and does not have a class of horizontal (:not(.horizontal)). When we add the sub-level class to these items, they receive the shaded background defined in the stylesheet:

.sub-level {
  background: #ccc;
}

Attribute selectors

Attribute selectors are a particularly helpful subset of CSS selectors. They allow us to specify an element by one of its HTML attributes, such as a link's title attribute or an image's alt attribute. For example, to select all images that have an alt attribute, we write the following:

$('img[alt]')

Styling links

Attribute selectors accept a wildcard syntax inspired by regular expressions for identifying a value at the beginning (^) or end ($) of a string. They can also take an asterisk (*) to indicate a value at an arbitrary position within a string, or an exclamation mark (!) to indicate a negated value. Let's say we want to have different styles for different types of links. We first define the styles in our stylesheet:

a {
  color: #00c;
}
a.mailto {
  background: url(images/email.png) no-repeat right top;
  padding-right: 18px;
}
a.pdflink {
  background: url(images/pdf.png) no-repeat right top;
  padding-right: 18px;
}
a.henrylink {
  background-color: #fff;
  padding: 2px;
  border: 1px solid #000;
}

Then, we add the three classes (mailto, pdflink, and henrylink) to the appropriate links using jQuery. To add a class to all e-mail links, we construct a selector that looks for all anchor elements (a) with an href attribute ([href]) that begins with mailto: (^="mailto:"), as follows:

$(document).ready(function() {
  $('a[href^="mailto:"]').addClass('mailto');
});

Listing 2.3

Because of the rules defined in the page's stylesheet, an envelope image appears after each mailto: link on the page. To add a class to all links to PDF files, we use the dollar sign rather than the caret symbol, because we're selecting links with an href attribute that ends with .pdf:

$(document).ready(function() {
  $('a[href^="mailto:"]').addClass('mailto');
  $('a[href$=".pdf"]').addClass('pdflink');
});

Listing 2.4

The stylesheet rule for the newly added pdflink class causes an Adobe Acrobat icon to appear after each link to a PDF document, as shown in the following screenshot. Attribute selectors can be combined as well.
We can, for example, add the class henrylink to all links with an href value that both starts with http and contains henry anywhere:

$(document).ready(function() {
  $('a[href^="mailto:"]').addClass('mailto');
  $('a[href$=".pdf"]').addClass('pdflink');
  $('a[href^="http"][href*="henry"]').addClass('henrylink');
});

Listing 2.5

With the three classes applied to the three types of links, we should see a PDF icon to the right-hand side of the Hamlet link, an envelope icon next to the email link, and a white background and black border around the Henry V link.

Custom selectors

To the wide variety of CSS selectors, jQuery adds its own custom selectors. These custom selectors enhance the already impressive capabilities of CSS selectors to locate page elements in new ways.

Performance note

When possible, jQuery uses the native DOM selector engine of the browser to find elements. This extremely fast approach is not possible when custom jQuery selectors are used. For this reason, it is recommended to avoid frequent use of custom selectors when a native option is available and performance is very important.

Most of the custom selectors allow us to choose one or more elements from a collection of elements that we have already found. The custom selector syntax is the same as the CSS pseudo-class syntax, where the selector starts with a colon (:). For example, to select the second item from a set of <div> elements with a class of horizontal, we write this:

$('div.horizontal:eq(1)')

Note that :eq(1) selects the second item in the set because JavaScript array numbering is zero-based, meaning that it starts with zero. In contrast, CSS is one-based, so a CSS selector such as $('div:nth-child(1)') would select all div elements that are the first child of their parent. Because it can be difficult to remember which selectors are zero-based and which are one-based, we should consult the jQuery API documentation at http://api.jquery.com/category/selectors/ when in doubt.

Styling alternate rows

Two very useful custom selectors in the jQuery library are :odd and :even. Let's take a look at how we can use one of them for basic table striping given the following tables:

<h2>Shakespeare's Plays</h2>
<table>
  <tr>
    <td>As You Like It</td>
    <td>Comedy</td>
    <td></td>
  </tr>
  <tr>
    <td>All's Well that Ends Well</td>
    <td>Comedy</td>
    <td>1601</td>
  </tr>
  <tr>
    <td>Hamlet</td>
    <td>Tragedy</td>
    <td>1604</td>
  </tr>
  <tr>
    <td>Macbeth</td>
    <td>Tragedy</td>
    <td>1606</td>
  </tr>
  <tr>
    <td>Romeo and Juliet</td>
    <td>Tragedy</td>
    <td>1595</td>
  </tr>
  <tr>
    <td>Henry IV, Part I</td>
    <td>History</td>
    <td>1596</td>
  </tr>
  <tr>
    <td>Henry V</td>
    <td>History</td>
    <td>1599</td>
  </tr>
</table>

<h2>Shakespeare's Sonnets</h2>
<table>
  <tr>
    <td>The Fair Youth</td>
    <td>1–126</td>
  </tr>
  <tr>
    <td>The Dark Lady</td>
    <td>127–152</td>
  </tr>
  <tr>
    <td>The Rival Poet</td>
    <td>78–86</td>
  </tr>
</table>

With minimal styles applied from our stylesheet, these headings and tables appear quite plain. The table has a solid white background, with no styling separating one row from the next. Now we can add a style to the stylesheet for all the table rows and use an alt class for the odd rows:

tr {
  background-color: #fff;
}
.alt {
  background-color: #ccc;
}

Finally, we write our jQuery code, attaching the class to the odd-numbered table rows (<tr> tags):

$(document).ready(function() {
  $('tr:even').addClass('alt');
});

Listing 2.6

But wait! Why use the :even selector for odd-numbered rows?
Well, just as with the :eq() selector, the :even and :odd selectors use JavaScript's native zero-based numbering. Therefore, the first row counts as zero (even) and the second row counts as one (odd), and so on. With this in mind, we can expect our simple bit of code to produce striped tables. Note that for the second table, this result may not be what we intend. Since the last row in the Plays table has the alternate gray background, the first row in the Sonnets table has the plain white background. One way to avoid this type of problem is to use the :nth-child() selector instead, which counts an element's position relative to its parent element rather than relative to all the elements selected so far. This selector can take a number, odd, or even as its argument:

$(document).ready(function() {
  $('tr:nth-child(odd)').addClass('alt');
});

Listing 2.7

As before, note that :nth-child() is the only jQuery selector that is one-based. To achieve the same row striping as we did earlier—except with consistent behavior for the second table—we need to use odd rather than even as the argument. With this selector in place, both tables are striped consistently.

Finding elements based on textual content

For one final custom-selector touch, let's suppose for some reason we want to highlight any table cell that refers to one of the Henry plays. All we have to do—after adding a class to the stylesheet to make the text bold and italicized (.highlight { font-weight: bold; font-style: italic; })—is add a line to our jQuery code using the :contains() selector:

$(document).ready(function() {
  $('tr:nth-child(odd)').addClass('alt');
  $('td:contains(Henry)').addClass('highlight');
});

Listing 2.8

So, now we can see our lovely striped table with the Henry plays prominently featured. It's important to note that the :contains() selector is case sensitive. Using $('td:contains(henry)') instead, without the uppercase "H", would select no cells.

Admittedly, there are ways to achieve the row striping and text highlighting without jQuery—or any client-side programming, for that matter. Nevertheless, jQuery, along with CSS, is a great alternative for this type of styling in cases where the content is generated dynamically and we don't have access to either the HTML or server-side code.

Form selectors

The capabilities of custom selectors are not limited to locating elements based on their position. For example, when working with forms, jQuery's custom selectors and complementary CSS3 selectors can make short work of selecting just the elements we need. The following table describes a handful of these form selectors:

Selector     Match
:input       Input, text area, select, and button elements
:button      Button elements and input elements with a type attribute equal to button
:enabled     Form elements that are enabled
:disabled    Form elements that are disabled
:checked     Radio buttons or checkboxes that are checked
:selected    Option elements that are selected

As with the other selectors, form selectors can be combined for greater specificity. We can, for example, select all checked radio buttons (but not checkboxes) with $('input[type="radio"]:checked') or select all password inputs and disabled text inputs with $('input[type="password"], input[type="text"]:disabled'). Even with custom selectors, we can use the same basic principles of CSS to build the list of matched elements.
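As a quick, hedged sketch of how these form selectors combine in practice (the #signup form and the class names here are invented for illustration, and do not appear in the article's example pages):

// Illustrative only: combining form selectors with attribute selectors.
$(document).ready(function() {
  // Every form element (input, textarea, select, button) in a
  // hypothetical #signup form
  $('#signup :input').addClass('form-field');
  // Only the enabled text inputs
  $('#signup input[type="text"]:enabled').addClass('editable');
  // Checked checkboxes, but not checked radio buttons
  $('#signup input[type="checkbox"]:checked').addClass('chosen');
});

Because each custom selector narrows the set matched so far, the pieces read naturally from left to right, just as they do in CSS.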
Summary

With the techniques that we have covered in this article, we should now be able to locate sets of elements on the page in a variety of ways. In particular, we learned how to style top-level and sub-level items in a nested list by using basic CSS selectors, how to apply different styles to different types of links by using attribute selectors, how to add rudimentary striping to a table by using either the custom jQuery selectors :odd and :even or the advanced CSS selector :nth-child(), and how to highlight text within certain table cells by using the :contains() selector.

Resources for Article:

Further resources on this subject:
Using jQuery and jQueryUI Widget Factory plugins with RequireJS [Article]
jQuery Animation: Tips and Tricks [Article]
New Effects Added by jQuery UI [Article]
Quick start - creating your first application

Packt
13 Aug 2013
14 min read
(For more resources related to this topic, see here.)

By now you should have Meteor installed and ready to create your first app, but jumping in blindly would be more confusing than not. So let's take a moment to discuss the anatomy of a Meteor application. We have already talked about how Meteor moves all the workload from the server to the browser, and we have seen firsthand the folder of plugins which we can incorporate into our apps, so what have we missed? Well, MVVM of course.

MVVM stands for Model, View, and View-Model. These are the three components that make up a Meteor application. If you've ever studied programming academically, then you'll know there's a concept called separation of concerns. What this means is that you separate code with different intentions into different components. This allows you to keep things neat, but more importantly—if done right—it allows for better testing and customization down the line. A proper separation is one that allows you to remove a piece of code and replace it with another without disrupting the rest of your app.

An example of this could be a simple function. If you print out debug messages to a file throughout your app, it would be a terrible practice to manually write this code out each time. A much better solution would be to "separate" this code out into its own function, and only reference it throughout your app. This way, down the line if you decide you want debug messages to be e-mailed instead of written to a file, you only need to change the one function and your app will continue to work without even knowing about the change.

So we know separation is important, but I haven't clarified what MVVM is yet. To get a better idea, let's take a look at what kind of code should go in each component:

Model: The Model is the section of your code that has to do with the backend code. This usually refers to your database, but it's not exclusive to just that. In Meteor, you can generally consider the database to be your application's model.

View: The View is exactly what it sounds like, it's your application's view. It's the HTML that you send to the browser. You want to keep these files as logic-less as possible. This will allow for better separation, it will assure that all your logic code is in one place, and it will help with testing and code re-use.

View-Model: Now the View-Model is where all the magic happens. The View-Model has two jobs—one is to interface the model to the view, and the second is to handle all the events. Basically, all your logic code will be going here.

This is just a brief explanation of the MVVM pattern, but like most things, I think an example is in order to better illustrate. Let's pretend we have a site where people can share pictures, such as a typical social network would. On the Model side, you will have a database which contains all the users' pictures. Now this is very nice, but it's private info and no user should be able to access it directly. That's where the View-Model comes in. The View-Model accesses the main Model and creates a custom version for the View. So, for instance, it creates a new dataset that only contains pictures from the user's friends. That is the View-Model's first job: to create datasets for the View with info from the Model. Next, the View accesses the View-Model and gets the information it needs to display the page; in our example this could be an array of pictures. Now the page is built, and both the Model and View are done with their jobs.
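Before moving on to events, here is a purely hypothetical sketch of that View-Model in Meteor-style code. The Photos collection, the photoWall template, and the profile.friendIds field are all invented names for illustration; only Template, Meteor.user() (from the accounts package), and the collection find() call are standard Meteor:

// Hypothetical View-Model: expose only the pictures the View may show.
Template.photoWall.photos = function () {
  var user = Meteor.user(); // null when nobody is logged in
  if (!user) return [];
  // Build a View-specific dataset from the Model (the Photos collection):
  // only pictures owned by the current user's friends.
  return Photos.find({ owner: { $in: user.profile.friendIds } });
};

The View would then loop over photos with a Handlebars {{#each}} block, never touching the Model directly.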
The last step is to handle page events, for example, the user clicks a button. If you remember, the views are logic-less, so when someone clicks a button, the event is sent back to the View-Model to be processed. If you're still a bit fuzzy on the concept, it should become clearer when we create our first application.

Now that we have gone through the concepts, we are ready to build our first application. To get started, open a terminal window and create a new folder for your Meteor applications:

mkdir ~/meteorApps

This creates a new directory in our home folder—which is represented by the tilde (~) symbol—called meteorApps. Next, let's enter this folder by typing:

cd ~/meteorApps

The cd (change directory) command will move the terminal to the location specified, which in our case is the meteorApps folder. The last step is to actually create a Meteor application, and this is done by typing:

meteor create firstApp

You should be greeted with a message telling you how to run your app, but we are going to hold off on that. For now, just enter the directory and list its contents by typing:

cd firstApp
ls

You should already be familiar with what the cd command does, and the ls command just lists the files in the current directory. If you didn't play around with the skel folder from the last section, then you should have three files in your app's folder—an HTML file, a JavaScript file, and a CSS file. The HTML and CSS files are the View in the MVVM pattern, while the JavaScript file is the View-Model. It's a little difficult to begin explaining everything, because we have a sort of chicken-and-egg paradox where we can't explain one without the other. But let's begin with the View as it's the simpler of the two, and then we will move backwards to the View-Model.

The View

If you open the HTML file, you should see a couple of lines, mostly standard HTML, but there are a few commands from Meteor's default templating language—Handlebars. This is not Meteor specific, as Handlebars is a templating language based on the popular mustache library, so you may already be familiar with it, even without knowing Meteor. But just in case, I'll quickly run through the file:

<head>
  <title>firstApp</title>
</head>

This first part is completely standard HTML; it's just a pair of head tags, with the page's title being set inside. Next we have the body tag:

<body>
  {{> hello}}
</body>

The outer body tags are standard HTML, but inside there is a Handlebars function. Handlebars allows you to define template partials, which are basically pieces of HTML that are given a name. That way you are able to add the piece wherever you want, even multiple times on the same page. In this example, Meteor has made a call to Handlebars to insert the template called hello inside the body tags. It's a fairly easy syntax to learn; you just open two curly braces, then you put a greater-than sign followed by the name of the template, finally closing it off with a pair of closing braces. The rest of the file is the definition of the hello template partial:

<template name="hello">
  <h1>Hello World!</h1>
  {{greeting}}
  <input type="button" value="Click" />
</template>

Again it's mostly standard HTML, just an H1 title and a button. The only special part is the greeting line in the middle, which is another Handlebars function to insert data. This is how the MVVM pattern works; I said earlier that you want to keep the view as simple as possible, so if you have to calculate anything you do it in the View-Model and then load the results to the View.
You do this by leaving a reference; in our code the reference is to greeting, which means you place whatever greeting equals here. It's a placeholder for a variable, and if you guessed that the variable greeting will be in the View-Model, then you are 100 percent correct. Another thing to notice is the fact that we do have a button on the page, but you won't find any event handlers here. That's because, as I mentioned earlier, the events are handled in the View-Model as well. So it seems like we are done here, and the next logical step is to take a peek at the View-Model. If you remember, the View-Model is the .js file, so close this out and open the firstApp.js file.

The JS file

There is slightly more code here, but if you're comfortable with JavaScript, then everything should feel right at home. At first glance you can see that the page is split up into two if statements—Meteor.isClient and Meteor.isServer. This is because the JS file is parsed on both the server and the user's browser. These statements are used to write code for one and not the other. For now we aren't going to be dealing with the server, so you don't have to worry about the bottom section.

The top section, on the other hand, has our HTML file's data. While we were in the View, we saw a call to a template partial named hello, and then inside it we referenced a placeholder called greeting. The way to set these placeholders is by referencing the global Template variable, and to set the value by following this pattern:

Template.template_name.placeholder_name

So in our example it would be:

Template.hello.greeting

And if you take a look at the first thing inside the isClient if statement, you will find exactly this. Here, it is set to a function, which returns a simple string. You can set it directly to a string, but then it's not dynamic. Usually the only reason you are defining a View-Model variable is because it's something that has to be computed via a function, so that's why they did it like that. But there are cases where you may just want to reference a simple string, and that's fine.

To recap, so far in the View we have a reference to a piece of data named greeting inside a template partial called hello, which we are setting in the View-Model to the string Welcome to firstApp.

The last part of the JS file is the part that handles events on the page; it does this by passing an event map to a template's events function. This follows the same notation as the previous, so you type:

Template.template_name.events( events_map );

I'll paste the example's code here, for further illustration:

Template.hello.events({
  'click input' : function () {
    // template data, if any, is available in 'this'
    if (typeof console !== 'undefined')
      console.log("You pressed the button");
  }
});

Inside each events object, you place the action and target as the key, and you set a function as the value. The actions are standard JavaScript actions, so you have things such as click, dblclick, keydown, and so on. Targets use standard CSS notation, which is periods for classes, hash symbols for IDs, and just the tag name for HTML tags. Whenever the event happens (for example, the input is clicked), the attached function will be called. To view the full list of event types, take a look at: http://docs.meteor.com/#template_events

It would be a lot shorter if there wasn't a comment or an if statement to make sure the console is defined.
But basically, the function will just output the words You pressed the button to the console every time you press the button. Pretty intuitive!

So we went through the files; all that's left to do is actually test them. To do this, go back to the terminal and make sure you're in the firstApp folder. This can be achieved by using ls again to make sure the three files are there, and by using cd ~/meteorApps/firstApp if you are not in the right folder. Next, just type meteor and hit Enter, which will cause Meteor to compile everything together and run the built-in web server. If this is done right, you should see a message saying something like:

Running on: http://localhost:3000/

Navigate your browser to the location specified (http://localhost:3000), and you should see the app that we just created. If your browser has a console, you can open it up and click the button. Doing so will display the message You pressed the button, similar to the one we saw in the JS file. I hope it all makes sense now, but to drive the point home, we will make a few adjustments of our own. In the terminal window, press Ctrl + C to close the Meteor server, then open up the HTML file.

A quick revision

After the call to the hello template inside the body tags, add a call to another template named quickStart. Here is the new body section along with the completed quickStart template:

<body>
  {{> hello}}
  {{> quickStart}}
</body>

<template name="quickStart">
  <h3>Click Counter</h3>
  The Button has been pressed {{numClick}} time(s)
  <input type="button" id="counter" value="CLICK ME!!!" />
</template>

I wanted to keep it as similar to the other template as possible, so as not to throw too much at you all at once. It simply contains a title enclosed in the header tags, followed by a string of text with a placeholder named numClick and a button with an id value of counter. There's nothing radically different from the other template, so you should be fairly comfortable with it. Now save this and open the JS file.

What we are adding to the page is a counter that will display the number of times the button was pressed. We do this by telling Meteor that the placeholder relies on a specific piece of data; Meteor will then track this data, and every time it gets changed, the page will be automatically updated. The easiest way to set this up is by using Meteor's Session object. Session is a key-value store object which allows you to store and retrieve data inside Meteor. You set data using the set method, passing in a name (key) and value; you can then retrieve that stored info by calling the get method, passing in the same key. Besides the Session object bit, everything else is the same. So just add the following part right after the hello template's events call, and make sure it's inside the isClient if statement:

Template.quickStart.numClick = function(){
  var pcount = Session.get("pressed_count");
  return (pcount) ? pcount : 0;
}

This function gets the current number of clicks—stored with a key of pressed_count—and returns it, defaulting to zero if the value was never set. Since we are using the pressed_count property inside the placeholder's function, Meteor will automatically update this part of the HTML whenever pressed_count changes. Last but not least, we have to add the event map; put the following code snippet right after the previous code:

Template.quickStart.events({
  'click #counter' : function(){
    var pcount = Session.get("pressed_count");
    pcount = (pcount) ? pcount + 1 : 1;
    Session.set("pressed_count", pcount);
  }
});

Here we have a click event for our button with the counter ID, and the attached function just gets the current count and increments it by one. To try it out, just save this file, and in the terminal window, while still in the project's directory, type meteor to restart the web server. Try clicking the button a few times, and if all went well, the text should be updated with an incrementing value.

Resources for Article:

Further resources on this subject:
Meteor.js JavaScript Framework: Why Meteor Rocks! [Article]
Applying Special Effects in 3D Game Development with Microsoft Silverlight 3: Part 2 [Article]
YUI Test [Article]
Setting up Node

Packt
07 Aug 2013
10 min read
(For more resources related to this topic, see here.)

System requirements

Node runs on POSIX-like operating systems, the various UNIX derivatives (Solaris, and so on), or workalikes (Linux, Mac OS X, and so on), as well as on Microsoft Windows, thanks to the extensive assistance from Microsoft. Indeed, many of the Node built-in functions are direct corollaries to POSIX system calls. It can run on machines both large and small, including the tiny ARM devices such as the Raspberry Pi microscale embeddable computer for DIY software/hardware projects.

Node is now available via package management systems, limiting the need to compile and install from source. Installing from source requires having a C compiler (such as GCC), and Python 2.7 (or later). If you plan to use encryption in your networking code, you will also need the OpenSSL cryptographic library. The modern UNIX derivatives almost certainly come with these, and Node's configure script (see later when we download and configure the source) will detect their presence. If you should have to install them, Python is available at http://python.org and OpenSSL is available at http://openssl.org.

Installing Node using package managers

The preferred method for installing Node, now, is to use the versions available in package managers, such as apt-get or MacPorts. Package managers simplify your life by helping to maintain the current version of the software on your computer and ensuring to update dependent packages as necessary, all by typing a simple command such as apt-get update. Let's go over this first.

Installing on Mac OS X with MacPorts

The MacPorts project (http://www.macports.org/) has for years been packaging a long list of open source software packages for Mac OS X, and they have packaged Node. After you have installed MacPorts using the installer on their website, installing Node is pretty much this simple:

$ sudo port search nodejs
nodejs @0.10.6 (devel, net)
    Evented I/O for V8 JavaScript
nodejs-devel @0.11.2 (devel, net)
    Evented I/O for V8 JavaScript
Found 2 ports.
--
npm @1.2.21 (devel)
    node package manager
$ sudo port install nodejs npm
.. long log of downloading and installing prerequisites and Node

Installing on Mac OS X with Homebrew

Homebrew is another open source software package manager for Mac OS X, which some say is the perfect replacement for MacPorts. It is available through their home page at http://mxcl.github.com/homebrew/. After installing Homebrew using the instructions on their website, using it to install Node is as simple as this:

$ brew search node
leafnode    node
$ brew install node
==> Downloading http://nodejs.org/dist/v0.10.7/node-v0.10.7.tar.gz
######################################################################## 100.0%
==> ./configure --prefix=/usr/local/Cellar/node/0.10.7
==> make install
==> Caveats
Homebrew installed npm.
We recommend prepending the following path to your PATH environment
variable to have npm-installed binaries picked up:
  /usr/local/share/npm/bin
==> Summary
/usr/local/Cellar/node/0.10.7: 870 files, 16M, built in 21.9 minutes

Installing on Linux from package management systems

While it's still premature for Linux distributions or other operating systems to prepackage Node with their OS, that doesn't mean you cannot install it using the package managers. Instructions on the Node wiki currently list packaged versions of Node for Debian, Ubuntu, OpenSUSE, and Arch Linux.
See: https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager

For example, on Debian sid (unstable):

# apt-get update
# apt-get install nodejs # Documentation is great.

And on Ubuntu:

# sudo apt-get install python-software-properties
# sudo add-apt-repository ppa:chris-lea/node.js
# sudo apt-get update
# sudo apt-get install nodejs npm

We can expect in due course that the Linux distros and other operating systems will routinely bundle Node into the OS like they do with other languages today.

Installing the Node distribution from nodejs.org

The nodejs.org website offers prebuilt binaries for Windows, Mac OS X, Linux, and Solaris. You simply go to the website, click on the Install button, and run the installer. For systems with package managers, such as the ones we've just discussed, it's preferable to use that installation method. That's because you'll find it easier to stay up-to-date with the latest version. However, on Windows this method may be preferred.

For Mac OS X, the installer is a PKG file giving the typical installation process. For Windows, the installer simply takes you through the typical install wizard process. Once finished with the installer, you have a command-line tool with which to run Node programs. The pre-packaged installers are the simplest ways to install Node, for those systems for which they're available.

Installing Node on Windows using Chocolatey Gallery

Chocolatey Gallery is a package management system, built on top of NuGet. Using it requires a Windows machine modern enough to support Powershell and the .NET Framework 4.0. Once you have Chocolatey Gallery (http://chocolatey.org/), installing Node is as simple as this:

C:> cinst nodejs

Installing the StrongLoop Node distribution

StrongLoop (http://strongloop.com) has put together a supported version of Node that is prepackaged with several useful tools. This is a Node distribution in the same sense in which Fedora or Ubuntu are Linux distributions. StrongLoop brings together several useful packages, some of which were written by StrongLoop. StrongLoop tests the packages together, and distributes installable bundles through their website. The packages in the distribution include Express, Passport, Mongoose, Socket.IO, Engine.IO, Async, and Request. We will use all of those modules in this book.

To install, navigate to the company home page and click on the Products link. They offer downloads of precompiled packages for both RPM and Debian Linux systems, as well as Mac OS X and Windows. Simply download the appropriate bundle for your system. For the RPM bundle, type the following:

$ sudo rpm -i bundle-file-name

For the Debian bundle, type the following:

$ sudo dpkg -i bundle-file-name

The Windows or Mac bundles are the usual sort of installable packages for each system. Simply double-click on the installer bundle, and follow the instructions in the install wizard. Once StrongLoop Node is installed, it provides not only the node and npm commands (we'll go over these in a few pages), but also the slnode command. That command offers a superset of the npm commands, such as boilerplate code for modules, web applications, or command-line applications.

Installing from source on POSIX-like systems

Installing the pre-packaged Node distributions is currently the preferred installation method.
However, installing Node from source is desirable in a few situations:

It could let you optimize the compiler settings as desired
It could let you cross-compile, say for an embedded ARM system
You might need to keep multiple Node builds for testing
You might be working on Node itself

Now that you have the high-level view, let's get our hands dirty mucking around in some build scripts. The general process follows the usual configure, make, and make install routine that you may already have performed with other open source software packages. If not, don't worry, we'll guide you through the process. The official installation instructions are in the Node wiki at https://github.com/joyent/node/wiki/Installation.

Installing prerequisites

As noted a minute ago, there are three prerequisites: a C compiler, Python, and the OpenSSL libraries. The Node installation process checks for their presence and will fail if the C compiler or Python is not present. The specific method of installing these is dependent on your operating system. These commands will check for their presence:

$ cc --version
i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5666) (dot 3)
Copyright (C) 2007 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ python
Python 2.6.6 (r266:84292, Feb 15 2011, 01:35:25)
[GCC 4.2.1 (Apple Inc. build 5664)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

Installing developer tools on Mac OS X

The developer tools (such as GCC) are an optional installation on Mac OS X. There are two ways to get those tools, both of which are free. On the OS X installation DVD is a directory labeled Optional Installs, in which there is a package installer for—among other things—the developer tools, including Xcode. The other method is to download the latest copy of Xcode (for free) from http://developer.apple.com/xcode/. Most other POSIX-like systems, such as Linux, include a C compiler with the base system.

Installing from source for all POSIX-like systems

First, download the source from http://nodejs.org/download. One way to do this is with your browser, and another way is as follows:

$ mkdir src
$ cd src
$ wget http://nodejs.org/dist/v0.10.7/node-v0.10.7.tar.gz
$ tar xvfz node-v0.10.7.tar.gz
$ cd node-v0.10.7

The next step is to configure the source so that it can be built. It is done with the typical sort of configure script, and you can see its long list of options by running the following:

$ ./configure --help

To cause the installation to land in your home directory, run it this way:

$ ./configure --prefix=$HOME/node/0.10.7
..output from configure

If you want to install Node in a system-wide directory, simply leave off the --prefix option, and it will default to installing in /usr/local. After a moment it'll stop, and more than likely it will have configured the source tree for installation in your chosen directory. If this doesn't succeed, it will print a message about something that needs to be fixed. Once the configure script is satisfied, you can go on to the next step. With the configure script satisfied, compile the software:

$ make
.. a long log of compiler output is printed
$ make install

If you are installing into a system-wide directory, do the last step this way instead:

$ make
$ sudo make install

Once installed, you should make sure to add the installation directory to your PATH variable as follows:

$ echo 'export PATH=$HOME/node/0.10.7/bin:${PATH}' >>~/.bashrc
$ . ~/.bashrc

For csh users, use this syntax to make an exported environment variable:

$ echo 'setenv PATH $HOME/node/0.10.7/bin:${PATH}' >>~/.cshrc
$ source ~/.cshrc

This should result in some directories like this:

$ ls ~/node/0.10.7/
bin   include   lib   share
$ ls ~/node/0.10.7/bin
node   node-waf   npm

Maintaining multiple Node installs simultaneously

Normally you won't have multiple versions of Node installed, and doing so adds complexity to your system. But if you are hacking on Node itself, or are testing against different Node releases, or any of several similar situations, you may want to have multiple Node installations. The method to do so is a simple variation on what we've already discussed. If you noticed during the instructions discussed earlier, the --prefix option was used in a way that directly supports installing several Node versions side-by-side in the same directory:

$ ./configure --prefix=$HOME/node/0.10.7

And:

$ ./configure --prefix=/usr/local/node/0.10.7

This initial step determines the install directory. Clearly, when version 0.12.15 or whichever version comes next is released, you can change the install prefix to have the new version installed side-by-side with the previous versions. To switch between Node versions is simply a matter of changing the PATH variable (on POSIX systems), as follows:

$ export PATH=/usr/local/node/0.10.7/bin:${PATH}

It starts to be a little tedious to maintain this after a while. For each release, you have to set up Node, npm, and any third-party modules you desire in your Node install; also the command shown to change your PATH is not quite optimal. Inventive programmers have created several version managers to make this easier by automatically setting up not only Node, but npm also, and providing commands to change your PATH the smart way:

Node version manager: https://github.com/visionmedia/n
Nodefront, aids in rapid frontend development: http://karthikv.github.io/nodefront/
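If you maintain several side-by-side installs as described above, the PATH juggling can be wrapped in a small shell function. This is only a sketch of the idea, assuming the /usr/local/node/VERSION layout shown earlier; it is not part of Node or of any of the version managers listed:

# A sketch only: switch the active Node install by prepending its bin
# directory to PATH. Assumes installs such as /usr/local/node/0.10.7.
switch_node() {
    export PATH=/usr/local/node/$1/bin:${PATH}
    echo "Now using node $(node --version)"
}
# Usage:
# $ switch_node 0.10.7

Note that each call prepends another entry to PATH; cleaning that up properly is exactly the sort of bookkeeping the version managers above handle for you.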
Introduction to nginx

Packt
31 Jul 2013
8 min read
(For more resources related to this topic, see here.)

So, what is nginx?

The best way to describe nginx (pronounced engine-x) is as an event-based multi-protocol reverse proxy. This sounds fancy, but it's not just buzzwords; it actually affects how we approach configuring nginx. It also highlights some of the flexibility that nginx offers. While it is often used as a web server and an HTTP reverse proxy, it can also be used as an IMAP reverse proxy or even a raw TCP reverse proxy. Thanks to the plug-in-ready code structure, we can utilize a large number of first and third party modules to implement a diverse amount of features to make nginx an ideal fit for many typical use cases.

A more accurate description would be to say that nginx is a reverse proxy first, and a web server second. I say this because it can help us visualize the request flow through the configuration file and rationalize how to achieve the desired configuration of nginx. The core difference this creates is that nginx works with URIs instead of files and directories, and based on that determines how to process the request. This means that when we configure nginx, we tell it what should happen for a certain URI rather than what should happen for a certain file on the disk.

A beneficial part of nginx being a reverse proxy is that it fits into a large number of server setups, and can handle many things that other web servers simply aren't designed for. A popular question is "Why even bother with nginx when Apache httpd is available?" The answer lies in the way the two programs are designed. The majority of Apache setups are done using prefork mode, where we spawn a certain number of processes and then embed our dynamic language in each process. This setup is synchronous, meaning that each process can handle one request at a time, whether that connection is for a PHP script or an image file. In contrast, nginx uses an asynchronous event-based design where each spawned process can handle thousands of concurrent connections. The downside here is that nginx will, for security and technical reasons, not embed programming languages into its own process - this means that to handle those we will need to reverse proxy to a backend, such as Apache, PHP-FPM, and so on. Thankfully, as nginx is a reverse proxy first and foremost, this is extremely easy to do and still allows us major benefits, even when keeping Apache in use.

Let's take a look at a use case where Apache is used as an application server, as described earlier, rather than just a web server. We have embedded PHP, Perl, or Python into Apache, which has the primary disadvantage of each request becoming costly. This is because the Apache process is kept busy until the request has been fully served, even if it's a request for a static file. Our online service has gotten popular and we now find that our server cannot keep up with the increased demand. In this scenario, introducing nginx as a spoon-feeding layer would be ideal. With nginx sitting as a spoon-feeding layer between our end users and Apache, when a request comes in, nginx will reverse proxy it to Apache if it is for a dynamic file, while it will handle any static file requests itself. This means that we offload a lot of the request handling from the expensive Apache processes to the more lightweight nginx processes, and increase the number of end users we can serve before having to spend money on more powerful hardware.

Another example scenario is where we have an application being used from all over the world.
We don't have any static files, so we can't easily offload a number of requests from Apache. In this use case, our PHP process is busy from the time the request comes in until the user has finished downloading the response. Sadly, not everyone in the world has fast internet and, as a result, the sending process could be busy for a relatively significant period of time. Let's assume our visitor is on an old 56k modem and has a maximum download speed of 5 KB per second; it will take them five seconds to download a 25 KB gzipped HTML file generated by PHP. That's five seconds during which our process cannot handle any other request. When we introduce nginx into this setup, we have PHP spending only microseconds generating the response, but have nginx spend five seconds transferring it to the end user. Because nginx is asynchronous, it will happily handle other connections in the meantime, and thus we significantly increase the number of concurrent requests we can handle.

In the previous two examples I used scenarios where nginx was used in front of Apache, but naturally this is not a requirement. nginx is capable of reverse proxying via, for instance, FastCGI, UWSGI, SCGI, HTTP, or even TCP (through a plugin), enabling backends such as PHP-FPM, Gunicorn, Thin, and Passenger.

Quick start – Creating your first virtual host

It's finally time to get nginx up and running. To start out, let's quickly review the configuration file. If you installed via a system package, the default configuration file location is most likely /etc/nginx/nginx.conf. If you installed via source and didn't change the path prefix, nginx installs itself into /usr/local/nginx and places nginx.conf in a /conf subdirectory. Keep this file open as a reference to help visualize many of the things described in this article.

Step 1 – Directives and contexts

To understand what we'll be covering in this section, let me first introduce a bit of terminology that the nginx community at large uses. Two central concepts to the nginx configuration file are those of directives and contexts. A directive is basically just an identifier for the various configuration options. Contexts refer to the different sections of the nginx configuration file. This term is important because the documentation often states which context a directive is allowed within.

A glance at the standard configuration file should reveal that nginx uses a layered configuration format where blocks are denoted by curly brackets {}. These blocks are what are referred to as contexts. The topmost context is called main, and is not denoted as a block but is rather the configuration file itself. The main context has only a few directives we're really interested in, the two major ones being worker_processes and user. These directives handle how many worker processes nginx should run and which user/group nginx should run these under.

Within the main context there are two possible subcontexts, the first one being called events. This block handles directives that deal with the event-polling nature of nginx. Mostly we can ignore every directive in here, as nginx can automatically configure this to be the most optimal; however, there's one directive which is interesting, namely worker_connections. This directive controls the number of connections each worker can handle. It's important to note here that nginx is a terminating proxy, so if you HTTP proxy to a backend, such as Apache httpd, that will use up two connections. The second subcontext is the interesting one called http.
This context deals with everything related to HTTP, and this is what we will be working with almost all of the time. While there are directives that are configured in the http context, for now we'll focus on a subcontext within http called server. The server context is the nginx equivalent of a virtual host. This context is used to handle configuration directives based on the host name your sites are under. Within the server context, we have another subcontext called location. The location context is what we use to match the URI. Basically, a request to nginx will flow through each of our contexts, matching first the server block with the hostname provided by the client, and secondly the location context with the URI provided by the client.

Depending on the installation method, there might not be any server blocks in the nginx.conf file. Typically, system package managers take advantage of the include directive, which allows us to do an in-place inclusion into our configuration file. This allows us to separate out each virtual host and keep our configuration file more organized. If there aren't any server blocks, check the bottom of the file for an include directive and check the directory from which it includes; it should have a file which contains a server block.

Step 2 – Define your first virtual host

Finally, let us define our first server block!

server {
    listen      80;
    server_name example.com;
    root        /var/www/website;
}

That is basically all we need, and strictly speaking, we don't even need to define which port to listen on, as port 80 is the default. However, it's generally a good practice to keep it in there, should we want to search for all virtual hosts on port 80 later on.

Summary

This article provided the details about the important aspects of nginx. It also walked through configuring a virtual host with nginx in two simple steps, along with a configuration example.

Resources for Article:

Further resources on this subject:
Nginx HTTP Server FAQs [Article]
Nginx Web Services: Configuration and Implementation [Article]
Using Nginx as a Reverse Proxy [Article]
Digging into the Architecture

Packt
30 Jul 2013
31 min read
(For more resources related to this topic, see here.)

The big picture

A very short description of a WaveMaker application could be: a Spring MVC server running in a Java container, such as Tomcat, serving file and JSON requests for a Dojo Toolkit-based JavaScript browser client. Unfortunately, such "elevator" descriptions can create more questions than they answer. For starters, although we will often refer to it as "the server," the WaveMaker server might be more aptly called an application server in most architectures. Sure, it is possible to have a useful application without additional servers or services beyond the WaveMaker server, but this is not typical. We could have a rich user interface to read against some in-memory data set, for example. Far more commonly, the Java services running in the WaveMaker server are calling off to other servers or services, such as relational databases and RESTful web services. This means the WaveMaker server is often the middle or application tier server of a multi-tier application's architecture.

Yet at the same time, the WaveMaker server can be eliminated completely. Applications can be packaged for uploading to PhoneGap Build, http://build.phonegap.com/, directly from WaveMaker Studio. Both PhoneGap and the associated Apache project Cordova, http://cordova.apache.org, provide APIs to enable JavaScript to access native device functionality, such as capturing images with the camera and obtaining GPS location information. Packaged up and installed as a native application, the JavaScript files are loaded from the device's file system instead of being downloaded from a server via HTTP. This means there is no origin domain to be constrained by. If the application only uses web services, or otherwise doesn't need additional services, such as database access, the WaveMaker server is neither used nor needed.

Just because an application isn't installed on a mobile device from an app store doesn't mean we can't run it on a mobile device. Browsers on mobile devices are more capable than ever before. This means our client could be any device with a modern browser.

You must also consider licensing in light of the bigger picture. WaveMaker, WaveMaker Studio, and the applications created with the Studio are released under the Apache 2.0 license, http://www.apache.org/licenses/LICENSE-2.0. The WaveMaker project was first released by WaveMaker Software in 2007. In March 2011, VMware (http://vmware.com) acquired the WaveMaker project. It was under VMware that WaveMaker 6.5 was released. In April 2013, Pramati Technologies (http://pramati.com) acquired the assets of WaveMaker for its CloudJee (http://cloudjee.com) platform. WaveMaker continues to be developed and released by Pramati Technologies.

Now that we understand where our client and server sit in the larger world, we will be primarily focused within and between those two parts. The overall picture of the client and server looks as shown in the following diagram. We will examine each piece of this diagram in detail during the course of this book. We shall start with the JavaScript client.

Getting comfortable with the JavaScript client

The client is a JavaScript client that runs in a modern browser. This means that most of the client, the HTML and DOM nodes that the browser interfaces with specifically, are created by JavaScript at runtime. The application is styled using CSS, and we can use HTML in our applications. However, we don't use HTML to define buttons and forms.
Instead, we define components, such as widgets, and set their properties. These component class names and properties are used as arguments to functions that create DOM nodes for us.

Dojo Toolkit

To do this, WaveMaker uses the Dojo Toolkit, http://dojotoolkit.org/. Dojo, as it is generally referred to, is a modular, cross-browser, JavaScript framework with three sections. Dojo Core provides the base toolkit. On top of that are Dojo's visual widgets, called Dijits. Finally, DojoX contains additional extensions such as charts and a color picker. DojoCampus' Dojo Explorer, http://dojocampus.com/explorer/, has a good selection of single unit demos across the toolkit, many with source code. Dojo allows developers to define widgets using HTML or JavaScript. WaveMaker users will better recognize the JavaScript approach. Specifically, WaveMaker 6.5.X uses version 1.6.1 of Dojo. Of the browsers supported by Dojo 1.6.1, http://dojotoolkit.org/reference-guide/1.8/releasenotes/1.6.html, Opera's "Dojo Core only" support prevents it from being supported by WaveMaker. This could change with Opera's move to WebKit.

Building on top of the Dojo Toolkit, WaveMaker provides its own collections of widgets and underlying components. Although both can be called components, the name component is generally used for the non-visible parts, such as service calls to the server and the event notification system. Widgets, such as the Dijits, are visible components such as buttons and editors. Many, but not all, of the WaveMaker widgets extend functionality from Dojo widgets. When they do extend Dijits, WaveMaker widgets often add numerous functions and behaviors that are not part of Dojo. Examples include controlling the read-only state, formatting display values for currency, and merging components, such as buttons with icons in them. Combined with the WaveMaker runtime layers, these enhancements make it easy to assemble rich clients using only properties. WaveMaker's select editor (wm.SelectMenu), for example, extends the Dojo Toolkit ComboBox (dijit.form.ComboBox) or the FilteringSelect (dijit.form.FilteringSelect) as needed. By default, a select menu has the Dojo FilteringSelect as its editor, but it will use ComboBox instead if the user is on a mobile device or the developer has cleared the RestrictValues property tick box.

A required select menu editor

Let's consider the case of disabling a submit button when the user has not made a required list selection. In Dojo, this is done using JavaScript code, and for an experienced Dojo developer, this is not difficult. For those who may primarily consider a dojo a martial arts studio, however, it is likely another matter altogether. Using the WaveMaker framework provided widgets, no code is required to set up this inter-connection. This is simply a matter of visually linking or binding the button's disabled property to the list's emptySelection property in the graphical binding dialog. Now the button will be disabled if the user has not made a selection in the grid's list of items. Logically, we can think of this as setting the disabled property to the value of the grid's emptySelection property, where emptySelection is true unless and until a row has been selected.

Where WaveMaker most notably varies from the Dojo way of things is the layout engine. WaveMaker handles the layout of container widgets using its own engine. Containers are those widgets that contain other widgets, such as panels, tabs, and dialogs.
This makes it easier for developers to arrange widgets in WaveMaker Studio. A result of this is that border, padding, and margin are set using properties on widgets, not by CSS.

Dojo made easy

Having the Dojo framework available to us makes web development easier, both when using the WaveMaker framework and when doing custom work. Dojo's modular and object-oriented functions, such as dojo.declare and dojo.inherited, for example, simplify creating custom components. The key takeaway here is that Dojo itself is available to you as a developer if you wish to use it directly. Many developers never need to utilize this capability, but it is available to you if you ever do wish to take advantage of it. Running the CRM Simple sample again, from either the console in the browser development tools or custom project page code, we could use Dojo's byId() function to get a div, for example, the main title label:

> dojo.byId("main_labelTitle")

In practice, the WaveMaker style of getting a DOM node via the component name, for example, main.labelTitle.domNode, is more practical and returns the same result. If a function or ability in Dojo is useful, the WaveMaker framework usually provides a wrapper of some sort for you. Just as often, the WaveMaker version is friendlier or otherwise easier to use in some way. For example, this.connect(), WaveMaker's version of dojo.connect(), tracks connections for you. This avoids the need for you to remember to call disconnect() to remove the reference added by every call to connect(). For more information about using Dojo functions in WaveMaker, see the Dojo framework page in the WaveMaker documentation at: http://dev.wavemaker.com/wiki/bin/wmdoc_6.5/Dojo+Framework.

Binding and events

Two solid examples of WaveMaker taking a powerful feature of Dojo and providing friendlier versions are topic notifications and event handling. The dojo.connect() function enables you to register a method to be called when something happens. In other words: "when X happens, please also do Y". Studio provides visual tooling for this in the events section of a component's properties. Buttons have an event drop-down menu for their click event. Asynchronous server call components (live variables and service variables) have tooled events for reviewing data just before the call is made and for the successful, and not so successful, returns from the call. These menus are populated with listings of likely components and, if appropriate, functions. Invoking other service calls, particularly when a server call depends on data from the results of some previous server call, and navigation calls to other layers and pages within the application, are easy examples of how WaveMaker's visual tooling of dojo.connect simplifies web development.

WaveMaker's binding dialog is a graphical interface on the topic subscription system. Here we are "binding" a live variable that returns rows from the lineitem table to be filtered by the data value of the orderid editor in the form on the new order page. The result of this binding is that when the value of the orderid editor changes, the value in the filter parameter of this live variable will be updated. An event indicating that the value of this orderid editor has changed is published when the data value changes. This live variable's filter is subscribed to that topic and can now update its value accordingly.
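To give a feel for this in code, here is a minimal sketch of page JavaScript using this.connect(). The page class name, the saveButton widget, and the saveData method are all hypothetical names for illustration, not Studio-generated code:

// A minimal sketch: wire a hypothetical button's onclick event to a
// hypothetical page method using WaveMaker's tracked connect().
dojo.declare("Main", wm.Page, {
    start: function() {
        // this.connect() records the connection for us, so there is no
        // need to call disconnect() later ourselves
        this.connect(this.saveButton, "onclick", this, "saveData");
    },
    saveData: function() {
        console.log("Saving...");
    }
});

In everyday use, the events section of the properties panel writes these connections for you; dropping down to code like this is only needed for custom behavior.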
Loading the client

Web applications start from index.html, and a WaveMaker application is no different. If we examine index.html of a WaveMaker application, we see the total content is less than 100 lines. We have some meta tags in the head, mostly for Internet Explorer (MSIE) and iOS support. In the body, there are more entries to help out with older versions of MSIE, including script tags to use Chrome Frame if we so choose. If we cut all that away, index.html is rather simple. In the head, we load the CSS containing the project's theme and define a few lines of style classes for wavemakerNode and _wm_loading:

<script>var wmThemeUrl = "/wavemaker/lib/wm/base/widget/themes/wm_default/theme.css";</script>
<style type="text/css">
  #wavemakerNode {
    height: 100%;
    overflow: hidden;
    position: relative;
  }
  #_wm_loading {
    text-align: center;
    margin: 25% 0px 25% 0px;
  }
</style>

Next we load the file config.js, which, as its name suggests, is about configuration. The following line of code is used to load the file:

<script type="text/javascript" src="config.js"></script>

Config.js defines the various settings, variables, and helper functions needed to initialize the application, such as the locale setting. Moving into the body tag of index.html, we find a div named wavemakerNode:

<div id="wavemakerNode">

The next div is the loader gif, which is given in the following code:

<div id="_wm_loading" style="z-index: 100;">
  <table style='width:100%;height: 100%;'><tr><td align='center'><img alt="Loading" src="/wavemaker/lib/boot/images/loader.gif" />&nbsp;&nbsp;Loading...</td></tr></table>
</div>

This is the standard spinner shown while the application is loading. With the loader gif now spinning, we begin the real work with runtimeLoader.js, as given in the following line of code:

<script type="text/javascript" src="/wavemaker/lib/runtimeLoader.js"></script>

When running a project from Studio, the client runtime is loaded from Studio via the /wavemaker path. Config.js and index.html are modified for deployment, while the client runtime is copied into the application's webapproot. runtimeLoader, as its name suggests, loads the WaveMaker runtime. With the runtime loaded, we can now load the top-level project.a.js file, which defines our application using the dojo.declare() method. The following line of code loads the file:

<script type="text/javascript" src="project.a.js"></script>

Finally, with our application class defined, we set up an instance of our application in wavemakerNode and run it. There are two modes for loading a WaveMaker application: debug and gzip mode. The debug mode is useful for debugging, as you would expect. The gzip mode is the default mode. The Test mode of the Run, Test, or Compile button in Studio re-deploys the active project and opens it in debug mode. This is the only difference between using Test and Run in Studio. The Test button adds ?debug to the URL of the browser window; the Run button does not. Any WaveMaker application can be loaded in debug mode by adding debug to the URL parameters. For example, to load the CRM Simple application from within WaveMaker in debug mode, use the URL http://crm_simple.localhost:8094/?debug. Detecting debug in the URL sets the djConfig.debugBoot flag, which alters the path used in runtimeLoader:

djConfig.debugBoot = location.search.indexOf("debug") >= 0;

Like a compiled program, debug mode preserves variable names and all the other details that optimization removes which we would want available to use when debugging.
There are two modes for loading a WaveMaker application: debug and gzip. Gzip mode is the default; debug mode is useful for debugging, as you would expect. The Test button of the Run, Test, or Compile button group in Studio re-deploys the active project and opens it in debug mode. This is the only difference between using Test and Run in Studio: the Test button adds ?debug to the URL of the browser window; the Run button does not. Any WaveMaker application can be loaded in debug mode by adding debug to the URL parameters. For example, to load the CRM Simple application from within WaveMaker in debug mode, use the URL http://crm_simple.localhost:8094/?debug. Detecting debug in the URL sets the djConfig.debugBoot flag, which alters the path used in runtimeLoader:

djConfig.debugBoot = location.search.indexOf("debug") >= 0;

Like a compiled program's debug build, debug mode preserves variable names and all the other details that optimization removes and that we want available when debugging; JavaScript, however, is never compiled into byte code or machine-specific instructions.

In gzip mode, on the other hand, the browser loads a few optimized packages containing all the source code in merged files. This reduces the number of files needed to load our application, which significantly improves loading time. These optimized packages are also minified. Minification removes whitespace and replaces variable names with short names, further reducing the volume of code to be parsed by the browser, and therefore further improving performance. The result is a significant reduction in both the number of requests needed and the number of bytes transferred to load an application. A stock application in gzip mode requires 22 to 24 requests to load some 300 KB to 400 KB of content, depending on the application. In debug mode, the same application transfers over 1.5 MB in more than 500 requests.

The index.html file, and, when security is enabled, login.html, are yours to edit. If you are comfortable doing so, you can customize these files, such as by adding additional script tags. In practice, you shouldn't need to customize index.html, as you have full control of the application loaded into the wavemakerNode. Also, upgrade scripts in future versions of WaveMaker may need to programmatically update index.html and login.html. Changes to the X-UA-Compatible meta tag are often required when support for newer versions of Internet Explorer becomes available, for example. These scripts can't possibly know about every customization you may make. Customization of index.html may cause these scripts to fail, and may require you to manually update these files. If you do encounter such a situation, simply use the index.html file from a project newly created in the new version as a template.

Springing into the server side

The WaveMaker server is a Java application running in a Java Virtual Machine (JVM). Like the client, it builds upon proven frameworks and libraries. In the case of the server, the foundational block is the SpringSource framework (http://www.springsource.org/), or simply the Spring framework. The Spring framework is the most popular enterprise Java development framework today, and for good reason. The server of a WaveMaker application is a Spring application that includes the WaveMaker common, json, and runtime modules.

More specifically, the WaveMaker server uses the Spring Web MVC framework to create a DispatcherServlet that delegates client requests to their handlers. WaveMaker uses only a handful of controllers, as we will see in the next section. The effective result is that the request URL is used to direct a service call to the correct service. The method value of the request is the name of the client-exposed function within the service to be called. In the case of overloaded functions, the signature of the params value is used to find the matching method. We will look at example requests and responses shortly. Behind this controller is not only the power of the Spring framework, but also a number of leading frameworks such as Hibernate and JAX-WS, and libraries such as log4j and Apache Commons. Here too, these libraries are available to you, both directly in any custom work you might do and indirectly as tooled features of Studio. As we are working with a Spring server, we will see Spring beans often as we examine the server-side configuration. One need not be familiar with Spring to reap its benefits when using custom Java in WaveMaker.
Spring makes it easy to get access to other beans from our Java code. For example, if our project has imported a database as MyDB, we could get access to the service, and any exposed functions in that service, using getServiceBean(). The following code illustrates the use of getServiceBean():

MyDB myDbSvc = (MyDB) RuntimeAccess.getInstance().getServiceBean("mydb");

We start by getting an instance of the WaveMaker runtime. From the returned runtime instance, we use the getServiceBean() method to get a service bean for our mydb database service. There are other ways we could have gained access to the service from our Java code; this one is pretty straightforward.

Starting from web.xml

Just as the client side starts with index.html, a Java servlet starts in WEB-INF with web.xml. A WaveMaker application's web.xml is a rather straightforward Spring MVC web.xml. You'll notice many servlet-mappings, a few listeners, and filters. Unlike index.html, web.xml is managed directly by Studio. If you need to add elements to the web-app context, add them to user-web.xml. The content of user-web.xml is merged into web.xml when generating the deployment package.

The most interesting entry is probably the contextConfigLocation of /WEB-INF/project-springapp.xml. project-springapp.xml is a Spring beans file. Immediately after the schema declaration is a series of resource imports. These imports include the services and entities that we create in Studio as we import databases and otherwise add services to our project. If you open project-spring.xml in WEB-INF, near the top of the file you'll see a comment noting that project-spring.xml is yours to edit. For experienced Spring users, this is the entry point to add any additional imports you may need. An example of such can be found at http://dev.wavemaker.com/wiki/bin/Spring. In that example, an additional XML file, ServerFileProcessor.xml, is used to enable component scanning on a package and to set some properties on those components. project-spring.xml is then used to import ServerFileProcessor.xml into the application context; a sketch of such an import follows at the end of this section. Many users of WaveMaker still think of Spring as the season between winter and summer. Such users do not need to think about these XML files. However, for those who are experienced with Java, the full power of the Spring framework is accessible to them.

Also in project-springapp.xml is a list of URL mappings. These mappings specify request URLs that require handling by the file controller. Gzipped resources, for example, require the Content-Encoding header to be set to gzip. This informs the browser that the content is gzip encoded and must be uncompressed before being parsed.

There are a few names that use ag in the server. WaveMaker Software, the company, was formerly known as ActiveGrid, and had a previous web development tool by the same name. The use of ag and com.activegrid stems back to the project's roots, first put down when the company was still known as ActiveGrid.

Closing out web.xml is the Acegi filter mapping. Acegi is the security module used in WaveMaker 6.5. Even when security is not enabled in an application, the Acegi filter mapping is included in web.xml. When security is not enabled in the project, an empty project-security.xml is used.
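As a minimal sketch of what such a custom import looks like in project-spring.xml, using the ServerFileProcessor.xml file name from the wiki example above (the surrounding bean definitions are omitted, and Spring's standard import element is used):

<!-- In WEB-INF/project-spring.xml: pull an additional bean
     definition file into the application context. -->
<import resource="ServerFileProcessor.xml"/>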
Client and server communication

Now that we've examined the client and the server, we need to better understand the communication between the two. WaveMaker almost exclusively uses the HTTP methods GET and POST. In HTTP, GET is used, as you might suspect even without ever having read RFC 2616 (https://tools.ietf.org/html/rfc2616), to request, or get, a specific resource. Unless installed as a native application on a mobile device, a WaveMaker web application is loaded via GET methods. From index.html and runtimeLoader.js to the user-defined pages and any images used on those pages, the application itself is loaded into the browser using GET.

All service calls, database reads and writes, or otherwise any invocations of Java service functions, on the other hand, are POSTs. The URL of these POST requests is always the service name with a .json extension. For example, calls to a Java service named userPrefSvc would always be to the URL /userPrefSvc.json. Inside the POST request's payload will be any required parameters, including the method of the service to be invoked. The response will be the response returned from that call. PUT methods are not used because we cannot, nor do we want to, know all possible WaveMaker server calls at "design time", while the project files are still open for writing in Studio. This pattern also avoids any URL length constraints, enabling lengthy datasets to be transferred while freeing up the URL to pass parameters such as page state.

Let's take a look at an example. If you want to follow along in your browser's console, this is the third request of three that occur when we select "Fog City Books" in the CRM Simple application, when running the application with the console open. The following is the request URL:

http://crm_simple.localhost:8094/services/runtimeService.json

The following is the request payload:

{"params":["custpurchaseDB","com.custpurchasedb.data.Lineitem",null,{"properties":["id","item"],"filters":["id.orderid=9"],"matchMode":"start","ignoreCase":false},{"maxResults":500,"firstResult":0}],"method":"read","id":251422}

The response is as follows:

{"dataSetSize":2,"result":[{"id":{"itemid":2,"orderid":9},"item":{"itemid":2,"itemname":"Kidnapped","price":12.99},"quantity":2},{"id":{"itemid":10,"orderid":9},"item":{"itemid":10,"itemname":"Gravitys Rainbow","price":11.99},"quantity":1}]}

As we expect, the request URL is to a service (in this case named runtimeService), with the .json extension. The runtime service is the built-in WaveMaker service for reading and writing with the Hibernate (http://www.hibernate.org) data models generated by importing a database. The security service and the WaveMaker service are the other built-in services used at runtime. The security service is used for security functions such as getUserName() and logout(). Note this does not include login, which is handled by Acegi. The WaveMaker service has functions such as getServerTimeOffset(), used to adjust for time zones, and remoteRESTCall(), used to proxy some web service calls.

How the runtime service functions is easy to understand by observation. Inside the request payload we have, as the URL suggested, a JavaScript Object Notation (JSON) structure. JSON (http://www.json.org/) is a lightweight data-interchange format regularly used in AJAX applications. Dissecting our example request from the top, the structure enclosed in the outermost braces looks like the following:

{"params":[…],"method":"read","id":251422}

We have three top-level name-value pairs in our request object: params, method, and id.
The id is 251422; the method is read; and the params value is an array, as indicated by the [] brackets:

["custpurchaseDB","com.custpurchasedb.data.Lineitem",null,{},{}]

In our case, we have an array of five values. The first is the database service name, custpurchaseDB. Next we have what appears to be the package and class name we will be reading from, not unlike FROM in a SQL query. After that, we have a null and two objects. JSON is friendly to human reading, and we could continue to unwrap the two objects in this request in a similar fashion; we will return to them when we discuss database services. For now, let's check out the response. At the top level, we have dataSetSize, the number of results, and the array of the results:

{"dataSetSize":2,"result":[]}

Inside our result array we have two objects:

[{"id":{"itemid":2,"orderid":9},"item":{"itemid":2,"itemname":"Kidnapped","price":12.99},"quantity":2},{"id":{"itemid":10,"orderid":9},"item":{"itemid":10,"itemname":"Gravitys Rainbow","price":11.99},"quantity":1}]

Our first item has the compound key of itemid 2 with orderid 9. This is the item Kidnapped, a book costing $12.99. The other object in our result array also has orderid 9, as we expect when reading line items from the selected order. This one is also a book, the item Gravity's Rainbow.
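In everyday WaveMaker development these requests are issued for you by live variables and service variables, but nothing stops you from reproducing one by hand to observe the wire format described above. The following is a minimal sketch using Dojo's xhrPost(); the trimmed payload and the id value are illustrative, not copied from the sample project:

// Issue a runtime service "read" by hand, purely to observe the
// request/response format described above.
var payload = {
  params: [
    "custpurchaseDB",                    // database service name
    "com.custpurchasedb.data.Lineitem",  // type to read
    null,
    { filters: ["id.orderid=9"] },       // filter object (trimmed)
    { maxResults: 500, firstResult: 0 }  // paging object
  ],
  method: "read",
  id: 1                                  // illustrative request id
};

dojo.xhrPost({
  url: "/services/runtimeService.json",
  postData: dojo.toJson(payload),
  headers: { "Content-Type": "application/json" },
  handleAs: "json",
  load: function (response) {
    // response.result holds the array of line items
    console.log(response.dataSetSize, response.result);
  }
});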
Types

To be more precise about the com.custpurchasedb.data.Lineitem parameter in our read request, it is actually the type name of the read request. WaveMaker projects define types, from primitive types such as Boolean to custom complex types such as Lineitem. In our runtime read example, com.custpurchasedb.data.Lineitem is both the package and class name of the imported Hibernate entity and the type name for the line item entity in the project.

Maintaining type information enables WaveMaker to ease a number of development issues. As the client knows the structure of the data it is getting from the server, it knows how to display that data with minimal developer configuration, if any. At design time, Studio uses type information in many areas to help us correctly configure our application. For example, when we set up a grid, type information enables Studio to present us with a list of possible column choices for the grid's dataset type. Likewise, when we add a form to the canvas for a database insert, it is type information that Studio uses to fill the form with appropriate editors. Lineitem is a project-wide type, as it is defined on the server side. In the process of compiling the project's Java service sources, WaveMaker defines system types for any type returned to the client by a client-facing function. To be added to the type system, a class must:

- Be public
- Define public getters and setters
- Be returned by a client-exposed function
- Have a service class that extends JavaServiceSuperClass or uses the @ExposeToClient annotation

WaveMaker 6.5.1 has a bug that prevents types from being generated as expected. Be certain to use 6.5.2 or a newer version to avoid this defect.

It is possible to create new project types by adding a Java service class to the project that only defines types. Following is an example that adds a new simple type called Record to the project. Our definition of Record consists of an integer ID and a string name. Note that there are two classes here. MyCustomTypes is the service class, containing a method that returns the type Record. As we will not be calling it, the function getNewRecord() need not do anything other than return a Record; creating a new default instance is an easy way to do this. The class Record is defined as an inner class. An inner class is a class defined within another class. In our case, Record is defined within MyCustomTypes:

// Java service class MyCustomTypes
package com.myco.types;

import com.wavemaker.runtime.javaservice.JavaServiceSuperClass;
import com.wavemaker.runtime.service.annotations.ExposeToClient;

public class MyCustomTypes extends JavaServiceSuperClass {
    public Record getNewRecord() {
        return new Record();
    }

    // Inner class Record
    public class Record {
        private int id;
        private String name;

        public int getId() {
            return id;
        }

        public void setId(int id) {
            this.id = id;
        }

        public String getName() {
            return this.name;
        }

        public void setName(String name) {
            this.name = name;
        }
    }
}

To add the preceding code to our WaveMaker project, we would add a Java service to the project, using the class name MyCustomTypes in the Package and Class Name editor of the New Java Service dialog. The preceding code extends JavaServiceSuperClass and uses the package com.myco.types.

A project can also have client-only types, using the type definition option from the advanced section of the Studio insert menu. Type definitions are useful when we want to be able to pass structured data around within the client, but we will not be sending or receiving that type to or from the server. For example, we may want to have an application-scoped wm.Variable storing a collection of current record selection information. This would enable us to keep track of a number of state items across all pages. Communication with the server is likely to use only a few of those items at a time, so no such structure exists on the server side. Using wm.Variable enables us to bind each record ID without using code.

The insert type definition menu brings up the Type Definition Generator dialog. The generator takes JSON input and is pre-populated with a sample type. The sample type defines a person object, albeit an unusual one, with a name, an array of numbers for age, a Boolean (hasFoot), and a related person object, friend; a sketch of such an input structure follows below. Replace the sample type with your own JSON structure. Be certain to change the type name to something meaningful. After generating the type, you'll immediately see the newly minted type in type selectors, such as the type field of wm.Variable. Studio is pretty good at recognizing type changes. If for some reason Studio does not recognize a type change, the easiest thing to do is to get Studio to re-read the owning object. If a wm.Variable fails to show a newly added field of a type in its properties, change the type of the variable from the modified type to some other type and then back again.
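The generator's pre-populated sample, as described, might look something like the following JSON. This is a reconstruction for illustration; the exact sample text varies between Studio versions:

{
  "name": "someName",
  "age": [1, 2, 3],
  "hasFoot": true,
  "friend": {
    "name": "anotherName",
    "age": [4, 5, 6],
    "hasFoot": true
  }
}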
Studio is also an application

One of the more complex WaveMaker applications is Studio itself. That's right, Studio is an application built out of WaveMaker widgets and using the WaveMaker runtime and server. Being the large, complex application we use to build applications, it can sometimes be difficult to understand where the runtime ends and Studio begins. With that said, Studio remains a treasure trove of examples and ideas to explore.

Let's open a finder, explorer, shell, or however you prefer to view the file system of a WaveMaker Studio installation, and look in the studio folder. If you've installed WaveMaker to c:\program files\WaveMaker\6.5.3.Release, the default on Windows, we're looking at c:\program files\WaveMaker\6.5.3.Release\studio. This is the webapproot of the Studio project.

Among the files, we've already discussed index.html when loading the client. The type definition for the project's types is types.js; the types.js definition is how the client learns of the server's Java types.

Moving on to the directories alphabetically, we start with the app folder. The app folder can be considered a large utility folder these days. The branding folder (http://dev.wavemaker.com/wiki/bin/wmdoc_6.5/Branding) is a sample of the branding feature, for when you want to easily re-brand applications for different customers. The build folder contains the optimized build files we discussed when loading our application in gzip mode; this build folder is for Studio itself. The images folder is, as we would hope, where images are kept. The content of the doc in jsdoc is pretty old; use jsref at the online wiki (http://dev.wavemaker.com/wiki/bin/wmjsref_6.5/WebHome) for a client API reference instead. The language folder contains the National Language Support (NLS) files used to localize Studio into other languages. In 6.5.x, there is a Japanese (ja) and a Spanish (es) directory in addition to the English (en) default, thanks to the efforts of the WaveMaker community and a corporate partner. For more on internationalizing applications with WaveMaker, navigate to http://dev.wavemaker.com/wiki/bin/wmdoc_6.5/Localization#HLocalizingtheClientLanguage.

The lib folder is very interesting, so let's wrap up this top level before we dig into that one. The META-INF folder contains artifacts from the WaveMaker Maven build process that probably should be removed for 6.5.2. The pages folder contains the page definitions for Studio's pages. These pages can be opened in Studio. They can also be a treasure trove of tips and tricks if you see something when using Studio that you don't know how to do in your own application. Be careful, however, as some pages are old and use outdated classes or techniques. Other constructs are only used by Studio and aren't tooled, meaning some pages use components that can only be created in code. The other major difference from a project's pages folder is that Studio page folders do not contain the same number of files; they do not have the optimized pageName.a.js file, for example.

The services folder contains the Service Method Definition (SMD) files for Studio's services. These are summaries of a project's exposed services, one file per service, used at runtime by the client. Each callable function, its input parameters, and its return type are defined. Finally, WEB-INF we have discussed already when we examined web.xml; in Studio's case, replace project with studio in the file names. Also under WEB-INF, we have classes and lib. The classes folder contains Java class files and additional XML files; these files are on the classpath. WEB-INF\lib contains JAR files. Studio requires significantly more JAR files than those automatically added to projects created by Studio.

Now let's get back to the lib folder. Astute readers of our walk-through of index.html likely noticed the references to /wavemaker/lib in src tags for things such as runtimeLoader. You might have also noticed that this folder is not present in the project, and wondered how these tags could not fail. As a quick look at the URL of Studio running in a browser will demonstrate, /wavemaker is Studio's context. This means the JavaScript runtime is only copied in as part of generating the deployment package; the lib folder is loaded directly from Studio's context when you test or run an application from Studio using the Run or Test button. runtimeLoader.js we encountered following index.html, as it is the start of the loading of client modules.
manifest.js is an entry point into the loading process. The boot folder contains pre-initialization, such as the spinning loader image. Next we have another build folder. This one is used by applications and contains all possible build files. Not every JavaScript module is packaged up into an optimized build file; some modules are so specific or rarely used that they are best loaded individually. Otherwise, if there's a build package available to applications, they use it. Dojo lives in the dojo folder; I hope you don't find it surprising to find dijit, dojo, and dojox folders in there. The github folder provides the library path github for JS Beautifier (http://jsbeautifier.org/). The images in this images folder include a copy of Silk Icons (http://www.famfamfam.com/lab/icons/silk/), a great Creative Commons licensed PNG icon set.

This brings us to wm. We definitely saved the most interesting folder for our last stop on this tour, for in lib/wm we have manifest.js, the top level of module loading when using debug mode in the runtime loader. lib/wm/base is the top level of the WaveMaker module space used at runtime. This means in lib/wm/base we have the WaveMaker components and widgets folders. These two folders contain the sets of classes most commonly used by WaveMaker developers writing custom JavaScript in a project. This also means we will be back in these folders again too.

Summary

In this article, we reviewed the WaveMaker architecture. We started with some context of what we mean by "client" and "server" in the context of this book. We then proceeded to dig into the client and the server. We reviewed how both build upon leading frameworks, the Dojo Toolkit and the Spring framework in particular. We examined the running of an application from the network point of view and how the client and server communicate throughout. We dissected a JSON request to the runtime service and encountered project types. We also learned about both project and client type definitions. We ended by revisiting the file system; this time, however, we walked through a Studio installation. Studio is also a WaveMaker application.

In the next article, we'll get comfortable with Studio as a visual tool. We'll look at everything from the properties panels to the built-in source code editors.

Resources for Article:

Further resources on this subject:
- Binding Web Services in ESB—Web Services Gateway [Article]
- Service Oriented JBI: Invoking External Web Services from ServiceMix [Article]
- Web Services in Apache OFBiz [Article]
Getting Started with Adobe Premiere Pro CS6 Hotshot
Packt
11 Jul 2013
14 min read
(For more resources related to this topic, see here.)

Getting the story right!

This is basic housekeeping, and ignoring it only makes your own editing life more frustrating. So take a deep breath, think of calm blue oceans, and begin by getting this project organized. First you need to set up the Timeline correctly, and then you will create a short storyboard of the interview; again, you will do this by focusing on the beginning, middle, and end of the story. Always start this way, as a good story needs these elements to make sense. For frame-accurate editing it's advisable to use the keyboard as much as possible, although some actions will need to be performed with the mouse. Towards the end of this task you will cover some new ground as you add and expand Timeline tracks in preparation for the tasks ahead.

Prepare for Lift Off

Once you have completed all the preparations detailed in the Mission Checklist section, you are ready to go. Launch Premiere Pro CS6 in the usual way and then proceed to the first task.

Engage Thrusters

First you will open the project template, save it as a new file, and then create a three-clip sequence; the rough assembly of your story. Once done, perform the following steps:

1. When the Recent Projects splash screen appears, select Hotshots Template – Montage.
2. Wait for the project to finish loading and save this as Hotshots – Interview Project.
3. Close any sequences open on the Timeline.
4. Select the Editing Optimized Workspace.
5. Select the Project panel and open the Video bin without creating a separate window.

If you would like Premiere Pro to always open a bin without creating a separate window, select Edit | Preferences | General from the menu. When the General Options window displays, locate the Bins option area and change the Double-Click option to Open in Place.

6. Import all eight video files into the Video folder inside the Project 3 folder.
7. Create a new sequence. Pick any settings at random; you will correct this in the next step.
8. Rename the sequence as Project 3.
9. Match the Timeline settings with any clip from the Video bin, and then delete the clip from the Timeline.
10. Set the Project panel as the active panel and switch to List View if it is not already displayed.
11. Create the basic elements of a short story for this scene using only three of the available clips in the Video bin. To do this, hold down the Ctrl or command key and click on the clips listed next. Make sure you click on them in the same order as they are presented here:
   - Intro_Shot.avi
   - Two_Shot.avi
   - Exit_Shot.avi
12. Ensure the Timeline indicator is at the start of the Timeline and then click on the Automate to Sequence icon.
13. When the Automate To Sequence window appears, change Ordering to Selection Order and leave Placement as the default (Sequentially). Uncheck the Apply Default Audio Transition, Apply Default Video Transition, and Ignore Audio checkboxes. Click on OK or press Enter on the keyboard to complete this action.
14. Right-click on the Video 1 track and select Add Tracks from the context menu. When the Add Tracks window appears, set the number of video tracks to be added as 2 and the number of audio tracks to be added as 0. Click on OK or press Enter to confirm these changes.
15. Dial open the Audio 1 track (hint – small triangle next to Audio 1), then expand the Audio 1 track by placing the cursor at the bottom of the Audio 1 area and clicking and dragging it downwards. Stop before the Master audio track disappears below the bottom of the Timeline panel.
The Master audio track is used to control the output of all the audio tracks present on the Timeline; this is especially useful when you come to prepare your timeline for exporting to DVD. The Master audio track also allows you to view the left and right audio channels of your project. More details on the use of the Master audio track can be found in the Premiere Pro reference guide, which can be downloaded from http://helpx.adobe.com/pdf/premiere_pro_reference.pdf.

Make sure the Timeline panel is active and zoom in to show all the clips present (hint – press backslash). You should end this section with a Timeline that looks something like the following screenshot. Save your project (press Ctrl + S or command + S) before moving on to the next task.

Objective Complete - Mini Debriefing

How did you do? Review the shortcuts listed next. Did you remember them all? In this task you should have automatically matched up the Timeline to the clips with one drag-and-drop plus a delete. You should have then sent three clips from the Project panel to the Timeline using the Automate to Sequence function. Finally, you should have added two new video tracks and expanded the Audio 1 track. Keyboard shortcuts covered in this task are as follows:

- \ (backslash): Zoom the Timeline to show all populated clips
- Ctrl or command + double-click: Open a bin without creating a separate Project panel (also see the tip after step 5 in the Engage Thrusters section)
- Ctrl or command + N: Create a new sequence
- Ctrl or command + / (forward slash): Create a new bin in the Project panel
- Ctrl or command + I: Open the Import window
- Shift + 1: Set the Project panel as active
- Shift + 3: Set the Timeline as active

Classified Intel

In this project, the Automate to Timeline function is being used to create a rough assembly of three clips. These are placed on the Timeline in the order in which you clicked on them in the project bin. This is known as the selection order, and it allows the Automate to Timeline function to ignore the clips' relative locations in the project bin. This is not a practical workflow if you have too many clips in your Project panel (how would you remember the selection order of twenty clips?). However, for a small number of clips, this is a practical workflow to quickly and easily send a rough draft of your story to the Timeline in just a few clicks. If you remember nothing else from this book, always remember how to correctly use Automate To Timeline!

Extracting audio fat

Raw material from every interview ever filmed will have lulls and pauses, and some stuttering. People aren't perfect, and time spent trying to get lines and timing just right can lead to an unfortunate waste of filming time. As this performance is not live, you, the all-seeing editor, have the power to cut those distracting lulls and pauses, keeping the pace on beat and the audience's attention on track. In this task you will move through the Timeline, cutting out some of the audio fat using Premiere Pro's Extract function, and to keep this frame accurate, you will use as many keyboard shortcuts as possible.

Engage Thrusters

You will now use the Extract function to remove "dead" audio areas from the Timeline. Perform the following steps:

1. Set the Timeline panel as active, then play the Timeline back by pressing the L key once. Make a mental note of the silences that occur in the first clip (Intro_Shot.avi).
2. Return the Timeline indicator to the start of the Timeline using the Home key.
3. Zoom in on the Timeline by pressing the + (plus) key on the main keyboard area.
Do this until your Timeline looks something like the screenshot just after the following tip:

To zoom in and out of the Timeline, use the + (plus) and - (minus) keys in the main keyboard area, not the ones in the number pad area. Pressing the plus or minus key in the number pad area allows you to enter an exact number of frames into whichever tool is currently active.

4. You should be able to clearly see the first area of silence, starting at around 06;09 on the Timeline. Use the J, K, and L keyboard shortcuts to place the Timeline indicator at this point.
5. Press the I key to set an In point here, then move the Timeline indicator to the end of the silence (around 08;17), and press the O key to set an Out point.
6. Press the # (hash) key on your keyboard to remove the marked section of silence using Premiere Pro's Extract function.

Important information on Sync Locking tracks

The above step will only work if you have the Sync Lock icons toggled on for both the Video 1 and Audio 1 tracks. The Sync Lock icon controls which Timeline tracks will be altered when using a function such as Extract. For example, if the Sync Lock icon were toggled off for the Audio 1 track, then only the video would be extracted, which is counterproductive to what you are trying to achieve in this task! By default, each new project should open with the Sync Lock icon toggled on for all video and audio tracks that already exist on the Timeline, and for those added at a later point in the project. More information on Sync Lock can be found in the Premiere Pro reference guide (tinyurl.com/cz5fvh9).

7. Repeat steps 5 and 6 to remove silences from the following Timeline areas (you should judge these points for yourself rather than slavishly following the suggestions given next):
   i. Set an In point at 07;11 and an Out point at 08;10.
   ii. Press # (hash).
   iii. Set an In point at 11;05 and an Out point at 12;13.
   iv. Press # (hash).
8. Play back the Timeline to make sure you haven't extracted away too much audio and clipped the end of a sentence. Use the Trim tool to restore the full sentence if you have. You may have spotted other silences on the Timeline; for the moment, leave them alone. You will deal with these using other methods later in this project.
9. Save the project before moving on to the next section.

Objective Complete - Mini Debriefing

At the end of this section you should have successfully removed three areas of silence from the Intro_Shot.avi clip. You did this using the Extract function, an elegant way of removing unwanted areas from your clips. You may also have refreshed your working knowledge of the Trim tool. If this still feels a little alien to you, don't worry; you will have a chance to practice trimming skills later in this project.

Classified Intel

Extract is another cunningly simple function that does exactly what it says: it extracts a section of footage from the Timeline, and then closes the gap created by this action. In one step it replicates a razor cut and a ripple delete.

Creating a J-cut (away)

One of the most common video techniques used in interviews and documentaries (not to mention a number of films) is called a J-cut. This describes cutting away some of the video while leaving the audio beneath intact. The deleted video area is then replaced with alternative footage. This creates a voice-over effect that allows for a seamless transfer between the alternative viewpoints and the original speaker.
In this task you will create a J-cut by replacing the video at the start of Intro_Shot.avi, leaving the voice of the newsperson intact and replacing his image with cutaway shots of what he is describing. You will make full use of four-point edits.

Engage Thrusters

Create J-cuts and cutaway shots using workflows you should now be familiar with. Perform the following steps to do so:

1. Send the Cutaways_1.avi clip from the Project panel to the Source Monitor.
2. In the Source Monitor, create an In point at 00;00 and an Out point just before the shot changes (around 04;24).
3. Switch to the Timeline and send the Timeline indicator to the start of the Timeline (00;00). Create an In point here.
4. Use a keyboard shortcut of your choice to identify the point just before the newsperson mentions the "Local village shop" (hint – roughly at 06;09). Create an Out point here.
5. You want to create a J-cut, which means protecting the audio track that is already on the Timeline. To do this, click once on the Audio 1 track header so it turns dark gray.
6. Switch back to the Source Monitor and send the marked Cutaways_1.avi clip to the Timeline using the Overwrite function (hint – press the '.' (period) key).
7. When the Fit Clip window appears, select Change Clip Speed (Fit to Fill), and click on OK or press Enter on the keyboard.

The village scene cutaway shot should now appear on Video 1, but Audio 1 should retain the newsperson's dialog. The inserted village scene clip will also have slowed slightly to match what's being said by the newsperson.

8. Repeat steps 2 to 7 to place the Cutaways_1.avi shots of the village shop and the village church, and a shot of the village pub, on the Timeline to match the newsperson's dialog. The following are some suggestions on times, but try to do this step first without looking too closely at them:
   - For the village shop cutaway, set the Source Monitor In point at 05;00 and Out point at 09;24. Set the Timeline In point at 06;10 and Out point at 07;13. Switch back to the Source Monitor, send the clip in the Overwrite mode, and set Change Clip Speed to Fit to Fill.
   - For the village church cutaway, set the Source Monitor In point at 10;00 and Out point at 14;24. Set the Timeline In point at 07;14 and Out point at 09;03. Switch back to the Source Monitor, send the clip in the Overwrite mode, and set Change Clip Speed to Fit to Fill.
   - For the pub cutaway, send Reconstruction_1.avi to the Source Monitor. Set the Source Monitor In point at 04;11 and Out point at 04;17. Set the Timeline In point at 09;04 and Out point at 12;00. Switch back to the Source Monitor, send the clip in the Overwrite mode, and set Change Clip Speed to Fit to Fill.

The last cutaway shot here is part of the reconstruction reel and has been used because your camera person was unable (or forgot) to film a cutaway shot of the pub. This does sometimes happen, and then it's down to you, the editor in charge, to get the piece on air with as few errors as possible. To do this you may find yourself scavenging footage from any of the other clips. In this case you have used just seven frames of Reconstruction_1.avi, but using the Premiere Pro feature Fit to Fill, you are able to match the clip to the duration of the dialog, saving your camera person from a production-meeting dressing down!

9. Review your edit decisions and use the Trim tool or the Undo command to alter edit points that you feel need adjustment.
As always, being an editor is about experimentation, so don't be afraid to try something out of the box; you never know where it might lead.

10. Once you are happy with your edit decisions, render any clips on the Timeline that display a red line above them. You should end up with a Timeline that looks something like the following screenshot; save your project before moving on to the next section.

Objective Complete - Mini Debriefing

In this task you have learned how to piece together cutaway shots to match the voice-over, creating an effective J-cut, as seen in the way the dialog seamlessly blends between the pub cutaway shot and the news reporter finishing his last sentence. You also learned how to scavenge source material from other reels in order to find the necessary shot to match the dialog.

Classified Intel

The last set of time suggestions given in this task allows the pub cutaway shot to run over the top of the newsperson saying "And now, much to the surprise…". This is an editorial decision you can make on whether or not this cutaway should run over the dialog. It is simply a matter of taste, but you are the editor, and the final decision is yours!

In this article, we learned how to extract audio fat and create a J-cut.

Resources for Article:

Further resources on this subject:
- Responsive Design with Media Queries [Article]
- Creating a Custom HUD [Article]
- Top features you'll want to know about [Article]
Using jQuery and jQueryUI Widget Factory plugins with RequireJS
Packt
18 Jun 2013
5 min read
(For more resources related to this topic, see here.)

How to do it...

We must declare the jquery alias name within our Require.js configuration file:

require.config({
    // 3rd party script alias names
    paths: {
        // Core Libraries
        // --------------
        // jQuery
        "jquery": "libs/jquery",
        // Plugins
        // -------
        "somePlugin": "libs/plugins/somePlugin"
    }
});

If a jQuery plugin does not register itself as AMD compatible, we must also create a Require.js shim configuration to make sure Require.js loads jQuery before the jQuery plugin:

shim: {
    // Twitter Bootstrap plugins depend on jQuery
    "bootstrap": ["jquery"]
}

We will now be able to dynamically load a jQuery plugin with the require() method:

// Dynamically loads a jQuery plugin using the require() method
require(["somePlugin"], function() {
    // The callback function is executed after the plugin is loaded
});

We will also be able to list a jQuery plugin as a dependency to another module:

// Sample file
// -----------
// The define method is passed a dependency array and a callback function
define(["jquery", "somePlugin"], function ($) {
    // Wraps all logic inside of a jQuery.ready event
    $(function() {
    });
});

When using a jQueryUI Widget Factory plugin, we create Require.js path names for both the jQueryUI Widget Factory and the jQueryUI Widget Factory plugin:

"jqueryui": "libs/jqueryui",
"selectBoxIt": "libs/plugins/selectBoxIt"

Next, create a shim configuration property:

// The jQueryUI Widget Factory depends on jQuery
"jqueryui": ["jquery"],
// The selectBoxIt plugin depends on both jQuery and the jQueryUI Widget Factory
"selectBoxIt": ["jqueryui"]

We will now be able to dynamically load the jQueryUI Widget Factory plugin with the require() method:

// Dynamically loads the jQueryUI Widget Factory plugin, selectBoxIt, using the require() method
require(["selectBoxIt"], function() {
    // The callback function is executed after selectBoxIt.js
    // (and all of its dependencies) have been loaded
});

We will also be able to list the jQueryUI Widget Factory plugin as a dependency to another module:

// Sample file
// -----------
// The define method is passed a dependency array and a callback function
define(["jquery", "selectBoxIt"], function ($) {
    // Wraps all logic inside of a jQuery.ready event
    $(function() {
    });
});
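Putting the recipe's pieces together, a complete configuration for the selectBoxIt example might look like the following. This is a consolidation of the snippets above, using the recipe's own assumed library paths; the #fruits element id in the usage example is illustrative:

// main.js: one require.config combining the paths and shim entries
require.config({
    paths: {
        "jquery": "libs/jquery",
        "jqueryui": "libs/jqueryui",
        "selectBoxIt": "libs/plugins/selectBoxIt"
    },
    shim: {
        // The jQueryUI Widget Factory depends on jQuery
        "jqueryui": ["jquery"],
        // selectBoxIt depends on the jQueryUI Widget Factory
        "selectBoxIt": ["jqueryui"]
    }
});

// Loading the plugin now pulls in jquery, then jqueryui, then selectBoxIt
require(["jquery", "selectBoxIt"], function ($) {
    $(function () {
        // Turn a select element with the id "fruits" into a
        // SelectBoxIt dropdown
        $("#fruits").selectBoxIt();
    });
});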
How it works...

Luckily for us, jQuery adheres to the AMD specification and registers itself as a named AMD module. If you are confused about how/why it does that, let's take a look at the jQuery source:

// Expose jQuery as an AMD module
if ( typeof define === "function" && define.amd && define.amd.jQuery ) {
    define( "jquery", [], function () { return jQuery; } );
}

jQuery first checks to make sure there is a global define() function available on the page. Next, jQuery checks whether the define function has an amd property, which all AMD loaders that adhere to the AMD API should have. Remember that in JavaScript, functions are first-class objects and can contain properties. Finally, jQuery checks to see if the amd property contains a jQuery property, which should only be there for AMD loaders that understand the issues with loading multiple versions of jQuery in a page that might all call the define() function. Essentially, jQuery is checking that an AMD script loader is on the page, and then registering itself as a named AMD module (jquery). Since jQuery exports itself as the named AMD module jquery, you must use this exact name when setting the path configuration to your own version of jQuery, or Require.js will throw an error.

If a jQuery plugin registers itself as an anonymous AMD module and jQuery is also listed with the proper lowercased jquery alias name within your Require.js configuration file, using the plugin with the require() and define() methods will work as you expect. Unfortunately, most jQuery plugins are not AMD compatible; they do not wrap themselves in an optional define() method and list jquery as a dependency. To get around this issue, we can use the Require.js shim object configuration, like we have seen before, to tell Require.js that a file depends on jQuery. The shim configuration is a great solution for jQuery plugins that do not register themselves as AMD modules.

Unfortunately, unlike jQuery, jQueryUI does not currently register itself as a named AMD module, which means that plugin authors who use the jQueryUI Widget Factory cannot provide AMD compatibility. Since the jQueryUI Widget Factory is not AMD compatible, we must use a workaround involving the paths and shim configuration objects to properly define the plugin as an AMD module.

There's more...

You will most likely always register your own files as anonymous AMD modules, but jQuery is a special case. Registering itself as a named AMD module allows other third-party libraries that depend on jQuery, such as jQuery plugins, to become AMD compatible by calling the define() method themselves and using the community-agreed-upon module name, jquery, to list jQuery as a dependency.

Summary

This article demonstrated how to use jQuery and jQueryUI Widget Factory plugins with Require.js.

Resources for Article:

Further resources on this subject:
- So, what is KineticJS? [Article]
- HTML5 Presentations - creating our initial presentation [Article]
- Tips & Tricks for Ext JS 3.x [Article]