
How-To Tutorials - Web Development

So, what is EaselJS?

Packt
18 Apr 2013
7 min read
EaselJS is part of the CreateJS suite, a JavaScript library for building rich, interactive experiences such as web applications and web-based games that run in desktop and mobile browsers. The standard HTML5 canvas API can be very hard for beginners, especially if you need to animate and draw many objects. EaselJS greatly simplifies application development on the HTML5 canvas, using a syntax and an architecture very similar to the ActionScript 3.0 language. As a result, Flash/Flex developers will immediately feel at home, but it is easy to learn even if you have never opened Flash in your life. CreateJS is currently supported by Adobe, AOL, and Microsoft, and it is developed by Grant Skinner, an internationally recognized leader in the field of rich Internet application development.

Thanks to EaselJS, you can easily manage many types of graphic elements (vector shapes, bitmaps, spritesheets, text, and HTML elements). It also supports touch events, animations, and many other features that let you quickly develop cross-platform HTML5 games and applications with a look, feel, and behavior very similar to native iOS and Android applications.

Following are five reasons to choose EaselJS and HTML5 canvas to build your applications:

Cross-platform: Applications built with this technology are supported by:
    Desktop browsers such as Chrome, Safari, Firefox, Opera, and IE9+
    iPhone, iPad, and iPod 4+ (iOS 3.2+)
    Android smartphones and tablets (OS 2.1+)
    BlackBerry browser (7.0 and 10.0+)
    Every HTML5 browser (see http://caniuse.com/canvas for more information)
The same application can run on different devices and resolutions.

Easy integration: EaselJS applications run in the browser and can finally be seen by almost every desktop and mobile user without any plugin installed. The HTML5 canvas element behaves just like any other HTML element: it can overlap other elements or become part of an existing HTML page, so your canvas application can fill the entire browser area or just a small part of a page. You can create amazing image galleries for your sites, product configurators, microsites, games, and interactive banners, and replicate many features that used to be built with Adobe Flash or Apache Flex.

One source code: A single codebase can be used to create a responsive application that works on almost all devices and resolutions. If you have ever created a liquid or fluid layout using HTML, Flash, or Flex, you already know this concept. You can also adapt the UI and change behaviors according to the size of the device being used.

No creativity limits: As in Flash, you can forget HTML DOM compatibility issues. When you display a graphic element using EaselJS, you can be sure it will be placed at the same position in every browser, desktop and mobile. The exceptions are text, because every browser uses a different font renderer and there may be minor differences between them, and Internet Explorer 8 and lower, which do not support the HTML5 canvas.
Furthermore, the CreateJS suite includes a number of additional tools that help developers and designers create amazing stuff:

TweenJS: A useful tween engine to create runtime animations
PreloadJS: To load assets and create nice preloaders
Zoë: To convert SWF (Adobe Flash's native web format) into spritesheets and JSON for EaselJS
SoundJS: A library to play sounds (this topic is not covered in this book)
CreateJS Toolkit for Flash CS6: To export Flash timeline animations in an EaselJS-compatible format

Freedom: Developers can now create and publish games and applications while skipping the App Store submission process. Of course, the performance of HTML5 applications is not comparable to that of native applications, but they can still be an alternative solution for many needs. From a business perspective, it's a great opportunity, because it is now possible to avoid the Apple guidelines that usually don't allow publishing applications that are primarily marketing material or advertisements, duplicated applications, applications that are not very useful, or simply websites bundled as applications. Users get a touch experience directly while navigating a website, without having to download, install, and open a native application. Developers can also use PhoneGap (http://www.phonegap.com) and many other technologies to convert their HTML applications into native applications for iOS, Android, Windows Phone, BlackBerry, Bada, or WebOS.

After this introduction, the book guides you through the process of downloading, installing, and configuring EaselJS on your local machine (that part of the book is not reproduced in this article). The book then continues with the traditional "Hello World" example, shown next.

Quick start: creating your first canvas application

Now we'll see how to create our first HTML5 canvas application with EaselJS.

Step 1: creating the HTML template

Take a look at the following code, which represents the boilerplate we'll use:

    <!DOCTYPE html>
    <html>
    <head>
      <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
      <title>EaselJS Starter: Template Page</title>
      <script src="lib/easeljs-0.6.0.min.js"></script>
      <script>
        // Your code here
        function init() {
        }
      </script>
    </head>
    <body onload="init();" style="background-color:#ccc">
      <h1>EaselJS Starter: Template page</h1>
      <canvas id="mycanvas" width="960" height="450" style="background-color:#fff"></canvas>
    </body>
    </html>

The following are the most important points of the previous code:

Define an HTML5 <canvas> element with a width of 960 pixels and a height of 450 pixels. This represents the drawing area of your EaselJS application.
When the page is completely loaded, the onload event is fired and the init() function is called. The <script> block is the place where you add your code, but you should always wait for the onload event before doing anything.
Set the <body> and <canvas> background CSS styles.

The result is a white container inside an HTML page.

Step 2: creating a "Hello World" example

Now replace the init() function with the following code (a small animated variation of this example appears at the end of this article):

    function init() {
      var canvas = document.getElementById("mycanvas");
      var stage = new createjs.Stage(canvas);
      var text = new createjs.Text("Hello World!", "36px Arial", "#777");
      stage.addChild(text);
      text.x = 360;
      text.y = 200;
      stage.update();
    }

Congrats! You have created your first canvas application!
The previous code displays a text field at the center of the canvas. The following are the most important steps:

Use the getElementById method to get a canvas reference.
In order to use EaselJS, create a Stage object, passing the canvas reference as a parameter.
Create a new Text object and add it to the stage.
Assign values for the x and y coordinates in order to see the text at the center of the stage.
Call the update() method on the stage to render it to the canvas.

The Stage object represents the root level of the display list, which is the main container for all the other graphic elements. For now you only need to know that every graphic element must be added to the stage, and that every time you need to update your content you have to refresh the stage by calling its update() method.

Summary

After the "Hello World" example, the book helps you learn the most important EaselJS topics with practical examples, technical information, and a lot of tips and tricks, building a small interactive advertising web application. By the end of the book you will be able to draw graphic primitives and text, load and preload images, handle mouse events, add animations and spritesheets, use TweenJS, PreloadJS, and Zoë, and optimize your code for desktop and mobile devices.

This article explained what EaselJS actually is, what you can do with it, and why it's so great. It also showed how to create your first HTML5 canvas application, "Hello World".

Resources for Article:

Further resources on this subject:
HTML5: Developing Rich Media Applications using Canvas [Article]
HTML5 Games Development: Using Local Storage to Store Game Data [Article]
HTML5: Getting Started with Paths and Text [Article]
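As promised earlier, here is a small optional variation of the "Hello World" example. It is a sketch that is not part of the original article: it assumes the same mycanvas element and EaselJS 0.6.0, and uses createjs.Ticker to move the text across the stage on every tick instead of drawing it once.

    function init() {
      var canvas = document.getElementById("mycanvas");
      var stage = new createjs.Stage(canvas);
      var text = new createjs.Text("Hello World!", "36px Arial", "#777");
      text.x = 0;
      text.y = 200;
      stage.addChild(text);

      // Run the tick handler roughly 30 times per second.
      createjs.Ticker.setFPS(30);
      createjs.Ticker.addEventListener("tick", function () {
        // Move the text a few pixels per frame and wrap around the canvas.
        text.x = (text.x + 4) % canvas.width;
        // Re-render the display list after every change.
        stage.update();
      });
    }

The key point is the same as in the article's example: nothing appears on the canvas until stage.update() is called, which is why the Ticker handler ends with that call.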

The NGINX HTTP Server

Packt
18 Apr 2013
28 min read
NGINX's architecture

NGINX consists of a single master process and multiple worker processes. Each of these is single-threaded and designed to handle thousands of connections simultaneously. The worker process is where most of the action takes place, as this is the component that handles client requests. NGINX makes use of the operating system's event mechanism to respond quickly to these requests.

The NGINX master process is responsible for reading the configuration, handling sockets, spawning workers, opening log files, and compiling embedded Perl scripts. The master process is the one that responds to administrative requests via signals.

The NGINX worker process runs in a tight event loop to handle incoming connections. Each NGINX module is built into the worker, so that any request processing, filtering, handling of proxy connections, and much more is done within the worker process. Due to this worker model, the operating system can handle each process separately and schedule the processes to run optimally on each processor core. If there are any operations that would block a worker, such as disk I/O, more workers than cores can be configured to handle the load.

There are also a small number of helper processes that the NGINX master process spawns to handle dedicated tasks. Among these are the cache loader and cache manager processes. The cache loader is responsible for preparing the metadata for worker processes to use the cache. The cache manager process is responsible for checking cache items and expiring invalid ones.

NGINX is built in a modular fashion. The master process provides the foundation upon which each module may perform its function. Each protocol and handler is implemented as its own module. The individual modules are chained together into a pipeline to handle connections and process requests. After a request is handled, it is then passed on to a series of filters, in which the response is processed. One of these filters is responsible for processing subrequests, one of NGINX's most powerful features.

Subrequests are how NGINX can return the results of a request that differs from the URI that the client sent. Depending on the configuration, they may be multiply nested and call other subrequests. Filters can collect the responses from multiple subrequests and combine them into one response to the client. The response is then finalized and sent to the client. Along the way, multiple modules come into play. See http://www.aosabook.org/en/nginx.html for a detailed explanation of NGINX internals. We will be exploring the http module and a few helper modules in the remainder of this article.

The HTTP core module

The http module is NGINX's central module, which handles all interactions with clients over HTTP. We will have a look at its directives in the rest of this section, again divided by type.

The server

The server directive starts a new context. We have already seen examples of its usage throughout the book so far. One aspect that has not yet been examined in depth is the concept of a default server. A default server in NGINX is the first server defined in a particular configuration with the same listen IP address and port as another server. A default server may also be denoted by the default_server parameter to the listen directive.
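As a minimal illustration (the address and hostnames here are placeholders, not taken from the book), the default_server parameter marks a server block explicitly, so that it no longer depends on being the first one defined:

    server {
        # explicitly mark this block as the default for 127.0.0.1:80
        listen 127.0.0.1:80 default_server;
        server_name catchall.example.com;
    }

    server {
        listen 127.0.0.1:80;
        server_name www.example.com;
    }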
The default server is useful to define a set of common directives that will then be reused for subsequent servers listening on the same IP address and port:

    server {
        listen 127.0.0.1:80;
        server_name default.example.com;
        server_name_in_redirect on;
    }

    server {
        listen 127.0.0.1:80;
        server_name www.example.com;
    }

In this example, the www.example.com server will have the server_name_in_redirect directive set to on, just like the default.example.com server. Note that this would also work if both servers had no listen directive, since they would still both match the same IP address and port number (that of the default value for listen, which is *:80). Inheritance, though, is not guaranteed: there are only a few directives that are inherited, and which ones they are changes over time.

A better use for the default server is to handle any request that comes in on that IP address and port and does not have a Host header. If you do not want the default server to handle requests without a Host header, it is possible to define an empty server_name directive. This server will then match those requests.

    server {
        server_name "";
    }

The following table summarizes the directives relating to server:

Table: HTTP server directives

port_in_redirect: Determines whether or not the port will be specified in a redirect issued by NGINX.
server: Creates a new configuration context, defining a virtual host. The listen directive specifies the IP address(es) and port(s); the server_name directive lists the Host header values that this context matches.
server_name: Configures the names that a virtual host may respond to.
server_name_in_redirect: Activates using the first value of the server_name directive in any redirect issued by NGINX within this context.
server_tokens: Disables sending the NGINX version string in error messages and the Server response header (default value is on).

Logging

NGINX has a very flexible logging model. Each level of configuration may have an access log. In addition, more than one access log may be specified per level, each with a different log_format. The log_format directive allows you to specify exactly what will be logged, and needs to be defined within the http section.

The path to the log file itself may contain variables, so that you can build a dynamic configuration. The following example describes how this can be put into practice:

    http {
        log_format vhost '$host $remote_addr - $remote_user [$time_local] '
            '"$request" $status $body_bytes_sent '
            '"$http_referer" "$http_user_agent"';
        log_format downloads '$time_iso8601 $host $remote_addr '
            '"$request" $status $body_bytes_sent $request_time';
        open_log_file_cache max=1000 inactive=60s;
        access_log logs/access.log;

        server {
            server_name ~^(www\.)?(.+)$;
            access_log logs/combined.log vhost;
            access_log logs/$2/access.log;

            location /downloads {
                access_log logs/downloads.log downloads;
            }
        }
    }

The following table describes the directives used in the preceding code:

Table: HTTP logging directives

access_log: Describes where and how access logs are to be written. The first parameter is a path to the file where the logs are to be stored. Variables may be used in constructing the path. The special value off disables the access log. An optional second parameter indicates the log_format that will be used to write the logs. If no second parameter is configured, the predefined combined format is used. An optional third parameter indicates the size of the buffer if write buffering should be used to record the logs.
If write buffering is used, this size cannot exceed the size of the atomic disk write for that filesystem. If this third parameter is gzip, then the buffered logs will be compressed on-the-fly, provided that the nginx binary was built with the zlib library. A final flush parameter indicates the maximum length of time buffered log data may remain in memory before being flushed to disk.
log_format: Specifies which fields should appear in the log file and what format they should take. See the next table for a description of the log-specific variables.
log_not_found: Disables reporting of 404 errors in the error log (default value is on).
log_subrequest: Enables logging of subrequests in the access log (default value is off).
open_log_file_cache: Stores a cache of open file descriptors used in access_logs with a variable in the path. The parameters used are:
    max: The maximum number of file descriptors present in the cache
    inactive: NGINX will wait this amount of time for something to be written to this log before its file descriptor is closed
    min_uses: The file descriptor has to be used this amount of times within the inactive period in order to remain open
    valid: NGINX will check this often to see if the file descriptor still matches a file with the same name
    off: Disables the cache

In the following example, log entries will be compressed at a gzip level of 4. The buffer size is the default of 64 KB and will be flushed to disk at least every minute:

    access_log /var/log/nginx/access.log.gz combined gzip=4 flush=1m;

Note that when specifying gzip, the log_format parameter is not optional. The default combined log_format is constructed like this:

    log_format combined '$remote_addr - $remote_user [$time_local] '
        '"$request" $status $body_bytes_sent '
        '"$http_referer" "$http_user_agent"';

As you can see, line breaks may be used to improve readability. They do not affect the log_format itself. Any variables may be used in the log_format directive. The variables in the following table that are marked with an asterisk (*) are specific to logging and may only be used in the log_format directive. The others may be used elsewhere in the configuration as well.

Table: Log format variables

$body_bytes_sent: The number of bytes sent to the client, excluding the response header.
$bytes_sent: The number of bytes sent to the client.
$connection: A serial number, used to identify unique connections.
$connection_requests: The number of requests made through a particular connection.
$msec: The time in seconds, with millisecond resolution.
$pipe (*): Indicates if the request was pipelined (p) or not (.).
$request_length (*): The length of the request, including the HTTP method, URI, HTTP protocol, header, and request body.
$request_time: The request processing time, with millisecond resolution, from the first byte received from the client to the last byte sent to the client.
$status: The response status.
$time_iso8601 (*): Local time in ISO 8601 format.
$time_local (*): Local time in common log format (%d/%b/%Y:%H:%M:%S %z).

In this section, we have focused solely on access_log and how it can be configured. You can also configure NGINX to log errors.

Finding files

In order for NGINX to respond to a request, it passes it to a content handler, determined by the configuration of the location directive. The unconditional content handlers are tried first: perl, proxy_pass, flv, mp4, and so on.
If none of these is a match, the request is passed to one of the following, in order: random index, index, autoindex, gzip_static, static. Requests with a trailing slash are handled by one of the index handlers. If gzip is not activated, then the static module handles the request. How these modules find the appropriate file or directory on the filesystem is determined by a combination of certain directives.

The root directive is best defined in a default server directive, or at least outside of a specific location directive, so that it will be valid for the whole server:

    server {
        root /home/customer/html;

        location / {
            index index.html index.htm;
        }

        location /downloads {
            autoindex on;
        }
    }

In the preceding example, any files to be served are found under the root /home/customer/html. If the client entered just the domain name, NGINX will try to serve index.html. If that file does not exist, then NGINX will serve index.htm. When a user enters the /downloads URI in their browser, they will be presented with a directory listing in HTML format. This makes it easy for users to access sites hosting software that they would like to download. NGINX will automatically rewrite the URI of a directory so that the trailing slash is present, and then issue an HTTP redirect.

NGINX appends the URI to the root to find the file to deliver to the client. If this file does not exist, the client receives a 404 Not Found error message. If you don't want the error message to be returned to the client, one alternative is to try to deliver a file from different filesystem locations, falling back to a generic page if none of those options are available. The try_files directive can be used as follows:

    location / {
        try_files $uri $uri/ backups/$uri /generic-not-found.html;
    }

As a security precaution, NGINX can check the path to a file it's about to deliver, and if part of the path to the file contains a symbolic link, it returns an error message to the client:

    server {
        root /home/customer/html;
        disable_symlinks if_not_owner from=$document_root;
    }

In the preceding example, NGINX will return a "Permission Denied" error if a symlink is found after /home/customer/html and that symlink and the file it points to do not both belong to the same user ID. The following table summarizes these directives:

Table: HTTP file-path directives

disable_symlinks: Determines if NGINX should perform a symbolic link check on the path to a file before delivering it to the client. The following parameters are recognized:
    off: Disables checking for symlinks (default)
    on: If any part of a path is a symlink, access is denied
    if_not_owner: If any part of a path contains a symlink in which the link and the referent have different owners, access to the file is denied
    from=part: When specified, the path up to part is not checked for symlinks; everything afterward is checked according to either the on or if_not_owner parameter
root: Sets the path to the document root. Files are found by appending the URI to the value of this directive.
try_files: Tests the existence of the files given as parameters. If none of the previous files are found, the last entry is used as a fallback, so ensure that this path or named location exists, or is set to return a status code indicated by =<status code>.

Name resolution

If logical names instead of IP addresses are used in an upstream or *_pass directive, NGINX will by default use the operating system's resolver to get the IP address, which is what it really needs to connect to that server.
This will happen only once, the first time the upstream is requested, and won't work at all if a variable is used in the *_pass directive. It is possible, though, to configure a separate resolver for NGINX to use. By doing this, you can override the TTL returned by DNS, as well as use variables in the *_pass directives.

    server {
        resolver 192.168.100.2 valid=300s;
    }

Table: Name resolution directives

resolver: Configures one or more name servers to be used to resolve upstream server names into IP addresses. An optional valid parameter overrides the TTL of the domain name record.

In order to get NGINX to resolve an IP address anew, place the logical name into a variable. When NGINX resolves that variable, it implicitly makes a DNS look-up to find the IP address. For this to work, a resolver directive must be configured:

    server {
        resolver 192.168.100.2;

        location / {
            set $backend upstream.example.com;
            proxy_pass http://$backend;
        }
    }

Of course, by relying on DNS to find an upstream, you are dependent on the resolver always being available. When the resolver is not reachable, a gateway error occurs. In order to make the client wait time as short as possible, the resolver_timeout parameter should be set low. The gateway error can then be handled by an error_page designed for that purpose.

    server {
        resolver 192.168.100.2;
        resolver_timeout 3s;
        error_page 504 /gateway-timeout.html;

        location / {
            proxy_pass http://upstream.example.com;
        }
    }

Client interaction

There are a number of ways in which NGINX can interact with clients. This can range from attributes of the connection itself (IP address, timeouts, keepalive, and so on) to content negotiation headers. The directives listed in the following table describe how to set various headers and response codes to get clients to request the correct page or serve up that page from their own cache:

Table: HTTP client interaction directives

default_type: Sets the default MIME type of a response. This comes into play if the MIME type of the file cannot be matched to one of those specified by the types directive.
error_page: Defines a URI to be served when an error-level response code is encountered. Adding an = parameter allows the response code to be changed. If the argument to this parameter is left empty, the response code will be taken from the URI, which must in this case be served by an upstream server of some sort.
etag: Disables automatically generating the ETag response header for static resources (default is on).
if_modified_since: Controls how the modification time of a response is compared to the value of the If-Modified-Since request header:
    off: The If-Modified-Since header is ignored
    exact: An exact match is made (default)
    before: The modification time of the response is less than or equal to the value of the If-Modified-Since header
ignore_invalid_headers: Disables ignoring headers with invalid names (default is on). A valid name is composed of ASCII letters, numbers, the hyphen, and possibly the underscore (controlled by the underscores_in_headers directive).
merge_slashes: Disables the removal of multiple slashes. The default value of on means that NGINX will compress two or more / characters into one.
recursive_error_pages: Enables doing more than one redirect using the error_page directive (default is off).
types: Sets up a map of MIME types to file name extensions. NGINX ships with a conf/mime.types file that contains most MIME type mappings.
Using include to load this file should be sufficient for most purposes.
underscores_in_headers: Enables the use of the underscore character in client request headers. If left at the default value off, evaluation of such headers is subject to the value of the ignore_invalid_headers directive.

The error_page directive is one of NGINX's most flexible. Using this directive, we may serve any page when an error condition presents itself. This page could be on the local machine, but could also be a dynamic page produced by an application server, and could even be a page on a completely different site.

    http {
        # a generic error page to handle any server-level errors
        error_page 500 501 502 503 504 share/examples/nginx/50x.html;

        server {
            server_name www.example.com;
            root /home/customer/html;

            # for any files not found, the page located at
            # /home/customer/html/404.html will be delivered
            error_page 404 /404.html;

            location / {
                # any server-level errors for this host will be directed
                # to a custom application handler
                error_page 500 501 502 503 504 = @error_handler;
            }

            location /microsite {
                # for any non-existent files under the /microsite URI,
                # the client will be shown a foreign page
                error_page 404 http://microsite.example.com/404.html;
            }

            # the named location containing the custom error handler
            location @error_handler {
                # we set the default type here to ensure the browser
                # displays the error page correctly
                default_type text/html;
                proxy_pass http://127.0.0.1:8080;
            }
        }
    }

Using limits to prevent abuse

We build and host websites because we want users to visit them. We want our websites always to be available for legitimate access. This means that we may have to take measures to limit access by abusive users. We may define "abusive" to mean anything from one request per second to a number of connections from the same IP address. Abuse can also take the form of a DDoS (distributed denial-of-service) attack, where bots running on multiple machines around the world all try to access the site as many times as possible at the same time. In this section, we will explore methods to counter each type of abuse to ensure that our websites remain available.

First, let's take a look at the different configuration directives that will help us achieve our goal:

Table: HTTP limits directives

limit_conn: Specifies a shared memory zone (configured with limit_conn_zone) and the maximum number of connections that are allowed per key value.
limit_conn_log_level: When NGINX limits a connection due to the limit_conn directive, this directive specifies at which log level that limitation is reported.
limit_conn_zone: Specifies the key to be limited in limit_conn as the first parameter. The second parameter, zone, indicates the name of the shared memory zone used to store the key and the current number of connections per key, and the size of that zone (name:size).
limit_rate: Limits the rate (in bytes per second) at which clients can download content. The rate limit works on a connection level, meaning that a single client could increase their throughput by opening multiple connections.
limit_rate_after: Starts the limit_rate after this number of bytes have been transferred.
limit_req: Sets a limit with bursting capability on the number of requests for a specific key in a shared memory store (configured with limit_req_zone). The burst can be specified with the second parameter. If there shouldn't be a delay in between requests up to the burst, a third parameter, nodelay, needs to be configured.
limit_req_log_level: When NGINX limits the number of requests due to the limit_req directive, this directive specifies at which log level that limitation is reported. A delay is logged at a level one less than the one indicated here.
limit_req_zone: Specifies the key to be limited in limit_req as the first parameter. The second parameter, zone, indicates the name of the shared memory zone used to store the key and the current number of requests per key, and the size of that zone (name:size). The third parameter, rate, configures the number of requests per second (r/s) or per minute (r/m) before the limit is imposed.
max_ranges: Sets the maximum number of ranges allowed in a byte-range request. Specifying 0 disables byte-range support.

Here we limit access to 10 connections per unique IP address. This should be enough for normal browsing, as modern browsers open two to three connections per host. Keep in mind, though, that any users behind a proxy will all appear to come from the same address. So observe the logs for error code 503 (Service Unavailable), meaning that this limit has come into effect:

    http {
        limit_conn_zone $binary_remote_addr zone=connections:10m;
        limit_conn_log_level notice;

        server {
            limit_conn connections 10;
        }
    }

Limiting access based on a rate looks almost the same, but works a bit differently. When limiting how many pages per unit of time a user may request, NGINX will insert a delay after the first page request, up to a burst. This may or may not be what you want, so NGINX offers the possibility to remove this delay with the nodelay parameter:

    http {
        limit_req_zone $binary_remote_addr zone=requests:10m rate=1r/s;
        limit_req_log_level warn;

        server {
            limit_req zone=requests burst=10 nodelay;
        }
    }

Using $binary_remote_addr

We use the $binary_remote_addr variable in the preceding example to know exactly how much space storing an IP address will take. This variable takes 32 bytes on 32-bit platforms and 64 bytes on 64-bit platforms, so the 10m zone we configured previously is capable of holding up to 320,000 states on 32-bit platforms or 160,000 states on 64-bit platforms.

We can also limit the bandwidth per client. This way we can ensure that a few clients don't take up all the available bandwidth. One caveat, though: the limit_rate directive works on a connection basis.
A single client that is allowed to open multiple connections will still be able to get around this limit:

    location /downloads {
        limit_rate 500k;
    }

Alternatively, we can allow a kind of bursting to freely download smaller files, but make sure that larger ones are limited:

    location /downloads {
        limit_rate_after 1m;
        limit_rate 500k;
    }

Combining these different rate limitations enables us to create a configuration that is very flexible as to how and where clients are limited:

    http {
        limit_conn_zone $binary_remote_addr zone=ips:10m;
        limit_conn_zone $server_name zone=servers:10m;
        limit_req_zone $binary_remote_addr zone=requests:10m rate=1r/s;
        limit_conn_log_level notice;
        limit_req_log_level warn;
        reset_timedout_connection on;

        server {
            # these limits apply to the whole virtual server
            limit_conn ips 10;
            # only 1000 simultaneous connections to the same server_name
            limit_conn servers 1000;

            location /search {
                # here we want only the /search URL to be rate-limited
                limit_req zone=requests burst=3 nodelay;
            }

            location /downloads {
                # using limit_conn to ensure that each client is
                # bandwidth-limited with no getting around it
                limit_conn ips 1;
                limit_rate_after 1m;
                limit_rate 500k;
            }
        }
    }

Restricting access

In the previous section, we explored ways to limit abusive access to websites running under NGINX. Now we will take a look at ways to restrict access to a whole website or to certain parts of it. Access restriction can take two forms here: restricting to a certain set of IP addresses, or restricting to a certain set of users. These two methods can also be combined to satisfy requirements that some users can access the website either from a certain set of IP addresses or if they are able to authenticate with a valid username and password.

The following directives will help us achieve these goals:

Table: HTTP access module directives

allow: Allows access from this IP address, network, or all.
auth_basic: Enables authentication using HTTP Basic Authentication. The parameter string is used as the realm name. If the special value off is used, the auth_basic value of the parent configuration level is negated.
auth_basic_user_file: Indicates the location of a file of username:password:comment tuples used to authenticate users. The password field needs to be encrypted with the crypt algorithm. The comment field is optional.
deny: Denies access from this IP address, network, or all.
satisfy: Allows access if all or any of the preceding directives grant access. The default value all indicates that a user must come from a specific network address and enter the correct password.

To restrict access to clients coming from a certain set of IP addresses, the allow and deny directives can be used as follows:

    location /stats {
        allow 127.0.0.1;
        deny all;
    }

This configuration will allow access to the /stats URI from the localhost only. To restrict access to authenticated users, the auth_basic and auth_basic_user_file directives are used as follows:

    server {
        server_name restricted.example.com;
        auth_basic "restricted";
        auth_basic_user_file conf/htpasswd;
    }

Any user wanting to access restricted.example.com would need to provide credentials matching those in the htpasswd file located in the conf directory of NGINX's root. The entries in the htpasswd file can be generated using any available tool that uses the standard UNIX crypt() function.
For example, the following Ruby script will generate a file of the appropriate format:

    #!/usr/bin/env ruby
    # setup the command-line options
    require 'optparse'

    OptionParser.new do |o|
      o.on('-f FILE') { |file| $file = file }
      o.on('-u', "--username USER") { |u| $user = u }
      o.on('-p', "--password PASS") { |p| $pass = p }
      o.on('-c', "--comment COMM (optional)") { |c| $comm = c }
      o.on('-h') { puts o; exit }
      o.parse!
      if $user.nil? or $pass.nil?
        puts o; exit
      end
    end

    # initialize an array of ASCII characters to be used for the salt
    ascii = ('a'..'z').to_a + ('A'..'Z').to_a + ('0'..'9').to_a + [ ".", "/" ]

    $lines = []

    begin
      # read in the current http auth file
      File.open($file) do |f|
        f.lines.each { |l| $lines << l }
      end
    rescue Errno::ENOENT
      # if the file doesn't exist (first use), initialize the array
      $lines = ["#{$user}:#{$pass}\n"]
    end

    # remove the user from the current list, since this is the one we're editing
    $lines.map! do |line|
      unless line =~ /#{$user}:/
        line
      end
    end

    # generate a crypt()ed password
    pass = $pass.crypt(ascii[rand(64)] + ascii[rand(64)])

    # if there's a comment, insert it
    if $comm
      $lines << "#{$user}:#{pass}:#{$comm}\n"
    else
      $lines << "#{$user}:#{pass}\n"
    end

    # write out the new file, creating it if necessary
    File.open($file, File::RDWR|File::CREAT) do |f|
      $lines.each { |l| f << l }
    end

Save this file as http_auth_basic.rb and give it a filename (-f), a user (-u), and a password (-p), and it will generate entries appropriate for use in NGINX's auth_basic_user_file directive:

    $ ./http_auth_basic.rb -f htpasswd -u testuser -p 123456

To handle scenarios where a username and password should only be entered if the client is not coming from a certain set of IP addresses, NGINX has the satisfy directive. The any parameter is used here for this either/or scenario:

    server {
        server_name intranet.example.com;

        location / {
            auth_basic "intranet: please login";
            auth_basic_user_file conf/htpasswd-intranet;
            allow 192.168.40.0/24;
            allow 192.168.50.0/24;
            deny all;
            satisfy any;
        }
    }

If, instead, the requirement is for a configuration in which the user must come from a certain IP address and provide authentication, the all parameter is the default. So we omit the satisfy directive itself and include only allow, deny, auth_basic, and auth_basic_user_file:

    server {
        server_name stage.example.com;

        location / {
            auth_basic "staging server";
            auth_basic_user_file conf/htpasswd-stage;
            allow 192.168.40.0/24;
            allow 192.168.50.0/24;
            deny all;
        }
    }

Streaming media files

NGINX is capable of serving certain video media types. The flv and mp4 modules, included in the base distribution, can perform what is called pseudo-streaming. This means that NGINX will seek to a certain location in the video file, as indicated by the start request parameter.

In order to use the pseudo-streaming capabilities, the corresponding module needs to be included at compile time: --with-http_flv_module for Flash Video (FLV) files and/or --with-http_mp4_module for H.264/AAC files. The following directives will then become available for configuration:

Table: HTTP streaming directives

flv: Activates the flv module for this location.
mp4: Activates the mp4 module for this location.
mp4_buffer_size: Sets the initial buffer size for delivering MP4 files.
mp4_max_buffer_size: Sets the maximum size of the buffer used to process MP4 metadata.
Activating FLV pseudo-streaming for a location is as simple as including the flv keyword:

    location /videos {
        flv;
    }

There are more options for MP4 pseudo-streaming, as the H.264 format includes metadata that needs to be parsed. Seeking is available once the "moov atom" has been parsed by the player, so to optimize performance, ensure that the metadata is at the beginning of the file. If an error message such as the following shows up in the logs, the mp4_max_buffer_size needs to be increased:

    mp4 moov atom is too large

mp4_max_buffer_size can be increased as follows:

    location /videos {
        mp4;
        mp4_buffer_size 1m;
        mp4_max_buffer_size 20m;
    }

Predefined variables

NGINX makes constructing configurations based on the values of variables easy. Not only can you instantiate your own variables by using the set or map directives, but there are also predefined variables used within NGINX. They are optimized for quick evaluation and the values are cached for the lifetime of a request. You can use any of them as a key in an if statement, or pass them on to a proxy. A number of them may prove useful if you define your own log file format. If you try to redefine any of them, though, you will get an error message as follows:

    <timestamp> [emerg] <master pid>#0: the duplicate "<variable_name>" variable in <path-to-configuration-file>:<line-number>

They are also not made for macro expansion in the configuration; they are mostly used at run time.

Summary

In this article, we have explored a number of directives used to make NGINX serve files over HTTP. Not only does the http module provide this functionality, but there are also a number of helper modules that are essential to the normal operation of NGINX. These helper modules are enabled by default. Combining the directives of these various modules enables us to build a configuration that meets our needs.

We explored how NGINX finds files based on the URI requested. We examined how different directives control how the HTTP server interacts with the client, and how the error_page directive can be used to serve a number of needs. Limiting access based on bandwidth usage, request rate, and number of connections is all possible. We saw, too, how we can restrict access based either on IP address or by requiring authentication. We explored how to use NGINX's logging capabilities to capture just the information we want. Pseudo-streaming was examined briefly, as well. NGINX provides us with a number of variables that we can use to construct our configurations.

Resources for Article:

Further resources on this subject:
Nginx HTTP Server FAQs [Article]
Nginx Web Services: Configuration and Implementation [Article]
Using Nginx as a Reverse Proxy [Article]

Learning to Fly with Force.com

Packt
17 Apr 2013
20 min read
What is cloud computing?

If you have been in the IT industry for some time, you probably know what "cloud" means. For the rest, it is used as a metaphor for the worldwide network, or the Internet. Computing normally indicates the use of computer hardware and software. Combining these two terms, we get a simple definition: the use of computer resources over the Internet, as a service. In other words, when computing is delegated to resources available over the Internet, we get what is called cloud computing. As Wikipedia defines it:

Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet).

Still confused? A simple example will help clarify it. Say you are managing the IT department of an organization, where you are responsible for purchasing hardware and software (licenses) for your employees and making sure they have the right resources to do their jobs. Whenever there is a new hire, you need to go through all the purchase formalities once again to get your user the necessary resources. Soon this turns out to be a nightmare of managing all your software licenses! Now, what if you could find an alternative where you host an application on the Web, which your users can access through their browsers and interact with? You are freed from maintaining individual licenses and from maintaining high-end hardware on the user machines. Voila, we just discovered cloud computing!

Cloud computing is the logical conclusion drawn from observing the drawbacks of in-house solutions. The trend is now picking up and is quickly replacing on-premise software application delivery models, which are accompanied by the high costs of managing data centers, hardware, and software. All that users pay for is the quantum of the services that they use. That is why it is sometimes also known as utility-based computing, as payment is based on resource usage. Chances are that even before you ever heard of this term, you had been using it unknowingly. Have you ever used hosted e-mail services such as Yahoo, Hotmail, or Gmail, where you accessed all of their services through the browser instead of an e-mail client on your computer? That is a typical example of cloud computing. Anything that is offered as a service (aaS) is usually considered in the realm of cloud computing. Everything being in the cloud means no hardware, no software, and so no maintenance, and that is the biggest advantage.

The types of services most prominently delivered on the cloud are as follows:

Infrastructure as a service (IaaS)
Platform as a service (PaaS)
Software as a service (SaaS)

Infrastructure as a service (IaaS)

Sometimes referred to as hardware as a service, infrastructure as a service offers IT infrastructure, which includes servers, routers, storage, firewalls, computing resources, and so on, in physical or virtualized form as a service. Users can subscribe to these services and pay on the basis of need and usage. The key player in this domain is Amazon.com, with EC2 and S3 as examples of typical IaaS. Elastic Compute Cloud (EC2) is a web service that provides resizable computing capacity in the cloud. Computing resources can be scaled up or down within minutes, allowing users to pay for the actual capacity being used.
Similarly, S3 is an online storage web service offered by Amazon, which provides 99.999999999 percent durability and 99.99 percent availability of objects over a given year, and stores arbitrary objects (computer files) up to 5 terabytes in size!

Platform as a service (PaaS)

PaaS provides the infrastructure for the development of software applications. Accessed over the cloud, it sits between IaaS and SaaS, where it hides the complexities of dealing with the underlying hardware and software. It is an application-centric approach that allows developers to focus more on business applications rather than infrastructure-level issues. Developers no longer have to worry about server upgrades, scalability, load balancing, service availability, and other infrastructure hassles, as these are delegated to the platform vendors. PaaS allows development of custom applications by providing the appropriate building blocks and the necessary infrastructure as a service.

An excellent example in this category is the Force.com platform, which is a game changer in the aaS space, especially in the PaaS domain. It exposes a proprietary application development platform, which is woven around a relational database. It stands at a higher level than another key player in this domain, Google App Engine, which supports scalable web application development in Java and Python on the appropriate application server stack, but does not provide proprietary components or building blocks as robust as Force.com's. Another popular choice (or perhaps not) is Microsoft's application platform, called Windows Azure, which can be used to build websites (developed in ASP.NET, PHP, or Node.js), provision virtual machines, and provide cloud services (containers of hosted applications).

A limitation of applications built on these platforms is the quota limits, that is, the strategy to prohibit monopolization of the shared resources in the multitenant environment. Some developers see this as a restriction that only lets them build applications with limited capability, but we see it as an opportunity to build highly efficient solutions that work within governor limits while still maintaining the sanctity of the business process. Specifically for the Force.com platform, some people consider the shortage of skilled resources a possible limitation, but we think the learning curve on this platform is short: an experienced developer can pick up the proprietary languages pretty quickly, with an average ramp-up time spanning anywhere from 15 to 30 days!

Software as a service (SaaS)

At the opposite end from IaaS is SaaS. Business applications are offered as services over the Internet to users who don't have to go through complex custom application development and implementation cycles. They also don't invest upfront in IT infrastructure or maintain their software with regular upgrades. All this is taken care of by the SaaS vendors. These business applications normally provide customization capabilities to accommodate specific business needs, such as user interfaces, business workflows, and so on. Some good examples in this category are the Salesforce.com CRM system and the Google Apps services.

What is Force.com?

Force.com is a natural progression from Salesforce.com, which started as a sales force automation system offered as a service (SaaS). The need to go beyond the initially offered customizable CRM application and develop custom solutions resulted in a radical shift of the cloud delivery model from SaaS to PaaS.
The technology that powers Salesforce CRM, whose design fulfills all the prerequisites of being a cloud application, is now available for developing enterprise-level applications. An independent study of the Force.com platform concluded that, compared to a traditional Java-based application development platform, development with the Force.com platform is almost five times faster, with about a 40 percent smaller overall project cost and better quality, due to rapid prototyping during requirement gathering (thanks to the declarative aspect of Force.com development) and less testing due to proven code re-use.

What empowers Force.com?

Why is Force.com application development so successful? Primarily because of its key architectural features, discussed in the following sections.

Multitenancy

Multitenancy is a concept that is the opposite of single-tenancy. In cloud computing jargon, a customer or an organization is referred to as a tenant. The various downsides and cost inefficiencies of single-tenant models are overcome by the multitenant model. A multitenant application caters to multiple organizations, each working in its own isolated virtual environment, called an org, and sharing a single physical instance and version of the application hosted on the Force.com infrastructure. It is isolated because, although the infrastructure is shared, every customer's data, customizations, and code remain secure and insulated from other customers.

Multitenant applications run on a single physical instance and version of the application, providing the same robust infrastructure to all their customers. This also means freedom from upfront costs, ongoing upgrades, and maintenance costs. The test methods written by customers on their respective orgs ensure more than 75 percent code coverage and thus help Salesforce.com in regression testing of Force.com upgrades, releases, and patches. The same is difficult to even visualize with in-house software application development.

Metadata

What drives the multitenant applications on Force.com? Nothing else but the metadata-driven architecture of the platform! Think about the following:

The platform allows all tenants to coexist at the same time
Tenants can extend the standard common object model without affecting others
Tenants' data is kept isolated from others in a shared database
The platform customizes the interface and business logic without disrupting the services for others
The platform's codebase can be upgraded to offer new features without affecting the tenants' customizations
The platform scales up with rising demands and new customers

To meet all the listed challenges, Force.com has been built upon a metadata-driven architecture, where the runtime engine generates application components from metadata. All customizations to the standard platform for each tenant are stored in the form of metadata, thus keeping the core Force.com application and the client customizations distinctly separate and making it possible to upgrade the core without affecting the metadata. The core Force.com application comprises the application data and the metadata describing the base application; together with the tenant customization metadata, these form three layers sitting on top of each other in a common database, with the runtime engine interpreting all of these and rendering the final output in the client browser.
As metadata is a virtual representation of the application components and of customizations to the standard platform, the statically compiled Force.com runtime engine is highly optimized for dynamic metadata access and uses advanced caching techniques to produce remarkable application response times.

Understanding the Force.com stack

A white paper giving an excellent explanation of the Force.com stack has been published. It describes the various layers of technologies and services that make up the platform. We will also cover it here briefly, layer by layer.

Infrastructure as a service

Infrastructure is the first layer of the stack, on top of which the other services function. It acts as the foundation for securely and reliably delivering both the cloud applications developed by customers and the core Salesforce CRM applications. It powers more than 200 million transactions per day for more than 1.5 million subscribers. The highly managed data centers provide unparalleled redundancy with near-real-time replication, world-class security at the physical, network, host, data transmission, and database levels, and an excellent design to scale both vertically and horizontally.

Database as a service

The powerful and reliable data persistence layer in the Force.com stack is known as the Force.com database. It sits on top of the infrastructure and provides the majority of the Force.com platform capabilities. The declarative web interface allows users to create objects and fields, generating the native application UI around them. Users can also define relationships between objects, create validation rules to ensure data integrity, track history on certain fields, create formula fields to logically derive new data values, and set up fine-grained security access with point-and-click operations, all without writing a single line of code or even worrying about database backup, tuning, upgrade, and scalability issues!

Compared with a relational database, it is similar in the sense that objects and fields are analogous to tables and columns, and Force.com relationships are similar to the referential integrity constraints in a relational DB. But unlike physically separate tables with dedicated storage, Force.com objects are maintained as a set of metadata interpreted on the fly by the runtime engine, and all of the application data is stored in a small set of large database tables. This data is represented as virtual records based on the interpretation of the tenants' customizations stored as metadata.

Integration as a service

Integration as a service utilizes the underlying Force.com database layer and provides the platform's integration capabilities through open-standards-based web services APIs. In today's world, most organizations have applications developed on disparate platforms, which have to work in conjunction to correctly represent and support their internal business processes. Customers' existing applications can connect with Force.com through SOAP or REST web services to access data and create mashups that combine data from multiple sources. The Force.com platform also allows native applications to integrate with third-party web services through callouts, so that information from external systems can be included in an organization's business processes.
These integration capabilities of the platform, exposed through its APIs (for example, the Bulk API, Chatter API, Metadata API, Apex REST API, Apex SOAP API, Streaming API, and so on), can be used by developers to build custom integration solutions that both produce and consume web services. Accordingly, they have been leveraged by many third parties, such as Informatica, Cast Iron, and Talend, to create prepackaged connectors for applications and systems such as Outlook, Lotus Notes, SAP, Oracle Financials, and so on. They also allow clouds such as Facebook, Google, and Amazon to talk to each other and build useful mashups. The integration ability is key to developing mobile applications for various device platforms, which rely solely on the web services exposed by the Force.com platform.

Logic as a service

A development platform has to have the capability to create business processes involving complex logic. The Force.com platform greatly simplifies the task of automating a company's business processes and requirements. The platform's logic features can be used by both developers and business analysts to build smart database applications that help increase user productivity, improve data quality, automate manual processes, and adapt quickly to changing requirements. The platform allows business logic to be created either through a declarative interface, in the form of workflow rules, approval processes, required and unique fields, formula fields, and validation rules, or in a more advanced form by writing triggers and classes in the platform's programming language, Apex, to achieve greater levels of flexibility, which helps define functionality and business requirements that may not be possible through point-and-click operations.

User interface as a service

The user interface of platform applications can be created and customized by either of two approaches. The Force.com builder application, an interface based on point-and-click and drag-and-drop, allows users to build page layouts that are interpreted from the data model and validation rules with user-defined customizations, define custom application components, create application navigation structures through tabs, and define customizable reports and user-specific views. For more complex pages and tighter control over the presentation layer, the platform allows users to build custom user interfaces through a technology called Visualforce (VF), which is based on XML markup tags. Custom VF pages may or may not adopt the standard look and feel, depending on the stylesheet applied, and present data returned from the controller, or logic layer, in a structured format. Visualforce interfaces are either public, private, or a mix of the two. Private interfaces require users to log in to the system before they can access resources, whereas public interfaces, called sites, can be made available on the Internet to anonymous users.

Development as a service

This is a set of features that allows developers to use traditional practices for building cloud applications.
User interface as a service
The user interface of platform applications can be created and customized by either of two approaches. The Force.com builder application, an interface based on point-and-click/drag-and-drop, allows users to build page layouts that are interpreted from the data model and validation rules with user-defined customizations, define custom application components, create application navigation structures through tabs, and define customizable reports and user-specific views. For more complex pages and tighter control over the presentation layer, the platform allows users to build custom user interfaces through a technology called Visualforce (VF), which is based on XML markup tags. The custom VF pages may or may not adopt the standard look and feel, based on the stylesheet applied, and present the data returned from the controller or logic layer in a structured format. Visualforce interfaces are either public, private, or a mix of the two. Private interfaces require users to log in to the system before they can access resources, whereas public interfaces, called sites, can be made available on the Internet to anonymous users.

Development as a service
This is a set of features that allows developers to utilize traditional practices for building cloud applications. These features include the following:

Force.com Metadata API: Lets developers push changes directly into the XML files describing the organization's customizations, and acts as an alternative to the platform's interface for managing applications
IDE (Integrated Development Environment): A powerful client application built on the Eclipse platform, allowing programmers to code, compile, test, package, and deploy applications
A development sandbox: A separate application environment for development, quality assurance, and training of programmers
Code Share: A service for users around the globe to collaborate on the development, testing, and deployment of cloud applications

Force.com also allows online, browser-based development providing code-assist functionality, repository search, debugging, and so on, thus eliminating the need for a local, machine-specific IDE. DaaS expands the cloud computing development process to include external tools such as integrated development environments, source control systems, and batch scripts to facilitate development and deployments.

Force.com AppExchange
This is a cloud marketplace (accessible at http://appexchange.salesforce.com/) that helps commercial application vendors publish their custom-developed applications as packages and then reach out to potential customers, who can install them on their orgs with merely a button click through the web interface, without going through the hassles of software installation and configuration. Here, you may find good apps that provide functionality that is not available in Salesforce, or that would require some heavy-duty custom development if carried out on-premises!

Introduction to governor limits
Any introduction to Force.com is incomplete without a mention of governor limits. By nature, all multitenant-architecture-based applications such as Force.com have to have a mechanism that does not allow code to abuse the shared resources, so that other tenants in the infrastructure remain unaffected. In the Force.com world, it is the Apex runtime engine that takes care of such resource-abusing code by enforcing runtime limits (called governor limits) in almost all areas of programming on the Force.com platform. If these governor limits were not in place, even the simplest code, such as an endless loop, would consume enough resources to disrupt the service to the other users of the system, as they all share the same physical infrastructure. The concept of governor limits is not limited to Force.com, but extends to all SaaS/PaaS applications, such as Google App Engine, and is critical for making a cloud-based development platform stable. This concept may prove to be very painful for some people, but there is a key logic to it. The platform enforces best practices so that the application is practically usable and makes optimal usage of resources, keeping the code well under governor limits. So the longer you work on Force.com, the more familiar you become with these limits, the more stable your code becomes over time, and the easier it becomes to work around them. In one of the forthcoming chapters, we will discover how to work with these governor limits and not against them, and also talk about ways to work around them, if required.
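As a rough sketch of what coding with governor limits in mind looks like in practice (the trigger below and its scenario are invented for illustration; only the standard Account and Contact objects are real), the usual pattern is to query and modify records in bulk instead of issuing one query or DML statement per record:

// Bulkified sketch: one SOQL query for the whole batch of Accounts, not one per record.
trigger AccountContactAudit on Account (after update) {
    List<Contact> contacts = [
        SELECT Id, AccountId
        FROM Contact
        WHERE AccountId IN :Trigger.newMap.keySet()
    ];

    // Group the contacts by their parent Account for later processing.
    Map<Id, List<Contact>> contactsByAccount = new Map<Id, List<Contact>>();
    for (Contact c : contacts) {
        if (!contactsByAccount.containsKey(c.AccountId)) {
            contactsByAccount.put(c.AccountId, new List<Contact>());
        }
        contactsByAccount.get(c.AccountId).add(c);
    }

    // ... derive whatever is needed per Account here, then issue a single DML call,
    // keeping both the SOQL query count and the DML statement count well under the limits.
}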
Salesforce environments
An environment is a set of resources, physical or logical, that lets users build, test, deploy, and use applications. In the traditional development model, one would expect to have application servers, web servers, databases, and their costly provisioning and configuration. But in the Force.com paradigm, all that's needed is a computer and an Internet connection to immediately get started with building and testing a SaaS application. An environment, or a virtual or logical instance of the Force.com infrastructure and platform, is also called an organization, or just org, and is provisioned in the cloud on demand. It has the following characteristics:

Used for development, testing, and/or production
Contains data and customizations
Based on the edition, contains specific functionality, objects, storage, and limits
Certain restricted functionalities, such as the multicurrency feature (which is not available by default), can be enabled on demand
All environments are accessible through a web browser

There are broadly three types of environments available for developing, testing, and deploying applications:

Production environments: The Salesforce.com environments that have active, paying users accessing business-critical data.

Development environments: These environments are used strictly for developing and testing applications, with data that is not business critical, without affecting the production environment. Developer environments are of two types:

Developer Edition: This is a free, full-featured copy of the Enterprise Edition, with less storage and fewer users. It allows users to create packaged applications suitable for any Salesforce production environment. It can be of two types:

Regular Developer Edition: This is a regular DE org whose sign-up is free, and a user can register for any number of DE orgs. This is suitable when you want to develop managed packages for distribution through AppExchange or Trialforce, when you are working with an edition where a sandbox is not available, or if you just want to explore the Force.com platform for free.

Partner Developer Edition: This is a regular DE org but with more storage, features, and licenses. This is suitable when you expect a larger team to work on the application and need a bigger environment to test it against a larger, real-life dataset. Note that this org can only be created by Salesforce consulting partners or Force.com ISVs.

Sandbox: This is a nearly identical copy of the production environment, available to Enterprise or Unlimited Edition customers, and can contain data and/or customizations. This is suitable when developing applications for production environments only, with no plans to distribute applications commercially through AppExchange or Trialforce, or if you want to test beta-managed packages. Note that sandboxes are completely isolated from your Salesforce production organization, so operations you perform in your sandboxes do not affect your Salesforce production organization, and vice versa. The types of sandboxes are as follows:

Full copy sandbox: A nearly identical copy of the production environment, including data and customizations

Configuration-only sandbox: Contains only configurations, and not data, from the production environment

Developer sandbox: Same as the Configuration-only sandbox, but with less storage

Test environments: These can be either production or developer environments, used specifically for testing application functionality before deploying to production or releasing to customers.
These environments are suitable when you want to test applications in production-like environments, with more users and storage, to run real-life tests.

Summary
This article talked about the basic concepts of cloud computing. The key takeaway items from this article are the explanations of the different types of cloud-based services, such as IaaS, SaaS, and PaaS. We introduced the Force.com platform and the key architectural features that power it, such as multitenancy and metadata. We briefly covered the application stack—the technology and services layers—that makes up the Force.com platform. We gave an overview of governor limits without going into too much detail about their use. We discussed situations where adopting cloud computing may be beneficial. We also discussed the guidelines that help you decide whether or not your software project should be developed on the Force.com platform. Last, but not least, we discussed the various environments available to developers and business users, and their characteristics and usage.

Resources for Article:
Further resources on this subject: Monitoring and Responding to Windows Intune Alerts [Article] Sharing a Mind Map: Using the Best of Mobile and Web Features [Article] Force.com: Data Management [Article]

Liferay, its Installation and setup

Packt
15 Apr 2013
7 min read
(For more resources related to this topic, see here.) Overview about portals Well, to understand more about what portals are, let me throw some familiar words at you. Have you used, heard, or seen iGoogle, the Yahoo! home page, or MSN? If the answer is yes, then you have been using portals already. All these websites have two things in common. A common dashboard Information from various sources shown on a single page, giving a uniform experience For example, on iGoogle, you can have a gadget showing the weather in Chicago, another gadget to play your favorite game of Sudoku, and a third one to read news from around the globe, everything on the same page without you knowing that all of these are served from different websites! That is what a portal is all about. So, a portal (or web portal) can be thought of as a website that shows, presents, displays, or brings together information or data from various sources and gives the user a uniform browsing experience. The small chunks of information that form the web page are given different names such as gadgets or widgets, portlets or dashlets. Introduction to Liferay Now that you have some basic idea about what portals are, let us revisit the initial statement I made about Liferay. Liferay is an open source portal solution. If you want to create a portal, you can use Liferay to do this. It is written in Java. It is an open source solution, which means the source code is freely available to everyone and people can modify and distribute it. With Liferay you can create basic intranet sites with minimal tweaking. You can also go for a full-fledged enterprise banking portal website with programming, and heavy customizations and integrations. Besides the powerful portal capabilities, Liferay also provides the following: Awesome enterprise and web content management capabilities Robust document management which supports protocols such as CMIS and WebDAV Good social collaboration features Liferay is backed up by a solid and active community, whose members are ever eager to help. Sounds good? So what are we waiting for? Let's take a look at Liferay and its features. Installation and setup In four easy steps, you can install Liferay and run it on your system. Step 1 – Prerequisites Before we go and start our Liferay download, we need to check if we have the requirements for the installation. They are as follows: Memory: 2 GB (minimum), 4 GB (recommended). Disk space: Around 5 GB of free space should be more than enough for the exercises mentioned in the book. The exercises performed in this book are done on Windows XP. So you can use the same or any subsequent versions of Windows OS. Although Liferay can be run on Mac OSX and Linux, it is beyond the scope of this book how to set up Liferay on them. The MySQL database should be installed. As with the OS, Liferay can be run on most of the major databases out there in the market. Liferay is shipped with the Hypersonic database by default for demo purpose, which should not be used for a production environment. Unzip tools such as gzip or 7-Zip. Step 2 – Downloading Liferay You can download the latest stable version of Liferay from https://www.liferay.com/downloads/liferay-portal/available-releases. Liferay comes in the following two versions: Enterprise Edition: This version is not free and you would have to purchase it. This version has undergone rigorous testing cycles to make sure that all the features are bug free, providing the necessary support and patches. 
Community Edition: This is a free downloadable version that has all the features, but no enterprise support is provided.

Liferay is supported by a lot of open source application servers, and the folks at Liferay have made it easy for end users by packaging everything as a bundle. What this means is that if you are asked to have Liferay installed in a JBoss application server, you can just go to the URL previously mentioned and select the Liferay-JBoss bundle to download, which gives you the JBoss application server with Liferay installed. We will download the Community Edition of the Liferay-Tomcat bundle, which has Liferay preinstalled in the Tomcat server. The stable version at the time of writing this book was Liferay 6.1 GA2. As shown in the following screenshot, just click on Download after making sure that you have selected Liferay bundled with Tomcat, and save the ZIP file at an appropriate location:

Step 3 – Starting the server
After you have downloaded the bundle, extract it to the location of your choice on your machine. You can see a folder named liferay-portal-6.1.1-ce-ga2. The latter part of the name can change based on the version that you download. Let us take a moment to have a look at the folder structure, as shown in the following screenshot:

The liferay-portal-6.1.1-ce-ga2 folder is what we will refer to as LIFERAY_HOME. This folder contains the server, which in our case is tomcat-7.0.27. Let's refer to this folder as SERVER_HOME. Liferay is created using Java, so to run Liferay we need a Java Runtime Environment (JRE). The Liferay bundle is shipped with a JRE by default (as you can see inside our SERVER_HOME). So if you are running a Windows OS, you can directly start and run Liferay. If you are using any other OS, you need to set the JAVA_HOME environment variable. Navigate to SERVER_HOME/webapps. This is where all the web applications are deployed. Delete everything in this folder except marketplace-portlet and ROOT. Now go to SERVER_HOME/bin and double-click on startup.bat, since we are using Windows OS. This will bring up a console showing the server startup. Wait till you see the Server Startup message in the console, after which you can access Liferay from the browser.

Step 4 – Doing necessary first-time configurations
Once the server is up, open your favorite browser and type in http://localhost:8080. You will be shown a screen that performs basic configurations, such as changing the database and name of your portal, deciding what the admin name and e-mail address should be, or changing the default locale. This is a new feature introduced in Liferay 6.1 to ease the first-time setup, which in previous versions had to be done using a property file. Go ahead and change the name of the portal, the administrator username, and the e-mail address. Keep the locale as it is. As I stated earlier, Liferay is shipped with a default Hypersonic database which is normally used for demo purposes. You can change it to MySQL if you want, by selecting the database type from the drop-down list presented, and typing in the necessary JDBC details. I have created a database in MySQL by the name Portal Starter; hence my JDBC URL would contain that. You can create a blank database in MySQL and change the JDBC URL accordingly. Once you are done making your changes, click on the Finish Configuration button as shown in the following screenshot:

This will open up a screen which will show the path where this configuration is saved. What Liferay does behind the scenes is create a property file named portal-setup-wizard.properties and put all the configurations in it. This, as I said earlier, had to be created manually in previous versions of Liferay.
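As a rough idea of what the generated file can look like, here is a hypothetical sketch; every key and value below is an illustrative assumption, and the file Liferay writes for you will reflect the choices you actually made in the wizard:

# Hypothetical portal-setup-wizard.properties contents - your generated file will differ.
admin.email.from.address=admin@example.com
admin.email.from.name=Portal Admin
company.default.web.id=example.com
jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://localhost/portalstarter?useUnicode=true&characterEncoding=UTF-8
jdbc.default.username=root
jdbc.default.password=secret
liferay.home=C:/liferay-portal-6.1.1-ce-ga2
setup.wizard.enabled=false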
Clicking on the Go to my portal button on this screen will take the user to the Terms of Use page. Agree to the terms and proceed further. A screen will be shown to change the password for the admin user that you specified earlier in the Basic Configuration screen. After you change the password, you will be presented with a screen to select a password reminder question. Select a question or create your own question from the drop-down list, set the password reminder, and move on. And that's it! You can now see the home page of Liferay, and you are done setting up your very first Liferay instance.

Summary
So, we just gained a quick understanding of portals and Liferay, and walked through the installation and setup needed to get Liferay running on your local machine.

Resources for Article:
Further resources on this subject: Vaadin Portlets in Liferay User Interface Development [Article] Setting up and Configuring a Liferay Portal [Article] User Interface in Production [Article]

Improving Performance with Parallel Programming

Packt
12 Apr 2013
11 min read
(For more resources related to this topic, see here.) Parallelizing processing with pmap The easiest way to parallelize data is to take a loop we already have and handle each item in it in a thread. That is essentially what pmap does. If we replace a call to map with pmap, it takes each call to the function argument and executes it in a thread pool. pmap is not completely lazy, but it's not completely strict, either: it stays just ahead of the output consumed. So if the output is never used, it won't be fully realized. For this recipe, we'll calculate the Mandelbrot set. Each point in the output takes enough time that this is a good candidate to parallelize. We can just swap map for pmap and immediately see a speed-up. How to do it... The Mandelbrot set can be found by looking for points that don't settle on a value after passing through the formula that defines the set quickly. We need a function that takes a point and the maximum number of iterations to try and return the iteration that it escapes on. That just means that the value gets above 4. (defn get-escape-point [scaled-x scaled-y max-iterations] (loop [x 0, y 0, iteration 0] (let [x2 (* x x), y2 (* y y)] (if (and (< (+ x2 y2) 4) (< iteration max-iterations)) (recur (+ (- x2 y2) scaled-x) (+ (* 2 x y) scaled-y) (inc iteration)) iteration)))) The scaled points are the pixel points in the output, scaled to relative positions in the Mandelbrot set. Here are the functions that handle the scaling. Along with a particular x-y coordinate in the output, they're given the range of the set and the number of pixels each direction. (defn scale-to ([pixel maximum [lower upper]] (+ (* (/ pixel maximum) (Math/abs (- upper lower))) lower))) (defn scale-point ([pixel-x pixel-y max-x max-y set-range] [(scale-to pixel-x max-x (:x set-range)) (scale-to pixel-y max-y (:y set-range))])) The function output-points returns a sequence of x, y values for each of the pixels in the final output. (defn output-points ([max-x max-y] (let [range-y (range max-y)] (mapcat (fn [x] (map #(vector x %) range-y)) (range max-x))))) For each output pixel, we need to scale it to a location in the range of the Mandelbrot set and then get the escape point for that location. (defn mandelbrot-pixel ([max-x max-y max-iterations set-range] (partial mandelbrot-pixel max-x max-y max-iterations set-range)) ([max-x max-y max-iterations set-range [pixel-x pixel-y]] (let [[x y] (scale-point pixel-x pixel-y max-x max-y set-range)] (get-escape-point x y max-iterations)))) At this point, we can simply map mandelbrot-pixel over the results of outputpoints. We'll also pass in the function to use (map or pmap). (defn mandelbrot ([mapper max-iterations max-x max-y set-range] (doall (mapper (mandelbrot-pixel max-x max-y max-iterations set-range) (output-points max-x max-y))))) Finally, we have to define the range that the Mandelbrot set covers. (def mandelbrot-range {:x [-2.5, 1.0], :y [-1.0, 1.0]}) How do these two compare? A lot depends on the parameters we pass them. 
user=> (def m (time (mandelbrot map 500 1000 1000 mandelbrot-range)))
"Elapsed time: 28981.112 msecs"
#'user/m
user=> (def m (time (mandelbrot pmap 500 1000 1000 mandelbrot-range)))
"Elapsed time: 34205.122 msecs"
#'user/m
user=> (def m (time (mandelbrot map 1000 1000 1000 mandelbrot-range)))
"Elapsed time: 85308.706 msecs"
#'user/m
user=> (def m (time (mandelbrot pmap 1000 1000 1000 mandelbrot-range)))
"Elapsed time: 49067.584 msecs"
#'user/m

Refer to the following chart: If we only iterate at most 500 times for each point, it's slightly faster to use map and work sequentially. However, if we iterate 1,000 times each, pmap is faster.

How it works...
This shows that parallelization is a balancing act. If each separate work item is small, the overhead of creating the threads, coordinating them, and passing data back and forth takes more time than doing the work itself. However, when each thread has enough to do to make it worth it, we can get nice speed-ups just by using pmap. Behind the scenes, pmap takes each item and uses future to run it in a thread pool. It forces only a couple more items than you have processors, so it keeps your machine busy, without generating more work or data than you need.

There's more...
For an in-depth, excellent discussion of the nuts and bolts of pmap, along with pointers about things to watch out for, see David Liebke's talk, From Concurrency to Parallelism (http://blip.tv/clojure/david-liebke-from-concurrency-to-parallelism-4663526).

See also
The Partitioning Monte Carlo Simulations for better pmap performance recipe

Parallelizing processing with Incanter
One of Incanter's nice features is that it uses the Parallel Colt Java library (http://sourceforge.net/projects/parallelcolt/) to actually handle its processing, so when you use a lot of the matrix, statistical, or other functions, they're automatically executed on multiple threads. For this, we'll revisit the Virginia housing-unit census data and fit it to a linear regression.

Getting ready
We'll need to add Incanter to our list of dependencies in our Leiningen project.clj file: :dependencies [[org.clojure/clojure "1.5.0"] [incanter "1.3.0"]] We'll also need to pull those libraries into our REPL or script: (use '(incanter core datasets io optimize charts stats)) We can use the following filename: (def data-file "data/all_160_in_51.P35.csv")

How to do it...
For this recipe, we'll extract the data to analyze and perform the linear regression. We'll then graph the data afterwards. First, we'll read in the data and pull the population and housing unit columns into their own matrix. (def data (to-matrix (sel (read-dataset data-file :header true) :cols [:POP100 :HU100]))) From this matrix, we can bind the population and the housing unit data to their own names. (def population (sel data :cols 0)) (def housing-units (sel data :cols 1)) Now that we have those, we can use Incanter to fit the data. (def lm (linear-model housing-units population)) Incanter makes it so easy, it's hard not to look at it. (def plot (scatter-plot population housing-units :legend true)) (add-lines plot population (:fitted lm)) (view plot) Here we can see that the graph of housing units to families makes a very straight line:

How it works…
Under the covers, Incanter takes the data matrix and partitions it into chunks. It then spreads those over the available CPUs to speed up processing. Of course, we don't have to worry about this. That's part of what makes Incanter so powerful.
Partitioning Monte Carlo simulations for better pmap performance In the Parallelizing processing with pmap recipe, we found that while using pmap is easy enough, knowing when to use it is more complicated. Processing each task in the collection has to take enough time to make the costs of threading, coordinating processing, and communicating the data worth it. Otherwise, the program will spend more time concerned with how (parallelization) and not enough time with what (the task). The way to get around this is to make sure that pmap has enough to do at each step that it parallelizes. The easiest way to do that is to partition the input collection into chunks and run pmap on groups of the input. For this recipe, we'll use Monte Carlo methods to approximate pi . We'll compare a serial version against a naïve parallel version against a version that uses parallelization and partitions. Getting ready We'll use Criterium to handle benchmarking, so we'll need to include it as a dependency in our Leiningen project.clj file, shown as follows: :dependencies [[org.clojure/clojure "1.5.0"] [criterium "0.3.0"]] We'll use these dependencies and the java.lang.Math class in our script or REPL. (use 'criterium.core) (import [java.lang Math]) How to do it… To implement this, we'll define some core functions and then implement a Monte Carlo method for estimating pi that uses pmap. We need to define the functions necessary for the simulation. We'll have one that generates a random two-dimensional point that will fall somewhere in the unit square. (defn rand-point [] [(rand) (rand)]) Now, we need a function to return a point's distance from the origin. (defn center-dist [[x y]] (Math/sqrt (+ (* x x) (* y y)))) Next we'll define a function that takes a number of points to process, and creates that many random points. It will return the number of points that fall inside a circle. (defn count-in-circle [n] (->> (repeatedly n rand-point) (map center-dist) (filter #(<= % 1.0)) count)) That simplifies our definition of the base (serial) version. This calls count-incircle to get the proportion of random points in a unit square that fall inside a circle. It multiplies this by 4, which should approximate pi. (defn mc-pi [n] (* 4.0 (/ (count-in-circle n) n))) We'll use a different approach for the simple pmap version. The function that we'll parallelize will take a point and return 1 if it's in the circle, or 0 if not. Then we can add those up to find the number in the circle. (defn in-circle-flag [p] (if (<= (center-dist p) 1.0) 1 0)) (defn mc-pi-pmap [n] (let [in-circle (->> (repeatedly n rand-point) (pmap in-circle-flag) (reduce + 0))] (* 4.0 (/ in-circle n)))) For the version that chunks the input, we'll do something different again. Instead of creating the sequence of random points and partitioning that, we'll have a sequence that tells how large each partition should be and have pmap walk across that, calling count-in-circle. This means that creating the larger sequences are also parallelized. (defn mc-pi-part ([n] (mc-pi-part 512 n)) ([chunk-size n] (let [step (int (Math/floor (float (/ n chunk-size)))) remainder (mod n chunk-size) parts (lazy-seq (cons remainder (repeat step chunk-size))) in-circle (reduce + 0 (pmap count-in-circle parts))] (* 4.0 (/ in-circle n))))) Now, how do these work? We'll bind our parameters to names, and then we'll run one set of benchmarks before we look at a table of all of them. We'll discuss the results in the next section. 
user=> (def chunk-size 4096)
#'user/chunk-size
user=> (def input-size 1000000)
#'user/input-size
user=> (quick-bench (mc-pi input-size))
WARNING: Final GC required 4.001679309213317 % of runtime
Evaluation count : 6 in 6 samples of 1 calls.
Execution time mean : 634.387833 ms
Execution time std-deviation : 33.222001 ms
Execution time lower quantile : 606.122000 ms ( 2.5%)
Execution time upper quantile : 677.273125 ms (97.5%)
nil

Here's all the information in the form of a table:

Function | Input Size | Chunk Size | Mean | Std Dev. | GC Time
mc-pi | 1,000,000 | NA | 634.39 ms | 33.22 ms | 4.0%
mc-pi-pmap | 1,000,000 | NA | 1.92 sec | 888.52 ms | 2.60%
mc-pi-part | 1,000,000 | 4,096 | 455.94 ms | 4.19 ms | 8.75%

Here's a chart with the same information:

How it works…
There are a couple of things we should talk about here. Primarily, we'll need to look at chunking the inputs for pmap, but we should also discuss Monte Carlo methods.

Estimating with Monte Carlo simulations
Monte Carlo simulations work by throwing random data at a problem that is fundamentally deterministic, but where it's practically infeasible to attempt a more straightforward solution. Calculating pi is one example of this. By randomly filling in points in a unit square, π/4 will be approximately the ratio of points that fall within a circle centered on 0, 0. The more random points that we use, the better the approximation. I should note that this makes a good demonstration of Monte Carlo methods, but it's a terrible way to calculate pi. It tends to be both slower and less accurate than the other methods. Although not good for this task, Monte Carlo methods have been used for designing heat shields, simulating pollution, ray tracing, financial option pricing, evaluating business or financial products, and many, many more things. For a more in-depth discussion, Wikipedia has a good introduction to Monte Carlo methods at http://en.wikipedia.org/wiki/Monte_Carlo_method.

Chunking data for pmap
The table we saw earlier makes it clear that partitioning helped: the partitioned version took just 72 percent of the time that the serial version did, while the naïve parallel version took more than three times longer. Based on the standard deviations, the results were also more consistent. The speed-up is because each thread is able to spend longer on each task. There is a performance penalty to spreading the work over multiple threads. Context switching (that is, switching between threads) costs time, and coordinating between threads does as well. But we expect to be able to make that time up, and more, by doing more things at once. However, if each task itself doesn't take long enough, then the benefit won't outweigh the costs. Chunking the input—and effectively creating larger individual tasks for each thread—gets around this by giving each thread more to do, and thereby spending less time context switching and coordinating, relative to the overall time spent running.
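As a quick, hedged aside that is not part of the recipe above (the function below is made up purely for illustration), Clojure's built-in partition-all offers another convenient way to hand pmap whole chunks of work rather than individual items:

;; Illustrative sketch: sum the squares of a range, chunked so that each pmap
;; task processes a whole block of numbers instead of a single one.
(defn sum-squares-chunked
  [chunk-size xs]
  (->> xs
       (partition-all chunk-size)                ; split the input into chunks
       (pmap (fn [chunk]
               (reduce + (map #(* % %) chunk)))) ; each thread sums one whole chunk
       (reduce + 0)))

;; Example usage:
;; (sum-squares-chunked 4096 (range 1000000))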

Advanced Performance Strategies

Packt
12 Apr 2013
6 min read
(For more resources related to this topic, see here.)

General tips
Before diving into some advanced strategies for improving performance and scalability, let's briefly recap some of the general performance tips already spread across the book:

When mapping your entity classes for Hibernate Search, use the optional elements of the @Field annotation to strip unnecessary bloat from your Lucene indexes. If you are definitely not using index-time boosting, then there is no reason to store the information needed to make this possible; set the norms element to Norms.NO. By default, the information needed for a projection-based query is not stored unless you set the store element to Store.YES or Store.COMPRESS. If you have projection-based queries that are no longer being used, then remove this element as part of the cleanup.
Use conditional indexing and partial indexing to reduce the size of Lucene indexes.
Rely on filters to narrow your results at the Lucene level, rather than using a WHERE clause at the database query level.
Experiment with projection-based queries wherever possible, to reduce or eliminate the need for database calls. Be aware that with advanced database caching, the benefits might not always justify the added complexity.
Test various index manager options, such as trying the near-real-time index manager or the async worker execution mode.

Running applications in a cluster
Making modern Java applications scale in a production environment usually involves running them in a cluster of server instances. Hibernate Search is perfectly at home in a clustered environment, and offers multiple approaches for configuring a solution.

Simple clusters
The most straightforward approach requires very little Hibernate Search configuration. Just set up a file server for hosting your Lucene indexes and make it available to every server instance in your cluster (for example, NFS, Samba, and so on):

A simple cluster with multiple server nodes using a common Lucene index on a shared drive

Each application instance in the cluster uses the default index manager, and the usual filesystem directory provider. In this arrangement, all of the server nodes are true peers. They each read from the same Lucene index, and no matter which node performs an update, that node is responsible for the write. To prevent corruption, Hibernate Search depends on simultaneous writes being blocked by the locking strategy (that is, either "simple" or "native"). Recall that the "near-real-time" index manager is explicitly incompatible with a clustered environment. The advantage of this approach is twofold. First and foremost is simplicity: the only steps involved are setting up a filesystem share and pointing each application instance's directory provider to the same location. Secondly, this approach ensures that Lucene updates are instantly visible to all the nodes in the cluster. However, a serious downside is that this approach can only scale so far. Very small clusters may work fine, but larger numbers of nodes trying to simultaneously access the same shared files will eventually lead to lock contention. Also, the file server on which the Lucene indexes are hosted is a single point of failure. If the file share goes down, then your search functionality breaks catastrophically and instantly across the entire cluster.

Master-slave clusters
When your scalability needs outgrow the limitations of a simple cluster, Hibernate Search offers more advanced models to consider.
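Before looking at those advanced models, here is a rough sketch of the kind of configuration the simple shared-filesystem cluster described above tends to involve; the shared path is an assumption, and the property names should be double-checked against the Hibernate Search version you are running:

<!-- Illustrative persistence.xml fragment: every node points at the same shared index directory. -->
<properties>
    <property name="hibernate.search.default.directory_provider" value="filesystem"/>
    <property name="hibernate.search.default.indexBase" value="/mnt/shared/lucene-indexes"/>
    <!-- "native" (the default) or "simple" locking blocks simultaneous writes to the shared index -->
    <property name="hibernate.search.default.locking_strategy" value="native"/>
</properties>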
The common element among them is the idea of a master node being responsible for all Lucene write operations. Clusters may also include any number of slave nodes. Slave nodes may still initiate Lucene updates, and the application code can't really tell the difference. However, under the covers, slave nodes delegate that work to be actually performed by the master node. Directory providers In a master-slave cluster, there is still an "overall master" Lucene index, which logically stands apart from all of the nodes. This may be filesystem-based, just as it is with a simple cluster. However, it may instead be based on JBoss Infinispan (http://www.jboss.org/infinispan), an open source in-memory NoSQL datastore sponsored by the same company that principally sponsors Hibernate development: In a filesystem-based approach, all nodes keep their own local copies of the Lucene indexes. The master node actually performs updates on the overall master indexes, and all of the nodes periodically read from that overall master to refresh their local copies. In an Infinispan-based approach, the nodes all read from the Infinispan index (although it is still recommended to delegate writes to a master node). Therefore, the nodes do not need to maintain their own local index copies. In reality, because Infinispan is a distributed datastore, portions of the index will reside on each node anyway. However, it is still best to visualize the overall index as a separate entity. Worker backends There are two available mechanisms by which slave nodes delegate write operations to the master node: A JMS message queue provider creates a queue, and slave nodes send messages to this queue with details about Lucene update requests. The master node monitors this queue, retrieves the messages, and actually performs the update operations. You may instead replace JMS with JGroups (http://www.jgroups.org), an open source multicast communication system for Java applications. This has the advantage of being faster and more immediate. Messages are received in real-time, synchronously rather than asynchronously. However, JMS messages are generally persisted to a disk while awaiting retrieval, and therefore can be recovered and processed later, in the event of an application crash. If you are using JGroups and the master node goes offline, then all the update requests sent by slave nodes during that outage period will be lost. To fully recover, you would likely need to reindex your Lucene indexes manually. A master-slave cluster using a directory provider based on filesystem or Infinispan, and worker based on JMS or JGroups. Note that when using Infinispan, nodes do not need their own separate index copies.   Summary In this article, we explored the options for running applications in multi-node server clusters, to spread out and handle user requests in a distributed fashion. We also learned how to use sharding to help make our Lucene indexes faster and more manageable. Resources for Article : Further resources on this subject: Integrating Spring Framework with Hibernate ORM Framework: Part 1 [Article] Developing Applications with JBoss and Hibernate: Part 1 [Article] Hibernate Types [Article]

Show/hide rows and Highlighting cells

Packt
09 Apr 2013
7 min read
(For more resources related to this topic, see here.)

Show/hide rows
Click a link to trigger hiding or displaying of table rows.

Getting ready
Once again, start off with an HTML table. This one is not quite as simple a table as in previous recipes. You'll need to create a few <td> tags that span the entire table, as well as provide some specific classes to certain elements.

How to do it...
Again, give the table an id attribute. Each of the rows that represent a department, specifically the rows that span the entire table, should have a class attribute value of dept. <table border="1" id="employeeTable"> <thead> <tr> <th>Last Name</th> <th>First Name</th> <th>Phone</th> </tr> </thead> <tbody> <tr> <td colspan="3" class="dept"> </td> </tr> Each of the department names should be links where the <a> elements have a class of rowToggler. <a href="#" class="rowToggler">Accounting</a> Each table row that contains employee data should have a class attribute value that corresponds to its department. Note that class names cannot contain spaces. So in the case of the Information Technology department, the class names should be InformationTechnology without a space. The issue of the space will be addressed later. <tr class="Accounting"> <td>Frang</td> <td>Corey</td> <td>555-1111</td> </tr> The following script makes use of the class names to create a table whose rows can be easily hidden or shown by clicking a link: <script type="text/javascript"> $( document ).ready( function() { $( "a.rowToggler" ).click( function( e ) { e.preventDefault(); var dept = $( this ).text().replace( /\s/g, "" ); $( "tr[class=" + dept + "]" ).toggle(); }) }); </script> With the jQuery implemented, departments are "collapsed", and will only reveal the employees when the link is clicked.

How it works...
The jQuery will "listen" for a click event on any <a> element that has a class of rowToggler. In this case, capture a reference to the event that triggered the action by passing e to the click handler function. $( "a.rowToggler" ).click( function( e ) In this case, e is simply a variable name. It can be any valid variable name, but e is a standard convention. The important thing is that jQuery has a reference to the event. Why? Because in this case, the event was that an <a> was clicked. The browser's default behavior is to follow a link. This default behavior needs to be prevented. As luck would have it, jQuery has a built-in function called preventDefault(). The first line of the function makes use of this by way of the following: e.preventDefault(); Now that you've safely prevented the browser from leaving or reloading the page, set a variable with a value that corresponds to the name of the department that was just clicked. var dept = $( this ).text().replace( /\s/g, "" ); Most of the preceding line should look familiar. $( this ) is a reference to the element that was clicked, and text() is something you've already used. You're getting the text of the <a> tag that was clicked. This will be the name of the department. But there's one small issue. If the department name contains a space, such as "Information Technology", then this space needs to be removed. .replace( /\s/g, "" ) replace() is a standard JavaScript function that uses a regular expression to replace spaces with an empty string. This turns "Information Technology" into "InformationTechnology", which is a valid class name. The final step is to either show or hide any table row with a class that matches the department name that was clicked.
Ordinarily, the selector would look similar to the following: $( "tr.InformationTechnology" ) Because the class name is a variable value, an alternate syntax is necessary. jQuery provides a way to select an element using any attribute name and value. The selector above can also be represented as follows: $( "tr[class=InformationTechnology]" ) The entire selector is a literal string, as indicated by the fact that it's enclosed in quotes. But the department name is stored in a variable. So concatenate the literal string with the variable value: $( "tr[class=" + dept + "]" ) With the desired elements selected, either hide them if they're displayed, or display them if they're hidden. jQuery makes this very easy with its built-in toggle() method. Highlighting cells Use built-in jQuery traversal methods and selectors to parse the contents of each cell in a table and apply a particular style (for example, a yellow background or a red border) to all cells that meet a specified set of criteria. Getting ready Borrowing some data from Tiobe (http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html), create a table of the top five programming languages for 2012. To make it "pop" a bit more, each <td> in the Ratings column that's over 10 percent will be highlighted in yellow, and each <td> in the Delta column that's less than zero will be highlighted in red. Each <td> in the Ratings column should have a class of ratings, and each <td> in the Delta column should have a class of delta. Additionally, set up two CSS classes for the highlights as follows: .highlight { background-color: #FFFF00; } /* yellow */ .highlight-negative { background-color: #FF0000; } /* red */ Initially, the table should look as follows: How to do it... Once again, give the table an id attribute (but by now, you knew that), as shown in the following code snippet: <table border="1" id="tiobeTable"> <thead> <tr> <th>Position<br />Dec 2012</th> <th>Position<br />Dec 2011</th> <th>Programming Language</th> <th>Ratings<br />Dec 2012</th> <th>Delta<br />Dec 2011</th> </tr> </thead> Apply the appropriate class names to the last two columns in each table row within the <tbody>, as shown in the following code snippet: <tbody> <tr> <td>1</td> <td>2</td> <td>C</td> <td class="ratings">18.696%</td> <td class="delta">+1.64%</td> </tr> With the table in place and properly marked up with the appropriate class names, write the script to apply the highlights as follows: <script type="text/javascript"> $( document ).ready( function() { $( "#tiobeTable tbody tr td.ratings" ).each( function( index ) { if ( parseFloat( $( this ).text() ) > 10 ) { $( this ).addClass( "highlight" ); } }); $( "#tiobeTable tbody tr td.delta" ).each( function( index ) { if ( parseFloat( $( this ).text() ) < 0 ) { $( this ).addClass( "highlight-negative" ); } }); }); </script> Now, you will see a much more interesting table with multiple visual cues: How it works... Select the <td> elements within the tbody tag's table rows that have a class of ratings. For each iteration of the loop, test whether or not the value (text) of the <td> is greater than 10. Because the values in <td> contain non-numeric characters (in this case, % signs), we use JavaScript's parseFloat() to convert the text to actual numbers: parseFloat( $( this ).text() ) Much of that should be review. $( this ) is a reference to the element in question. text() retrieves the text from the element. parseFloat() ensures that the value is numeric so that it can be accurately compared to the value 10. 
If the condition is met, use addClass() to apply the highlight class to <td>. Do the same thing for the Delta column. The only difference is in checking to see if the text is less than zero. If it is, apply the class highlight-negative. The end result makes it much easier to identify specific data within the table. Summary In this article we covered two recipes Show/hide rows and Highlighting cells. Resources for Article : Further resources on this subject: Tips and Tricks for Working with jQuery and WordPress5 [Article] Using jQuery Script for Creating Dynamic Table of Contents [Article] Getting Started with jQuery [Article]

Adding Feedback to the Moodle Quiz Questions

Packt
08 Apr 2013
4 min read
(For more resources related to this topic, see here.) Getting ready Any learner taking a quiz may want to know how well he/she has answered the questions posed. Often, working with Moodle, the instructor is at a distance from the learner. Providing feedback is a great way of enhancing communication between learner and instructor. Learner feedback can be provided at multiple levels using Moodle Quiz. You can create feedback at various levels in both the questions and the overall quiz. Here we will examine feedback at the question level. General feedback When we add General Feedback to a question, every student sees the feedback, regardless of their answer to the question. This is good opportunity to provide clarification for the learner who had guessed a correct answer, as well as for the learner whose response was incorrect. Individual response feedback We can create feedback tailored to each possible response in a multiple choice question. This feedback can be more focused in nature. Often, a carefully crafted distracter in a multiple choice can reveal misconceptions and the feedback can provide the correction required as soon as the learner completes the quiz. Feedback given when the question is fresh in the learner's mind, is very effective. How to do it... Let's create some learner feedback for some of the questions that we have created in the question bank: First of all, let's add general feedback to a question. Returning to our True-False question on Texture, we can see that general feedback is effective when there are only two choices. Remember that this type of feedback will appear for all learners, regardless of the answer they submitted. The intention of this feedback is to reflect the correct solution and also give more background information to enhance the teaching opportunity. Let's take a look at how to create a specific feedback for each possible response that a learner may submit. This is done by adding individual response feedback. Returning to our multiple choice question on application of the element line, a specific feedback response tailored to each possible choice will provide helpful clarification for the student. This type of feedback is entered after each possible choice. Here is an example of a feedback to reinforce a correct response and a feedback for an incorrect response: In this case, the feedback the learner receives is tailored to the response they have submitted. This provides much more specific feedback to the learner's choice of responses. For the embedded question (Cloze), feedback is easy to add in Moodle 2.0. In the following screenshot, we can see the question that we created with feedback added: And this is what the feedback looks like to the student: How it works... We have now improved questions in our exam bank by providing feedback for the learner. We have created both general feedback that all learners will see and specific feedback for each response the learner may choose. As we think about the learning experience for the learner, we can see that immediate feedback with our questions is an effective way to reinforce learning. This is another feature that makes Moodle Quiz such a powerful tool. There's more... As we think about the type of feedback we want for the learner, we can combine feedback for individual responses with general feedback. Also there are options for feedback for any correct response, for any partially correct response, or for any incorrect response. Feedback serves to engage the learners and personalize the experience. 
We created question categories, organized our questions into categories, and learned how to add learner feedback at various levels inside the questions. We are now ready to configure a quiz. Summary In the article we have seen how we can add feedback to the questions of the Moodle Quiz. Resources for Article : Further resources on this subject: Integrating Moodle 2.0 with Mahara and GoogleDocs for Business [Article] What's New in Moodle 2.0 [Article] Moodle 2.0 FAQs [Article]

Getting Started with PrimeFaces

Packt
04 Apr 2013
14 min read
Setting up and configuring the PrimeFaces library
PrimeFaces is a lightweight JSF component library with one JAR file, which needs no configuration and does not contain any required external dependencies. To start with the development of the library, all we need is to get the artifact for the library.

Getting ready
You can download the PrimeFaces library from http://primefaces.org/downloads.html, and you need to add the primefaces-{version}.jar file to your classpath. After that, all you need to do is import the namespace of the library, which is necessary to add the PrimeFaces components to your pages, to get started. If you are using Maven (for more information on installing Maven, please visit http://maven.apache.org/guides/getting-started/maven-in-five-minutes.html), you can retrieve the PrimeFaces library by defining the Maven repository in your Project Object Model (POM) file as follows: <repository> <id>prime-repo</id> <name>PrimeFaces Maven Repository</name> <url>http://repository.primefaces.org</url> </repository> Add the dependency configuration as follows: <dependency> <groupId>org.primefaces</groupId> <artifactId>primefaces</artifactId> <version>3.4</version> </dependency> At the time of writing this book, the latest and most stable version of PrimeFaces was 3.4. To check whether this is still the latest available version, please visit http://primefaces.org/downloads.html. The code in this book will work properly with PrimeFaces 3.4. In prior or future versions, some methods, attributes, or components' behaviors may change.

How to do it...
In order to use PrimeFaces components, we need to add the namespace declarations to our pages. The namespace for PrimeFaces components is as follows: xmlns:p="http://primefaces.org/ui" For PrimeFaces Mobile, the namespace is http://primefaces.org/mobile. That is all there is to it. Note that the p prefix is just a symbolic name and any other character can be used to define the PrimeFaces components. Now you can create your first page with a PrimeFaces component as shown in the following code snippet: <html xmlns="http://www.w3.org/1999/xhtml" xmlns:f="http://java.sun.com/jsf/core" xmlns:h="http://java.sun.com/jsf/html" xmlns:p="http://primefaces.org/ui"> <f:view contentType="text/html"> <h:head /> <h:body> <h:form> <p:spinner /> </h:form> </h:body> </f:view> </html> This will render a spinner component with an empty value as shown in the following screenshot: A link to the working example for the given page is given at the end of this recipe.

How it works...
When the page is requested, the p:spinner component is rendered with the renderer implemented by the PrimeFaces library. Since the spinner component is a UI input component, the request-processing lifecycle will get executed when the user inputs data and performs a postback on the page. For the first page, we also needed to provide the contentType parameter for f:view, since WebKit-based browsers, such as Google Chrome and Apple Safari, request the content type application/xhtml+xml by default. This overcomes unexpected layout and styling issues that might occur.
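Going one step beyond the empty spinner, here is a hedged sketch of binding the component to a backing-bean property; the bean name, its field, and the attribute values are assumptions made for illustration rather than anything defined by this recipe:

<!-- Page fragment: the spinner writes into a bean property; min/max constrain the value. -->
<h:form>
    <p:spinner value="#{spinnerBean.quantity}" min="0" max="10" />
    <p:commandButton value="Save" update="@form" />
</h:form>

// Hypothetical backing bean for the fragment above.
import java.io.Serializable;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;

@ManagedBean
@ViewScoped
public class SpinnerBean implements Serializable {
    private int quantity; // bound to the p:spinner value

    public int getQuantity() { return quantity; }
    public void setQuantity(int quantity) { this.quantity = quantity; }
}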
There's more...
PrimeFaces only requires a Java 5+ runtime and a JSF 2.x implementation as mandatory dependencies. There are some optional libraries for certain features:

Dependency | Version | Type | Description
JSF runtime | 2.0 or 2.1 | Required | Apache MyFaces or Oracle Mojarra
iText | 2.1.7 | Optional | DataExporter (PDF)
Apache POI | 3.7 | Optional | DataExporter (Excel)
Rome | 1.0 | Optional | FeedReader
commons-fileupload | 1.2.1 | Optional | FileUpload
commons-io | 1.4 | Optional | FileUpload

Please ensure that you have only one JAR file of PrimeFaces or of a specific PrimeFaces theme in your classpath in order to avoid any issues regarding resource rendering. Currently, PrimeFaces supports the web browsers IE 7, 8, or 9, Safari, Firefox, Chrome, and Opera.

PrimeFaces Cookbook Showcase application
This recipe is available in the PrimeFaces Cookbook Showcase application on GitHub at https://github.com/ova2/primefaces-cookbook. You can find the details there for running the project. When the server is running, the showcase for the recipe is available at http://localhost:8080/primefaces-cookbook/views/chapter1/yourFirstPage.jsf

AJAX basics with Process and Update
PrimeFaces provides a partial page rendering (PPR) and view-processing feature based on standard JSF 2 APIs to enable choosing what to process in the JSF lifecycle and what to render in the end with AJAX. The PrimeFaces AJAX framework is based on the standard server-side APIs of JSF 2. On the client side, rather than using the client-side API implementations of JSF implementations such as Mojarra and MyFaces, PrimeFaces scripts are based on the jQuery JavaScript library.

How to do it...
We can create a simple page with a command button to update a string property with the current time in milliseconds on the server side, and an output text to show the value of that string property, as follows: <p:commandButton update="display" action="#{basicPPRController.updateValue}" value="Update" /> <h:outputText id="display" value="#{basicPPRController.value}"/> If we would like to update multiple components with the same trigger mechanism, we can provide the IDs of the components to the update attribute, separated by a space, a comma, or both, as follows: <p:commandButton update="display1,display2" /> <p:commandButton update="display1 display2" /> <p:commandButton update="display1,display2 display3" /> In addition, there are reserved keywords that are used for a partial update. We can also make use of these keywords along with the IDs of the components, as described in the following table:

Keyword | Description
@this | The component that triggers the PPR is updated
@parent | The parent of the PPR trigger is updated
@form | The encapsulating form of the PPR trigger is updated
@none | PPR does not change the DOM with the AJAX response
@all | The whole document is updated, as in non-AJAX requests

We can also update a component that resides in a different naming container from the component that triggers the update. In order to achieve this, we need to specify the absolute component identifier of the component that needs to be updated. An example for this could be the following: <h:form id="form1"> <p:commandButton update=":form2:display" action="#{basicPPRController.updateValue}" value="Update" /> </h:form> <h:form id="form2"> <h:outputText id="display" value="#{basicPPRController.value}"/> </h:form> public String updateValue() { value = String.valueOf(System.currentTimeMillis()); return null; } PrimeFaces also provides partial processing, which executes the JSF lifecycle phases—Apply Request Values, Process Validations, Update Model, and Invoke Application—for determined components with the process attribute.
This provides the ability to do group validation on JSF pages easily. Components like commandButton, commandLink, autoComplete, fileUpload, and many others provide this attribute to process a part of the view instead of the whole view. Partial processing comes in very handy in cases where a drop-down list needs to be populated upon a selection on another drop down while there is an input field on the page with the required attribute set to true. This approach also makes immediate subforms and regions obsolete. It will also prevent submission of the whole page, thus resulting in lightweight requests. Without partially processing the view for the drop downs, a selection on one of the drop downs would result in a validation error on the required field. An example for this is shown in the following code snippet: <h:outputText value="Country: " /> <h:selectOneMenu id="countries" value="#{partialProcessingController.country}"> <f:selectItems value="#{partialProcessingController.countries}" /> <p:ajax listener="#{partialProcessingController.handleCountryChange}" event="change" update="cities" process="@this"/> </h:selectOneMenu> <h:outputText value="City: " /> <h:selectOneMenu id="cities" value="#{partialProcessingController.city}"> <f:selectItems value="#{partialProcessingController.cities}" /> </h:selectOneMenu> <h:outputText value="Email: " /> <h:inputText value="#{partialProcessingController.email}" required="true" /> With this partial processing mechanism, when a user changes the country, the cities of that country will be populated in the drop down regardless of whether any input exists for the email field.

How it works...
As seen in the example for updating a component in a different naming container, <p:commandButton> updates the <h:outputText> component that has the ID display and the absolute client ID :form2:display, which is the search expression for the findComponent method. An absolute client ID starts with the separator character of the naming container, which is : by default. The <h:form>, <h:dataTable>, and composite JSF components, along with <p:tabView>, <p:accordionPanel>, <p:dataTable>, <p:dataGrid>, <p:dataList>, <p:carousel>, <p:galleria>, <p:ring>, <p:sheet>, and <p:subTable>, are the components that implement the NamingContainer interface. The findComponent method, which is described at http://docs.oracle.com/javaee/6/api/javax/faces/component/UIComponent.html, is used by both the JSF core implementation and PrimeFaces.

There's more...
JSF uses : (a colon) as the separator for the NamingContainer interface. The client IDs that will be rendered in the source page will look like :id1:id2:id3. If needed, the separator can be changed for the web application to something other than the colon with a context parameter in the web.xml file of the web application, as follows: <context-param> <param-name>javax.faces.SEPARATOR_CHAR</param-name> <param-value>_</param-value> </context-param> It's also possible to escape the : character, if needed, in CSS files with the \ character, as \:. The problem that might occur with the colon is that it's a reserved character for CSS and JavaScript frameworks like jQuery, so it might need to be escaped.
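To round off the partial processing example, here is a hedged sketch of what the backing controller could look like; the hard-coded country and city data, the scope, and the overall shape are illustrative assumptions, not code taken from the book:

// Hypothetical controller for the country/city example; data is hard-coded for brevity.
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;

@ManagedBean
@ViewScoped
public class PartialProcessingController implements Serializable {
    private String country;
    private String city;
    private String email;
    private List<String> countries = Arrays.asList("Turkey", "Germany");
    private List<String> cities = new ArrayList<String>();

    // Invoked by <p:ajax listener="..."> when the country selection changes.
    public void handleCountryChange() {
        if ("Turkey".equals(country)) {
            cities = Arrays.asList("Istanbul", "Ankara");
        } else if ("Germany".equals(country)) {
            cities = Arrays.asList("Berlin", "Munich");
        } else {
            cities = new ArrayList<String>();
        }
    }

    public String getCountry() { return country; }
    public void setCountry(String country) { this.country = country; }
    public String getCity() { return city; }
    public void setCity(String city) { this.city = city; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
    public List<String> getCountries() { return countries; }
    public List<String> getCities() { return cities; }
}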
PrimeFaces Cookbook Showcase application This recipe is available in the PrimeFaces Cookbook Showcase application on GitHub at https://github.com/ova2/primefaces-cookbook. You can find the details there for running the project. For the demos of the showcase, refer to the following: Basic Partial Page Rendering is available at http://localhost:8080/ primefaces-cookbook/views/chapter1/basicPPR.jsf Updating Component in Different Naming Container is available at http://localhost:8080/primefaces-cookbook/views/chapter1/ componentInDifferentNamingContainer.jsf A Partial Processing example at http://localhost:8080/primefacescookbook/ views/chapter1/partialProcessing.jsf Internationalization (i18n) and Localization (L10n) Internationalization (i18n) and Localization (L10n) are two important features that should be provided in the web application's world to make it accessible globally. With Internationalization, we are emphasizing that the web application should support multiple languages; and with Localization, we are stating that the texts, dates, or any other fields should be presented in the form specific to a region. PrimeFaces only provides the English translations. Translations for the other languages should be provided explicitly. In the following sections, you will find the details on how to achieve this. Getting ready For Internationalization, first we need to specify the resource bundle definition under the application tag in faces-config.xml, as follows: <application> <locale-config> <default-locale>en</default-locale> <supported-locale>tr_TR</supported-locale> </locale-config> <resource-bundle> <base-name>messages</base-name> <var>msg</var> </resource-bundle> </application> A resource bundle would be a text file with the .properties suffix that would contain the locale-specific messages. So, the preceding definition states that the resource bundle messages_{localekey}.properties file will reside under classpath and the default value of localekey is en, which is English, and the supported locale is tr_TR, which is Turkish. For projects structured by Maven, the messages_{localekey}.properties file can be created under the src/main/resources project path. How to do it... For showcasing Internationalization, we will broadcast an information message via FacesMessage mechanism that will be displayed in the PrimeFaces growl component. We need two components, the growl itself and a command button, to broadcast the message. <p:growl id="growl" /> <p:commandButton action="#{localizationController.addMessage}" value="Display Message" update="growl" /> The addMessage method of localizationController is as follows: public String addMessage() { addInfoMessage("broadcast.message"); return null; } That uses the addInfoMessage method, which is defined in the static MessageUtil class as follows: public static void addInfoMessage(String str) { FacesContext context = FacesContext.getCurrentInstance(); ResourceBundle bundle = context.getApplication(). getResourceBundle(context, "msg"); String message = bundle.getString(str); FacesContext.getCurrentInstance().addMessage(null, new FacesMessage(FacesMessage.SEVERITY_INFO, message, "")); } Localization of components, such as calendar and schedule, can be achieved by providing the locale attribute. By default, locale information is retrieved from the view's locale and it can be overridden by a string locale key or the java.util.Locale instance. Components such as calendar and schedule use a shared PrimeFaces.locales property to display labels. 
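Before moving on to component localization, here is a minimal sketch of what the bundle files themselves might contain for this recipe. The broadcast.message key is the one used by the addInfoMessage call above; the message texts are only illustrative. Note that .properties files are read as ISO-8859-1, so any non-Latin characters must be written as \uXXXX escapes.

# src/main/resources/messages_en.properties
broadcast.message=This is a broadcast message.

# src/main/resources/messages_tr_TR.properties
broadcast.message=Bu bir duyuru mesajidir.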
PrimeFaces only provides English translations, so in order to localize the calendar we need to put corresponding locales into a JavaScript file and include the scripting file to the page. The content for the German locale of the Primefaces.locales property for calendar would be as shown in the following code snippet. For the sake of the recipe, only the German locale definition is given and the Turkish locale definition is omitted. PrimeFaces.locales['de'] = { closeText: 'Schließen', prevText: 'Zurück', nextText: 'Weiter', monthNames: ['Januar', 'Februar', 'März', 'April', 'Mai', 'Juni', 'Juli', 'August', 'September', 'Oktober', 'November', 'Dezember'], monthNamesShort: ['Jan', 'Feb', 'Mär', 'Apr', 'Mai', 'Jun', 'Jul', 'Aug', 'Sep', 'Okt', 'Nov', 'Dez'], dayNames: ['Sonntag', 'Montag', 'Dienstag', 'Mittwoch', 'Donnerstag', 'Freitag', 'Samstag'], dayNamesShort: ['Son', 'Mon', 'Die', 'Mit', 'Don', 'Fre', 'Sam'], dayNamesMin: ['S', 'M', 'D', 'M ', 'D', 'F ', 'S'], weekHeader: 'Woche', FirstDay: 1, isRTL: false, showMonthAfterYear: false, yearSuffix: '', timeOnlyTitle: 'Nur Zeit', timeText: 'Zeit', hourText: 'Stunde', minuteText: 'Minute', secondText: 'Sekunde', currentText: 'Aktuelles Datum', ampm: false, month: 'Monat', week: 'Woche', day: 'Tag', allDayText: 'Ganzer Tag' }; Definition of the calendar components with the locale attribute would be as follows: <p:calendar showButtonPanel="true" navigator="true" mode="inline" id="enCal"/> <p:calendar locale="tr" showButtonPanel="true" navigator="true" mode="inline" id="trCal"/> <p:calendar locale="de" showButtonPanel="true" navigator="true" mode="inline" id="deCal"/> They will be rendered as follows: How it works... For Internationalization of the Faces message, the addInfoMessage method retrieves the message bundle via the defined variable msg. It then gets the string from the bundle with the given key by invoking the bundle.getString(str) method. Finally, the message is added by creating a new Faces message with severity level FacesMessage.SEVERITY_INFO. There's more... For some components, Localization could be accomplished by providing labels to the components via attributes, such as with p:selectBooleanButton. <p:selectBooleanButton value="#{localizationController.selectedValue}" onLabel="#{msg['booleanButton.onLabel']}" offLabel="#{msg['booleanButton.offLabel']}" /> The msg variable is the resource bundle variable that is defined in the resource bundle definition in Faces configuration file. The English version of the bundle key definitions in the messages_en.properties file that resides under classpath would be as follows: booleanButton.onLabel=Yes booleanButton.offLabel=No PrimeFaces Cookbook Showcase application This recipe is available in the PrimeFaces Cookbook Showcase application on GitHub at https://github.com/ova2/primefaces-cookbook. You can find the details there for running the project. For the demos of the showcase, refer to the following: Internationalization is available at http://localhost:8080/primefacescookbook/ views/chapter1/internationalization.jsf Localization of the calendar component is available at http://localhost:8080/ primefaces-cookbook/views/chapter1/localization.jsf Localization with resources is available at http://localhost:8080/ primefaces-cookbook/views/chapter1/localizationWithResources. jsf For already translated locales of the calendar, see https://code.google.com/archive/p/primefaces/wikis/PrimeFacesLocales.wiki
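The recipe above mentions putting the PrimeFaces.locales translations into a JavaScript file and including that file in the page, but does not show the include itself. One way to do it with standard JSF resource handling might look like the following; the resource library and file names are assumptions, not part of the original recipe:

<!-- Place the translations (e.g., PrimeFaces.locales['de'] = {...}) in
     src/main/webapp/resources/js/primefaces-locales.js and include the file
     before any calendar that uses a non-English locale -->
<h:outputScript library="js" name="primefaces-locales.js" />

<p:calendar locale="de" showButtonPanel="true" navigator="true" mode="inline" id="deCal"/>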

Building a Custom Version of jQuery

Packt
04 Apr 2013
9 min read
(For more resources related to this topic, see here.) Why Is It Awesome? While it's fairly common for someone to say that they use jQuery in every site they build (this is usually the case for me), I would expect it much rarer for someone to say that they use the exact same jQuery methods in every project, or that they use a very large selection of the available methods and functionality that it offers. The need to reduce file size as aggressively as possible to cater for the mobile space, and the rise of micro-frameworks such as Zepto for example, which delivers a lot of jQuery functionality at a much-reduced size, have pushed jQuery to provide a way of slimming down. As of jQuery 1.8, we can now use the official jQuery build tool to build our own custom version of the library, allowing us to minimize the size of the library by choosing only the functionality we require. For more information on Zepto, see http://zeptojs.com/. Your Hotshot Objectives To successfully conclude this project we'll need to complete the following tasks: Installing Git and Make Installing Node.js Installing Grunt.js Configuring the environment Building a custom jQuery Running unit tests with QUnit Mission Checklist We'll be using Node.js to run the build tool, so you should download a copy of this now. The Node website (http://nodejs.org/download/) has an installer for both 64 and 32-bit versions of Windows, as well as a Mac OS X installer. It also features binaries for Mac OS X, Linux, and SunOS. Download and install the appropriate version for your operating system. The official build tool for jQuery (although it can do much more besides build jQuery) is Grunt.js, written by Ben Alman. We don't need to download this as it's installed via the Node Package Manager (NPM). We'll look at this process in detail later in the project. For more information on Grunt.js, visit the official site at http://gruntjs.com. First of all we need to set up a local working area. We can create a folder in our root project folder called jquery-source. This is where we'll store the jQuery source when we clone the jQuery Github repository, and also where Grunt will build the final version of jQuery. Installing Git and Make The first thing we need to install is Git, which we'll need in order to clone the jQuery source from the Github repository to our own computer so that we can work with the source files. We also need something called Make, but we only need to actually install this on Mac platforms because it gets installed automatically on Windows when Git is installed. As the file we'll create will be for our own use only and we don't want to contribute to jQuery by pushing code back to the repository, we don't need to worry about having an account set up on Github. Prepare for Lift Off First we'll need to download the relevant installers for both Git and Make. Different applications are required depending on whether you are developing on Mac or Windows platforms. Mac developers Mac users can visit http://git-scm.com/download/mac for Git. Next we can install Make. Mac developers can get this by installing XCode. This can be downloaded from https://developer.apple.com/xcode/. Windows developers Windows users can install msysgit, which can be obtained by visiting https://code.google.com/p/msysgit/downloads/detail?name=msysGit-fullinstall-1.8.0-preview20121022.exe. Engage Thrusters Once the installers have downloaded, run them to install the applications. 
The defaults selected by the installers should be fine for the purposes of this mission. First we should install Git (or msysgit on Windows).

Mac developers

Mac developers simply need to run the installer for Git to install it to the system. Once this is complete we can then install XCode. All we need to do is run the installer, and Make, along with some other tools, will be installed and ready.

Windows developers

Once the full installer for msysgit has finished, you should be left with a Command Line Interface (CLI) window (entitled MINGW32) indicating that everything is ready for you to hack. However, before we can hack, we need to compile Git. To do this we need to run a file called initialize.sh. In the MINGW32 window, cd into the msysgit directory. If you allowed this to install to the default location, you can use the following command:

cd C:\msysgit\msysgit\share\msysGit

Once we are in the correct directory, we can then run initialize.sh in the CLI. Like the installation, this process can take some time, so be patient and wait for the CLI to return a flashing cursor at the $ character. An Internet connection is required to compile Git in this way.

Windows developers will need to ensure that the Git.exe and MINGW resources can be reached via the system's PATH variable. This can be updated by going to Control Panel | System | Advanced system settings | Environment variables. In the bottom section of the dialog box, double-click on Path and add the following two paths to the git.exe file in the bin folder, which is itself in a directory inside the msysgit folder, wherever you chose to install it:

;C:\msysgit\msysgit\bin;C:\msysgit\msysgit\mingw\bin;

Update the path with caution! You must ensure that the path to Git.exe is separated from the rest of the Path variables with a semicolon. If the path does not end with a semicolon before adding the path to Git.exe, make sure you add one. Incorrectly updating your path variables can result in system instability and/or loss of data. I have shown a semicolon at the start of the previous code sample to illustrate this.

Once the path has been updated, we should then be able to use a regular command prompt to run Git commands.

Post-installation tasks

In a terminal or Windows Command Prompt (I'll refer to both simply as the CLI from this point on for conciseness) window, we should first cd into the jquery-source folder we created at the start of the project. Depending on where your local development folder is, this command will look something like the following:

cd c:\jquery-hotshots\jquery-source

To clone the jQuery repository, enter the following command in the CLI:

git clone git://github.com/jquery/jquery.git

Again, we should see some activity on the CLI before it returns to a flashing cursor to indicate that the process is complete. Depending on the platform you are developing on, you should see something like the following screenshot:

Objective Complete — Mini Debriefing

We installed Git and then used it to clone the jQuery Github repository into this directory in order to get a fresh version of the jQuery source. If you're used to SVN, cloning a repository is conceptually the same as checking out a repository. Again, the syntax of these commands is very similar on Mac and Windows systems, but notice how we need to escape the backslashes in the path when using Windows. Once this is complete, we should end up with a new directory inside our jquery-source directory called jquery.
If we go into this directory, there are some more directories, including:

build: This directory is used by the build tool to build jQuery
speed: This directory contains benchmarking tests
src: This directory contains all of the individual source files that are compiled together to make jQuery
test: This directory contains all of the unit tests for jQuery

It also has a range of other files, including:

Licensing and documentation, including jQuery's authors and a guide to contributing to the project
Git-specific files such as .gitignore and .gitmodules
Grunt-specific files such as Gruntfile.js
JSHint for testing and code-quality purposes

Make is not something we need to use directly, but Grunt will use it when we build the jQuery source, so it needs to be present on our system.

Installing Node.js

Node.js is a platform for running server-side applications built with JavaScript. It is trivial to create a web-server instance, for example, that receives and responds to HTTP requests using callback functions. Server-side JS isn't exactly the same as its more familiar client-side counterpart, but you'll find a lot of similarities in the same comfortable syntax that you know and love. We won't actually be writing any server-side JavaScript in this project – all we need Node for is to run the Grunt.js build tool.

Prepare for Lift Off

To get the appropriate installer for your platform, visit the Node.js website at http://nodejs.org and hit the download button. The correct installer for your platform, if supported, should be auto-detected.

Engage Thrusters

Installing Node is a straightforward procedure on either the Windows or Mac platforms as there are installers for both. This task will include running the installer, which is obviously simple, and testing the installation using a CLI.

On Windows or Mac platforms, run the installer and it will guide you through the installation process. I have found that the default options are fine in most cases. As before, we also need to update the Path variable to include Node and Node's package manager, NPM. The paths to these directories will differ between platforms.

Mac

Mac developers should check that the $PATH variable contains a reference to usr/local/bin. I found that this was already in my $PATH, but if you do find that it's not present, you should add it. For more information on updating your $PATH variable, see http://www.tech-recipes.com/rx/2621/os_x_change_path_environment_variable/.

Windows

Windows developers will need to update the Path variable, in the same way as before, with the following paths:

C:\Program Files\nodejs;
C:\Users\Desktop\AppData\Roaming\npm;

Windows developers may find that the Path variable already contains an entry for Node, so may just need to add the path to NPM.

Objective Complete — Mini Debriefing

Once Node is installed, we will need to use a CLI to interact with it. To verify that Node has installed correctly, type the following command into the CLI:

node -v

The CLI should report the version in use, as follows:

We can test NPM in the same way by running the following command:

npm -v
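The remaining objectives cover installing Grunt and running the build. Purely as a preview, and only as a sketch (the exact Grunt packages and custom-build flags depend on the jQuery version you cloned, so treat the module names below as illustrative and check the README in the cloned repository), the commands typically look something like this:

npm install -g grunt-cli      # install the Grunt command-line interface globally
cd jquery                     # move into the cloned repository
npm install                   # install the build dependencies listed in package.json
grunt                         # run the default build
grunt custom:-ajax,-effects   # example: build a custom jQuery without the ajax and effects modules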

Creating and optimizing your first Retina image

Packt
03 Apr 2013
6 min read
(For more resources related to this topic, see here.)

Creating your first Retina image (Must know)

Apple's Retina Display is a brand name for their high pixel density screens. These screens have so many pixels within a small space that the human eye cannot see pixelation, making images and text appear smoother. To compete with Apple's display, other manufacturers are also releasing devices using high-density displays. These types of displays are becoming standard in high quality devices.

When you first start browsing the Web using a Retina Display, you'll notice that many images on your favorite sites are blurry. This is a result of low-resolution images being stretched to fill the screen. The effect can make an otherwise beautiful website look unattractive. The key to making your website look exceptional on Retina Displays is the quality of the images that you are using. In this recipe, we will cover the basics of creating high-resolution images and suggestions on how to name your files. Then we'll use some simple HTML to display the image on a web page.

Getting ready

Creating a Retina-ready site doesn't require any special software beyond what you're already using to build web pages. You'll need a graphics editor (such as Photoshop or GIMP) and your preferred code/text editor. To test the code on a Retina Display you'll also need a web server that you can reach from a browser, if you aren't coding directly on the Retina device.

The primary consideration in getting started is the quality of your images. A Retina image needs to be at least two times as large as it will be displayed on screen. If you have a photo you'd like to add to your page that is 500 pixels wide, you'll want to start out with an image that is at least 1000 pixels wide. Trying to increase the size of a small image won't work, because the extra pixels are what make your image sharp. When designing your own graphics, such as icons and buttons, it's best to create them using a vector graphics program so they will be easy to resize without affecting the quality. Once you have your high-resolution artwork gathered, we're ready to start creating Retina images.

How to do it...

To get started, let's create a folder on your computer called retina. Inside that folder, create another folder called images. We'll use this as the directory for building our test website.

To create your first Retina image, first open a high-resolution image in your graphics editor. You'll want to set the image size to be double the size of what you want to display on the page. For example, if you wanted to display a 700 x 400 pixel image, you would start with an image that is 1400 x 800 pixels. Make sure you aren't increasing the size of the original image or it won't work correctly.

Next, save this image as a .jpg file with the filename myImage@2x.jpg inside the /images/ folder within the /retina/ folder that we created. Then resize the image to 50 percent and save it as myImage.jpg in the same location.

Now we're ready to add our new images to a web page. Create an HTML document called retinaTest.html inside the /retina/ folder. Inside the basic HTML structure, add the two images we created and set the dimensions for both images to the size of the smaller image.
<body>
  <img src="images/myImage@2x.jpg" width="700" height="400" />
  <img src="images/myImage.jpg" width="700" height="400" />
</body>

If you are working on a Retina device you should be able to open this page locally; if not, upload the folder to your web server and open the page on your device. You will notice how much sharper the first image is than the second image. On a device without a Retina Display, both images will look the same. Congratulations! You've just built your first Retina-optimized web page.

How it works...

Retina Displays have a higher number of pixels per inch (PPI) than a normal display. In Apple's devices they have double the PPI of older devices, which is why we created an image that was two times as large as the final image we wanted to display. When that large image is added to the code and then resized to 50 percent, it contains more data than what is shown on a normal display. A Retina device will see that extra pixel data and use it to fill the extra PPI that its screen contains. Without the added pixel data, the device will stretch the data available to fill the screen, creating a blurry image. You'll notice that this effect is most obvious on large photos and computer graphics like icons. Keep in mind this technique will work with any image format, such as .jpg, .png, or .gif.

There's more...

As an alternative to using the image width and height attributes in HTML, as in the previous code, you can also give the image a CSS class with width and height attributes. This is only recommended if you will be using many images of the same size and you want to be able to change them easily.

<style>
  .imgHeader {
    width: 700px;
    height: 400px;
  }
</style>

<img src="images/myImage@2x.jpg" class="imgHeader" />

Tips for creating images

We created both a Retina and a normal image. It's always a good idea to create both images because the Retina image will be quite a bit larger than the normal one. You'll then have the option of choosing which image to display, so users without a Retina device don't have to download the larger file. You'll also notice that we added @2x to the filename of the larger image. It's a good practice to create consistent filenames to differentiate the images that are high-resolution. It will make our coding work much easier going forward.

Pixels per inch and dots per inch

When designers with a print background first look into creating graphics for Retina Displays there can be some confusion regarding dots per inch (DPI). Keep in mind that computer displays are only concerned with the number of pixels in an image. An 800 x 600 pixel image at 300 DPI will display the same as an 800 x 600 pixel image at 72 DPI.
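One detail the recipe leaves open is how to serve the smaller file to non-Retina screens. For CSS background images, a common approach (not part of the original recipe; it reuses the myImage file names only as an illustration) is a resolution media query:

.imgHeader {
  width: 700px;
  height: 400px;
  background-image: url("images/myImage.jpg");
  background-size: 700px 400px;  /* scale the bitmap to the element's CSS size */
}

/* Swap in the @2x file only when the device reports a high pixel density */
@media only screen and (-webkit-min-device-pixel-ratio: 2),
       only screen and (min-resolution: 192dpi) {
  .imgHeader {
    background-image: url("images/myImage@2x.jpg");
  }
}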

Introduction to RWD frameworks

Packt
29 Mar 2013
8 min read
(For more resources related to this topic, see here.) Certainly, whether you are a beginner designer or an expert, creating a responsive website from the ground up can be convoluted. This is probably because of some indispensable technical issues in RWD, such as determining the proper number of columns in the grid and calculating the percentage of the width for each column, determining the correct breakpoint, and other technicalities that usually appear in the development stage. Many threads regarding the issues of creating responsive websites are open on StackOverflow: CSS Responsive grid 1px gap issue (http://stackoverflow.com/questions/12797183/cssresponsive-grid-1px-gap-issue) @media queries - one rule overrides another? (http://stackoverflow.com/questions/12822984/media-queriesone-rule-overrides-another) Why use frameworks? Following are a few reasons why using a framework is considered a good option: Time saver: If done right, using a framework could obviously save a lot of time. A framework generally comes with predefined styles and rules, such as the width of the gird, the button styles, font sizes, form styles, CSS reset, and other aspects to build a website. So, we don't have to repeat the same process from the beginning but simply follow the instructions to apply the styles and structure the markup. Bootstrap, for example, has been equipped with grid styles (http://twitter.github.com/bootstrap/scaffolding.html), basic styles (http://twitter.github.com/bootstrap/base-css.html), and user interface styles (http://twitter.github.com/bootstrap/components.html). Community and extension: A popular framework will most likely have an active community that extends the framework functionality. jQuery UI Bootstrap is perhaps a good example in this case; it is a theme for jQuery UI that matches the look and feel of the Bootstrap original theme. Also, Skeleton, has been extended to the WordPress theme (http://themes.simplethemes.com/skeleton/) and to Drupal (http://demo.drupalizing.com/?theme=skeleton). Cross browser compatibility : This task of assuring how the web page is displayed on different browsers is a really painful one. With a framework, we can minimize this hurdle, since the developers, most likely, have done this job before the framework is released publicly. Foundation is a good example in this case. It has been tested in the iOS, Android, and Windows Phone 7 browsers (http://foundation.zurb.com/docs/support.html). Documentation: A good framework also comes with documentation. The documentation will be very helpful when we are working with a team, to get members on the same page and make them follow the standard code-writing convention. Bootstrap ( http://twitter.github.com/bootstrap/getting-started.html) and Foundation ( http://foundation.zurb.com/docs/index.php), for example, have provided detailed documentation on how to use the framework. There are actually many responsive frameworks to choose from, such as Skeleton, Bootstrap, and Foundation. Let's take a look. Skeleton Skeleton (http://www.getskeleton.com/) is a minimal responsive framework; if you have been working with the 960.gs framework (http://960.gs/), Skeleton should immediately look familiar. Skeleton is 960 pixels wide with 16 columns in its basic grid; the only difference is that the grid is now responsive by integrating the CSS3 media queries. 
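To make the "responsive by media queries" idea concrete, the heart of a Skeleton-style responsive grid is simply a set of column widths that get redefined at each breakpoint. The following is only an illustrative sketch; the class names, breakpoints, and pixel values are not Skeleton's actual source:

/* Desktop: one column of a 960px, 16-column grid */
.container .one-column {
  float: left;
  margin: 0 10px;
  width: 40px;
}

/* Tablet portrait: the whole grid shrinks to 768px */
@media only screen and (min-width: 768px) and (max-width: 959px) {
  .container .one-column { width: 28px; }
}

/* Phones: columns stack and take the full width */
@media only screen and (max-width: 767px) {
  .container .one-column { width: 100%; margin: 0; }
}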
In case this is the first time you have heard about 960.gs or Grid System, you can follow the screencast tutorial by Jeffrey Way available at http://learncss.tutsplus.com/lesson/css-frameworks/. In this screencast, he shows how Grid System works and also guides you to create a website with 960.gs. It is a good place to start with Grid System. Bootstrap Bootstrap (http://twitter.github.com/bootstrap/) was originally built by Mark Otto (http://markdotto.com) and only intended for internal use in Twitter. Short story: Bootstrap was then launched as a free software for public. In it's early development, the responsive feature was not yet included; it was then added in Version 2 in response to the increasing demand for RWD. Bootstrap has a lot more added features as compared to Skeleton. It is packed with styled user interface components of commonly-used interfaces on a website, such as buttons, navigation, pagination, and forms. Beyond that, Bootstrap is also powered with some custom jQuery plugins, such as a tab, carousel, popover, and modal box. To get started with Bootstrap, you can follow the tutorial series (http://www.youtube.com/playlist?list=PLA615C8C2E86B555E) by David Cochran (https://twitter.com/davidcochran). He has thoroughly explained from the basics to utilizing the plugins in this series. Bootstrap has been associated with Twitter so far, but since the author has departed from Twitter and Bootstrap itself has grown beyond expectation, Bootstrap is likely to get separated from the Twitter brand as well (http://blog.getbootstrap.com/2012/09/29/onward/). Foundation Foundation (http://foundation.zurb.com) was built by a team at ZURB (http://www.zurb.com/about/), a product design agency based in California. Similar to Bootstrap, Foundation is beyond just a responsive CSS framework; it is equipped with predefined styles for a common web user interface, such as buttons (http://foundation.zurb.com/docs/components/buttons.html), navigation (http://foundation.zurb.com/docs/components/top-bar.html), and forms. In addition to this, it has also been powered up with some jQuery plugins. A few high-profile brands, such as Pixar (http://projection.pixar.com/) and National Geographic Channel (http://globalcloset.education.nationalgeographic.com/), have built their website on top of this framework. Who is using these frameworks? Now, apart from the two high-profile names we have mentioned in the preceding section, it will be nice to see what other brands and websites have been doing with these frameworks to get inspired. Let's take a look. Hivemind Hivemind is a design firm based in Wisconsin. Their website (www.ourhivemind.com) has been built using Skeleton. As befits the Skeleton framework, their website is very neat, simple, and well structured. The following screenshot shows how it responds in different viewport sizes: Living.is Living.is (http://living.is) is a social sharing website for living room stuff, ideas, and inspiration, such as sofas, chairs, and shelves. Their website has been built using Bootstrap. If you have been examining the Bootstrap UI components yourself, you will immediately recognize this from the button styles. The following screenshot shows how the Living.is page is displayed in the large viewport size: When viewed in a smaller viewport, the menu navigation is concatenated, turning into a navigation button with three stripes, as shown in the following screenshot. 
This approach now seems to be a popular practice, and this type of button is generally agreed to be a navigation button; the new Google Chrome website has also applied this button approach in their new release. When we click or tap on this button, it will expand the navigation downward, as shown in the following screenshot: To get more inspiration from websites that are built with Bootstrap, you can visit http://builtwithbootstrap.com/. However, the websites listed are not all responsive. Swizzle Swizzle (www.getswizzle.com) is an online service and design studio based in Canada. Their website is built on Foundation. The following screenshot shows how it is displayed in the large viewport size: Swizzle used a different way to deliver their navigation in a smaller viewport. Rather than expanding the menu as Bootstrap does, Swizzle replaces the menu navigation with a MENU link that refers to the navigation at the footer. The cons Using a framework also comes with its own problems. The most common problems found when adopting a framework are as follows: Excessive codes: Since a framework is likely to be used widely, it needs to cover every design scenario, and so it also comes with extra styles that you might not need for your website. Surely, you can sort out the styles and remove them, but this process, depending on the framework, could take a lot of time and could also be a painful task. Learning curve: The first time, it is likely that you will need to spend some time to learn how the framework works, including examining the CSS classes, the ID, and the names, and structuring HTML properly. But, this probably will only happen in your first try and won't be an issue once you are familiar with the framework. Less flexibility: A framework comes with almost everything set up, including the grid width, button styles, and border radius, and follows the standard of its developers. If things don't work the way we want them to, changing it could take a lot of time, and if it is not done properly, it could ruin all other code structure. TOther designers may also have particular issues regarding using a framework; you can further follow the discussion on this matter at http://stackoverflow.com/questions/203069/ what-is-the-best-css-framework-and-are-they-worth-the-effort. The CSS Trick forum has also opened a similar thread on this topic at http://css-tricks.com/forums/discussion/11904/css-frameworks-the-pros-and-cons/p1. Summary In this article we discussed some basic things about Responsive Web Design framework. Resources for Article : Further resources on this subject: Creating mobile friendly themes [Article] Tips and Tricks for Getting Started with OpenGL and GLSL 4.0 [Article] Debugging REST Web Services [Article]

So, what is Django?

Packt
26 Mar 2013
7 min read
(For more resources related to this topic, see here.) I would like to introduce you to Django by using a definition straight from its official website:

Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design.

The first part of this definition makes clear that Django is a software framework written in Python and designed to support the development of web applications by offering a series of solutions to common problems and an abstraction of common design patterns for web development. The second part already gives you a clear idea of the basic concepts on which Django is built, by highlighting its capability for rapid development without compromising the quality and the maintainability of the code.

To get the job done in a fast and clean way, the Django stack is made up of a series of layers that have nearly no dependencies between them. This brings great benefits, as it drives you to write code with almost no knowledge shared between components, making future changes easy to apply and avoiding side effects on other components. All this identifies Django as a loosely coupled framework, and its structure is a consequence of the approach just described. It can be defined as a Model-Template-View (MTV) framework, a variation of the well-known architectural pattern called Model-View-Controller (MVC). The MTV structure can be explained in the following way:

Model: The application data
View: Which data is presented
Template: How the data is presented

As you can understand from the architectural structure of the framework, one of the most basic and important Django components is the Object Relational Mapper (ORM), which lets you define your data models entirely in Python and offers a complete dynamic API to access your database.

The template engine also plays an important role in making the framework so great and easy to use: it is built to be designer-friendly. This means the templates are just HTML and that the template language doesn't add any variable assignments or advanced logic, offering only "programming-esque" functionality such as looping. Another innovative concept in the Django template engine is the introduction of template inheritance. The possibility to extend a base template discourages redundancy and helps you to keep the information in one place.

The key to the success of a web framework is also making it possible to easily plug third-party modules into it. Django embraces this concept and, like Python, comes with "batteries included". It is built with a system to plug in applications in an easy way, and the framework itself already includes a series of useful applications that you are free to use or not. One of the included applications that makes Django successful is the automatic admin interface, a complete, user-friendly, and production-ready web admin interface for your projects. It's easy to customize and extend, and is a great added value that helps you to speed up most common web projects.

In modern web application development, systems are often built for a global audience, and web frameworks have to take into account the need to provide support for internationalization and localization. Django has full support for the translation of text, formatting of dates, times, and numbers, and time zones, and all this makes it possible to create multilingual web projects in a clear and easy way.
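Before looking at more features, here is a minimal sketch of how the MTV pieces described above fit together in a Django 1.5-era project. None of this comes from the article; the app, model, and template names are purely illustrative.

# models.py - the Model: the application data, managed by the ORM
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=100)
    published = models.DateField()

    def __unicode__(self):  # Python 2-style string representation
        return self.title

# views.py - the View: decides which data is presented
from django.shortcuts import render

def article_list(request):
    articles = Article.objects.order_by('-published')
    return render(request, 'articles/list.html', {'articles': articles})

# templates/articles/list.html - the Template: how the data is presented
# <ul>
# {% for article in articles %}
#   <li>{{ article.title }} ({{ article.published }})</li>
# {% endfor %}
# </ul>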
On top of all these great features, Django ships with a complete cache framework, which is must-have support in a web framework if we want to guarantee great performance under high load. This component makes caching an easy task, offering support for different types of cache backends, from in-memory caching to the most famous one, memcached.

There are several other reasons that make Django a great framework, and most of them can only really be understood by diving into it, so do not hesitate and let's jump into Django.

Installation

Installing Django on your system is very easy. As it is just Python, you will only need a small effort to get it up and running on your machine. We will do it in two easy steps:

Step 1 – What do I need?

The only thing you need on your system to get Django running is obviously Python. At the time of writing this book, the latest available version of Django is 1.5c1 (release candidate); it works on all Python versions from 2.6.5 to 2.7, and it also features experimental support for Version 3.2 and Version 3.3.

Get the right Python package for your system at http://www.python.org. If you are running Linux or Mac OS X, Python is probably already installed in your operating system. If you are using Windows you will need to add the path of the Python installation folder (C:\Python27) to the environment variables.

You can verify that Python is installed by typing python in your shell. The expected result should look similar to the following output:

Python 2.7.2 (default, Jun 20 2012, 16:23:33)
[GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

Step 2 – Get and install Django

Now we will see two methods to install Django: through a Python package manager tool called pip, and the manual way. Feel free to use the one that you prefer. At the time of writing this book, Django Version 1.5 is in release candidate status; if this is still the case, jump to the manual installation step and download the 1.5 release candidate package in place of the last stable one.

Installing Django with pip

Install pip; the easiest way is to get the installer from http://www.pip-installer.org.

If you are using a Unix OS execute the following command:

$ sudo pip install Django

If you are using Windows you will need to start a shell with administrator privileges and run the following command:

$ pip install Django

Installing Django manually

Download the last stable release from the official Django website https://www.djangoproject.com/download/.

Uncompress the downloaded file using the tool that you prefer.

Change to the directory just created (cd Django-X.Y).

If you are using a Unix OS execute the following command:

$ sudo python setup.py install

If you are using Windows you will need to start a shell with administrator privileges and run the command:

$ python setup.py install

Verifying the Django installation

To verify that Django is installed on your system you just need to open a shell and launch a Python console by typing python. In the Python console try to import Django:

>>> import django
>>> django.get_version()
'1.5c1'

And that's it!! Now that Django is installed on your system we can start to explore all its potential.

Summary

In this article we learned about what Django actually is, what you can do with it, and why it's so great. We also learned how to download and install Django with minimum fuss and then set it up so that you can use it as soon as possible.
Resources for Article : Further resources on this subject: Creating an Administration Interface in Django [Article] Creating an Administration Interface with Django 1.0 [Article] Views, URLs, and Generic Views in Django 1.0 [Article]

Getting Started with Impressive Presentations

Packt
25 Mar 2013
8 min read
(For more resources related to this topic, see here.)

What is impress.js?

impress.js is a presentation framework built upon the powerful CSS3 transformations and transitions supported by modern web browsers. Bartek Szopka is the creator of this amazing framework. According to him, the idea came while he was playing with CSS transformations, and Prezi.com was the source of his inspiration. On w3.org we have the following mentioned about CSS transforms:

CSS transforms allows elements styled with CSS to be transformed in two-dimensional or three-dimensional space

For more information on CSS transformations, for those who are interested, visit http://www.w3.org/TR/css3-transforms/.

Creating presentations with impress.js is not a difficult task once you get used to the basics of the framework. Slides in impress.js presentations are called steps, and they go beyond the conventional presentation style. We can have multiple steps visible at the same time, with different dimensions and effects. impress.js step designs are built with HTML, which means we can create unlimited effects; the only limitation is your imagination.

Built-in features

impress.js comes with advanced support for most CSS transformations. We can combine these features to provide more advanced visualizations in modern browsers. These features are as follows:

Positioning: Elements can be placed in certain areas of the browser window, enabling us to move between slides.
Scaling: Elements can be scaled up or scaled down to show an overview or a detailed view of elements.
Rotating: Elements can be rotated across any given axis.
Working in 3D space: Presentations are not limited to 2D space. All the previously mentioned effects can be applied in 3D space with the z axis.

Beyond presentations with impress.js

This framework was created to build online presentations with awesome effects using the power of CSS and JavaScript. Bartek, the creator of the framework, mentions that it has been used for various purposes beyond its original intention. Here are some of the most common usages of the impress.js framework:

Creating presentations
Portfolios
Sliders
Single page websites

A list of demos containing various types of impress.js presentations can be found at https://github.com/bartaz/impress.js/wiki/Examples-and-demos.

Why is it important?

You must be wondering why we need to care about such a framework when we have quality presentation programs such as PowerPoint. The most important thing to look at is the license for impress.js. Since it is licensed under MIT and GPL, we can even change the source code to customize the framework according to our needs. Also, most modern browsers support CSS transformations, allowing you to use impress.js and eliminating the platform dependency of presentation programs.

Both desktop-based presentations and online presentations are equally good at presenting information to the audience, but online presentations with impress.js provide a slight advantage over desktop-based presentations in terms of usability. The following are some of the drawbacks of desktop-program-generated presentations, compared to impress.js presentations:

Desktop presentations require presentation creation software or a presentation viewer. Therefore, it's difficult to get the same output in different operating systems.
Desktop presentations use standard slide-based techniques with a common template, while impress.js presentation slides can be designed in a wide range of ways.
Modifications are difficult in desktop-based presentations since it requires presentation creation software. impress.js presentations can be changed instantly by modifying the HTML content with a simple text editor. Creating presentations is not just about filling our slides with a lot of information and animations. It is a creative process that needs to be planned carefully. Best practices will tell us that we should keep the slides as simple as possible with very limited information and, letting presenter do the detailed explanations. Let's see how we can use impress.js to work with some well-known presentation design guidelines. Presentation outline The audience does not have any idea about the things you are going to present prior to the start of the presentation. If your presentation is not up to standard, the audience will wonder how many boring slides are to come and what the contents are going to be. Hence, it's better to provide a preliminary slide with the outline of your presentation. A limited number of slides and their proper placement will allow us to create a perfect outline of the presentation. Steps in impress.js presentations are placed in 3D space and each slide is positioned relatively. Generally, we will not have an idea about how slides are placed when the presentation is on screen. You can zoom in on the steps by using the scaling feature of impress.js. In this way, we can create additional steps containing the overview of the presentation by using scaling features. Using bullet points People prefer to read the most important points articles rather than huge chunks of text . It's wise to put these brief points on the slides and let the details come through your presenting skills. Since impress.js slides are created with HTML, you can easily use bullet points and various types of designs for them using CSS. You can also create each point as a separate step allowing you to use different styles for each point. Animations We cannot keep the audience interested just by scrolling down the presentation slides . Presentations need to be interactive and animations are great for getting the attention of the audience. Generally, we use animations for slide transitions. Even though presentation tools provide advanced animations, it's our responsibility to choose the animations wisely. impress.js provides animation effects for moving, rotating, and scaling step transitions. We have to make sure it is used with purpose. Explaining the life cycle of a product or project is an excellent scenario for using rotation animations. So choose the type of animation that suits your presentation contents and topic. Using themes Most people like to make the design of their presentation as cool as possible. Sometimes they get carried away and choose from the best themes available in the presentation tool. Themes provided by tools are predefined and designed to suit general purposes. Your presentation might be unique and choosing an existing theme can ruin the uniqueness. The best practice is to create your own themes for your presentations. impress.js does not come with built-in themes. Hence there is no other option than to create a new theme from scratch. impress.js steps are different to each other unlike standard presentations, so you have the freedom to create a theme or design for each of the steps just by using some simple HTML and CSS code. Apart from the previous points, we can use typography, images, and videos to create better designs for impress.js presentations. 
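To connect these design ideas back to the positioning, rotating, and scaling features described earlier, here is a minimal sketch of what step markup looks like in practice. The data-* attribute names are impress.js's own conventions; the values and step contents are only illustrative, and the setup needed to actually run this appears in the next section.

<div id="impress">
  <!-- An overview step: scaled up so it acts as a zoomed-out outline -->
  <div class="step" data-x="0" data-y="0" data-scale="4">
    Presentation outline
  </div>
  <!-- A detail step: moved to the right and rotated 90 degrees -->
  <div class="step" data-x="1000" data-y="500" data-rotate="90">
    First bullet point
  </div>
  <!-- A step placed in 3D space using the z axis -->
  <div class="step" data-x="2000" data-y="500" data-z="-1000">
    Second bullet point
  </div>
</div>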
We have covered the background and the importance for impress.js. Now we can move on to creating real presentations using the framework throughout the next few sections. Downloading and configuring impress.js You can obtain a copy of the impress.js library by downloading from the github page at https://github.com/bartaz/impress.js/. The downloaded .zip file contains an example demo and necessary styles in addition to the impress.js file. Extract the .zip file on to your hard drive and load the index.html on the browser to see impress.js in action. The folder structure of the downloaded .zip file is as given in the following screenshot: Configuring impress.js is something you should be able to do quite easily. I'll walk you through the configuration process. First we have to include the impress.js file in the HTML file. It is recommended you load this file as late as possible in your document. Create a basic HTML using the following code: <!doctype html> <html lang="en"> <head> <title>impress.js </title> </head> <body> <script src = "js/impress.js"></script> </body> </html> We have linked the impress.js file just before the closing body tag to make sure it is loaded after all the elements in our document. Then we need to initialize the impress library to make the presentations work. We can place the following code after the impress.js file to initialize any existing presentation in the document which is compatible with the impress library: <script>impress(). init();</script> Since we have done the setup of the impress.js library, we can now create our impressive presentation. Summary In this article we looked at the background of the impress.js framework and how it was created. Then we talked about the importance of impress.js in creating web-based presentations and various types of usage beyond presentations. Finally we obtained a copy of the framework from the official github page and completed the setup. Resources for Article : Further resources on this subject: 3D Animation Techniques with XNA Game Studio 4.0 [Article] Enhancing Your Math Teaching using Moodle 1.9: Part 1 [Article] Your First Page with PHP-Nuke [Article]

Doing it with Forms

Packt
21 Mar 2013
8 min read
(For more resources related to this topic, see here.)

The form component

In order to collect and handle data, Ext comes with the Ext.form.Panel class. This class extends from the panel, so we can place the form in any other container. We also have all the functionality the panel offers, such as adding a title and using layouts. If we look at the wireframes, we can see that we need the functionality of creating, editing, and deleting clients from our database:

We are going to work on this form. As seen in the previous screenshot, the form contains a title, a toolbar with some buttons, and a few fields. One important thing to keep in mind when working with Ext JS is that we should create our components as isolated from the other components as we can. This way we can reuse them in other modules or even extend them to add new functionality.

First we need to extend from the Form class, so let's create a JavaScript file with the following code:

Ext.define('MyApp.view.clients.Form',{
  extend : 'Ext.form.Panel',
  alias : 'widget.clientform',
  title : 'Client form',
  initComponent : function(){
    var me = this;
    me.callParent();
  }
});

We need to create the file in the following path:

MyApp/view/clients/Form.js

The previous code doesn't do much; it's only extending from the form panel, defining an alias and a title. The initComponent method is empty, but we're going to create some components for it. Now let's create an HTML file where we can test our new class. We need to import the Ext JS library and the JS file where our new class is defined, and wait for the DOM ready event to create an instance of our class:

<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Textfield</title>
<!-- Importing the Ext JS library -->
<script type="text/javascript" src="../ext-4.1.1a-gpl/ext-all-dev.js"></script>
<link rel="stylesheet" href="../ext-4.1.1a-gpl/resources/css/ext-all.css" />
<script type="text/javascript" src="MyApp/view/clients/Form.js"></script>
<script type="text/javascript">
Ext.onReady(function(){
  Ext.create('MyApp.view.clients.Form',{
    width : 300,
    height : 200,
    renderTo: Ext.getBody()
  });
});
</script>
<style type="text/css">
body{padding:10px;}
</style>
</head>
<body>
</body>
</html>

We are creating an instance of the Client form as usual. We have set the width, the height, and the place where the form is going to be rendered, in this case the body of our document. As a result we have our form created as shown in the following screenshot:

So far we have an empty form. We can add any of the available components and widgets; let's start with the textfield component:

Ext.define('MyApp.view.clients.Form',{
  extend : 'Ext.form.Panel',
  alias : 'widget.clientform',
  title : 'Client form',
  bodyPadding : 5,
  defaultType : 'textfield', //Step 1
  initComponent : function(){
    var me = this;
    me.items = me.buildItems(); //Step 2
    me.callParent();
  },
  buildItems : function(){ //Step 3
    return [{
      fieldLabel : 'Name',
      name : 'name'
    },{
      fieldLabel : 'Contact',
      name : 'contact'
    }];
  }
});

The steps are explained as follows:

Step 1: We have defined the default type of component we are going to use. This way we don't have to define the xtype property every time we want to create a textfield.
Step 2: We use the items property to add components to our form. We are calling a function that should return an array of components.
Step 3: We are defining two textfields. First we set the value of the label for each textfield and then we set name.
It's important to use name if we want to send or retrieve data from our server. Setting the name property will allow us to set and retrieve data for our fields in an easy way. Using a function to define the items array is a great way to keep our code readable. Also, if we would like to extend this class, we can override this method and add more components to our form in the subclass.

With the previous lines of code we have added two textfields to our form, as shown in the following screenshot:

Now let's add the Address field to our form using a textarea. In order to do that we need to override the default xtype property as follows:

Ext.define('MyApp.view.clients.Form',{
  //...
  buildItems : function(){
    return [
      //...
      ,{
        xtype : 'textarea',
        fieldLabel : 'Address',
        name : 'address'
      }
    ];
  }
});

If we want to define new components we can override the xtype property with the component we need. In this case we are using a textarea xtype, but we can use any of the available components. The last field in our wireframe is a textfield to collect the phone number. We have already defined the default xtype as textfield, so we only need to define the name and the label of our new textfield as follows:

Ext.define('MyApp.view.clients.Form',{
  //...
  buildItems : function(){
    return [
      //...
      ,{
        fieldLabel : 'Phone',
        name : 'phone'
      }
    ];
  }
});

As a result we have all the required fields in our form. Now if we refresh our browser, we should see something like the following screenshot:

We have our form ready, but if we look at our wireframe we realize that something is missing. We need to add three buttons to the top of the panel. We already know how to create toolbars and buttons; the following code should be familiar to us:

Ext.define('MyApp.view.clients.Form',{
  //...
  initComponent : function(){
    var me = this;
    me.items = me.buildItems();
    me.dockedItems = me.buildToolbars(); //Step 1
    me.callParent();
  },
  buildItems : function(){
    //...
  },
  buildToolbars : function(){ //Step 2
    return [{
      xtype : 'toolbar',
      dock : 'top',
      items : [{
        text : 'New',
        iconCls : 'new-icon'
      },{
        text : 'Save',
        iconCls : 'save-icon'
      },{
        text : 'Delete',
        iconCls : 'delete-icon'
      }]
    }];
  }
});

In the previous code we define the dockedItems property, using the same pattern of a function that returns the array of items in the first step. In the second step we define a function that returns an array of components to be docked. In this case we are only returning a toolbar docked to the top; this toolbar contains three buttons. The first button is for a new client, the second one is to save the current client, and the third button is to delete the current client in the form. We need to use CSS classes to add an icon to each button. The previous code uses three different classes, so we need to create them:

<style type="text/css">
.new-icon{background:transparent url(images/page_add.png) 0 0 no-repeat !important;}
.save-icon{background:transparent url(images/disk.png) 0 0 no-repeat !important;}
.delete-icon{background:transparent url(images/delete.png) 0 0 no-repeat !important;}
</style>

Once we have defined our CSS classes, let's refresh our browser and see our latest changes in action:

We have finished our wireframe, but the form is not doing anything yet. For now let's just move forward and see what other components we have available.

Anatomy of the fields

Ext JS provides many components to give the user a great experience when using their applications.
The following fields are components we can use in a form or outside of a form; for example, we can add a textfield or a combobox to a toolbar where we place some filters or search options. Every input field extends from the Ext.Component class; this means that every field has its own lifecycle and events and can be placed in any container. There's also a class called Ext.form.field.Base that defines common properties, methods, and events across all form fields. This base class also builds on the Ext.form.Labelable and Ext.form.field.Field classes (using mixins). The Labelable mixin gives the field the ability to display a label and errors in every subclass, such as textfields, combos, and so on. The Field mixin gives fields the ability to manage their value, because it adds a few important methods, such as getValue and setValue to retrieve and set the current value of the field; it also introduces an important concept, the raw value. A great example of the raw value is when we pull data from our server and we get a date value in string format: the raw value is the plain text, but the value of the date field should be a native Date object so that we can work easily with dates and times. We can always use the raw value, but it's recommended to use the value instead, which in this example is a Date object.
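To see these value-related methods in action, here is a small sketch (not from the article) that exercises getValue, setValue, and getRawValue. The client form is the one defined earlier; the standalone date field is added purely to illustrate the raw value concept.

// Assuming the MyApp.view.clients.Form class from earlier has been loaded
Ext.onReady(function(){
    var form = Ext.create('MyApp.view.clients.Form', {
        width   : 300,
        height  : 200,
        renderTo: Ext.getBody()
    });

    // Look up a field by its name and work with its value
    var nameField = form.down('textfield[name=name]');
    nameField.setValue('Packt Publishing');
    console.log(nameField.getValue());    // "Packt Publishing"

    // A date field shows the difference between value and raw value
    var dateField = Ext.create('Ext.form.field.Date', {
        fieldLabel: 'Created',
        renderTo  : Ext.getBody()
    });
    dateField.setValue(new Date());
    console.log(dateField.getRawValue()); // the formatted string shown in the field
    console.log(dateField.getValue());    // a native Date object
});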