
How-To Tutorials - Web Development


Configuring oVirt

Packt
19 Nov 2013
9 min read
Configuring the NFS storage

NFS storage is a fairly common type of storage that is quite easy to set up and run even without special equipment. You can take a server with large disks and create an NFS directory on it. Despite the apparent simplicity of NFS, setting it up should be done with attention to detail. Once you have made sure that the NFS directory is suitable for use, proceed to connect the storage to the data center. After you click on Configure Storage, a dialog box opens in which we specify the basic storage configuration. The following options are displayed:

- Name and Data Center: It is used to specify a name and the target data center for the storage
- Domain Function/Storage Type: It is used to choose the Data function and the NFS type
- Use Host: It is used to enter the host that will make the initial connection to the storage and will take on the role of SPM
- Export Path: It is used to enter the storage server name and the path of the exported directory
- Advanced Parameters: It provides additional connection options, such as the NFS version, the number of retransmissions, and the timeout, which are recommended to be changed only in exceptional cases

Fill in the required storage settings and click on the OK button; this will start the process of connecting the storage. The following image shows the New Storage dialog box with the NFS storage being connected.

Configuring the iSCSI storage

This section explains how to connect iSCSI storage to a data center with the storage type set to iSCSI. You can skip this section if you do not use iSCSI storage. iSCSI is a technology for building a SAN (Storage Area Network). A key feature of this technology is the transmission of SCSI commands over IP networks, so block data is transferred via IP. By using IP networks, data transfer can take place over long distances and through network equipment such as routers and switches. These features make iSCSI a good technology for building a low-cost SAN. oVirt supports iSCSI, and iSCSI storage can be connected to oVirt data centers.

Begin the process of connecting the storage to the data center. After you click on Configure Storage, a dialog box opens in which you specify the basic storage configuration. The following options are displayed:

- Name and Data Center: It is used to specify the name and the target data center.
- Domain Function/Storage Type: It is used to specify the domain function and storage type; in this case, the Data function and the iSCSI type.
- Use Host: It is used to specify the host to which the storage will be attached (the SPM).

The following options are present in the search box for iSCSI targets:

- Address and Port: It is used to specify the address and port of the storage server that contains the iSCSI target
- User Authentication: Enable this checkbox if authentication is used on the iSCSI target
- CHAP username and password: It is used to specify the username and password for authentication

Click on the Discover button and oVirt Engine connects to the specified server to search for iSCSI targets. In the resulting list, select the designated target and click on the Login button to authenticate. Upon successful authentication, the target's LUNs will be displayed; check the required LUN and click on OK to start connecting it to the data center. The new storage will automatically connect to the data center. If it does not, select the storage from the list, click on the Attach button in the details pane, and choose the target data center.
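If the Discover step fails, it can help to verify outside of oVirt that the target is reachable from the virtualization host. The following is a minimal sketch using the standard open-iscsi client; the portal address and the target IQN are placeholders for your own values:

# discover the targets exported by the portal
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260
# optionally log in to a discovered target to confirm the credentials work
iscsiadm -m node -T iqn.2013-11.com.example:storage.lun1 -p 192.168.1.50:3260 --login

If these commands succeed from the host, the same parameters should work in the New Storage dialog box.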
Configuring the Fibre Channel storage

If you selected Fibre Channel when creating the data center, you should create a Fibre Channel storage domain. oVirt supports Fibre Channel storage based on multiple preconfigured Logical Unit Numbers (LUNs). Skip this section if you do not use Fibre Channel equipment. Begin the process of connecting the storage to the data center. Open the Guide Me wizard and click on Configure Storage; in the dialog box that opens, specify the basic storage configuration:

- Name and Data Center: It is used to specify the name and the data center
- Domain Function/Storage Type: Here we need to specify the Data function and the Fibre Channel type
- Use Host: It specifies the address of the virtualization host that will act as the SPM

In the area below, the list of LUNs is displayed; enable the Add LUN checkbox on the selected LUN to use it as Fibre Channel data storage. Click on the OK button; this will start the process of connecting the storage to the data center. In the Storage tab, in the list of storages, we can see the created Fibre Channel storage. During the connection process its status will change, and at the end the new storage will be activated and connected to the data center. The connection process can also be seen in the event pane. The following screenshot shows the New Storage dialog box with the Fibre Channel storage type.

Configuring the GlusterFS storage

GlusterFS is a distributed, parallel, and linearly scalable filesystem. GlusterFS can combine data storage located on different servers into a parallel network filesystem. GlusterFS's potential is very large, so the developers directed their efforts towards the implementation and support of GlusterFS in oVirt (the GlusterFS documentation is available at http://www.gluster.org/community/documentation/index.php/Main_Page). oVirt 3.3 supports a complete data center with the GlusterFS type of storage.

Configuring the GlusterFS volume

Before attempting to connect GlusterFS storage to the data center, we need to create the volume. The procedure for creating a GlusterFS volume is common to all versions. Select the Volumes tab in the resource pane and click on Create Volume. In the window that opens, fill in the volume settings:

- Data Center: It is used to specify the data center that the GlusterFS storage will be attached to.
- Volume Cluster: It is used to specify the name of the cluster that will be created.
- Name: It is used to specify a name for the new volume.
- Type: It is used to specify the type of the GlusterFS volume. There are seven types of volume that implement various strategies for placing data on the filesystem. The base types are Distribute, Replicate, and Stripe, and the others are combinations of these: Distributed Replicate, Distributed Stripe, Striped Replicate, and Distributed Striped Replicate (additional information can be found at http://gluster.org/community/documentation/index.php/GlusterFS_Concepts).
- Bricks: With this button, the list of bricks for the volume is assembled. A brick is a separate piece from which the volume is built, and these bricks are distributed across the hosts. As a brick uses a separate directory, it should be placed on a separate partition.
- Access Protocols: It defines the basic protocols that can be used to gain access to the volume: Gluster, the native access protocol for GlusterFS volumes, enabled by default; NFS, an access protocol based on NFS; and CIFS, an access protocol based on CIFS.
- Allow Access From: It allows us to enter a comma-separated list of IP addresses or hostnames, or * for all hosts, that are allowed to access the GlusterFS volume.
- Optimize for oVirt Store: Enabling this checkbox will enable extended options for the created volume.

The following screenshot shows the Create Volume dialog box. Fill in the parameters, click on the Bricks button, and go to the new window to add bricks with the following properties:

- Volume Type: This is used to change the previously selected type of the GlusterFS volume
- Server: It is used to specify the server that will export the GlusterFS brick
- Brick Directory: It is used to specify the directory to use

Specify the server and directory and click on Add. Depending on the type of volume, specify multiple bricks. After completing the list of bricks, click on the OK button to add them and return to the previous dialog. Click on the OK button to create the GlusterFS volume with the specified parameters. The following screenshot shows the Add Bricks dialog box. Now that we have a GlusterFS volume, we select it from the list and click on Start.
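The same kind of volume can also be created and started from the command line on one of the Gluster hosts, which is handy for checking that the bricks themselves are healthy. This is only a sketch; the host names, brick paths, and volume name are placeholders, and a two-way replicated volume is assumed:

# create a replicated volume from two bricks on two hosts
gluster volume create data-vol replica 2 gluster1.example.com:/bricks/brick1 gluster2.example.com:/bricks/brick1
# start the volume and check its status
gluster volume start data-vol
gluster volume info data-vol

A volume created this way also shows up in the oVirt Volumes tab once the hosts are managed by the engine.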
Configuring the GlusterFS storage

oVirt 3.3 has support for creating data centers with the GlusterFS storage type. The GlusterFS storage type requires a preconfigured data center, a pre-created cluster inside that data center, and the Gluster service enabled. Go to the Storage section in the resource pane and click on New Domain. In the dialog box that opens, fill in the details of our storage:

- Name and Data Center: It is used to specify the name and the data center
- Domain Function/Storage Type: It is used to specify the Data function and the GlusterFS type
- Use Host: It is used to specify the host that will connect to the storage as the SPM
- Path: It is used to specify the path to the volume in the format hostname:volume_name
- VFS Type: Leave it as glusterfs and leave Mount Option blank

Click on the OK button; this will start the process of creating the storage domain. The created storage automatically connects to the specified data center. If it does not, select the created storage in the list, go to the Data Center subtab in the detail pane, click on the Attach button, and choose our data center. After you click on OK, the process of connecting the storage to the data center starts. The following screenshot shows the New Storage dialog box with the GlusterFS storage type.

Summary

In this article we learned how to configure NFS, iSCSI, Fibre Channel, and GlusterFS storage.

Resources for Article:

Further resources on this subject:
- Tips and Tricks on Microsoft Application Virtualization 4.6 [Article]
- VMware View 5 Desktop Virtualization [Article]
- Qmail Quickstarter: Virtualization [Article]

Icinga Object Configuration

Packt
18 Nov 2013
9 min read
A localhost monitoring setup

Let us take a close look at the current setup that we created for monitoring a localhost. Icinga by default comes with object configuration for a localhost. The object configuration files are inside /etc/icinga/objects for default installations:

$ ls /etc/icinga/objects
commands.cfg  notifications.cfg  templates.cfg
contacts.cfg  printer.cfg        timeperiods.cfg
localhost.cfg switch.cfg         windows.cfg

There are several configuration files with object definitions. Together, these object definitions define the monitoring setup for monitoring some services on a localhost. Let's first look at localhost.cfg, which has most of the relevant configuration. We have a host definition:

define host {
  use        linux-server
  host_name  localhost
  alias      localhost
  address    127.0.0.1
}

The preceding object block defines one object, that is, the host that we want to monitor, with details such as the hostname, an alias for the host, and the address of the server, which is optional but useful when you don't have a DNS record for the hostname. With the preceding object configuration, we have a localhost host object defined in Icinga. The localhost.cfg file also has a hostgroup defined, which is as follows:

define hostgroup {
  hostgroup_name  linux-servers
  alias           Linux Servers
  members         localhost    ; host_name of the host object
}

The preceding object defines a hostgroup with only one member, localhost, which we will extend later to include more hosts. The members directive specifies the host members of the hostgroup. The value of this directive refers to the value of the host_name directive in the host definitions; it can be a comma-separated list of several hostnames. There is also a directive called hostgroups in the host object, where you can give a comma-separated list of names of the hostgroups that we want the host to be part of. For example, in this case, we could have omitted the members directive in the hostgroup definition and instead specified a hostgroups directive with the value linux-servers in the localhost host definition, as shown in the sketch below. At this point, we have a localhost host and a linux-servers hostgroup, and localhost is a member of linux-servers. This is illustrated in the following figure.
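As a quick illustration of that alternative, the following sketch (not part of the shipped localhost.cfg) would produce the same host/hostgroup membership by declaring it from the host's side:

define host {
  use        linux-server
  host_name  localhost
  alias      localhost
  address    127.0.0.1
  hostgroups linux-servers   ; replaces the members directive in the hostgroup
}

define hostgroup {
  hostgroup_name  linux-servers
  alias           Linux Servers
}

Either form works; pick whichever keeps the membership easier to read for your setup.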
Going further into localhost.cfg, we have a number of service object definitions. Each of these definitions indicates the service on the localhost that we want to monitor, via the host_name directive:

define service {
  use                  local-service
  host_name            localhost
  service_description  PING
  check_command        check_ping!100.0,20%!500.0,60%
}

This is one of the service definitions. The object defines a PING service check that monitors reachability. The host_name directive specifies the host that this service check should be associated with, which in this case is localhost. Again, the value of the host_name directive here should reflect the value of the host_name directive defined in the host object definition. So, we have a PING service check defined for the localhost, which is illustrated by the following figure. There are several such service definitions placed on the localhost. Each service has a check_command directive that specifies the command for monitoring that service. Note that the exclamation marks in the check_command values are the command argument separators; so, cmd!foo!bar indicates that the command is cmd, with foo as its first argument and bar as the second.

It is important to remember that the check_ping part of check_command in the preceding example does not mean the check_ping executable that lives in /usr/lib64/nagios/plugins/check_ping on most installations; it refers to the Icinga object of type command. In our setup, all command object definitions are inside commands.cfg. The commands.cfg file has the command object definition for check_ping:

define command {
  command_name  check_ping
  command_line  $USER1$/check_ping -H $HOSTADDRESS$ -w $ARG1$ -c $ARG2$ -p 5
}

The check_command value in the PING service definition refers to the preceding command object, which indicates the exact command to be executed for performing the service check. $USER1$ is a user-defined Icinga macro. Macros in Icinga are like variables that can be used in various object definitions to wrap data inside them. Some macros are predefined, while some are user defined. The user macros are usually defined in /etc/icinga/resources.cfg:

$USER1$=/usr/lib64/nagios/plugins

So replace the $USER1$ macro with its value and execute:

$ value/of/USER1/check_ping --help

This command will print the usual usage string with all the available command-line options. $ARG1$ and $ARG2$ in the command definition are macros referring to the arguments passed in the check_command value of the service definition, which are 100.0,20% and 500.0,60% respectively for the PING service definition; we will come back to this later. As noted earlier, the status of the service is determined by the exit code of the command specified in the command_line directive of the command definition. We have many such service definitions for the localhost in localhost.cfg, such as Root Partition (which monitors disk space), Total Processes, Current Load, and HTTP, along with command definitions in commands.cfg for the check_command of each of these service definitions. So, we have a host definition for localhost, a hostgroup definition linux-servers having localhost as its member, several service check definitions for localhost with check commands, and the command definitions specifying the exact command with arguments to execute for the checks. This is illustrated with the example PING check in the following figure. This completes the basic understanding of how our localhost monitoring is built up from plain-text configuration.

Notifications

We would, as is the point of having monitoring systems, like to get alerted when something actually goes down. We don't want to keep watching the Icinga web interface, waiting for something to go down. Icinga provides a very generic and flexible way of sending out alerts. We can have any alerting script triggered when something goes wrong, which in turn may run commands for sending e-mails, SMS, Jabber messages, Twitter tweets, or practically anything that can be done from within a script. The default localhost monitoring setup has an e-mail alerting configuration. The way these notifications work is that we define contact objects where we give the contact name, e-mail addresses, pager numbers, and other necessary details. These contact names are specified in the host/service templates or in the objects themselves. So, when Icinga detects that a host/service has gone down, it will use this contact object to send the contact details to the alerting script. The contact object definition also has the host_notification_commands and service_notification_commands directives.
These directives specify the command objects that should be used to send out the notifications for that particular contact. The former is used when a host goes down, and the latter is used when a service goes down. The respective command objects are then looked up and the value of their command_line directive is executed. This command object is of the same type as the one we looked at previously for executing checks; the same object type is also used to define notification commands. We can also define contact groups and specify them in the host/service object definitions to alert a group of contacts at the same time, or give a comma-separated list of contact names instead of a contact group.

Let's have a look at our current setup for the notification configuration. The host/service template objects have the admins contact group specified, whose definition is in contacts.cfg:

define contactgroup {
  contactgroup_name  admins
  alias              Icinga Administrators
  members            icingaadmin
}

The group has the icingaadmin member contact, which is defined in the same file:

define contact {
  contact_name  icingaadmin
  use           generic-contact
  alias         Icinga Admin
  email         [email protected]
}

The contacts.cfg file has your e-mail address. The contact object inherits from the generic-contact template contact object:

define contact {
  name                           generic-contact
  service_notification_period    24x7
  host_notification_period       24x7
  service_notification_options   w,u,c,r,f,s
  host_notification_options      d,u,r,f,s
  service_notification_commands  notify-service-by-email
  host_notification_commands     notify-host-by-email
  register                       0
}

This template object has the host_notification_commands and service_notification_commands directives defined as notify-host-by-email and notify-service-by-email respectively. These are commands similar to the ones we use in service definitions, and they are defined in commands.cfg:

define command {
  command_name  notify-host-by-email
  command_line  /usr/bin/printf "%b" "***** Icinga *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$
}

define command {
  command_name  notify-service-by-email
  command_line  /usr/bin/printf "%b" "***** Icinga *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
}

These commands are eventually executed to send out e-mail notifications to the supplied e-mail addresses. Notice that command_line uses the /bin/mail command to send the e-mails, which is why we need a working SMTP server setup. Similarly, we could use any command or script path to send out custom alerts, such as SMS and Jabber, as sketched below, and we could also change the above e-mail commands to suit our formatting requirements. The following figure illustrates the contact and notification configuration, and the correlation between hosts/services and contacts/notification commands.
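As an illustration of such a custom alert, the following is a minimal sketch of a notification command that hands the alert off to your own script; the path /usr/local/bin/notify-by-sms and its arguments are placeholders, not something shipped with Icinga:

define command {
  command_name  notify-host-by-sms
  command_line  /usr/local/bin/notify-by-sms "$CONTACTPAGER$" "Host $HOSTNAME$ is $HOSTSTATE$: $HOSTOUTPUT$"
}

A contact that should receive these alerts would then list notify-host-by-sms in its host_notification_commands directive, exactly as the e-mail commands are wired up above.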
Summary

In this article, we analyzed our current configuration for the Icinga setup that monitors a localhost; we can replicate this to monitor a number of other servers using the desired service checks. We also looked at how the alerting configuration works to send out notifications when something goes down.

Resources for Article:

Further resources on this subject:
- Troubleshooting Nagios 3.0 [Article]
- Notifications and Events in Nagios 3.0-part1 [Article]
- BackTrack 4: Target Scoping [Article]

Getting Started with Ansible

Packt
18 Nov 2013
8 min read
First steps with Ansible

Ansible modules take arguments in key-value pairs that look like key=value, perform a job on the remote server, and return information about the job as JSON. The key-value pairs let the module know what to do when requested. The data returned from the module lets Ansible know if anything changed or if any variables should be changed or set afterwards. Modules are usually run within playbooks, as this lets you chain many together, but they can also be used on the command line.

Previously, we used the ping module to check that Ansible had been set up correctly and was able to access the configured node. The ping module only checks that the core of Ansible is able to run on the remote machine but effectively does nothing. A slightly more useful module is called setup. This module connects to the configured node, gathers data about the system, and then returns those values. This isn't particularly handy for us while running from the command line; however, in a playbook you can use the gathered values later in other modules.

To run Ansible from the command line, you need to pass two things, though usually three. First is a host pattern to match the machines that you want to apply the module to. Second, you need to provide the name of the module that you wish to run, and optionally any arguments that you wish to give to the module. For the host pattern, you can use a group name, a machine name, a glob, or a tilde (~) followed by a regular expression matching hostnames; to match all hosts, you can use either the word all or simply *. To run the setup module on one of your nodes, you need the following command line:

$ ansible machinename -u root -k -m setup

The setup module will then connect to the machine and give you a number of useful facts back. All the facts provided by the setup module itself are prepended with ansible_ to differentiate them from variables. The following are the most common fields you will use, with an example value and a short description of each:

- ansible_architecture (for example, x86_64): The architecture of the managed machine
- ansible_distribution (for example, CentOS): The Linux or Unix distribution on the managed machine
- ansible_distribution_version (for example, 6.3): The version of the preceding distribution
- ansible_domain (for example, example.com): The domain name part of the server's hostname
- ansible_fqdn (for example, machinename.example.com): The fully qualified domain name of the managed machine
- ansible_interfaces (for example, ["lo", "eth0"]): A list of all the interfaces the machine has, including the loopback interface
- ansible_kernel (for example, 2.6.32-279.el6.x86_64): The kernel version installed on the managed machine
- ansible_memtotal_mb (for example, 996): The total memory in megabytes available on the managed machine
- ansible_processor_count (for example, 1): The total number of CPUs available on the managed machine
- ansible_virtualization_role (for example, guest): Whether the machine is a guest or a host machine
- ansible_virtualization_type (for example, kvm): The type of virtualization set up on the managed machine

These variables are gathered using Python from the host system; if you have facter or ohai installed on the remote node, the setup module will execute them and return their data as well. As with other facts, ohai facts are prepended with ohai_ and facter facts with facter_. While the setup module doesn't appear to be too useful on the command line, once you start writing playbooks it will come into its own.
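When you only care about a few of these facts, the output can be trimmed down with the setup module's filter argument, which takes a shell-style wildcard. This is just a convenience sketch; machinename and the pattern are placeholders:

$ ansible machinename -u root -k -m setup -a 'filter=ansible_distribution*'

Only the facts whose names match the pattern are returned, which makes it easier to spot the value you intend to use later in a playbook.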
If all the modules in Ansible did as little as the setup and ping modules, we would not be able to change anything on the remote machine. Almost all of the other modules that Ansible provides, such as the file module, allow us to actually configure the remote machine. The file module can be called with a single path argument; this will cause it to return information about the file in question. If you give it more arguments, it will try to alter the file's attributes and tell you if it has changed anything. Ansible modules will almost always tell you if they have changed anything, which becomes more important when you are writing playbooks. You can call the file module, as shown in the following command, to see details about /etc/fstab:

$ ansible machinename -u root -k -m file -a 'path=/etc/fstab'

The preceding command should elicit a response like the following:

machinename | success >> {
  "changed": false,
  "group": "root",
  "mode": "0644",
  "owner": "root",
  "path": "/etc/fstab",
  "size": 779,
  "state": "file"
}

Or use a command like the following to create a new test directory in /tmp:

$ ansible machinename -u root -k -m file -a 'path=/tmp/test state=directory mode=0700 owner=root'

The preceding command should return something like the following:

machinename | success >> {
  "changed": true,
  "group": "root",
  "mode": "0700",
  "owner": "root",
  "path": "/tmp/test",
  "size": 4096,
  "state": "directory"
}

The second command will have the changed variable set to true if the directory doesn't exist or has different attributes. When run a second time, the value of changed should be false, indicating that no changes were required.

There are several modules that accept arguments similar to the file module; one such example is the copy module. The copy module takes a file on the controller machine, copies it to the managed machine, and sets the attributes as required. For example, to copy the /etc/fstab file to /tmp on the managed machine, you will use the following command (note that the module takes src and dest arguments):

$ ansible machinename -m copy -a 'src=/etc/fstab dest=/tmp/fstab mode=0700 owner=root'

The preceding command, when run the first time, should return something like the following:

machinename | success >> {
  "changed": true,
  "dest": "/tmp/fstab",
  "group": "root",
  "md5sum": "fe9304aa7b683f58609ec7d3ee9eea2f",
  "mode": "0700",
  "owner": "root",
  "size": 637,
  "src": "/root/.ansible/tmp/ansible-1374060150.96-77605185106940/source",
  "state": "file"
}

There is also a module called command that will run any arbitrary command on the managed machine. This lets you configure it with any arbitrary command, such as a pre-provided installer or a self-written script; it is also useful for rebooting machines. Please note that this module does not run the command within a shell, so you cannot perform redirection, use pipes, expand shell variables, or background commands.

Ansible modules strive to prevent changes being made when they are not required. This is referred to as idempotency and can make running commands against multiple servers much faster. Unfortunately, Ansible cannot know whether your command has changed anything or not, so to help it be more idempotent you have to give it some help. It can do this via either the creates or the removes argument. If you give a creates argument, the command will not be run if the named file exists. The opposite is true of the removes argument; if the named file exists, the command will be run. You run the command as follows:

$ ansible machinename -m command -a 'rm -rf /tmp/testing removes=/tmp/testing'

If there is no file or directory named /tmp/testing, the command output will indicate that it was skipped, as follows:

machinename | skipped

Otherwise, if the file did exist, it will look as follows:

ansibletest | success | rc=0 >>
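The creates argument works the same way in reverse. As a sketch (the script path and marker file here are placeholders), an installer that should only ever run once could be guarded like this:

$ ansible machinename -m command -a '/opt/fancyapp/bin/installer.sh creates=/opt/fancyapp/installed.flag'

On the first run the installer executes; on subsequent runs Ansible reports the task as skipped because the flag file already exists.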
Often it is better to use another module in place of the command module. Other modules offer more options and can better capture the problem domain they work in. For example, it would be much less work for Ansible, and also for the person writing the configuration, to use the file module in this instance, since the file module will recursively delete something if the state is set to absent. So, the previous command would be equivalent to the following one:

$ ansible machinename -m file -a 'path=/tmp/testing state=absent'

If you need to use features usually available in a shell while running your command, you will need the shell module. This way you can use redirection, pipes, or job backgrounding, and you can pick which shell to use with the executable argument. The shell module also supports the creates argument but does not support the removes argument. You can use the shell module as follows:

$ ansible machinename -m shell -a '/opt/fancyapp/bin/installer.sh > /var/log/fancyappinstall.log creates=/var/log/fancyappinstall.log'

Summary

In this article, we have covered which installation type to choose, installing Ansible, and how to build an inventory file to reflect your environment. After this, we saw how to use Ansible modules in an ad hoc style for simple tasks. Finally, we discussed how to learn which modules are available on your system and how to use the command line to get instructions for using a module.

Resources for Article:

Further resources on this subject:
- Configuring Manage Out to DirectAccess Clients [Article]
- Creating and configuring a basic mobile application [Article]
- Deploying Applications and Software Updates on Microsoft System Center 2012 Configuration Manager [Article]

Derivatives Pricing

Packt
18 Nov 2013
10 min read
Derivatives are financial instruments which derive their value from (or are dependent on) the value of another product, called the underlying. The three basic types of derivatives are forward and futures contracts, swaps, and options. In this article we will focus on this latter class and show how basic option pricing models and some related problems can be handled in R. We will start by overviewing how to use the continuous Black-Scholes model and the binomial Cox-Ross-Rubinstein model in R, and then we will proceed to discuss the connection between these models. Furthermore, with the help of calculating and plotting the Greeks, we will show how to analyze the most important types of market risks that options involve. Finally, we will discuss what implied volatility means and will illustrate this phenomenon by plotting the volatility smile with the help of real market data.

The most important characteristic of options compared to futures or swaps is that you cannot be sure whether the transaction (buying or selling the underlying) will take place or not. This feature makes option pricing more complex and requires all models to make assumptions regarding the future price movements of the underlying product. The two models we are covering here differ in these assumptions: the Black-Scholes model works with a continuous process while the Cox-Ross-Rubinstein model works with a discrete stochastic process. However, the remaining assumptions are very similar and we will see that the results are close too.

The Black-Scholes model

The assumptions of the Black-Scholes model (Black and Scholes, 1973; see also Merton, 1973) are as follows:

- The price of the underlying asset (S) follows a geometric Brownian motion: dS = μS dt + σS dW. Here μ (the drift) and σ (the volatility) are constant parameters and W is a standard Wiener process.
- The market is arbitrage-free.
- The underlying is a stock paying no dividends.
- Buying and (short) selling the underlying asset is possible in any (even fractional) amount.
- There are no transaction costs.
- The short-term interest rate (r) is known and constant over time.

The main result of the model is that under these assumptions, the price of a European call option (c) has the closed form

c = S·N(d1) − X·e^(−r(T−t))·N(d2), where
d1 = [ln(S/X) + (r + σ²/2)(T−t)] / (σ·√(T−t)) and d2 = d1 − σ·√(T−t).

Here X is the strike price, T−t is the time to maturity of the option, and N denotes the cumulative distribution function of the standard normal distribution. The equation giving the price of the option is usually referred to as the Black-Scholes formula. It is easy to see from put-call parity that the price of a European put option (p) with the same parameters is given by

p = X·e^(−r(T−t))·N(−d2) − S·N(−d1).

Now consider a call and a put option on a Google stock in June 2013 with a maturity of September 2013 (that is, with 3 months of time to maturity). Let us assume that the current price of the underlying stock is USD 900, the strike price is USD 950, the volatility of Google is 22%, and the risk-free rate is 2%. We will calculate the value of the call option with the GBSOption function from the fOptions package. Beyond the parameters already discussed, we also have to set the cost of carry (b); in the original Black-Scholes model (with the underlying paying no dividends) it equals the risk-free rate.
> library(fOptions)
> GBSOption(TypeFlag = "c", S = 900, X = 950, Time = 1/4, r = 0.02,
+   sigma = 0.22, b = 0.02)

Title:
 Black Scholes Option Valuation

Call:
 GBSOption(TypeFlag = "c", S = 900, X = 950, Time = 1/4,
     r = 0.02, b = 0.02, sigma = 0.22)

Parameters:
          Value:
 TypeFlag c
 S        900
 X        950
 Time     0.25
 r        0.02
 b        0.02
 sigma    0.22

Option Price:
 21.79275

Description:
 Tue Jun 25 12:54:41 2013

This prolonged output returns the passed parameters, with the result just below the Option Price label. Setting TypeFlag to "p" would compute the price of the put option; now we are only interested in the result (found in the price slot; see the str of the object for more details) without the textual output:

> GBSOption(TypeFlag = "p", S = 900, X = 950, Time = 1/4, r = 0.02,
+   sigma = 0.22, b = 0.02)@price
[1] 67.05461

We also have the choice of computing the preceding values with a more user-friendly calculator provided by the GUIDE package. Running the blackscholes() function triggers a modal window with a form where we can enter the same parameters. Please note that this function uses the dividend yield instead of the cost of carry, which is zero in this case.

The Cox-Ross-Rubinstein model

The Cox-Ross-Rubinstein (CRR) model (Cox, Ross, and Rubinstein, 1979) assumes that the price of the underlying asset follows a discrete binomial process. The price might go up or down in each period and hence changes according to a binomial tree, where u and d are fixed multipliers measuring the price changes when it goes up and down. The important feature of the CRR model is that u = 1/d and the tree is recombining; that is, the price after two periods will be the same if it first goes up and then goes down or vice versa, as shown in the following figure. To build a binomial tree, first we have to decide how many steps we are modeling (n); that is, into how many steps the time to maturity of the option will be divided. Equivalently, we can determine the length of one time step on the tree, Δt (measured in years):

Δt = (T − t) / n

If we know the volatility (σ) of the underlying, the parameters u and d are determined according to the following formulas:

u = e^(σ·√Δt)

And consequently:

d = 1/u = e^(−σ·√Δt)

When pricing an option in a binomial model, we need to determine the tree of the underlying until the maturity of the option. Then, having all the possible prices at maturity, we can calculate the corresponding possible option values, simply given by the following formulas:

call payoff = max(S_T − X, 0),  put payoff = max(X − S_T, 0)

To determine the option price with the binomial model, in each node we have to calculate the expected value of the next two possible option values and then discount it. The problem is that it is not trivial what expected return to use for discounting. The trick is that we calculate the expected value with a hypothetical probability, which enables us to discount with the risk-free rate. This probability is called the risk-neutral probability (p_n) and can be determined as follows:

p_n = (e^(r·Δt) − d) / (u − d)

The interpretation of the risk-neutral probability is quite plausible: if the probability that the underlying price goes up from any of the nodes were p_n, then the expected return of the underlying would be the risk-free rate. Consequently, an expected value calculated with p_n can be discounted by r, and the price of the option in any node of the tree is determined as:

g = e^(−r·Δt) · [p_n·g_u + (1 − p_n)·g_d]

In the preceding formula, g is the price of an option in general (it may be a call or a put) in a given node, and g_u and g_d are the values of this derivative in the two possible nodes one period later.
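Before handing these formulas over to fOptions, it can be instructive to compute them directly. The following short sketch is not taken from the package; it is simply the formulas above translated into R, evaluating Δt, u, d, and the risk-neutral probability for the 3-period tree used below:

> dt <- (1/4) / 3                      # three steps over a quarter of a year
> u  <- exp(0.22 * sqrt(dt))           # up multiplier from the volatility
> d  <- 1 / u                          # down multiplier, so the tree recombines
> p  <- (exp(0.02 * dt) - d) / (u - d) # risk-neutral probability of an up move
> c(dt = dt, u = u, d = d, p = p)

The resulting p should lie between 0 and 1; discounting the expected payoffs with these values node by node reproduces what CRRBinomialTreeOption does internally.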
To demonstrate the CRR model in R, we will use the same parameters as in the case of the Black-Scholes formula: S = 900, X = 950, σ = 22%, r = 2%, b = 2%, and T − t = 0.25. We also have to set n, the number of time steps on the binomial tree. For illustrative purposes, we will work with a 3-period model:

> CRRBinomialTreeOption(TypeFlag = "ce", S = 900, X = 950,
+   Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22, n = 3)@price
[1] 20.33618
> CRRBinomialTreeOption(TypeFlag = "pe", S = 900, X = 950,
+   Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22, n = 3)@price
[1] 65.59803

It is worth observing that the option prices obtained from the binomial model are close to (but not exactly the same as) the Black-Scholes prices calculated earlier. Apart from the final result, that is, the current price of the option, we might be interested in the whole option tree as well:

> CRRTree <- BinomialTreeOption(TypeFlag = "ce", S = 900, X = 950,
+   Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22, n = 3)
> BinomialTreePlot(CRRTree, dy = 1, xlab = "Time steps",
+   ylab = "Number of up steps", xlim = c(0,4))
> title(main = "Call Option Tree")

Here we first computed a matrix with BinomialTreeOption using the given parameters and saved the result in CRRTree, which was then passed to the plot function with labels for both the x and y axes and the limits of the x axis set from 0 to 4, as shown in the following figure. The y axis (number of up steps) shows how many times the underlying price has gone up in total; down steps are counted as negative up steps. The European put option tree can be shown similarly by changing TypeFlag to "pe" in the previous code.

Connection between the two models

Having applied the two basic option pricing models, we now give some theoretical background to them. We do not aim to give a detailed mathematical derivation, but we intend to emphasize (and then illustrate in R) the similarities of the two approaches. The financial idea behind both the continuous and the binomial option pricing is the same: if we manage to hedge the option perfectly by holding the appropriate quantity of the underlying asset, we have created a risk-free portfolio. Since the market is supposed to be arbitrage-free, the yield of a risk-free portfolio must equal the risk-free rate. One important observation is that the correct hedging ratio is to hold ∂g/∂S units of the underlying asset per option; that is, the ratio is the partial derivative (or its discrete counterpart in the binomial model) of the option value with respect to the underlying price. This partial derivative is called the delta of the option. Another interesting connection between the two models is that the delta-hedging strategy and the related arbitrage-free argument yield the same pricing principle: the value of the derivative is the risk-neutral expected value of its future possible values, discounted by the risk-free rate. This principle is easily tractable on the binomial tree, where we calculated the discounted expected values node by node; however, the continuous model follows the same logic as well, even if the expected value is mathematically more complicated to compute. This is the reason why we gave only the final result of this argument, which was the Black-Scholes formula. Now we know that the two models share the same pricing principles and ideas (delta-hedging and risk-neutral valuation), but we also observed that their numerical results are not equal. The reason is that the stochastic processes assumed to describe the price movements of the underlying asset are not identical.
Nevertheless, they are very similar; if we determine the values of u and d from the volatility parameter as we did in The Cox-Ross-Rubinstein model section, the binomial process approximates the geometric Brownian motion. Consequently, the option price of the binomial model converges to that of the Black-Scholes model as we increase the number of time steps (or equivalently, decrease the length of the steps). To illustrate this relationship, we will compute the option price in the binomial model with an increasing number of time steps and, in the following figure, compare the results with the Black-Scholes price of the option. The plot was generated by a loop running n from 1 to 200 to compute CRRBinomialTreeOption with fixed parameters:

> prices <- sapply(1:200, function(n) {
+   CRRBinomialTreeOption(TypeFlag = "ce", S = 900, X = 950,
+     Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22, n = n)@price
+ })

Now the prices variable holds 200 computed values:

> str(prices)
 num [1:200] 26.9 24.9 20.3 23.9 20.4 ...

Let us also compute the option price with the generalized Black-Scholes formula:

> price <- GBSOption(TypeFlag = "c", S = 900, X = 950, Time = 1/4,
+   r = 0.02, sigma = 0.22, b = 0.02)@price

And show the prices in a joint plot, with the GBS price rendered in red:

> plot(1:200, prices, type = 'l', xlab = 'Number of steps',
+   ylab = 'Prices')
> abline(h = price, col = 'red')
> legend("bottomright", legend = c('CRR-price', 'BS-price'),
+   col = c('black', 'red'), pch = 19)

CSS3 Animation

Packt
18 Nov 2013
7 min read
The websites we see today are complex and complicated; by complex and complicated, we are referring to the development of these websites and not the web page itself. We see animations and complex features. Prior to HTML5 and CSS3, JavaScript was used extensively for this purpose, and HTML was incorrectly used for styling when it was expected to describe only the structural markup of the page. With the advent of CSS, it is good practice to use HTML for markup and CSS for styling. CSS3 brings along transforms, transitions, and animation features that make it easier to develop impressive effects. With a transition, we can view the change from one state to another, but when it comes to multiple states, animation is the solution. Let's discuss the various properties of CSS3 animations and then incorporate them in code to understand them better.

@keyframes
The points at which the transition should take place can be defined using the @keyframes rule. As of now, we need to add a vendor prefix to @keyframes as it is still in its development state; in the future, when it is accepted as a standard, we will not have to use a vendor prefix. We can use percentages or the from and to keywords to implement the change in state from one CSS style to another.

animation-name
We need to apply the animation to an element. This property enables us to do so by referring to the animation name defined in the @keyframes rule. It cannot be a standalone property and has to be used in conjunction with other animation properties.

animation-duration
Using this property, we can define the duration of the animation. If we set animation-duration to 5 seconds, the changes between the defined CSS states will need to be completed within 5 seconds.

animation-delay
Similar to the delay property in transitions, this property delays the animation by the time period specified.

animation-timing-function
Similar to the transition timing function, this property decides the speed of the animation. It behaves the same way as the transition timing function that we have seen earlier.

animation-iteration-count
We can decide the number of iterations carried out in the animation phase using this property. Setting this property to infinite means that the animation will never stop.

animation-direction
We can decide the direction of the animation using this property. We can use values such as reverse and alternate to define the direction in which the element is animated.

animation-play-state
Using this property, we can determine whether the animation is running or paused.
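To see how these pieces fit together before diving into a larger example, here is a minimal sketch (the selector, animation name, and values are arbitrary): the animation shorthand takes the name, duration, timing function, delay, iteration count, and direction in one declaration.

.box {
  /* [name] [duration] [timing-function] [delay] [iteration-count] [direction] */
  animation: pulse 2s ease-in-out 1s 3 alternate;
}

@keyframes pulse {
  from { opacity: 0.3; }
  to   { opacity: 1; }
}

In the browsers of the time, the same declarations would also be repeated with the -webkit-, -moz-, -o-, and -ms- prefixes, exactly as the full example below does.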
Now that we have had a look at these properties, we will incorporate some of them in code to understand them better. Hence, to gain practical insight, let's look at the following code:

<!DOCTYPE html>
<html>
<head>
<style>
body {
  background: #000;
  color: #fff;
}
#trigger {
  width: 100px;
  height: 100px;
  position: absolute;
  top: 50%;
  margin: -50px 0 0 -50px;
  left: 50%;
  background: black;
  border-radius: 50px;
  /* set the animation:
     [animation name] [animation duration] [animation timing function]
     [animation delay] [animation iteration count] [animation direction] */
  animation: glowness 5s linear 0s 5 alternate;
  -moz-animation: glowness 5s linear 0s 5 alternate;    /* Firefox */
  -webkit-animation: glowness 5s linear 0s 5 alternate; /* Safari and Chrome */
  -o-animation: glowness 5s linear 0s 5 alternate;      /* Opera */
  -ms-animation: glowness 5s linear 0s 5 alternate;     /* IE10 */
}
#trigger:hover {
  animation-play-state: paused;
  -moz-animation-play-state: paused;
  -webkit-animation-play-state: paused;
  -o-animation-play-state: paused;
  -ms-animation-play-state: paused;
}
/* animation keyframes */
@keyframes glowness {
  0%   { box-shadow: 0 0 80px orange; }
  25%  { box-shadow: 0 0 150px red; }
  50%  { box-shadow: 0 0 70px pink; }
  75%  { box-shadow: 0 0 50px violet; }
  100% { box-shadow: 0 0 100px yellow; }
}
@-moz-keyframes glowness { /* Firefox */
  0%   { box-shadow: 0 0 80px orange; }
  25%  { box-shadow: 0 0 150px red; }
  50%  { box-shadow: 0 0 70px pink; }
  75%  { box-shadow: 0 0 50px violet; }
  100% { box-shadow: 0 0 100px yellow; }
}
@-webkit-keyframes glowness { /* Safari and Chrome */
  0%   { box-shadow: 0 0 80px orange; }
  25%  { box-shadow: 0 0 150px red; }
  50%  { box-shadow: 0 0 70px pink; }
  75%  { box-shadow: 0 0 50px violet; }
  100% { box-shadow: 0 0 100px yellow; }
}
@-o-keyframes glowness { /* Opera */
  0%   { box-shadow: 0 0 80px orange; }
  25%  { box-shadow: 0 0 150px red; }
  50%  { box-shadow: 0 0 70px pink; }
  75%  { box-shadow: 0 0 50px violet; }
  100% { box-shadow: 0 0 100px yellow; }
}
@-ms-keyframes glowness { /* IE10 */
  0%   { box-shadow: 0 0 20px green; }
  25%  { box-shadow: 0 0 150px red; }
  50%  { box-shadow: 0 0 70px pink; }
  75%  { box-shadow: 0 0 50px violet; }
  100% { box-shadow: 0 0 100px yellow; }
}
</style>
<!-- jQuery is required for the event handlers below -->
<script src="https://code.jquery.com/jquery-1.10.2.min.js"></script>
<script>
$(function () {
  // animation started (buggy on Firefox)
  $('#trigger').on('animationstart mozanimationstart webkitAnimationStart oAnimationStart msanimationstart', function () {
    $('p').html('animation started');
  });
  // animation paused
  $('#trigger').on('mouseover', function () {
    $('p').html('animation paused');
  });
  // animation re-started
  $('#trigger').on('mouseout', function () {
    $('p').html('animation re-started');
  });
  // animation ended
  $('#trigger').on('animationend mozanimationend webkitAnimationEnd oAnimationEnd msanimationend', function () {
    $('p').html('animation ended');
  });
  // iteration count
  var i = 0;
  $('#trigger').on('animationiteration mozanimationiteration webkitAnimationIteration oAnimationIteration msanimationiteration', function () {
    i++;
    $('p').html('animation iteration=' + i);
  });
});
</script>
</head>
<body>
<div id="trigger"></div>
<!-- the status messages from the script are written into this paragraph -->
<p></p>
</body>
</html>

We have used the -webkit- prefix in this example as we are executing the code in Google Chrome; please use the -moz- prefix for Firefox and -o- for Opera. Comments are added in the code so that it is easy to follow. Apart from HTML5 and CSS3, we have used a bit of jQuery. Let's go through the animation part of the code to understand it better. In the CSS3 styles, we have set the animation direction to alternate, as a result of which the animation runs in the opposite direction after each iteration. We have also used the :hover selector; in this code, whenever we hover over the object, the animation is paused.
We have also defined the glow of the object in the keyframes: they specify how the colors change by animating the box-shadow property. In the <script> tag we have included the jQuery code. The element we animate has the id trigger; we select it with jQuery and use the on() method to bind handlers for the animation events to it. We have also used the mouseover and mouseout events, which fire when the user moves the mouse pointer over an element and out of an element respectively; we use those events in conjunction with the start, end, and pausing of the animation. Therefore, we can create complex animations using CSS3.

Coding is an art which gets better with practice. Hence, we need to implement it practically in order to learn the subtle nuances of HTML5 and CSS3, and we can achieve that only after a considerable amount of practice. However, we are just on the shore; the sea of knowledge lies far beyond. In this article, we have covered a lot of HTML5 and CSS3 features. Instead of wading through loads of theory, the concepts in this article are explained in a practical manner using code samples to demonstrate the new features of HTML5 and CSS3. The code samples are such that you can copy the code (the entire code is written instead of code snippets) and execute it for better understanding. Transition, transformation, and animation are also explained in a lucid manner, and there is a gradual increase in the difficulty level throughout the article. By the end of the book, you will be thoroughly acquainted with HTML5 and CSS3, enabling you to design a web page using the included code samples with ease. Click on the following link to have a look at the book: http://www.packtpub.com/html5-and-css3-for-transition-transformation-animation/book

Summary

This article has discussed how HTML5 and CSS3 features can be used in websites, with a detailed discussion of the animation features offered by CSS3.

Resources for Article:

Further resources on this subject:
- Mobiles First – How and Why [Article]
- Creating an Animated Gauge with CSS3 [Article]
- HTML5 Canvas [Article]

FuelPHP

Packt
15 Nov 2013
11 min read
Since it is community-driven, everyone is in an equal position to spot bugs, provide fixes, or add new features to the framework. This has led to the creation of features such as the new temporal ORM (Object Relation Mapper), which is a first for any PHP-based ORM. It also means that everyone can help build tools that make development easier, more straightforward, and quicker. The framework is lightweight and allows developers to load only what they need; it takes a configuration over convention approach. Instead of enforcing conventions, they act as recommendations and best practices. This allows new developers to jump onto a project and get up to speed more quickly, and it also helps when we want to find extra team members for projects.

A brief history of FuelPHP

FuelPHP started out with the goal of adopting the best practices from other frameworks to form a thoroughly modern starting point that makes full use of PHP Version 5.3 features, such as namespaces. It has little in the way of the legacy and compatibility issues that can affect older frameworks. The framework was started in 2010 by Dan Horrigan, who was joined by Phil Sturgeon, Jelmer Schreuder, Harro Verton, and Frank de Jonge. FuelPHP was a break from other frameworks such as CodeIgniter, which was basically still a PHP 4 framework. This break allowed for the creation of a more modern framework for PHP 5.3, and it brings together decades of experience from other languages and frameworks, such as Ruby on Rails and Kohana. After a period of community development and testing, Version 1.0 of the FuelPHP framework was released in July 2011. This marked a version ready for use on production sites and the start of the growth of the community. The community provides periodic releases (at the time of writing, it is up to Version 1.7) with a clear roadmap (http://fuelphp.com/roadmap) of features to be added, which also gives a good guide to the progress made to date. The development of FuelPHP is an open process and all the code is hosted on GitHub at https://github.com/fuel/fuel; the main core packages can be found in other repositories on the Fuel GitHub account, and a full list of these can be found at https://github.com/fuel/.

Features of FuelPHP

Using bespoke PHP or a custom-developed framework could give you greater performance, but FuelPHP provides many features, documentation, and a great community. The following sections describe some of the most useful features.

(H)MVC

Although FuelPHP is a Model-View-Controller (MVC) framework, it was built to support the HMVC variant of MVC. Hierarchical Model-View-Controller (HMVC) is a way of separating logic and then reusing the controller logic in multiple places. This means that when a web page is generated using a theme or a template section, it can be split into multiple sections or widgets. Using this approach, it is possible to reuse components or functionality throughout a project or in multiple projects. In addition to the usual MVC structure, FuelPHP allows the use of presentation models (ViewModels). These form a powerful layer that sits between the controller and the views, allowing for a smaller controller while still separating the view logic from both the controller and the views. If this isn't enough, FuelPHP also supports a router-based approach where you can route directly to a closure, which then deals with the execution of the input URI.
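To make the HMVC idea concrete, the snippet below is a minimal sketch (the controller and URI names are made up for illustration) of how one controller action can embed the output of another request, following the Request class usage described in the FuelPHP 1.x documentation:

<?php
class Controller_Dashboard extends Controller
{
    public function action_index()
    {
        // Execute an internal (HMVC) request to another controller action
        // and capture its response as a widget for this page.
        $sidebar = Request::forge('widgets/recent_posts')->execute()->response();

        return Response::forge(View::forge('dashboard/index', array(
            'sidebar' => $sidebar,
        )));
    }
}

Because the inner request goes through the normal routing and controller layers, the widget can be reused from any other page, or even rendered on its own.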
Modular and extendable

The core of FuelPHP has been designed so that it can be extended without changing any code in the core itself. It introduces the notion of packages, which are self-contained pieces of functionality that can be shared between projects and people. Like the core, in the newer versions of FuelPHP these can be installed via the Composer tool. Just like packages, functionality can also be divided into modules. For example, a full user-authentication module can be created to handle user actions such as registration. Modules can include both logic and views, and they can be shared between projects. The main difference between packages and modules is that packages can extend the core functionality and are not routable, while modules are routable.

Security

Everyone wants their applications to be as secure as possible; to this end, FuelPHP handles some of the basics for you. Views in FuelPHP will encode all output to ensure that it is secure and capable of avoiding Cross-site scripting (XSS) attacks. This behavior can be overridden, or output can be cleaned with the included htmLawed library. The framework also supports Cross-site request forgery (CSRF) prevention with tokens, input filtering, and a query builder that helps in preventing SQL injection attacks. PHPSecLib is used to offer some of the security features in the framework.

Oil – the power of the command line

If you are familiar with CakePHP, the Zend Framework, or Ruby on Rails, then you will be comfortable with FuelPHP Oil. It is the command-line utility at the heart of FuelPHP, designed to speed up development and efficiency. It also helps with testing and debugging. Although not essential, it proves indispensable during development. Oil provides a quick way to do code generation, scaffolding, running database migrations, debugging, and cron-like tasks for background operations. It can also be used for custom tasks and background processes. Oil is a package and can be found at https://github.com/fuel/oil.

ORM

FuelPHP also comes with an Object Relation Mapper (ORM) package that helps in working with various databases through an object-oriented approach. It is relatively lightweight and is not supposed to replace more complex ORMs such as Doctrine or Propel. The ORM supports data relations such as:

- belongs-to
- has-one
- has-many
- many-to-many

Another nice feature is cascading deletions; in this case, the ORM will delete all the data associated with a single entry. The ORM package is available separately from FuelPHP and is hosted on GitHub at https://github.com/fuel/orm.

Base controller classes and model classes

FuelPHP includes several classes to give you a head start on projects. These include controllers that help with templates, one for constructing RESTful APIs, and another that combines both templates and RESTful APIs. On the model side, the base classes include CRUD (Create, Read, Update, and Delete) operations. There is a model for soft deletion of records, one for nested sets, and lastly a temporal model, which is an easy way of keeping revisions of data.

The authentication package

The authentication framework gives a good basis for user authentication and login functionality. It can be extended using drivers for new authentication methods. Some of the basics, such as groups, basic ACL functions, and password hashing, can be handled directly in the authentication framework. Although the authentication package is included when installing FuelPHP, it can be upgraded separately from the rest of the application. The code can be obtained from https://github.com/fuel/auth.
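As a rough idea of what this looks like in application code, the following is a minimal sketch of a login action; the controller, field names, and redirect target are hypothetical, and the Auth calls follow the FuelPHP 1.x Auth package documentation for the bundled SimpleAuth driver:

<?php
class Controller_Login extends Controller
{
    public function action_index()
    {
        // Attempt to log the user in with the submitted credentials.
        if (Auth::login(Input::post('username'), Input::post('password')))
        {
            Response::redirect('dashboard');
        }

        // Auth::check() can be used elsewhere to test whether a user is logged in.
        return Response::forge(View::forge('login/form'));
    }
}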
Although the authentication package is included when installing FuelPHP, it can be upgraded separately from the rest of the application. The code can be obtained from https://github.com/fuel/auth. Template parsers The parser package makes it even easier to separate logic from views instead of embedding basic PHP into the views. FuelPHP supports many template languages, such as Twig, Markdown, Smarty, and HTML Abstraction Markup Language (Haml). Documentation Although not strictly a feature of the framework itself, the documentation for FuelPHP is one of the best available. It is kept up-to-date for each release and can be found at http://fuelphp.com/docs/. What to look forward to in Version 2.0 Although this book focuses on FuelPHP 1.6 and newer, it is worth looking forward to the next major release of the framework. It brings significant improvements but also makes some changes to the way the framework functions. Global scope and moving to dependency injection One of the nice features of FuelPHP is the global scope that allows easy static syntax and instances when needed. One of the biggest changes in Version 2 is the move away from static syntax and instances. The framework used the Multiton design pattern rather than the Singleton design pattern. Now, the majority of Multitons will be replaced with the Dependency Injection Container (DiC) design pattern, although this depends on the class in question. The reason for the change is to allow unit testing of the core files and to dynamically swap and/or extend other classes depending upon the needs of the application. The move to dependency injection will allow all the core functionality to be tested in isolation. Before detailing the next feature, let's run through the design patterns in more detail. Singleton This ensures that a class has only a single instance and provides a global point of access to it. The thinking is that a single instance of a class or object can be more efficient, but it can add unnecessary restrictions to classes that may be better served by a different design pattern. Multiton This is similar to the singleton pattern but expands upon it to include a way of managing a map of named instances as key-value pairs. So instead of having a single instance of a class or object, this design pattern ensures that there is a single instance for each key. The multiton is often described as a registry of singletons. Dependency injection container This design pattern aims to remove hard-coded dependencies and make it possible to change them either at runtime or at compile time. One example is ensuring that variables have default values while still allowing them to be overridden, or allowing other objects to be passed into a class for manipulation. It also allows mock objects to be used while testing functionality. Coding standards One of the far-reaching changes will be the difference in coding standards. FuelPHP Version 2.0 will conform to both PSR-0 and PSR-1. This allows a more standard autoloading mechanism and the ability to use Composer. Although Composer compatibility was introduced in Version 1.5, this move to PSR is for better consistency. It means that method names will follow the camelCase convention rather than the current snake_case names. Although a simple change, this is likely to have a large effect on existing projects and APIs. With a similar move of other PHP frameworks towards more standardized coding, there will be more opportunities to reuse functionality from other frameworks. 
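Returning to the design patterns described above, the following framework-independent PHP sketch contrasts a multiton-style registry with a very small dependency injection container. It is illustrative only and is not FuelPHP code; the class and key names are invented for the example.

<?php
// Multiton: one shared instance per named key.
class ConnectionRegistry
{
    private static $instances = array();

    public static function instance($name = 'default')
    {
        if ( ! isset(self::$instances[$name]))
        {
            self::$instances[$name] = new PDO('sqlite::memory:');
        }
        return self::$instances[$name];
    }
}

// A tiny dependency injection container: dependencies are registered as factories
// and resolved at runtime, which makes them easy to swap or mock in tests.
class Container
{
    private $factories = array();

    public function register($name, $factory)
    {
        $this->factories[$name] = $factory;
    }

    public function resolve($name)
    {
        return call_user_func($this->factories[$name], $this);
    }
}

$container = new Container();
$container->register('db', function () {
    return new PDO('sqlite::memory:');
});
$db = $container->resolve('db');   // in a test, register a factory that returns a mock instead

The difference in testability is the point: the multiton hard-wires how an instance is built, while the container lets the construction logic be replaced without touching the consuming code.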
Package management and modularization Package management for other languages such as Ruby and Ruby on Rails has made sharing pieces of code and functionality easy and common-place. The PHP world is much larger and this same sharing of functionality is not as common. PHP Extension and Application Repository (PEAR) was a precursor of most package managers. It is a framework and distribution system for re-usable PHP components. Although infinitely useful, it is not as widely supported by the more popular PHP frameworks. Starting with FuelPHP 1.6 and leading into FuelPHP 2.0, dependency management will be possible through Composer (http://getcomposer.org). This deals with not only single packages, but also their dependencies. It allows projects to consistently set up with known versions of libraries required by each project. This helps not only with development, but also its testability of the project as well as its maintainability. It also protests against API changes. The core of FuelPHP and other modules will be installed via Composer and there will be a gradual migration of some Version 1 packages. Backwards compatibility A legacy package will be released for FuelPHP that will provide aliases for the changed function names as part of the change in the coding standards. It will also allow the current use of static function calling to continue working, while allowing for a better ability to unit test the core functionality. Speed boosts Although initially slower during the initial alpha phases, Version 2.0 is shaping up to be faster than Version 1.0. Currently, the beta version (at the time of writing) is 7 percent faster while requiring 8 percent less memory. This might not sound much, but it can equate to a large saving if running a large website over multiple servers. These figures may get better in the final release of Version 2.0 after the remaining optimizations are complete. Summary We now know a little more about the history of FuelPHP and some of the useful features such as ORM, authentication, modules, (H)MVC, and Oil (the command-line interface). We have also listed the following useful links, including the official API documentation (http://fuelphp.com/docs/) and the FuelPHP home page (http://fuelphp.com). This article also touched upon some of the new features and changes due in Version 2.0 of FuelPHP. Resources for Article: Further resources on this subject: Installing PHP-Nuke [Article] Installing phpMyAdmin [Article] Integrating phpList 2 with Drupal [Article]

Application Performance

Packt
15 Nov 2013
8 min read
(For more resources related to this topic, see here.) Data sizing The cost of abstractions in terms of data size plays an important role. For example, whether or not a data element can fit into a processor cache line depends directly upon its size. On a Linux system, we can find out the cache line size and other parameters by inspecting the values in the files under /sys/devices/system/cpu/cpu0/cache/. Another concern we generally find with data sizing is how much data we are holding at a time in the heap. GC has direct consequences on the application's performance. While processing data, often we do not really need all the data we hold on to. Consider the example of generating a summary report of sold items for a certain period (months) of time. After the subperiod (month wise), summary data is computed. We do not need the item details anymore, hence it's better to remove the unwanted data while we add the summaries. This is shown in the following example: (defn summarize [daily-data] ; daily-data is a map (let [s (items-summary (:items daily-data))] (-> daily-data (select-keys [:digest :invoices]) ; we keep only the required key/val pairs (assoc :summary s)))) ;; now inside report generation code (-> (fetch-items period-from period-to :interval-day) (map summarize) generate-report) Had we not used select-keys in the preceding summarize function, it would have returned a map with extra summary data along with all the other existing keys in the map. Now, such a thing is often combined with lazy sequences. So, for this scheme to work, it is important not to hold on to the head of the lazy sequence. Reduced serialization An I/O channel is a common source of latency. The perils of over-serialization cannot be overstated. Whether we read or write data from a data source over an I/O channel, all of that data needs to be prepared, encoded, serialized, de-serialized, and parsed before being worked on. It is better for every step to have less data involved in order to lower the overhead. Where there is no I/O involved, such as in-process communication, it generally makes no sense to serialize. A common example of over-serialization is encountered while working with SQL databases. Often, there are common SQL query functions that fetch all columns of a table or a relation—they are called by various functions that implement the business logic. Fetching data that we do not need is wasteful and detrimental to the performance for the same reason that we discussed in the preceding paragraph. While it may seem more work to write one SQL statement and one database query function for each use case, it pays off with better performance. Code that uses NoSQL databases is also subject to this anti-pattern—we have to take care to fetch only what we need even though it may lead to additional code. There's a pitfall to be aware of when reducing serialization. Often, some information needs to be inferred in absence of the serialized data. In such cases where some of the serialization is dropped so that we can infer other information, we must compare the cost of inference versus the serialization overhead. The comparison may not be necessarily done per operation, but rather on the whole. Then, we can consider the resources we can allocate in order to achieve capacities for various parts of our systems. Chunking to reduce memory pressure What happens when we slurp a text file regardless of its size? The contents of the entire file will sit in the JVM heap. 
If the file is larger than the JVM heap capacity, the JVM will terminate by throwing OutOfMemoryError. If the file is large but not large enough to force the JVM into an OOM error, it leaves a relatively smaller JVM heap space for other operations in the application to continue. A similar situation takes place when we carry out any operation disregarding the JVM heap capacity. Fortunately, this can be fixed by reading data in chunks and processing them before reading further. Sizing for file/network operations Let us take the example of a data ingestion process where a semi-automated job uploads large Comma Separated File (CSV) files via the File Transfer Protocol (FTP) to a file server, and another automated job, which is written in Clojure, runs periodically to detect the arrival of files via the Network File System (NFS). After detecting a new file, the Clojure program processes the file, updates the result in a database, and archives the file. The program detects and processes several files concurrently. The size of the CSV files is not known in advance, but the format is predefined. As per the preceding description, one potential problem is that since there could be multiple files being processed concurrently, how do we distribute the JVM heap among the concurrent file-processing jobs? Another issue could be that the operating system imposes a limit on how many files can be opened at a time; on Unix-like systems, you can use the ulimit command to extend the limit. We cannot arbitrarily slurp the CSV file contents—we must limit each job to a certain amount of memory and also limit the number of jobs that can run concurrently. At the same time, we cannot read a very small number of rows from a file at a time because this may impact performance. (def ^:const K 1024) ;; create the buffered reader using custom 128K buffer-size (-> filename java.io.FileInputStream java.io.InputStreamReader (java.io.BufferedReader (* K 128))) Fortunately, we can specify the buffer size when reading from a file or even from a network stream so as to tune the memory usage and performance as appropriate. In the preceding code example, we explicitly set the buffer size of the reader to facilitate the same. Sizing for JDBC query results Java's interface standard for SQL databases, JDBC (which is technically not an acronym), supports fetch-size for fetching query results via JDBC drivers. The default fetch size depends on the JDBC driver. Most JDBC drivers keep a low default value so as to avoid high memory usage and attain internal performance optimization. A notable exception to this norm is the MySQL JDBC driver that completely fetches and stores all rows in memory by default. (require '[clojure.java.jdbc :as jdbc]) ;; using prepare-statement directly (we rarely use it directly, shown just for demo) (with-open [stmt (jdbc/prepare-statement conn sql :fetch-size 1000 max-rows 9000) rset (resultset-seq (.executeQuery stmt))] (vec rset)) ;; using query (query db [{:fetch-size 1000} "SELECT empno FROM emp WHERE country=?" 1]) When using the Clojure Contrib library java.jdbc (https://github.com/clojure/java.jdbc as of Version 0.3.0), the fetch size can be set while preparing a statement as shown in the preceding example. The fetch size does not guarantee proportional latency; however, it can be used safely for memory sizing. We must test any performance-impacting latency changes due to fetch size at different loads and use cases for the particular database and JDBC driver. 
Besides fetch-size, we can also pass the max-rows argument to limit the maximum number of rows returned by a query. However, this implies that the extra rows will be truncated from the result, not that the database will internally limit the number of rows it realizes. Resource pooling There are several types of resources on the JVM that are rather expensive to initialize. Examples are HTTP connections, execution threads, JDBC connections, and so on. The Java API recognizes such resources and has built-in support for creating a pool of some of those resources, so that the consumer code borrows a resource from the pool when required and simply returns it to the pool at the end of the job. Java's thread pools and JDBC data sources are prominent examples. The idea is to preserve the initialized objects for reuse. Even when Java does not support pooling of a resource type directly, you can always create a pool abstraction around custom expensive resources. The pooling technique is common in I/O activities, but it can be equally applicable to non-I/O purposes where the initialization cost is high. Summary Designing an application for performance should be based on the use cases and patterns of anticipated system load and behavior. Measuring performance is extremely important to guide optimization in the process. Fortunately, there are several well-known optimization patterns to tap into, such as resource pooling and data sizing. Thus we analysed performance optimization using these patterns. Resources for Article: Further resources on this subject: Improving Performance with Parallel Programming [Article] Debugging Java Programs using JDB [Article] IBM WebSphere Application Server Security: A Threefold View [Article]

RESTful Web Services – Server-Sent Events (SSE)

Packt
15 Nov 2013
5 min read
Getting started Generally, the flow of web services is initiated by the client by sending a request for the resource to the server. This is the traditional way of consuming web services. Traditional Flow Here, the browser or Jersey client initiates the request for data from the server, and the server provides a response along with the data. Every time a client needs to initiate a request for the resource, the server may not have the capability to generate the data. This becomes difficult in an application where real-time data needs to be shown. Even though there is no new data over the server, the client needs to check for it every time. Nowadays, there is a requirement that the server needs to send some data without the client's request. For this to happen the client and server need to be connected, and the server can push the data to the client. This is why it is termed as Server-Sent Events. In these events, the connections created initially between the client and server is not released after the request. The server maintains the connection and pushes the data to the respective client when required. Server-Sent Event Flow In the Server-Sent Event Flow diagram initially, when a browser or a Jersey client initiates a request to establish a connection with the server using EventSource, the server is always in a listening mode for the new connection to be established. When a new connection from any EventSource is received, the server opens a new connection and maintains it in a queue. Maintaining a connection depends upon the implementation of business logic. SSE creates a single unidirectional connection. So, only a single connection is established between the client and server. After the connection is successfully established, the client is in the listening mode for new events from the server. Whenever any new event occurs on the server side, it will broadcast the event, along with the data to a specific open HTTP connection. In modern browsers that support HTML5, the onmessage method of EventSource is responsible for handling new events received from the server; whereas, in the case of Jersey clients, we have the onEvent method of EventSource, which handles new events from the server. Implementing Server-Sent Events (SSE) To use SSE, we need to register SseFeature on both the client and server sides. By doing so, the client/server gets connected to SseFeature to be used while traversing data over the network. SSE: Internal Working In the SSE: Internal Working diagram, we assume that the client/server is connected. When any new event is generated, the server initiates an OutboundEvent instance that will be responsible to have chunked output, which in turn will have a serialized data format. OutboundEventWriter is responsible to serialize the data on the server side. We need to specify the media type of the data in OutboundEvent. There are no restrictions of providing specific media types only. However, on the client side, InboundEvent is responsible for handling the incoming data from the server. Here, InboundEvent receives the chunked input that contains serialized data format. Using InbounEventReader, data is deserialized. Using SSEBroadCaster, we are able to broadcast events to multiple clients that are connected to the server. 
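Before moving to the server-side example, here is a minimal sketch of the browser side just described, using the standard HTML5 EventSource API. The URL is an assumption based on the resource paths shown later in this article; adjust it to match the class-level @Path of your resource.

// Minimal browser-side sketch; '/services/sse/sseEvents' is an assumed URL.
var source = new EventSource('/services/sse/sseEvents');

// onmessage is invoked for every event pushed by the server.
source.onmessage = function (event) {
    console.log('New event from server: ' + event.data);
};

// onerror fires if the connection drops; the browser retries automatically.
source.onerror = function () {
    console.log('Connection lost, retrying...');
};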
Let's look at the example, which shows how to create SSE web services and broadcast the events: @ApplicationPath("services") public class SSEApplication extends ResourceConfig { publicSSEApplication() { super(SSEResource.class, SseFeature.class); } } Here, we registered the SseFeature module and the SSEResource root-resource class to the server. private static final SseBroadcaster BROADCASTER = new SseBroadcaster(); …… @GET @Path("sseEvents") @Produces(SseFeature.SERVER_SENT_EVENTS) public EventOutput getConnection() { final EventOutput eventOutput = new EventOutput(); BROADCASTER.add(eventOutput); return eventOutput; } …… In the SSEResource root class, we need to create a resource method that will allow clients to establish the connection and persist accordingly. Here, we are maintaining the connection into the BROADCASTER instance in the SseBroadcaster class. EventOutput manages specific client connections. SseBroadcaster is simply responsible for accommodating a group of EventOutput; that is, the client's connection. …… @POST @Consumes(MediaType.APPLICATION_FORM_URLENCODED) public void post(@FormParam("name") String name) { BROADCASTER .broadcast(new OutboundEvent.Builder() .data(String.class, name) .build()); } …… When any post method is consumed, we create a new event and broadcast it to the client available in the BROADCASTER instance. The OutboundEvent instance will contain the data (MediaType, Object) method that is initialized with a specific media type and actual data. We can provide any media type to send data. By using the build() method, data is being serialized with the OutBoundEventWriter class internally. When the broadcast (OutboundEvent) is called, internally SseBroadcaster pushes data on all registered EventOutputs; that is, on clients connected to SseBroadcaster. At times, there's a scenario where the client/server has been connected and after sometime, the client gets disconnected. So, in this case, SseBroadcaster automatically handles the client connection; that is, it determines whether the connection needs to be maintained. When any client connection is closed, the broadcaster detects EventOutput and frees the connection and resources obtained by that EventOutput connection. Summary Thus we learned the difference between the traditional web service flow and SSE web service flow. We also covered how to create the SSE web services and implement the Jersey client in order to consume the SSE using different programmatic models. Useful Links: Setting up the most Popular Journal Articles in your Personalized Community in Liferay Portal Understanding WebSockets and Server-sent Events in Detail RESS - The idea and the Controversies

Introduction to a WordPress application's frontend

Packt
12 Nov 2013
7 min read
(For more resources related to this topic, see here.) Basic file structure of a WordPress theme As WordPress developers, you should have a fairly good idea about the default file structure of WordPress themes. Let's have a brief introduction of the default files before identifying their usage in web applications. Think about a typical web application layout where we have a common header, footer, and content area. In WordPress, the content area is mainly populated by pages or posts. The design and the content for pages are provided through the page.php template, while the content for posts is provided through one of the following templates: index.php archive.php category.php single.php Basically, most of these post-related file types are developed to cater to the typical functionality in blogging systems, and hence can be omitted in the context of web applications. Since custom posts are widely used in application development, we need more focus on templates such as single-{post_type} and archive-{post_type} than category.php, archive.php, and tag.php. Even though default themes contain a number of files for providing default features, only the style.css and index.php files are enough to implement a WordPress theme. Complex web application themes are possible with the standalone index.php file. In normal circumstances, WordPress sites have a blog built on posts, and all the remaining content of the site is provided through pages. When referring to pages, the first thing that comes to our mind is the static content. But WordPress is a fully functional CMS, and hence the page content can be highly dynamic. Therefore, we can provide complex application screens by using various techniques on pages. Let's continue our exploration by understanding the theme file execution hierarchy. Understanding template execution hierarchy WordPress has quite an extensive template execution hierarchy compared to general web application frameworks. However, most of these templates will be of minor importance in the context of web applications. Here, we are going to illustrate the important template files in the context of web applications. The complete template execution hierarchy can be found at: http://hub.packtpub.com/wp-content/uploads/2013/11/Template_Hierarchy.png An example of the template execution hierarchy is as shown in the following diagram: Once the Initial Request is made, WordPress looks for one of the main starting templates as illustrated in the preceding screenshot. It's obvious that most of the starting templates such as front page, comments popup, and index pages are specifically designed for content management systems. In the context of web applications, we need to put more focus into both singular and archive pages, as most of the functionality depends on top of those templates. Let's identify the functionality of the main template files in the context of web applications: Archive pages: These are used to provide summarized listings of data as a grid. Single posts: These are used to provide detailed information about existing data in the system. Singular pages: These are used for any type of dynamic content associated with the application. Generally, we can use pages for form submissions, dynamic data display, and custom layouts. Let's dig deeper into the template execution hierarchy on the Singular Page path as illustrated in the following diagram: Singular Page is divided into two paths that contain posts or pages. Static Page is defined as Custom or Default page templates. 
In general, we use Default page templates for loading website pages. WordPress looks for a page with the slug or ID before executing the default page.php file. In most scenarios, web application layouts will take the other route of Custom page templates where we create a unique template file inside the theme for each of the layouts and define it as a page template using code comments. We can create a new custom page template by creating a new PHP file inside the theme folder and using the Template Name definition in code comments illustrated as follows: <?php/** Template Name: My Custom Template*/?> To the right of the preceding diagram, we have Single Post Page, which is divided into three paths called Blog Post, Custom Post, and Attachment Post. Both Attachment Posts and Blog Posts are designed for blogs and hence will not be used frequently in web applications. However, the Custom Post template will have a major impact on application layouts. As with Static Page, Custom Post looks for specific post type templates before looking for a default single.php file. The execution hierarchy of an Archive Page is similar in nature to posts, as it looks for post-specific archive pages before reverting to the default archive.php file. Now we have had a brief introduction to the template loading process used by WordPress. In the next section, we are going to look at the template loading process of a typical web development framework to identify the differences. Template execution process of web application frameworks Most stable web application frameworks use a flat and straightforward template execution process compared to the extensive process used by WordPress. These frameworks don't come with built-in templates, and hence each and every template will be generated from scratch. Consider the following diagram of a typical template execution process: In this process, Initial Request always comes to the index.php file, which is similar to the process used by WordPress or any other framework. It then looks for custom routes defined within the framework. It's possible to use custom routes within a WordPress context, even though it's not used generally for websites or blogs. Finally, Initial Request looks for the direct template file located in the templates section of the framework. As you can see, the process of a normal framework has very limited depth and specialized templates. Keep in mind that index.php referred to in the preceding section is the file used as the main starting point of the application, not the template file. In WordPress, we have a specific template file named index.php located inside the themes folder as well. Managing templates in a typical application framework is a relatively easy task when compared to the extensive template hierarchy used by WordPress. In web applications, it's ideal to keep the template hierarchy as flat as possible with specific templates targeted towards each and every screen. In general, WordPress developers tend to add custom functionalities and features by using specific templates within the hierarchy. Having multiple templates for a single screen and identifying the order of execution can be a difficult task in large-scale applications, and hence should be avoided in every possible instance. Web application layout creation techniques As we move into developing web applications, the logic and screens will become complex, resulting in the need of custom templates beyond the conventional ones. 
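Building on the Template Name example above, a complete custom page template file placed inside the active theme might look like the following hedged sketch; the file name, template name, and markup are illustrative only and not taken from the original text.

<?php
/*
* Template Name: Dashboard
*/

get_header(); ?>

<div id="dashboard">
    <?php
    // Regular template tags and any application-specific logic can be used here.
    if (have_posts()) :
        while (have_posts()) : the_post();
            the_content();
        endwhile;
    endif;
    ?>
</div>

<?php get_footer(); ?>

Once a page is created in the admin area and assigned this template, WordPress will load this file instead of the default page.php, which is how most application screens are wired up.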
There is a wide range of techniques for putting such functionality into the WordPress code. Each of these techniques have their own pros and cons. Choosing the appropriate technique is vital in avoiding potential bottlenecks in large-scale applications. Here is a list of techniques for creating dynamic content within WordPress applications: Static pages with shortcodes Page templates Custom templates with custom routing Summary In this article we learned about basic file structure of the WordPress theme, the template execution hierarchy, and template execution process. We also learned the different techniques of Web application layout creation. Resources for Article: Further resources on this subject: Customizing WordPress Settings for SEO [Article] Getting Started with WordPress 3 [Article] Dynamic Menus in WordPress [Article]

Quick start – creating your first template

Packt
12 Nov 2013
6 min read
(For more resources related to this topic, see here.) Preparing the project To get started, create a file named index.htmland add the following boilerplate code: <!DOCTYPE HTML> <html> <head> <title>Handlebars Quickstart</title> <script src ="handlebars.js"></script> </head> <body> <script> var src = "<h1>Hello {{name}}</h1>"; var template = Handlebars.compile(src); var output = template({name: "Tom"}); document.body.innerHTML += output; </script> </body> </html> This is a pretty good example to start with, as it demonstrates the minimum amount of code you will need to write to get a template on screen. We will start it by writing the template itself, just a pair of header tags with a greeting message inside. If you remember from the introduction, a Handlebars tag is a reference for some external data wrapped between two pairs of curly braces, and it signifies a dynamic point in the page where Handlebars will insert some information. Here we just want a property called "name" to be inserted at this point, which we will set in a moment. Once you have the template, the next step is where all the magic begins; Handlebars compile function will process through the template's source and generate a JavaScript function to output the result. What I mean by this is Handlebars will create a function that accepts some data and returns the final string with all the placeholders replaced. An example of what I mean could be something like the following code for our quick template stated in the preceding paragraph: var template = function (data) { return "<h1>Hello " + data.name + "</h1>"; } And then every time the template gets called with data, the resulting string will be passed back. Now obviously it is a bit more complex than this, and Handlebars performs some escaping for you and other such checks, but the basic idea of what the compile function generates remains the same. So with our template function created, we can call it by passing in some data (in this case the name Tom), and we take the output and append it to the body. After opening this page in a browser, you should see something like the following screenshot: With the basics out of the way, let's take a look at helpers. Block helpers Helpers can be called in the same way as the data placeholder was called from the template. The difference between them is that a data placeholder will just take a static string or number and insert it into the template's output. Helpers on the other hand are functions, which first compute something, and then the results get placed into the output instead. You can think of helpers as a more dynamic form of placeholders. Now there are two types of helpers in Handlebars: tag helpers, which work like regular functions; and block helpers, which have an added, nested template to manipulate. Handlebars comes with a series of block helpers built-in, which allows you to perform basic logic in your templates. One of the most commonly used block helpers in Handlebars would have to be the each helper, which allows you to run a section of template per item in an array. Let's take a look at it in action. It is going to be too messy to continue placing the templates into JavaScript strings like we did in the first example, so we will place it in its own script tag and pull it in. The reason we are using a script tag is because we don't want the template to show up on the page itself; by placing it in a script tag and setting the type to something the browser doesn't understand it will just be ignored. 
So right on top of the script tag block that we just wrote, add the following code: <script id="quickstart" type="template/handlebars"><h1>Hello {{name}}</h1><ul>{{#each messages}}<li><b>{{from}}</b>: {{text}}</li>{{/each}}</ul></script> We give the script tag an id, so we can access it later, and then we give it an arbitrary type, so that the browser doesn't try to parse it as JavaScript. Inside it we start with the same template code as before, and then we add each block to cycle through a list of messages and print out each one in a list element. The next step is to replace the script block underneath with the new code, which will get the template from here: <script>var src = document.getElementById('quickstart').innerHTML;var template = Handlebars.compile(src);var output = template({name: "Tom",messages: [{ from: "John", text: "Demo Message" },{ from: "Bob", text: "Something Else" },{ from: "John", text: "Second Post" }]});document.body.innerHTML += output;</script> We start by pulling the template from the script block we added in the previous paragraph using standard JavaScript; next we compile it like before and run the template, this time with the added "messages" array. Running this in your browser will give you something like the following: You may have picked up on this, but it's worth mentioning, that inside each block the context changes from the global data object passed into the template to the specific array element, because of this we are able to access its properties directly. These first few steps have been simple, but subtly we have covered loading in templates from script tags, and the syntax for both standard placeholders as well as block helpers in your templates. Summary Thus we have learned how to create template in this article. Resources for Article: Further resources on this subject: Working with JavaScript in Drupal 6: Part 1 [Article] Using JavaScript and jQuery in Drupal Themes [Article] Basics of Exception Handling Mechanism in JavaScript Testing [Article]

Building a To-do List with Ajax

Packt
08 Nov 2013
8 min read
(For more resources related to this topic, see here.) Creating and migrating our to-do list's database As you know, migrations are very helpful to control development steps. We'll use migrations in this article. To create our first migration, type the following command: php artisan migrate:make create_todos_table --table=todos --create When you run this command, Artisan will generate a migration to generate a database table named todos. Now we should edit the migration file for the necessary database table columns. When you open the folder migration in app/database/ with a file manager, you will see the migration file under it. Let's open and edit the file as follows: <?php use IlluminateDatabaseMigrationsMigration; class CreateTodosTable extends Migration { /** * Run the migrations. * * @return void */ public function up() { Schema::create('todos', function(Blueprint $table){ $table->create(); $table->increments("id"); $table->string("title", 255); $table->enum('status', array('0', '1'))->default('0'); $table->timestamps(); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::drop("todos"); } } To build a simple TO-DO list, we need five columns: The id column will store ID numbers of to-do tasks The title column will store a to-do task's title The status column will store statuses of the tasks The created_at and updated_at columns will store the created and updated dates of tasks If you write $table->timestamps() in the migration file, Laravel's migration class automatically creates created_at and updated_at columns. As you know, to apply migrations, we should run the following command: php artisan migrate After the command is run, if you check your database, you will see that our todos table and columns have been created. Now we need to write our model. Creating a todos model To create a model, you should open the app/models/ directory with your file manager. Create a file named Todo.php under the directory and write the following code: <?php class Todo extends Eloquent { protected $table = 'todos'; } Let's examine the Todo.php file. As you see, our Todo class extends an Eloquent model, which is the ORM (Object Relational Mapper) database class of Laravel. The protected $table = 'todos'; code tells Eloquent about our model's table name. If we don't set the table variable, Eloquent accepts the plural version of the lower case model name as table name. So this isn't required technically. Now, our application needs a template file, so let's create it. Creating the template Laravel uses a template engine that is called blade for static and application template files. Laravel calls the template files from the app/views/ directory, so we need to create our first template under this directory. Create a file with the name index.blade.php. 
The file contains the following code: <html> <head> <title>To-do List Application</title> <link rel="stylesheet" href="assets/css/style.css"> <!--[if lt IE 9]><script src = "//html5shim.googlecode.com/svn/trunk/html5.js"> </script><![endif]--> </head> <body> <div class="container"> <section id="data_section" class="todo"> <ul class="todo-controls"> <li><img src = "/assets/img/add.png" width="14px" onClick="show_form('add_task');" /></li> </ul> <ul id="task_list" class="todo-list"> @foreach($todos as $todo) @if($todo->status) <li id="{{$todo->id}}" class="done"> <a href="#" class="toggle"></a> <span id="span_{{$todo->id}}">{ {$todo->title}}</span> <a href="#" onClick="delete_task('{{$todo->id}}');" class="icon-delete">Delete</a> <a href="#" onClick="edit_task('{{$todo->id}}', '{{$todo->title}}');" class="icon-edit">Edit</a></li> @else <li id="{{$todo->id}}"><a href="#" onClick="task_done('{{$todo->id}}');" class="toggle"></a> <span id="span_{ {$todo->id}}">{{$todo->title}}</span> <a href="#" onClick="delete_task('{ {$todo->id}}');" class= "icon-delete">Delete</a> <a href="#" onClick="edit_task('{ {$todo->id}}','{{$todo->title}}');" class="icon-edit">Edit</a></li> @endif @endforeach </ul> </section> <section id="form_section"> <form id="add_task" class="todo" style="display:none"> <input id="task_title" type="text" name="title" placeholder="Enter a task name" value=""/> <button name="submit">Add Task</button> </form> <form id="edit_task" class="todo" style="display:none"> <input id="edit_task_id" type="hidden" value="" /> <input id="edit_task_title" type="text" name="title" value="" /> <button name="submit">Edit Task</button> </form> </section> </div> <script src = "http://code.jquery.com/ jquery-latest.min.js"type="text/javascript"></script> <script src = "assets/js/todo.js" type="text/javascript"></script> </body> </html> The preceding code may be difficult to understand if you're writing a blade template for the first time, so we'll try to examine it. You see a foreach loop in the file. This statement loops our todo records. We will provide you with more knowledge about it when we are creating our controller in this article. If and else statements are used for separating finished and waiting tasks. We use if and else statements for styling the tasks. We need one more template file for appending new records to the task list on the fly. Create a file with the name ajaxData.blade.php under app/views/ folder. The file contains the following code: @foreach($todos as $todo) <li id="{{$todo->id}}"><a href="#" onClick="task_done('{{$todo- >id}}');" class="toggle"></a> <span id="span_{{$todo >id}}">{{$todo->title}}</span> <a href="#" onClick="delete_task('{{$todo->id}}');" class="icon delete">Delete</a> <a href="#" onClick="edit_task('{{$todo >id}}','{{$todo->title}}');" class="icon-edit">Edit</a></li> @endforeach Also, you see the /assets/ directory in the source path of static files. When you look at the app/views directory, there is no directory named assets. Laravel separates the system and public files. Public accessible files stay under your public folder in root. So you should create a directory under your public folder for asset files. We recommend working with these types of organized folders for developing tidy and easy-to-read code. Finally you see that we are calling jQuery from its main website. We also recommend this way for getting the latest, stable jQuery in your application. You can style your application as you wish, hence we'll not examine styling code here. 
We are putting our style.css files under /public/assets/css/. For performing Ajax requests, we need JavaScript coding. This code posts our add_task and edit_task forms and updates them when our tasks are completed. Let's create a JavaScript file with the name todo.js in /public/assets/js/. The files contain the following code: function task_done(id){ $.get("/done/"+id, function(data) { if(data=="OK"){ $("#"+id).addClass("done"); } }); } function delete_task(id){ $.get("/delete/"+id, function(data) { if(data=="OK"){ var target = $("#"+id); target.hide('slow', function(){ target.remove(); }); } }); } function show_form(form_id){ $("form").hide(); $('#'+form_id).show("slow"); } function edit_task(id,title){ $("#edit_task_id").val(id); $("#edit_task_title").val(title); show_form('edit_task'); } $('#add_task').submit(function(event) { /* stop form from submitting normally */ event.preventDefault(); var title = $('#task_title').val(); if(title){ //ajax post the form $.post("/add", {title: title}).done(function(data) { $('#add_task').hide("slow"); $("#task_list").append(data); }); } else{ alert("Please give a title to task"); } }); $('#edit_task').submit(function() { /* stop form from submitting normally */ event.preventDefault(); var task_id = $('#edit_task_id').val(); var title = $('#edit_task_title').val(); var current_title = $("#span_"+task_id).text(); var new_title = current_title.replace(current_title, title); if(title){ //ajax post the form $.post("/update/"+task_id, {title: title}).done(function(data) { $('#edit_task').hide("slow"); $("#span_"+task_id).text(new_title); }); } else{ alert("Please give a title to task"); } }); Let's examine the JavaScript file.
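The JavaScript above posts to URLs such as /add, /done/{id}, /delete/{id}, and /update/{id}, and the excerpt ends before showing the routes behind them. As a hedged sketch (not from the original article), Laravel 4 routes matching those URLs could look like the following; the closure bodies are assumptions built on the Todo model and Blade templates created earlier.

<?php
// Hypothetical app/routes.php entries matching the URLs used in todo.js.
Route::get('/', function () {
    // Pass all tasks to the index.blade.php template shown earlier.
    return View::make('index')->with('todos', Todo::all());
});

Route::post('add', function () {
    $todo = new Todo();
    $todo->title = Input::get('title');
    $todo->save();

    // Return the rendered list item so todo.js can append it to the task list.
    return View::make('ajaxData')->with('todos', array($todo));
});

Route::get('done/{id}', function ($id) {
    $todo = Todo::find($id);
    $todo->status = 1;
    $todo->save();
    return 'OK';
});

Route::get('delete/{id}', function ($id) {
    Todo::find($id)->delete();
    return 'OK';
});

Route::post('update/{id}', function ($id) {
    $todo = Todo::find($id);
    $todo->title = Input::get('title');
    $todo->save();
    return 'OK';
});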

Dynamic POM

Packt
06 Nov 2013
9 min read
(For more resources related to this topic, see here.) Case study Our project meets the following requirements: It depends on org.codehaus.jedi:jedi-XXX:3.0.5. Actually, the XXX is related to the JDK version, that is, either jdk5 or jdk6. The project is built and run on three different environments: PRODuction, UAT, and DEVelopment The underlying database differs owing to the environment: PostGre in PROD, MySQL in UAT, and HSQLDB in DEV. Besides, the connection is set in a Spring file, which can be spring-PROD.xml, spring-UAT.xml, or spring-DEV.xml, all being in the same src/main/resource folder. The first bullet point can be easily answered, using a jdk-version property. The dependency is then declared as follows: <dependency> <groupId>org.codehaus.jedi</groupId> <!--For this dependency two artifacts are available, one for jdk5 or and a second for jdk6--> <artifactId>jedi-${jdk.version}</artifactId> <version>${jedi.version}</version> </dependency> Still, the fourth bullet point is resolved by specifying a resource folder: <resources> <resource> <directory>src/main/resource</directory> <!--include the XML files corresponding to the environment: PROD, UAT, DEV. Here, the only XML file is a Spring configuration one. There is one file per environment--> <includes> <include> **/*-${environment}.xml </include> </includes> </resource> </resources> Then, we will have to run Maven adding the property values using one of the following commands: mvn clean install –Denvironment=PROD –Djdk.version=jdk6 mvn clean install –Denvironment=DEV –Djdk.version=jdk5 By the way, we could have merged the three XML files as a unique one, setting dynamically the content thanks to Maven's filter tag and mechanism. The next point to solve is the dependency to actual JDBC drivers. A quick and dirty solution A quick and dirty solution is to mention the three dependencies: <!--PROD --> <dependency> <groupId>postgresql</groupId> <artifactId>postgresql</artifactId> <version>9.1-901.jdbc4</version> <scope>runtime</scope> </dependency> <!--UAT--> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.25</version> <scope>runtime</scope> </dependency> <!--DEV--> <dependency> <groupId>org.hsqldb</groupId> <artifactId>hsqldb</artifactId> <version>2.3.0</version> <scope>runtime</scope> </dependency> Anyway, this idea has drawbacks. Even though only the actual driver (org. postgresql.Driver, com.mysql.jdbc.Driver, or org.hsqldb.jdbcDriver as described in the Spring files) will be instantiated at runtime, the three JARs will be transitively transmitted—and possibly packaged—in a further distribution. You may argue that we can work around this problem in most of situations, by confining the scope to provided, and embed the actual dependency by any other mean (such as rely on an artifact embarked in an application server); however, even then you should concede the dirtiness of the process. A clean solution Better solutions consist in using dynamic POM. Here, too, there will be a gradient of more or less clean solutions. Once more, as a disclaimer, beware of dynamic POMs! Dynamic POMs are a powerful and tricky feature of Maven. Moreover, modern IDEs manage dynamic POMs better than a few years ago. Yet, their use may be dangerous for newcomers: as with generated code and AOP for instance, what you write is not what you execute, which may result in strange or unexpected behaviors, needing long hours of debug and an aspirin tablet for the headache. 
This is why you have to carefully weigh their interest, relatively to your project before introducing them. With properties in command lines As a first step, let's define the dependency as follows: <!-- The dependency to effective JDBC drivers: PostGre, MySQL or HSQLDB--> <dependency> <groupId>${effective.groupId}</groupId> <artifactId> ${effective.artifactId} </artifactId> <version>${effective.version}</version> </dependency> As you can see, the dependency is parameterized thanks to three properties: effective.groupId, effective.artifactId, and effective.version. Then, in the same way we added earlier the –Djdk.version property, we will have to add those properties in the command line, for example,: mvn clean install –Denvironment=PROD –Djdk.version=jdk6 -Deffective.groupId=postgresql -Deffective.artifactId=postgresql -Deffective.version=9.1-901.jdbc4 Or add the following property mvn clean install –Denvironment=DEV –Djdk.version=jdk5 -Deffective.groupId=org.hsqldb -Deffective.artifactId=hsqldb -Deffective.version=2.3.0 Then, the effective POM will be reconstructed by Maven, and include the right dependencies: <dependencies> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>3.2.3.RELEASE</version> <scope>compile</scope> </dependency> <dependency> <groupId>org.codehaus.jedi</groupId> <artifactId>jedi-jdk6</artifactId> <version>3.0.5</version> <scope>compile</scope> </dependency> <dependency> <groupId>postgresql</groupId> <artifactId>postgresql</artifactId> <version>9.1-901.jdbc4</version> <scope>compile</scope> </dependency> </dependencies> Yet, as you can imagine, writing long command lines like the preceding one increases the risks of human error, all the more that such lines are "write-only". These pitfalls are solved by profiles. Profiles and settings As an easy improvement, you can define profiles within the POM itself. The profiles gather the information you previously wrote in the command line, for example: <profile> <!-- The profile PROD gathers the properties related to the environment PROD--> <id>PROD</id> <properties> <environment>PROD</environment> <effective.groupId> postgresql </effective.groupId> <effective.artifactId> postgresql </effective.artifactId> <effective.version> 9.1-901.jdbc4 </effective.version> <jdk.version>jdk6</jdk.version> </properties> <activation> <!-- This profile is activated by default: in other terms, if no other profile in activated, then PROD will be--> <activeByDefault>true</activeByDefault> </activation> </profile> Or: <profile> <!-- The profile DEV gathers the properties related to the environment DEV--> <id>DEV</id> <properties> <environment>DEV</environment> <effective.groupId> org.hsqldb </effective.groupId> <effective.artifactId> hsqldb </effective.artifactId> <effective.version> 2.3.0 </effective.version> <jdk.version>jdk5</jdk.version> </properties> <activation> <!-- The profile DEV will be activated if, and only if, it is explicitly called--> <activeByDefault>false</activeByDefault> </activation> </profile> The corresponding command lines will be shorter: mvn clean install (Equivalent to mvn clean install –PPROD) Or: mvn clean install –PDEV You can list several profiles in the same POM, and one, many or all of them may be enabled or disabled. Nonetheless, multiplying profiles and properties hurts the readability. Moreover, if your team has 20 developers, then each developer will have to deal with 20 blocks of profiles, out of which 19 are completely irrelevant for him/her. 
So, in order to make the thing smoother, a best practice is to extract the profiles and inset them in the personal settings.xml files, with the same information: <?xml version="1.0" encoding="UTF-8"?> <settings xsi_schemaLocation="http://maven.apache.org/ SETTINGS/1.0.0 http://maven.apache.org/xsd/ settings-1.0.0.xsd"> <profiles> <profile> <id>PROD</id> <properties> <environment>PROD</environment> <effective.groupId> postgresql </effective.groupId> <effective.artifactId> postgresql </effective.artifactId> <effective.version> 9.1-901.jdbc4 </effective.version> <jdk.version>jdk6</jdk.version> </properties> <activation> <activeByDefault>true</activeByDefault> </activation> </profile> </profiles> </settings> Dynamic POMs – conclusion As a conclusion, the best practice concerning dynamic POMs is to parameterize the needed fields within the POM. Then, by order of priority: Set an enabled profile and corresponding properties within the settings.xml. mvn <goals> [-f <pom_Without_Profiles.xml> ] [-s <settings_With_Enabled_Profile.xml>] Otherwise, include profiles and properties within the POM mvn <goals> [-f <pom_With_Profiles.xml> ] [-P<actual_Profile> ] [-s <settings_Without_Profile.xml>] Otherwise, launch Maven with the properties in command lines mvn <goals> [-f <pom_Without_Profiles.xml> ] [-s <settings_Without_Profile.xml>] -D<property_1>=<value_1> -D<property_2>=<value_2> (...) -D<property_n>=<value_n> Summary In this article we learned about Dynamic POM. We saw a case study and also saw its quick and easy solutions. Resources for Article: Further resources on this subject: Integrating Scala, Groovy, and Flex Development with Apache Maven [Article] Creating a Camel project (Simple) [Article] Using Hive non-interactively (Simple) [Article]

Downloading PyroCMS and its prerequisites

Packt
31 Oct 2013
6 min read
(For more resources related to this topic, see here.) Getting started PyroCMS, like many other content management systems including WordPress, Typo3, or Drupal, comes with a pre-developed installation process. For PyroCMS, this installation process is easy to use and comes with a number of helpful hints just in case you hit a snag while installing the system. If, for example, your system files don't have the correct permissions profile (writeable versus write-protected), the PyroCMS installer will help you, along with all the other installation details, such as checking for required software and taking care of file permissions. Before you can install PyroCMS (the version used for examples in this article is 2.2) on a server, there are a number of server requirements that need to be met. If you aren't sure if these requirements have been met, the PyroCMS installer will check to make sure they are available before installation is complete. Following are the software requirements for a server before PyroCMS can be installed: HTTP Web Server MySQL 5.x or higher PHP 5.2.x or higher GD2 cURL Among these requirements, web developers interested in PyroCMS will be glad to know that it is built on CodeIgniter, a popular MVC patterned PHP framework. I recommend that the developers looking to use PyroCMS should also have working knowledge of CodeIgniter and the MVC programming pattern. Learn more about CodeIgniter and see their excellent system documentation online at http://ellislab.com/codeigniter. CodeIgniter If you haven't explored the Model-View-Controller (MVC) programming pattern, you'll want to brush up before you start developing for PyroCMS. The primary reason that CodeIgniter is a good framework for a CMS is that it is a well-documented framework that, when leveraged in the way PyroCMS has done, gives developers power over how long a project will take to build and the quality with which it is built. Add-on modules for PyroCMS, for example, follow the MVC method, a programming pattern that saves developers time and keeps their code dry and portable. Dry and portable programming are two different concepts. Dry is an acronym for "don't repeat yourself" code. Portable code is like "plug-and-play" code—write it once so that it can be shared with other projects and used quickly. HTTP web server Out of the PyroCMS software requirements, it is obvious, you can guess, that a good HTTP web server platform will be needed. Luckily, PyroCMS can run on a variety of web server platforms, including the following: Abyss Web Server Apache 2.x Nginx Uniform Server Zend Community Server If you are new to web hosting and haven't worked with web hosting software before, or this is your first time installing PyroCMS, I suggest that you use Apache as a HTTP web server. It will be the system for which you will find the most documentation and support online. If you'd prefer to avoid Apache, there is also good support for running PyroCMS on Nginx, another fairly-well documented web server platform. MySQL Version 5 is the latest major release of MySQL, and it has been in use for quite some time. It is the primary database choice for PyroCMS and is thoroughly supported. You don't need expert level experience with MySQL to run PyroCMS, but you'll need to be familiar with writing SQL queries and building relational databases if you plan to create add-ons for the system. You can learn more about MySQL at http://www.mysql.com. 
PHP Version 5.2 of PHP is no longer the officially supported release of PHP, which is, at the time of this article, Version 5.4. Version 5.2, which has been criticized as being a low server requirement for any CMS, is allowed with PyroCMS because it is the minimum version requirement for CodeIgniter, the framework upon which PyroCMS is built. While future versions of PyroCMS may upgrade this minimum requirement to PHP 5.3 or higher, you can safely use PyroCMS with PHP 5.2. Also, many server operating systems, like SUSE and Ubuntu, install PHP 5.2 by default. You can, of course, upgrade PHP to the latest version without causing harm to your instance of PyroCMS. To help future-proof your installation of PyroCMS, it may be wise to install PHP 5.3 or above, to maximize your readiness for when PyroCMS more strictly adopts features found in PHP 5.3 and 5.4, such as namespaceing. GD2 GD2, a library used in the manipulation and creation of images, is used by PyroCMS to dynamically generate images (where needed) and to crop and resize images used in many PyroCMS modules and add-ons. The image-based support offered by this library is invaluable. cURL As described on the cURL project website, cURL is "a command line tool for transferring data with URL syntax" using a large number of methods, including HTTP(S) GET, POST, PUT, and so on. You can learn more about the project and how to use cURL on their website http://curl.haxx.se. If you've never used cURL with PHP, I recommend taking time to learn how to use it, especially if you are thinking about building a web-based API using PyroCMS. Most popular web hosting companies meet the basic server requirements for PyroCMS. Downloading PyroCMS Getting your hands on a copy of PyroCMS is very simple. You can download the system files from one of two locations, the PryoCMS project website and GitHub. To download PyroCMS from the project website, visit http://www.pyrocms.com and click on the green button labeled Get PyroCMS! This will take you to a download page that gives you the choice between downloading the Community version of PyroCMS and buying the Professional version. If you are new to PyroCMS, you can start with the Community version, currently at Version 2.2.3. The following screenshot shows the download screen: To download PyroCMS from GitHub, visit https://github.com/pyrocms/pyrocms and click on the button labeled Download ZIP to get the latest Community version of PyroCMS, as shown in the following screenshot: If you know how to use Git, you can also clone a fresh version of PyroCMS using the following command. A word of warning, cloning PyroCMS from GitHub will usually give you the latest, stable release of the system, but it could include changes not described in this article. Make sure you checkout a stable release from PyroCMS's repository. git clone https://github.com/pyrocms/pyrocms.git As a side-note, if you've never used Git, I recommend taking some time to get started using it. PyroCMS is an open source project hosted in a Git repository on Github, which means that the system is open to being improved by any developer looking to contribute to the well-being of the project. It is also very common for PyroCMS developers to host their own add-on projects on Github and other online Git repository services. Summary In this article, we have covered the pre-requisites for using PyroCMS, and also how to download PyroCMS. 
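To round off the cURL requirement mentioned above, here is a quick, hedged example of the kind of request a PyroCMS add-on talking to a web-based API might make from PHP; the URL is a placeholder, not a real endpoint.

<?php
// Minimal cURL GET request; the URL is a placeholder for illustration only.
$handle = curl_init('https://example.com/api/status');
curl_setopt($handle, CURLOPT_RETURNTRANSFER, true);  // return the body instead of printing it
curl_setopt($handle, CURLOPT_TIMEOUT, 10);           // fail rather than hang indefinitely

$body = curl_exec($handle);

if ($body === false) {
    echo 'Request failed: ' . curl_error($handle);
} else {
    echo $body;
}

curl_close($handle);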
Resources for Article:

Further resources on this subject:

Kentico CMS 5 Website Development: Managing Site Structure [Article]
Kentico CMS 5 Website Development: Workflow Management [Article]
Web CMS [Article]

The Dialog Widget

Packt
30 Oct 2013
14 min read
(For more resources related to this topic, see here.)

Wijmo additions to the dialog widget at a glance

By default, the dialog window includes the pin, toggle, minimize, maximize, and close buttons. Pinning the dialog to a location on the screen disables dragging by the title bar; the dialog can still be resized. Maximizing the dialog makes it take up the area inside the browser window. Toggling it expands or collapses it so that the dialog contents are shown or hidden while the title bar remains visible. If these buttons cramp your style, they can be turned off with the captionButtons option. You can see how the dialog is presented in the browser in the following screenshot:

Wijmo features an additional API, compared to jQuery UI, for changing the behavior of the dialog. The new API is mostly for the buttons in the title bar and for managing window stacking. Window stacking determines which windows are drawn on top of other ones; clicking on a dialog raises it above the other dialogs and changes their stacking settings. The following table shows the options, events, and methods added in Wijmo:

Options: captionButtons, contentUrl, disabled, expandingAnimation, stack, zIndex
Events: blur, buttonCreating, stateChanged
Methods: disable, enable, getState, maximize, minimize, pin, refresh, reset, restore, toggle, widget

The contentUrl option allows you to specify a URL to load within the window. The expandingAnimation option is applied when the dialog is toggled from a collapsed state to an expanded state. The stack and zIndex options determine whether the dialog sits on top of other dialogs. Similar to the blur event on input elements, the blur event for the dialog is fired when the dialog loses focus. The buttonCreating event is raised while the buttons are being created and can be used to modify the buttons on the title bar. The disable method disables the event handlers for the dialog: it prevents the default button actions and disables dragging and resizing. The widget method returns the dialog HTML element.

The maximize, minimize, pin, refresh, reset, restore, and toggle methods are also available as buttons on the title bar; the best way to see what they do is to play around with them. In addition, the getState method is used to find the dialog state and returns either maximized, minimized, or normal. Similarly, the stateChanged event is fired when the state of the dialog changes.

Methods are called by passing the method name as a parameter to the wijdialog method. To disable button interactions, pass the string disable:

$("#dialog").wijdialog("disable");

Many of the methods come in pairs, and enable and disable are one of them: calling enable enables the buttons again. Another pair is restore/minimize: minimize hides the dialog in a tray at the bottom left of the screen, and restore sets the dialog back to its normal size and displays it again.
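To see how these pieces fit together, here is a minimal sketch of the state-related API; the #minimize and #restore buttons and the logging inside the stateChanged handler are illustrative assumptions rather than part of the article's markup:

$(document).ready(function () {
    $('#dialog').wijdialog({
        stateChanged: function () {
            // getState returns 'maximized', 'minimized', or 'normal'
            console.log('Dialog state: ' + $('#dialog').wijdialog('getState'));
        }
    });

    // Methods are invoked by passing the method name as a string
    $('#minimize').click(function () { $('#dialog').wijdialog('minimize'); });
    $('#restore').click(function () { $('#dialog').wijdialog('restore'); });
});

Wiring up page-level buttons like this is only necessary when you want to drive the dialog from elsewhere in the UI; the same actions are already available from the title bar.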
The most important option for usability is the captionButtons option. Although users are likely to be familiar with the minimize, resize, and close buttons, the pin and toggle buttons are not featured in common desktop environments. Therefore, you will want to choose which buttons are visible depending on how the dialog box is used in your project. To turn off a button on the title bar, set its visible option to false. A default jQuery UI dialog window with only the close button can be created with:

$("#dialog").wijdialog({
    captionButtons: {
        pin: { visible: false },
        refresh: { visible: false },
        toggle: { visible: false },
        minimize: { visible: false },
        maximize: { visible: false }
    }
});

The other options for each button are click, iconClassOff, and iconClassOn. The click option specifies an event handler for the button. Nevertheless, the buttons come with default actions, and you will want to use different icons for custom actions; that's where iconClass comes in. iconClassOn defines the CSS class for the button when it is loaded, and iconClassOff is the class for the button icon after clicking. For a list of available jQuery UI icons and their classes, see http://jquery-ui.googlecode.com/svn/tags/1.6rc5/tests/static/icons.html. Our next example uses ui-icon-zoomin, ui-icon-zoomout, and ui-icon-lightbulb. They can be found by toggling the text for the icons on that web page, as shown in the preceding screenshot.

Adding custom buttons

jQuery UI's dialog API lacks an option for configuring the buttons shown on the title bar. Wijmo not only comes with useful default buttons, but also lets you override them easily.

<!DOCTYPE HTML>
<html>
<head>
...
<style>
    .plus { font-size: 150%; }
</style>
<script id="scriptInit" type="text/javascript">
    $(document).ready(function () {
        $('#dialog').wijdialog({
            autoOpen: true,
            captionButtons: {
                pin: { visible: false },
                refresh: { visible: false },
                toggle: {
                    visible: true,
                    click: function () { $('#dialog').toggleClass('plus') },
                    iconClassOn: 'ui-icon-zoomin',
                    iconClassOff: 'ui-icon-zoomout'
                },
                minimize: { visible: false },
                maximize: {
                    visible: true,
                    click: function () { alert('To enlarge text, click the zoom icon.') },
                    iconClassOn: 'ui-icon-lightbulb'
                },
                close: { visible: true, click: self.close, iconClassOn: 'ui-icon-close' }
            }
        });
    });
</script>
</head>
<body>
<div id="dialog" title="Basic dialog">
    <p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate</p>
</div>
</body>
</html>

We create the dialog window passing in the captionButtons option. The pin, refresh, and minimize buttons have visible set to false so that the title bar is initialized without them. The final output looks as shown in the following screenshot:

In addition, the toggle and maximize buttons are modified and given custom behaviors. The toggle button now toggles the font size of the text by applying or removing a CSS class. Its default icon, set with iconClassOn, indicates that clicking on it will zoom in on the text; once clicked, the icon changes to a zoom-out icon. Likewise, the behavior and appearance of the maximize button have been changed: where the maximize icon was previously displayed in the title bar, there is now a lightbulb icon that shows a tip. Although this method of adding new buttons to the title bar seems clumsy, it is the only option that Wijmo currently offers.

Adding buttons in the content area is much simpler. The buttons option specifies the buttons to be displayed in the dialog window content area, below the title bar.
For example, to display a simple confirmation button:

$('#dialog').wijdialog({
    buttons: {
        ok: function () {
            $(this).wijdialog('close');
        }
    }
});

The text displayed on the button is ok, and clicking on the button hides the dialog. Calling $('#dialog').wijdialog('open') will show the dialog again.

Configuring the dialog widget's appearance

Wijmo offers several options that change the dialog's appearance, including title, height, width, and position. The title of the dialog can be changed either by setting the title attribute of the dialog's div element, or by using the title option. To change the dialog's theme, you can use CSS styling on the wijmo-wijdialog and wijmo-wijdialog-captionbutton classes:

<!DOCTYPE HTML>
<html>
<head>
...
<style>
    .wijmo-wijdialog {
        /*rounded corners*/
        -webkit-border-radius: 12px;
        border-radius: 12px;
        background-clip: padding-box;
        /*shadow behind dialog window*/
        -moz-box-shadow: 3px 3px 5px 6px #ccc;
        -webkit-box-shadow: 3px 3px 5px 6px #ccc;
        box-shadow: 3px 3px 5px 6px #ccc;
        /*fade contents from dark gray to gray*/
        background-image: -webkit-gradient(linear, left top, left bottom, from(#444444), to(#999999));
        background-image: -webkit-linear-gradient(top, #444444, #999999);
        background-image: -moz-linear-gradient(top, #444444, #999999);
        background-image: -o-linear-gradient(top, #444444, #999999);
        background-image: linear-gradient(to bottom, #444444, #999999);
        background-color: transparent;
        text-shadow: 1px 1px 3px #888;
    }
</style>
<script id="scriptInit" type="text/javascript">
    $(document).ready(function () {
        $('#dialog').wijdialog({ width: 350 });
    });
</script>
</head>
<body>
<div id="dialog" title="Subtle gradients">
    <p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate</p>
</div>
</body>
</html>

Here we add rounded corners, a box shadow, and a text shadow to the dialog box. This is done with the .wijmo-wijdialog class. Since many CSS3 properties have different names on different browsers, the browser-specific properties are used; for example, -webkit-box-shadow is necessary on WebKit-based browsers. The dialog width is set to 350 px at initialization so that the title text and buttons all fit on one line.

Loading external content

Wijmo makes it easy to load content in an iFrame. Simply pass a URL with the contentUrl option:

$(document).ready(function () {
    $("#dialog").wijdialog({
        captionButtons: {
            pin: { visible: false },
            refresh: { visible: true },
            toggle: { visible: false },
            minimize: { visible: false },
            maximize: { visible: true },
            close: { visible: false }
        },
        contentUrl: "http://wijmo.com/demo/themes/"
    });
});

This will load the Wijmo theme explorer in a dialog window with refresh and maximize/restore buttons. This output can be seen in the following screenshot:

The refresh button reloads the content in the iFrame, which is useful for dynamic content. The maximize button resizes the dialog window.
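Because refresh is also exposed as a method, externally loaded content can be reloaded from code as well. The following is a minimal sketch; the five-second interval is an illustrative assumption, and the URL is simply the theme explorer used above:

$(document).ready(function () {
    $('#dialog').wijdialog({ contentUrl: 'http://wijmo.com/demo/themes/' });

    // Reload the framed content periodically instead of waiting for the user
    // to click the refresh button in the title bar
    setInterval(function () {
        $('#dialog').wijdialog('refresh');
    }, 5000);
});

This keeps the framed page current without any interaction from the user.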
Form Components

Wijmo form decorator widgets for radio button, checkbox, dropdown, and textbox elements give forms a consistent visual style across all platforms. There are separate libraries for decorating the dropdown and the other form elements, but Wijmo gives them a consistent theme. jQuery UI lacks form decorators, leaving the styling of form components to the designer. Using Wijmo form components saves time during development and presents a consistent interface across all browsers.

Checkbox

The checkbox widget is an excellent example of the style enhancements that Wijmo provides over default form controls. The checkbox is used when multiple choices are allowed. The following screenshot shows the different checkbox states:

Wijmo adds rounded corners, gradients, and hover highlighting to the checkbox, and the increased size makes it more usable. Wijmo checkboxes can be initialized as checked, as shown in the following code:

<!DOCTYPE HTML>
<html>
<head>
...
<script id="scriptInit" type="text/javascript">
    $(document).ready(function () {
        $("#checkbox3").wijcheckbox({ checked: true });
        $(":input[type='checkbox']:not(:checked)").wijcheckbox();
    });
</script>
<style>
    div { display: block; margin-top: 2em; }
</style>
</head>
<body>
<div><input type='checkbox' id='checkbox1' /><label for='checkbox1'>Unchecked</label></div>
<div><input type='checkbox' id='checkbox2' /><label for='checkbox2'>Hover</label></div>
<div><input type='checkbox' id='checkbox3' /><label for='checkbox3'>Checked</label></div>
</body>
</html>

In this instance, checkbox3 is set to checked as it is initialized. You would not get the same result if one of the checkboxes were initialized twice; here, we avoid that by selecting only the checkboxes that are not yet checked after checkbox3 has been set.

Radio buttons

Radio buttons, in contrast to checkboxes, allow only one of several options to be selected. In addition, they are customized through the HTML markup rather than through a JavaScript API. To illustrate, the checked option is set with the checked attribute:

<input type="radio" checked />

jQuery UI offers a button widget for radio buttons, as shown in the following screenshot, which in my experience causes confusion because users think they can select multiple options:

The Wijmo radio buttons are closer in appearance to regular radio buttons, so users expect the standard behavior, as shown in the following screenshot:

Wijmo radio buttons are initialized by calling the wijradio method on the radio button elements:

<!DOCTYPE html>
<html>
<head>
...
<script id="scriptInit" type="text/javascript">
    $(document).ready(function () {
        $(":input[type='radio']").wijradio({
            changed: function (e, data) {
                if (data.checked) {
                    alert($(this).attr('id') + ' is checked');
                }
            }
        });
    });
</script>
</head>
<body>
<div id="radio">
    <input type="radio" id="radio1" name="radio"/><label for="radio1">Choice 1</label>
    <input type="radio" id="radio2" name="radio" checked="checked"/><label for="radio2">Choice 2</label>
    <input type="radio" id="radio3" name="radio"/><label for="radio3">Choice 3</label>
</div>
</body>
</html>

In this example, the changed option, which is also available for checkboxes, is set to a handler. The handler is passed a jQuery.Event object as the first argument; this is just a JavaScript event object normalized for consistency across browsers. The second argument exposes the state of the widget. For both checkboxes and radio buttons, it is an object with only the checked property.
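The same pattern applies to checkboxes. Here is a minimal sketch that reuses the checkbox markup from the earlier example; the console logging is an illustrative assumption:

$(":input[type='checkbox']").wijcheckbox({
    changed: function (e, data) {
        // data exposes the widget state; for checkboxes it has only the checked property
        console.log($(this).attr('id') + (data.checked ? ' is checked' : ' is unchecked'));
    }
});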
Dropdown

Styling a dropdown to be consistent across all browsers is notoriously difficult. Wijmo offers two options for styling the HTML select and option elements. When there are no option groups, the ComboBox is the better widget to use; for a dropdown with options nested under option groups, only the wijdropdown widget will work. As an example, consider a country selector categorized by continent:

<!DOCTYPE HTML>
<html>
<head>
...
<script id="scriptInit" type="text/javascript">
    $(document).ready(function () {
        $('select[name=country]').wijdropdown();
        $('#reset').button().click(function () {
            $('select[name=country]').wijdropdown('destroy');
        });
        $('#refresh').button().click(function () {
            $('select[name=country]').wijdropdown('refresh');
        });
    });
</script>
</head>
<body>
<button id="reset">Reset</button>
<button id="refresh">Refresh</button>
<select name="country" style="width:170px">
    <optgroup label="Africa">
        <option value="gam">Gambia</option>
        <option value="mad">Madagascar</option>
        <option value="nam">Namibia</option>
    </optgroup>
    <optgroup label="Europe">
        <option value="fra">France</option>
        <option value="rus">Russia</option>
    </optgroup>
    <optgroup label="North America">
        <option value="can">Canada</option>
        <option value="mex">Mexico</option>
        <option selected="selected" value="usa">United States</option>
    </optgroup>
</select>
</body>
</html>

The select element's width is set to 170 pixels so that, when the dropdown is initialized, both the dropdown menu and its items are 170 pixels wide. This allows the North America option category to be displayed on a single line, as shown in the following screenshot. Although the dropdown widget lacks a width option, it takes the select element's width when it is initialized.

To initialize the dropdown, call the wijdropdown method on the select element:

$('select[name=country]').wijdropdown();

The dropdown uses the blind animation to show the items when the menu is toggled. It also applies the same click animation as buttons to the slider and menu:

To reset the dropdown to a plain select box, the example adds a reset button that calls the destroy method. If you have JavaScript code that dynamically changes the styling of the dropdown, the refresh method applies the Wijmo styles again.

Summary

The Wijmo dialog widget is an extension of the jQuery UI dialog. In this article, the features unique to Wijmo's dialog widget were explored: how to add custom buttons, how to change the dialog's appearance, and how to load content from other URLs into the dialog. We also looked at Wijmo's form components. A checkbox is used when multiple items can be selected, and Wijmo's checkbox widget offers style enhancements over the default checkbox. Radio buttons are used when only one item is to be selected; while jQuery UI only supports button sets for radio buttons, Wijmo's radio buttons are much more intuitive. Wijmo's dropdown widget should only be used when there are nested or categorized <select> options; the ComboBox offers more features when the structure of the options is flat.

Resources for Article:

Further resources on this subject:

Wijmo Widgets [Article]
jQuery Animation: Tips and Tricks [Article]
Building a Custom Version of jQuery [Article]

Creating an image gallery

Packt
30 Oct 2013
5 min read
(For more resources related to this topic, see here.)

Getting ready

Before we get started, we need a handful of images to use for the gallery. Find four to five images and put them in the images folder.

How to do it...

Add the following links to the images to the index.html file:

<a class="fancybox" href="images/waterfall.png">Waterfall</a>
<a class="fancybox" href="images/frozen-lake.png">Frozen Lake</a>
<a class="fancybox" href="images/road-in-forest.png">Road in Forest</a>
<a class="fancybox" href="images/boston.png">Boston</a>

The anchor tags no longer have an ID, but a class. It is important that they all have the same class so that Fancybox knows about them.

Change our call to the Fancybox plugin in the scripts.js file to use the class that all of the links share instead of the show-fancybox ID:

$(function() {
    // Using the fancybox class instead of the show-fancybox ID
    $('.fancybox').fancybox();
});

Fancybox will now work on all of the images, but they will not yet be part of the same gallery. To make the images part of a gallery, we use the rel attribute of the anchor tags. Add rel="gallery" to all of the anchor tags, as follows:

<a class="fancybox" rel="gallery" href="images/waterfall.png">Waterfall</a>
<a class="fancybox" rel="gallery" href="images/frozen-lake.png">Frozen Lake</a>
<a class="fancybox" rel="gallery" href="images/road-in-forest.png">Road in Forest</a>
<a class="fancybox" rel="gallery" href="images/boston.png">Boston</a>

Now that we have added rel="gallery" to each of our anchor tags, you should see left and right arrows when you hover over the left-hand or right-hand side of Fancybox. These arrows allow you to navigate between images, as shown in the following screenshot:

How it works...

Fancybox determines that an image is part of a gallery using the rel attribute of the anchor tags. The order of the images is based on the order of the anchor tags on the page, so the slideshow order matches a gallery of thumbnails without any additional work on our end.

We changed the ID of our single image to a class for the gallery because we wanted to call Fancybox on all of the links instead of just one. If we wanted to add more image links to the page, it would just be a matter of adding more anchor tags with the proper href values and the same class.

There's more...

So, what else can we do with the gallery functionality of Fancybox? Let's take a look at some of the other things we could do with the gallery we currently have.

Captions and thumbnails

All of the functionality we discussed for single images applies to galleries as well. If we wanted to add a thumbnail, it would just be a matter of adding an img tag inside the anchor tag instead of the text. If we wanted to add a caption, we can do so by adding the title attribute to our anchor tags.
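For instance, the captions can be filled in from the existing link text just before the plugin is initialized. The following is a minimal sketch that assumes the gallery markup from the steps above; deriving each caption from the link text is an assumption made for illustration, and hard-coding the title attributes in the HTML works just as well:

$(function() {
    // Copy each link's text into its title attribute so Fancybox shows it as a caption
    $('.fancybox').attr('title', function () {
        return $(this).text();
    });
    $('.fancybox').fancybox();
});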
Showing slideshow from one link

Let's say that we want just one link to open our gallery slideshow. This can easily be achieved by hiding the other links via CSS, with the following steps:

We start by adding this style tag to the <head> tag, just under the <script> tag for our scripts.js file:

<style type="text/css">
    .hidden { display: none; }
</style>

Now, we update the HTML file so that all but one of our anchor tags have the hidden class:

<a class="fancybox" rel="gallery" href="images/waterfall.png">Image Gallery</a>
<div class="hidden">
    <a class="fancybox" rel="gallery" href="images/frozen-lake.png">Frozen Lake</a>
    <a class="fancybox" rel="gallery" href="images/road-in-forest.png">Road in Forest</a>
    <a class="fancybox" rel="gallery" href="images/boston.png">Boston</a>
</div>

Next, when we reload the page, we will see only one link. When you click on the link, you should still be able to navigate through the gallery just as if all of the links were on the page.

Summary

In this article we saw that Fancybox provides strong image-handling functionality and learned how Fancybox creates an image gallery. We can also display images as thumbnails and show the gallery as a slideshow from just one link.

Resources for Article:

Further resources on this subject:

Getting started with your first jQuery plugin [Article]
OpenCart Themes: Styling Effects of jQuery Plugins [Article]
The Basics of WordPress and jQuery Plugin [Article]