
How-To Tutorials - Web Development

1797 Articles

Supporting hypervisors by OpenNebula

Packt
25 May 2012
7 min read
(For more resources on Open Source, see here.)

A host is a server that can run virtual machines using a special software component called a hypervisor, which is managed by the OpenNebula frontend. The hosts do not all need to have a homogeneous configuration; it is possible to use different hypervisors on different GNU/Linux distributions within a single OpenNebula cluster. Using different hypervisors in your infrastructure is not just a technical exercise; it gives you greater flexibility and reliability. A few examples where having multiple hypervisors would prove beneficial are as follows:

- A bug in the current release of hypervisor A does not permit the installation of a virtual machine with a particular legacy OS (say, for example, Windows 2000 Service Pack 4), but you can run it on hypervisor B without any problem.
- You have a production infrastructure running a closed source, free-to-use hypervisor, and during the next year the software house developing that hypervisor will request a license payment or declare bankruptcy due to an economic crisis.

The current version of OpenNebula gives you great flexibility regarding hypervisor usage, since it natively supports KVM/Xen (which are open source) and VMware ESXi. In the future it will probably support both VirtualBox (Oracle) and Hyper-V (Microsoft).

Configuring hosts

The first thing to do before starting with the installation of a particular hypervisor on a host is to perform some general configuration steps. They are as follows:

- Create a dedicated oneadmin UNIX account (which should have sudo privileges for executing particular tasks, for example, iptables/ebtables and any network hooks you have configured).
- Make sure the frontend and hosts' hostnames can be resolved by a local DNS or a shared /etc/hosts file.
- Allow the oneadmin account on the frontend to connect remotely through SSH to the oneadmin account on the hosts without a password.
- Configure the shared network bridge that will be used by VMs to reach the physical network.

The oneadmin account and passwordless login

Every host should have a oneadmin UNIX account that will be used by the OpenNebula frontend to connect and execute commands. If you did not create it during the operating system installation, create a oneadmin user on the host with the following command:

youruser@host1 $ sudo adduser oneadmin

You can configure any password you like (even a blank one) because we are going to set up a passwordless login from the frontend:

oneadmin@front-end $ ssh-copy-id oneadmin@host1

Now if you connect from the oneadmin account on the frontend to the oneadmin account on the host, you should get a shell prompt without entering any password:

oneadmin@front-end $ ssh oneadmin@host1

Uniformity of the oneadmin UID number

Later, we will learn about the possible storage solutions available with OpenNebula. However, keep in mind that if we are going to set up shared storage, we need to make sure that the UID number of the oneadmin user is homogeneous between the frontend and every other host. In other words, check with the id command that the oneadmin UID is the same both on the frontend and on the hosts.
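As a quick sanity check, you can compare the UID reported on each machine. The following is a minimal sketch, assuming passwordless SSH is already working and using the host names that appear later in this article (kvm01, xen01, esx01; adjust them to your environment):

#!/bin/sh
# Compare the oneadmin UID on the frontend with each host.
FRONTEND_UID=$(id -u oneadmin)
for host in kvm01 xen01 esx01; do
    remote_uid=$(ssh oneadmin@"$host" id -u oneadmin)
    if [ "$remote_uid" != "$FRONTEND_UID" ]; then
        echo "UID mismatch on $host: $remote_uid (frontend has $FRONTEND_UID)"
    fi
done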
Verifying the SSH host fingerprints

The first time you connect to a remote SSH server from a particular host, the SSH client will show you the fingerprint of the remote server key and ask for your permission to continue with the following message:

The authenticity of host host01 (192.168.254.2) can't be established.
RSA key fingerprint is 5a:65:0f:6f:21:bb:fd:6a:4a:68:cd:72:58:5c:fb:9f.
Are you sure you want to continue connecting (yes/no)?

Knowing the fingerprint of the remote SSH key and saving it to the local SSH client fingerprint cache (kept in ~/.ssh/known_hosts) should be good enough to prevent man-in-the-middle attacks. For this reason, you need to connect once from the oneadmin user on the frontend to every host, so that the fingerprints of the remote hosts are saved in oneadmin's known_hosts file. Not doing this will prevent OpenNebula from connecting to the remote hosts.

In large environments, this requirement may slow down the configuration of new hosts. However, it is possible to bypass this check by instructing the SSH client used by OpenNebula not to verify the remote SSH key. To do so, add the following to the ~/.ssh/config file of the oneadmin user:

Host *
    StrictHostKeyChecking no

If you do not have a local DNS (or you cannot/do not want to set one up), you can manually manage the /etc/hosts file on every host, using the following IP addresses:

127.0.0.1 localhost
192.168.66.90 on-front
192.168.66.97 kvm01
192.168.66.98 xen01
192.168.66.99 esx01

Now you should be able to remotely connect from one node to another using the hostname with the following command:

$ ssh oneadmin@kvm01

Configuring a simple DNS with dnsmasq

If you do not have a local DNS and manually managing the plain hosts file on every host does not excite you, you can try to install and configure dnsmasq. It is a lightweight, easy-to-configure DNS forwarder (optionally, DHCP and TFTP can be provided with it) that serves a small-scale network well. The OpenNebula frontend may be a good place to install it. For an Ubuntu/Debian installation, use the following command:

$ sudo apt-get install dnsmasq

The default configuration should be fine. You just need to make sure that /etc/resolv.conf looks similar to the following:

# dnsmasq
nameserver 127.0.0.1
# another local DNS
nameserver 192.168.0.1
# ISP or public DNS
nameserver 208.67.220.220
nameserver 208.67.222.222

The /etc/hosts configuration will look similar to the following:

127.0.0.1 localhost
192.168.66.90 on-front
192.168.66.97 kvm01
192.168.66.98 xen01
192.168.66.99 esx01

Configure any other hostname in the hosts file on the frontend running dnsmasq. On the other hosts, configure /etc/resolv.conf as follows:

# IP where dnsmasq is installed
nameserver 192.168.66.90

Now you should be able to remotely connect from one node to another using the plain hostname with the following command:

$ ssh oneadmin@kvm01

When you add new hosts, simply add them to /etc/hosts on the frontend and they will automatically resolve on every other host, thanks to dnsmasq.

Configuring sudo

To give administrative privileges to the oneadmin account on the hosts, add it to the sudo or admin group, depending on your /etc/sudoers configuration:

# /etc/sudoers
Defaults env_reset
root ALL=(ALL) ALL
%sudo ALL=NOPASSWD: ALL

With this simple sudo configuration, every user in the sudo group can execute any command with root privileges, without being asked for the user password before each command. Now add the oneadmin user to the sudo group with the following command:

$ sudo adduser oneadmin sudo
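To confirm from the frontend that both passwordless SSH and passwordless sudo are in place on a host, a quick check such as the following sketch can help (the hostname kvm01 is just an example):

#!/bin/sh
# Run a harmless privileged command non-interactively; with -n, sudo
# fails instead of prompting if a password would still be required.
if ssh oneadmin@kvm01 sudo -n true; then
    echo "passwordless SSH and sudo OK on kvm01"
else
    echo "check the sudoers configuration or SSH keys on kvm01"
fi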
Giving full administrative privileges to the oneadmin account might be considered inappropriate by most security-focused people. However, I can assure you that if you are taking your first steps with OpenNebula, having full administrative privileges could save you some headaches. This is a suggested configuration, but it is not required to run OpenNebula.

Configuring network bridges

Every host should have its bridges configured with the same names. Check the following /etc/network/interfaces code as an example:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
iface eth0 inet manual

auto lan0
iface lan0 inet static
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    address 192.168.66.97
    netmask 255.255.255.0
    gateway 192.168.66.1
    dns-nameservers 192.168.66.1

You can have as many bridges as you need, bound or not bound to a physical network. By eliminating the bridge_ports parameter you get a purely virtual network for your VMs, but remember that without a physical network, VMs on different hosts cannot communicate with each other.
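After bringing the interface up, it may be worth verifying that the bridge exists and carries the expected address. A short sketch using the standard bridge-utils and iproute2 tools (the bridge name lan0 matches the example above):

$ sudo ifup lan0        # bring the bridge up, if it is not already
$ brctl show            # lan0 should be listed with eth0 as a bridged port
$ ip addr show lan0     # confirm 192.168.66.97/24 is assigned to the bridge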

Magento: Designs and Themes

Packt
19 May 2012
13 min read
(For more resources on e-Commerce, see here.)

The Magento theme structure

The same holds true for themes: you can specify the look and feel of your stores at the Global, Website, or Store level (themes can be applied to individual store views belonging to a store) by assigning a specific theme. In Magento, a group of related themes is referred to as a design package. Design packages contain files that control various functional elements that are common among the themes within the package. By default, Magento Community installs two design packages:

- Base package: A special package that contains all the default elements for a Magento installation (we will discuss this in more detail in a moment)
- Default package: This contains the layout elements of the default store (look and feel)

Themes within a design package contain the various elements that determine the look and feel of the site: layout files, templates, CSS, images, and JavaScript. Each design package must have at least one default theme, but can contain other theme variants. You can include any number of theme variants within a design package and use them, for example, for seasonal purposes (that is, holidays, back-to-school, and so on). The following image shows the relationship between design packages and themes:

A design package and theme can be specified at the Global, Website, or Store level. Most Magento users will use the same design package for a website and all descendant stores. Usually, related stores within a website business share very similar functional elements, as well as similar style features. This is not mandatory; you are free to specify a completely different design package and theme for each store view within your website hierarchy.

The theme structure

Magento divides themes into two groups of files: templating and skin. Templating files contain the HTML, PHTML, and PHP code that determines the functional aspects of the pages in your Magento website. Skin files are made up of the CSS, image, and JavaScript files that give your site its outward design. Ingeniously, Magento further separates these areas by putting them into different directories of your installation:

- Templating files are stored in the app/design directory, where the extra security of this section protects the functional parts of your site design
- Skin files are stored within the skin directory (at the root level of the installation), and can be granted a higher permission level, as these are the files that are delivered to a visitor's browser for rendering the page

Templating hierarchy

Frontend theme template files (the files used to produce your store's pages) are stored within three subdirectories:

- layout: This contains the XML files that define the various areas of a page. These files also contain meta and encoding information.
- template: This stores the PHTML files (HTML files that contain PHP code and are processed by the PHP server engine) used for constructing the visual structure of the page.
- locale: Files added within this directory provide additional language translations for site elements, such as labels and messages.

Magento has a distinct path for storing the templating files used for your website: app/design/frontend/[Design Package]/[Theme]/.
Skin hierarchy

The skin files for a given design package and theme are subdivided into the following:

- css: This stores the CSS stylesheets and, in some cases, related image files that are called by the CSS files (this is not an accepted convention, but I have seen some designers do it)
- images: This contains the JPG, PNG, and GIF files used in the display of your site
- js: This contains the JavaScript files that are specific to a theme (JavaScript files used for core functionality are kept in the js directory at the root level)

The path for the frontend skin files is: skin/frontend/[Design Package]/[Theme]/.

The concept of theme fallback

A very important and brilliant aspect of Magento is what is called the Magento theme fallback model. Basically, this concept means that when building a page, Magento first looks in the assigned theme for a store. If the theme is missing any necessary templating or skin files, Magento then looks in the required default theme within the assigned design package. If the file is not found there, Magento finally looks in the default theme of the Base design package. For this reason, the Base design package is never to be altered or removed; it is the failsafe for your site. The following flowchart outlines the process by which Magento finds the necessary files for fulfilling a page rendering request.

This model also gives designers some tremendous assistance. When a new theme is created, it only has to contain those elements that differ from what the Base package provides. For example, if all parts of a desired site design are similar to the Base theme, except for the graphic appearance of the site, a new theme can be created simply by adding new CSS and image files to the new theme (stored within the skin directory). Any new CSS files will need to be included in the local.xml file for your theme (we will discuss the local.xml file later in this article). If the design requires different layout structures, only the changed layout and template files need to be created; everything that remains the same need not be duplicated.

While previous versions of Magento were built with fallback mechanisms, only in the current versions has this become a true and complete fallback. In earlier versions, the fallback was to the default theme within a package, not to the Base design package. Therefore, each default theme within a package had to contain all the files of the Base package. If Magento base files were updated in subsequent software versions, these changes had to be redistributed manually to each additional design package within a Magento installation. With Magento CE 1.4 and above, upgrades to the Base package automatically enhance all design packages. If you are careful not to alter the Base design package, future upgrades to the core functionality of Magento will not break your installation. You will have access to the new improvements based on your custom design package or theme, making your installation virtually upgrade proof. For the same reason, never install a custom theme inside the Base design package.
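The lookup order can be pictured as a simple search through three directories. The following is a conceptual shell sketch of the fallback for a single skin file; it is not how Magento implements this internally (Magento resolves paths in PHP), and the package/theme names acme/mytheme are purely illustrative:

#!/bin/sh
# Fallback order: assigned theme -> package default -> base/default.
for theme in acme/mytheme acme/default base/default; do
    if [ -f "skin/frontend/$theme/css/styles.css" ]; then
        echo "Magento would serve skin/frontend/$theme/css/styles.css"
        break
    fi
done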
Default installation design packages and themes

In a new, clean Magento Community installation, you are provided with the following design packages and themes:

Depending on your needs, you could add additional custom design packages, or custom themes within the default design package:

- If you're going to install a group of related themes, you should probably create a new design package containing a default theme as your fallback theme
- On the other hand, if you're using only one or two themes based on the features of the default design package, you can install the themes within the default design package hierarchy

I like to make sure that whatever I customize can be undone, if necessary. It's difficult for me to make changes to the core, installed files; I prefer to work on duplicate copies, preserving the originals in case I need to revert back. After re-installing Magento for the umpteenth time because I had altered too many core files, I learned the hard way!

As Magento Community installs a basic variety of good theme variants from which to start, the first thing you should do before adding or altering theme components is to duplicate the default design package files, renaming the duplicate to an appropriate name, such as a description of your installation (for example, Acme or Sports). Any changes you make within this new design package will not alter the originally installed components, thereby allowing you to revert any or all of your themes to the originals. Your new theme hierarchy might now look like this:

When creating new packages, you also need to create new folders in the /skin directory to match your directory hierarchy in the /app/design directory. Likewise, if you decide to use one of the installed default themes as the basis for designing a new custom theme, duplicate and rename the theme to preserve the original as your fallback.

The new Blank theme

A fairly recent default installed theme is Blank. If your customization of your Magento stores is primarily one of colors and graphics, this is not a bad theme to use as a starting point. As the name implies, it has a pretty stark layout, as shown in the following screenshot. However, it does give you all the basic structures and components. Using images and CSS styles, you can go a long way toward creating a good-looking, functional website, as shown in the next screenshot for www.aviationlogs.com:

When duplicating any design package or theme, don't forget that each of them is defined by directories under both /app/design/frontend/ and /skin/frontend/.

Installing third-party themes

In most cases, Magento beginners will explore the hundreds of available Magento themes created by third-party designers. There are many free ones available, but most are sold by dedicated designers.

Shopping for themes

One of the good and bad aspects of Magento is third-party themes. The architecture of the Magento theme model gives knowledgeable theme designers tremendous ability to construct themes that are virtually upgrade proof, while possessing powerful enhancements. Unfortunately, not all designers have either upgraded older themes properly or created new themes fully honoring the fallback model. If the older fallback model is still used for current Magento versions, upgrades to the Base package could adversely affect your theme. Therefore, as you review third-party themes, take time to investigate how the designers construct their themes. Most provide some type of site demo.
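Incidentally, the duplication step recommended earlier in this section is mostly a matter of copying two directory trees before any third-party files are uploaded. A minimal sketch, assuming a Unix shell, the Magento root as the current directory, and an illustrative package name of acme:

# Copy the templating and skin trees of the default package to a new package
cp -a app/design/frontend/default app/design/frontend/acme
cp -a skin/frontend/default skin/frontend/acme

Afterwards, assign the acme package in the backend so that your changes never touch the originally installed files.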
As you learn more about using themes, you'll find it easier to analyze third-party themes. Apart from a few free themes offered through the Magento website, most themes require that you install the necessary files manually, by FTP or SFTP, to your server. Every third-party theme I have ever used has included some instructions on how to install the files. However, allow me to offer the following helpful guidelines:

- When using FTP/SFTP to upload theme files, use the merge function so that only additional files are added to each directory, instead of replacing entire directories. If you're not sure whether your FTP client provides merge capabilities, or not sure how to configure it for merge, you will need to open each directory in the theme and upload the individual files to the corresponding directories on your server.
- If you have set your CSS and JavaScript files to merge, under System | Configuration | Developer, you should turn merging off while installing and modifying your theme.
- After uploading themes or any component files (for example, templates, CSS, or images), clear the Magento caches under System | Cache Management in your backend.
- Disable your Magento cache while you install and configure themes. While not critical, it will allow you to see changes immediately, instead of having to constantly clear the Magento cache. You can disable the cache under System | Cache Management in the backend.
- If you wish to make changes to a theme's individual files, make a duplicate of each original file before making your changes. That way, if something goes awry, you can always re-install the duplicated original.
- If you have followed the earlier advice to duplicate the Default design package before customizing, instructions to install files within /app/design/frontend/default/ and /skin/frontend/default/ should be interpreted as /app/design/frontend/[your design package name]/ and /skin/frontend/[your design package name]/, respectively. As most new Magento users don't duplicate the Default design package, it's common for theme designers to instruct users to install new themes and files within the Default design package. (We know better now, don't we?)

Creating variants

Let's assume that we have created a new design package called outdoor_package. Within this design package, we duplicate the Blank theme and call it outdoor_theme. Our new design package file hierarchy, in both /app/design/ and /skin/frontend/, might resemble the following:

app/
  design/
    frontend/
      default/
        blank/
        modern/
        iphone/
      outdoor_package/
        outdoor_theme/
skin/
  frontend/
    default/
      blank/
      blue/
      french/
      german/
      modern/
      iphone/
    outdoor_package/
      outdoor_theme/

However, let's also take one more customization step here. Since Magento separates the template structure from the skin structure (the layout from the design, so to speak), we could create variations of a theme that are controlled simply by CSS and images, by creating more than one skin. We might want our English-language store in a blue color scheme, but our French-language store in a green color scheme. We could take the outdoor_theme skin directory and duplicate it, renaming both copies for the new colors:

app/
  design/
    frontend/
      default/
        blank/
        modern/
        iphone/
      outdoor_package/
        outdoor_theme/
skin/
  frontend/
    default/
      blank/
      blue/
      french/
      german/
      modern/
      iphone/
    outdoor_package/
      outdoor_blue/
      outdoor_green/

Before we continue, let's go over something which is especially relevant to what we just created.
For our outdoor theme, we created two skin variants: blue and green. However, what if the difference between the two is only one or two files? If we make changes to other files that would affect both color schemes, but which are otherwise the same for both, this would create more work to keep both color variations in sync, right? Remember, with the Magento fallback method, if your site calls for a file, Magento first looks in the assigned theme, then in the default theme within the same design package, and finally within the Base design package. Therefore, in this example, you could use the default skin, under /skin/frontend/outdoor_package/default/, to contain all the files common to both blue and green. Only the files that will forever remain different for each variant need to live in their respective skin directories.

Assigning themes

As mentioned earlier, you can assign design packages and themes at any level of the GWS hierarchy. As with any configuration, the choice depends on the level at which you wish to assign control. Global configurations affect the entire Magento installation. Website-level choices set the default for all subordinate store views, which can also have their own theme specifics, if desired.

Let's walk through the process of assigning a custom design package and theme. For the sake of this exercise, let's continue with our Outdoor theme, as described earlier. Refer to the following screenshot:

We're going to assign our Outdoor theme to an Outdoor website and its store views. Our first task is to assign the design package and theme to the website as the default for all subordinate store views:

1. Go to System | Configuration | General | Design in your Magento backend.
2. In the Current Configuration Scope drop-down menu, choose Outdoor Products.
3. As shown in the following screenshot, enter the name of your design package, template, layout, and skin. You will have to uncheck the boxes labeled Use Default beside each field you wish to use.
4. Click on the Save Config button.

The reason you enter default in the fields, as shown in the previous screenshot, is to provide the fallback protection I described earlier. Magento needs to know where to look for any files that may be missing from your theme files.

Microsoft Silverlight 5: Working with Services

Packt
23 Apr 2012
11 min read
(For more resources on Silverlight, see here.)

Introduction

Looking at the namespaces and classes in the Silverlight assemblies, it's easy to see that there are no ADO.NET-related classes available in Silverlight. Silverlight does not contain a DataReader, a DataSet, or any option to connect to a database directly. Thus, it's not possible to simply define a connection string for a database and let Silverlight applications connect with that database directly. The solution adds a layer on top of the database in the form of services. The services that talk directly to a database (or, more preferably, to a business and data access layer) can expose the data so that Silverlight can work with it. However, the data that is exposed in this way does not always have to come from a database. It can come from a third-party service, from reading a file, or be the result of an intensive calculation executed on the server.

Silverlight has a wide range of options to connect with services. This is important, as it's the main way of getting data into our applications. In this article, we'll look at the concepts of connecting with several types of services and external data. We'll start our journey by looking at how Silverlight connects and works with a regular service. The concepts that we use here recur for other types of service communication as well. One of these concepts is cross-domain service access; in other words, accessing a service on a domain that is different from the one where the Silverlight application is hosted. We'll see why Microsoft has implemented cross-domain restrictions in Silverlight and what we need to do to access externally hosted services.

Next, we'll talk about working with the Windows Azure Platform. More specifically, we'll talk about how we can get our Silverlight application to get data from a SQL Azure database, how to communicate with a service in the cloud, and even how to host the Silverlight application in the cloud, using a hosted service or serving it from Azure Storage. Finally, we'll finish this chapter by looking at socket communication. This type of communication is rare, and chances are that you'll never have to use it. However, if your application needs the fastest possible access to data, sockets may provide the answer.

Connecting and reading from a standardized service

Applies to Silverlight 3, 4 and 5

If we need data inside a Silverlight application, chances are that this data resides in a database or another data store on the server. Silverlight is a client-side technology, so when we need to connect to data sources, we need to rely on services. Silverlight has a broad spectrum of services to which it can connect. In this recipe, we'll look at the concepts of connecting with services, which are usually very similar for all types of services Silverlight can connect with. We'll start by creating an ASMX web service; in other words, a regular web service. We'll then connect to this service from the Silverlight application, invoke it, and read its response.

Getting ready

In this recipe, we'll build the application from scratch. However, the completed code for this recipe can be found in the Chapter07/SilverlightJackpot_Read_Completed folder in the code bundle that is available on the Packt website.

How to do it...

We'll start to explore the usage of services with Silverlight using the following scenario.
Imagine we are building a small game application in which a unique code belonging to a user needs to be checked to find out whether or not it is a winning code for some online lottery. The collection of winning codes is present on the server, perhaps in a database or an XML file. We'll create and invoke a service that will allow us to validate the user's code against the collection on the server. The following are the steps we need to follow:

1. We'll build this application from scratch. Our first step is creating a new Silverlight application called SilverlightJackpot. As always, let Visual Studio create a hosting website for the Silverlight client by selecting the Host the Silverlight application in a new Web site checkbox in the New Silverlight Application dialog box. This will ensure that we have a website created for us, in which we can create the service as well.

2. We need to start by creating a service. For the sake of simplicity, we'll create a basic ASMX web service. To do so, right-click on the project node in the SilverlightJackpot.Web project and select Add | New Item... in the menu. In the Add New Item dialog, select the Web Service item. We'll call the new service JackpotService. Visual Studio creates an ASMX file (JackpotService.asmx) and a code-behind file (JackpotService.asmx.cs).

3. To keep things simple, we'll mock the data retrieval by hardcoding the winning numbers. We'll do so by creating a new class called CodesRepository.cs in the web project. This class returns a list of winning codes. In a real-world scenario, this code would go out to a database and get the list of winning codes from there. The code in this class is very easy:

public class CodesRepository
{
    private List<string> winningCodes;

    public CodesRepository()
    {
        FillWinningCodes();
    }

    private void FillWinningCodes()
    {
        if (winningCodes == null)
        {
            winningCodes = new List<string>();
            winningCodes.Add("12345abc");
            winningCodes.Add("azertyse");
            winningCodes.Add("abcdefgh");
            winningCodes.Add("helloall");
            winningCodes.Add("ohnice11");
            winningCodes.Add("yesigot1");
            winningCodes.Add("superwin");
        }
    }

    public List<string> WinningCodes
    {
        get { return winningCodes; }
    }
}

4. At this point, we need only one method in our JackpotService. This method should accept the code sent from the Silverlight application, check it against the list of winning codes, and return whether or not the user is lucky enough to have a winning code. Only the methods that are marked with the WebMethod attribute are made available over the service. The following is the code for our service:

[WebService(Namespace = "http://tempuri.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[System.ComponentModel.ToolboxItem(false)]
public class JackpotService : System.Web.Services.WebService
{
    List<string> winningCodes;

    public JackpotService()
    {
        winningCodes = new CodesRepository().WinningCodes;
    }

    [WebMethod]
    public bool IsWinningCode(string code)
    {
        if (winningCodes.Contains(code))
            return true;
        return false;
    }
}

5. Build the solution at this point to ensure that our service will compile and can be connected to from the client side.

6. Now that the service is ready and waiting to be invoked, let's focus on the Silverlight application. To make the service known to our application, we need to add a reference to it. This is done by right-clicking on the SilverlightJackpot project node and selecting the Add Service Reference... item. In the dialog that appears, we have the option to enter the address of the service ourselves.
However, we can click on the Discover button, as the service lives in the same solution as the Silverlight application. Visual Studio will search the solution for the available services. If there are no errors, our freshly created service should show up in the list. Select it and set the Namespace: field to JackpotService, as shown in the following screenshot. Visual Studio will now create a proxy class.

7. The UI for the application is kept quite simple. An image of the UI can be seen a little further ahead. It contains a TextBox, where the user can enter a code, a Button that will invoke a check, and a TextBlock that will display the result. This can be seen in the following code:

<StackPanel>
    <TextBox x:Name="CodeTextBox" Width="100" Height="20"></TextBox>
    <Button x:Name="CheckForWinButton" Content="Check if I'm a winner!"
            Click="CheckForWinButton_Click"></Button>
    <TextBlock x:Name="ResultTextBlock"></TextBlock>
</StackPanel>

8. In the Click event handler, we'll create an instance of the proxy class that was created by Visual Studio, as shown in the following code:

private void CheckForWinButton_Click(object sender, RoutedEventArgs e)
{
    JackpotService.JackpotServiceSoapClient client =
        new SilverlightJackpot.JackpotService.JackpotServiceSoapClient();
}

9. All service communication in Silverlight happens asynchronously. Therefore, we need to provide a callback method that will be invoked when the service returns:

client.IsWinningCodeCompleted +=
    new EventHandler<SilverlightJackpot.JackpotService.IsWinningCodeCompletedEventArgs>
        (client_IsWinningCodeCompleted);

10. To actually invoke the service, we need to call the IsWinningCodeAsync method, as shown in the following line of code. This method makes the actual call to the service. We pass in the value that the user entered:

client.IsWinningCodeAsync(CodeTextBox.Text);

11. Finally, in the callback method, we can work with the result of the service via the Result property of the IsWinningCodeCompletedEventArgs instance. Based on the value, we display a message, as shown in the following code:

void client_IsWinningCodeCompleted(object sender,
    SilverlightJackpot.JackpotService.IsWinningCodeCompletedEventArgs e)
{
    bool result = e.Result;
    if (result)
        ResultTextBlock.Text = "You are a winner! Enter your data below and we will contact you!";
    else
        ResultTextBlock.Text = "You lose... Better luck next time!";
}

We now have a fully working Silverlight application that uses a service for its data needs. The following screenshot shows the result of entering a valid code:

How it works...

As it stands, the current version of Silverlight does not have support for using a local database. Silverlight thus needs to rely on external services for getting external data. Even if we had local database support, we would still need to use services in many scenarios. The sample used in this recipe is a good example of data that needs to reside in a secure location (meaning on the server). In any case, we should never store the winning codes in a local database that would be downloaded to the client side.

Silverlight has the necessary plumbing on board to connect with the most common types of services. Services such as ASMX, WCF, REST, RSS, and so on don't pose a problem for Silverlight. While the implementation of connecting with different types of services differs, the concepts are similar. In this recipe, we used a plain old web service. Only the methods that are attributed with the WebMethodAttribute are made available over the service.
This means that even if we create a public method on the service, it won't be available to clients if it's not marked as a WebMethod. In this case, we only create a single method called IsWinningCode, which retrieves a list of winning codes from a class called CodesRepository. In real-world applications, this data could be read from a database or an XML file. Thus, this service is the entry point to the data.

For Silverlight to work with the service, we need to add a reference to it. When doing so, Visual Studio creates a proxy class. Visual Studio can do this for us because the service exposes a Web Service Description Language (WSDL) file. This file contains an overview of the methods supported by the service. A proxy can be considered a copy of the server-side service class, but without the implementations. Instead, each copied method contains a call to the actual service method.

The proxy creation process carried out by Visual Studio is the same as adding a service reference in a regular .NET application. However, invoking the service is somewhat different. All communication with services in Silverlight is carried out asynchronously. If this wasn't the case, Silverlight would have to wait for the service to return its result. In the meantime, the UI thread would be blocked and no interaction with the rest of the application would be possible.

To support the asynchronous service call, the IsWinningCodeAsync method as well as the IsWinningCodeCompleted event are generated inside the proxy. The IsWinningCodeAsync method is used to make the actual call to the service. To get access to the results of a service call, we need to define a callback method. This is where the IsWinningCodeCompleted event comes in. Using this event, we define which method should be called when the service returns (in our case, the client_IsWinningCodeCompleted method). Inside this method, we have access to the results through the Result property, which is always of the same type as the return type of the service method.

See also

Apart from reading data, we also have to persist data. In the next recipe, Persisting data using a standardized service, we'll do exactly that.

Gradebook: An Introduction

Packt
13 Apr 2012
5 min read
Getting to the gradebook

All courses in Moodle have a grades area, also known as the gradebook. A number of activities within Moodle can be graded, and these grades will automatically be captured and shown in the gradebook. To get to the gradebook, view the Settings block on the course and then click on Grades. The following screenshot shows an example of the teachers' view of a simple gradebook with a number of different graded activities within it. Let's take a quick tour of what we can see!

The top row of the screenshot shows the column headings, which are each of the assessed activities within the Moodle course. These automatically appear in the grades area. In this case, the assessed activities are:

- Initial assessment
- U1: Task 1
- U1: Task 2
- U2: Test
- Evidence

On the left of the screenshot, you can see the students' names. Essentially, each name is the start of a row of information about that student. If we start with Emilie H, we can see that she received a score of 100.00 for her Initial assessment. Looking at Bayley W, we can see that his work for U1: Task 2 received a Distinction grade. Using the gradebook, we can see all the assessments and grades linked to each student from one screen.

Users with teacher, non-editing teacher, or manager roles will be able to see the grades for all students on the course. Students will only be able to see their own grades and feedback. The advantage of storing the grades within Moodle is that information can be easily shared between all teachers on the online course. Traditionally, if a course manager wanted to know how students were progressing, they would need to contact the course teacher(s) to gather this information. Now, they can log in to Moodle and view the live data (as long as they have teacher or manager rights to the course). There are also benefits to students, as they will see all their progress in one place and can start to manage their own learning by reviewing their progress to date, as shown in the following example student view:

This is Bayley W's grade report. Bayley can see each assessment on the left-hand side with his grade next to it. By default, the student grades report also shows the range of grades possible for the assessment (for example, the highest and lowest scores possible), but this can be switched off by the teacher in the Grades course settings. It also shows the equivalent percentage, as well as the written feedback given by the teacher.

Activities that work with the gradebook

There are a number of Moodle activities that can be graded and, therefore, work with the gradebook. The main ones are the following:

- Quiz
- Assignments: Four different core assignment types can be used to meet a range of needs within courses:
  - Advanced uploading of files
  - Online text
  - Upload a single file
  - Offline activity (The offline assignment is particularly useful for practical qualifications or presentations where the assessment is not submitted and is assessed offline by the teacher. The offline activity allows the detail of the assessment to be provided to students in Moodle, and the grade and feedback to be stored in the gradebook, even though no work has been electronically submitted.)
- Forum
- Lesson
- SCORM package
- Workshop
- Glossary

Encouraging the use of the gradebook

The offline activity is often a good way to start using the gradebook to record progress, as the assessment can take place in the normal way, but the grades can be recorded centrally to benefit teachers and students. Once confident with using the gradebook, teachers can then review their assessment processes to use other assignment types.

It is also possible to manually set up a "graded item" within the gradebook that is not linked to an activity, but allows a grade to be recorded.

Key features of the gradebook

The gradebook primarily shows the grade or score for each graded activity within the online course. This grade can be shown in a number of ways:

- Numeric grade: A numerical grade between 1 and 100. This is already set up and ready to use within all Moodle courses.
- Scale: A customized grading profile that can be letters, words, statements, or numbers (such as Pass, Merit, and Distinction).
- Letter grade: A grading profile that can be linked to percentages (such as 100 percent = A).

Organizing grades

With lots of activities that use grades within a course, the gradebook can present a lot of data on one page. Categories can be created to group activities, and the gradebook view can be customized for each user to show all or some categories on the screen. Think about a course that has 15 units, with three assessments in each unit. The gradebook will have 45 columns of grades, which is a lot of data! We can organize this information into categories to make it easier to use.

Setting up a BizTalk Server Environment

Packt
09 Apr 2012
18 min read
Gathering requirements by asking the right questions

Although this is not an exact recipe, asking questions to obtain requirements for your BizTalk environment is important. Having a clear view and understanding of the requirements enables you to deploy a BizTalk environment that meets the customer's expectations. What are the right questions you may ask yourself? There is quite a large area you need to cover, with questions around the following topics:

- Functional BizTalk workload(s)
- Non-functional requirements (high availability, scalability, and so on)
- Licensing (software)
- Hardware
- Virtualization
- Development, Test, Acceptance, and Production (DTAP) environment
- Tracking/tracing
- Hosting
- Security

Getting ready

Organize the sessions and/or workshop(s) to discuss the BizTalk architecture (environment), functionality, and non-functional requirements, where you do a series of interviews with the appropriate stakeholders. This way, you will be able to retrieve the necessary requirements and information for a BizTalk environment. You will need to focus on the business first and IT later. You will notice that each business will have a different set of requirements for the integration of data and processes. Some of these are listed as follows:

- The business is able to access information from anywhere, at any time
- Have the proper information to present to the proper people
- Have the necessary information available when needed
- Manage knowledge efficiently and be able to share it with the business
- Change the information when needed
- Automate business processes that are error-prone
- Automate business processes to reduce the processing time of orders, invoices, and so on

Regarding the business requirements, BizTalk will have certain workloads, and together with the business you determine whether you want BizTalk to aid in automating processes, exchanging information with partners, maintaining business rules, providing visibility of physical events, and/or integrating with different systems. One important factor to reckon with when bringing BizTalk into an organization is the risk associated with transitioning to its platform. This risk can be of a technical, operational, political, and financial nature. BizTalk solutions have to operate correctly, meet the business requirements, be accepted by stakeholders within the organization, and not be too expensive.

With IT, you focus more on the technical side of the BizTalk environment, with questions such as: "What messages, in size, format, and encoding, are sent to the BizTalk system, and what does it need to output?" You should consider security when information going to or coming from trading partners is confidential; encryption and decryption of data can come into play. Questions such as "What automated processes need to interact with internal and external systems?" or "How are you going to monitor messages that are going in and out?" matter as well. Support needs to be set up properly to keep BizTalk and its solutions healthy. Solutions need to be developed and tested, preferably using different environments such as test and acceptance. For that, you will need a deployment process agreed with IT. These are factors to reckon with, and they need to be addressed when interviewing or talking to IT stakeholders within the organization.

How to do it...

Categorize your stakeholders into two categories: business and IT. Create a communication plan and a list of questions related to the areas mentioned earlier.
With the list of questions, you can assign each question to the person you think can answer it. This way, you ask the right questions to the right people. The following table shows a sample of roles belonging to business and/or IT. You may identify more roles, depending on your situation:

Category | Role
Business | CEO, CIO, Security Officer, Business Analyst, Enterprise Architect, and Solution Architect
IT | IT Manager, Enterprise Architect, Solution Architect, System/Application Architect, System Analyst, Developer, System Engineer, and DBA

Having clarified which roles belong to business, IT, or both, you will then need a list of questions, each assigned to the appropriate role. You can find an example list of questions associated with particular roles in the following table:

Question | Role
Will BizTalk integrate with systems in the enterprise? Which consumers and host systems will it integrate with? | Enterprise Architect, Solution Architect
What are the applicable workloads? | Enterprise Architect
Is BizTalk going to be strategic for integration with internal/external systems? | CEO, CIO, Enterprise Architect, and Business Analyst
Number of messages a day/hour | Enterprise Architect
What are the candidate processes to automate with BizTalk? | Business Analyst, Solution Architect
What communication protocols are required? | Enterprise Architect, Solution Architect
Choice of Microsoft platform: Operating System, SQL Server database | Enterprise Architect, Security Officer, Solution Architect, System Engineer, and DBA
Encryption algorithm for data | Enterprise Architect, Security Officer, Solution Architect, and System Engineer
Is Secure Socket Layer required for communication? | Enterprise Architect, Security Officer, Solution Architect, and System Engineer
What kind of certificate store is there? | Enterprise Architect, Security Officer, Solution Architect, and System Engineer
Is the support for BizTalk going to be outsourced? | CEO, IT Manager

There's more...

The best approach to gathering the requirements is to view it as a project, or a part of a project. You can use a methodology such as PRINCE2.

PRINCE2

Projects in Controlled Environments (PRINCE) is a project management method. It covers the management, control, and organization of a project. PRINCE2 is the second major release of it. More information is available at http://www.prince2.com/.

Microsoft BizTalk Server website

The Microsoft BizTalk Server website provides a lot of information. In particular, the Product Information section provides detailed information on system requirements, the roadmap, and the FAQs; the latter provide details on pricing, licensing, and so on. Go to http://www.microsoft.com/biztalk/en/us/default.aspx.

Analyzing requirements and creating a design

Analyzing requirements and creating a design for the BizTalk landscape is the next step forward before planning and installing. With the gathered requirements, you can make decisions on how to design the BizTalk environment(s). If BizTalk is being used for the first time in an enterprise environment, capacity planning and server allocation are things to focus on. Once you have gathered the requirements and asked your questions, you will have a clear picture of where the platform will be hosted and whether it needs to be scaled up or out. If everything gets placed on one big server, it will introduce a serious single point of failure. You should try to avoid this scenario.
Therefore, separating BizTalk from the SQL Server is the first thing you will do in your design, each preferably on separate hardware. Depending on the availability requirements, you will probably cluster the SQL Server. Besides that, you can choose to scale out BizTalk into a multiserver group, because of availability requirements or because the expected load cannot be handled by one BizTalk instance. You can opt for installing BizTalk and SQL separately first and then scaling out after performing benchmark tests. You can scale vertically (scale up) by increasing the number of processors and the amount of memory each server uses, or you can scale horizontally (scale out) by adding more servers to your BizTalk Server configuration. Other options you can consider during your design are as follows:

- Having multiple MessageBox databases
- Separating the BizTalk databases

These options are best visualized by the scale-out poster from Microsoft (http://www.microsoft.com/download/en/details.aspx?id=13103).

Based on the requirements, you can consider isolating the BizTalk hosts to be able to manage BizTalk applications better and divide the load. By separating send, receive, and processing functionality into different hosts, you will benefit from better memory and thread management. If you expect a high load of large messages, or orchestrations that would consume large amounts of resources, you should isolate the send and/or receive adapters. Another consideration is to use a separate host to handle tracking, relieving the processing hosts of it.

So far, we have discussed scalability and the design decisions you could consider. There are some other design considerations for a BizTalk environment, such as security, tracking, fault tolerance, load balancing, choice of license, and support for virtualization (http://support.microsoft.com/kb/842301). BizTalk security can be enhanced by deploying Secure Socket Layer (SSL), IPSec tunneling, the Internet Security and Acceleration (ISA) Server, and the certificate services included with Windows Server 2008. With the BizTalk Server, you can apply access control, implement least rights to limit access, and provide integrated security through Enterprise Single Sign-On (http://msdn.microsoft.com/en-us/library/aa577802%28v=bts.70%29.aspx). Furthermore, you can protect and secure applications and data by authenticating the sender of a message and authorizing the receiver of a message.

Tracking messages in BizTalk can be useful for seeing what messages come in and out of the system, or for auditing, troubleshooting, or archiving purposes. Tracking of messages within BizTalk is a process by which parts of a message, such as the body, properties, and metadata, are stored in a database. These parts can be viewed by running queries from the Group Hub page in the BizTalk Server Administration console. It is important that you decide, and take up into the design, what needs to be tracked based on the requirements. There are some considerations to make regarding tracking. Tracking everything is not the smart thing to do: each time a message is touched in BizTalk, a copy is made and stored. Focus on scope by tracking only on specific ports, which is better for performance and keeps the database uncluttered. For the latter, it is important that the data purge and archive job is configured properly. As mentioned earlier, it is worth considering a dedicated host for tracking.
Fault tolerance and load balancing for BizTalk can be achieved by clustering, separating hosts as described earlier, implementing a Storage Area Network (SAN) to house the BizTalk Server databases, clustering the Enterprise Single Sign-On (SSO) Master Secret Server, and configuring the Internet Information Services (IIS) web server for isolated host instances and the BAM Portal web page to be highly available using Network Load Balancing (NLB) or other load balancing devices. The best way to implement this is to follow the steps in the Checklist: Providing High Availability with Fault Tolerance or Load Balancing document found on MSDN (http://msdn.microsoft.com/en-us/library/gg634479%28v=bts.70%29.aspx).

Another important topic regarding your BizTalk environment is cost; based on the requirements, you will choose the Branch, Standard, or Enterprise Edition. The editions differ not only in price, but also in functionality. The Standard Edition cannot support scenarios for high availability and fault tolerance, and is limited in CPUs and applications. The Branch Edition is even more limited and is designed for hub-and-spoke deployment scenarios, including Radio Frequency Identification (RFID). With any edition, you probably want to consider whether or not to virtualize. With virtualization in mind, licensing can be difficult. With the Standard Edition, you need a license for each virtual processor used by the virtual OS environment, regardless of whether the number of virtual processors is less than or greater than the number of physical processors on the server. With the Enterprise Edition, if you license all physical CPUs on the server, you can run any number of instances in the physical or virtual OS environment. In both cases, a virtual processor is assumed to have the same number of cores as the physical processor. Using fewer than the number of cores available in the physical processor still counts as a full virtual processor (http://www.microsoft.com/biztalk/en/us/editions.aspx).

Last, but not least, you need to consider how to support your BizTalk environment. It is worth considering System Center Operations Manager to monitor your BizTalk environment, using the management packs for the SQL Server, Windows Server, and BizTalk Server 2010. The management pack for the BizTalk Server 2010 provides two views: one for the enterprise IT administrator and one for the BizTalk Server administrator. The former will be monitoring the state and health of the various enterprise deployments, the machines hosting the SQL Server databases, the machines hosting the Enterprise SSO service, the host instance machines, IIS, and network services, being interested in the overall health of the "physical deployment" of a BizTalk Server setup. The BizTalk Server administrator will be monitoring the state and health of the various BizTalk Server application artifacts, such as orchestrations, send ports, and receive locations, being interested in monitoring and tracking the BizTalk Server's health. If necessary, he or she can carry out corrective measures to keep applications running as expected.

What you have read so far are considerations that are useful while analyzing requirements and preparing your design. You need to take a considerable amount of time for analyzing requirements to be able to create a solid design for your BizTalk environment. Microsoft provides a wealth of information on these topics (see the resources in the There's more... section).
It will be worth investing the time now, as you will lose a lot of time and money if your applications do not perform, or the system cripples under load while receiving and processing messages.

How to do it...

To analyze the requirements, you will need to categorize them under the topics mentioned in the Gathering requirements by asking the right questions recipe. You will then go over each requirement and decide how it can best be met. For each requirement, you will consider what the best option is and capture that in your design for the BizTalk setup. The BizTalk design will be a Word document, where you capture your design, considerations, and decisions.

How it works...

During the analysis of each requirement, you will capture your considerations and decisions in a Word document. Besides that, you will also describe the situation at the enterprise where the BizTalk environment will be deployed. You will find an example structure of a design document for a Development, Test, Acceptance, and Production (DTAP) environment, where you can place all the information, as follows:

- Introduction: Purpose; Current situation; IT landscape
- Design: Decisions; Considerations/Issues; Overview DTAP landscape; Scope; MS BizTalk and SQL Server editions; SQL Database Server
- ICT Policy: Operating systems; Windows Server; Backup; Antivirus; Windows update; Security settings
- Backup and Restore: Backup procedure; Restore procedure
- Development: Development environment; Development server; Developer machine
- Test: Test server
- Acceptance: SQL Server clustering; BizTalk group; Acceptance server
- Production: SQL Server clustering; BizTalk group (load balancing); Production server
- Management and security: Groups and accounts; SCOM; Single Sign-On
- Hosts: In-process hosts; Isolated hosts; Trusted and untrusted hosts; Hosts configuration DTAP
- Resources
- Appendix A: Redistributable CAB Files

Design decisions are the important part of your document. Here, you summarize all your design decisions and reference each of them to the corresponding chapter/section in the document where the decision is described; you also note issues around your design.

There's more...

Analyzing requirements is an important task that should not be taken lightly. Knowing architectural patterns, for instance, can help you choose the right technology and create an appropriate design. It may turn out that the BizTalk Server is not the right fit for the purpose. The following resources can aid you in analyzing the requirements:

- Architectural patterns: Packt has published a book called Applied Architecture Patterns on the Microsoft Platform that can aid you in analyzing the requirements and selecting the right technology.
- Wiki TechNet article: Refer to the Recommendations for Installing, Sizing, Deploying, and Maintaining a BizTalk Server Solution article at http://social.technet.microsoft.com/wiki/contents/articles/666.aspx.
- Microsoft BizTalk Server 2010 Operations Guide: Microsoft has created a BizTalk Server 2010 Operations Guide for anyone involved in the implementation and administration of a BizTalk solution, particularly IT professionals. You can find it online (http://msdn.microsoft.com/en-us/library/gg634499%28v=bts.70%29.aspx) or download it from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=4ef9eebb-b3f4-4534-b733-3eb2cb83d867&displaylang=en.
- Microsoft volume licensing brief: Licensing Microsoft Server Products in Virtual Environments is an interesting white paper from Microsoft. It describes licensing models under virtual environments for the server operating systems and server applications.
It can help you understand how to use Microsoft server products with virtualization technologies, such as Microsoft Hyper-V technology, Microsoft Virtual Server 2005 R2, or third-party virtualization solutions provided by VMware and Parallels. You can download it from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9ef7fc47-c531-40f1-a4e9-9859e593a1f1&displaylang=en.
Microsoft poster scale-out configurations: Microsoft has published a poster (normal or interactive) that can be downloaded, describing typical scenarios and commonly used options for scaling out BizTalk Server 2010's physical configurations. The poster clearly illustrates how to scale for high availability through load balancing and fault tolerance, and also shows how to configure for high-throughput scenarios. The normal poster can be obtained from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2b70cbfc-d158-45a6-8bbd-99782d6747dc. An interactive poster created in Silverlight can be obtained from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=7ef9ae69-9cc8-442a-8193-831a414dfc30.

Installing and using the BizTalk Best Practices Analyzer

The Best Practices Analyzer (BPA) examines a BizTalk Server 2010 deployment and generates a list of issues pertaining to best practice standards for BizTalk Server deployments. This tool is designed to assess the configuration of a BizTalk installation. The BPA performs configuration-level verification by gathering data from different information sources, such as Windows Management Instrumentation (WMI) classes, SQL Server databases, and registry entries, and presents a report to the user. Under the hood, it uses this data to evaluate the deployment configuration. It does not modify any system settings and is not a self-tuning tool. The tool is there to support you in achieving the most suitable configuration, and it reports issues, or possible issues, that could potentially harm the BizTalk environment.

Getting ready

The latest version of the BPA tool (V1.2) can be obtained from the Microsoft download center (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=93d432fe-1370-4b6d-aaa8-a0c43c30f5ab&displaylang=en) and must be installed on the BizTalk machine. To run the BPA, you need an account that has local administrative rights and is a member of both the BizTalk Server Administrators group and the SSO Administrators group. You may need to explicitly set some WMI permissions before you can use the BPA in a distributed environment, where SQL Server is not installed on the same computer as BizTalk Server. This is because when the BPA tries to connect to a remote computer running SQL Server, WMI may not have sufficient access to determine whether the SQL Server Agent is running, which may result in incorrect BPA evaluations.

How to do it...

To run the Best Practices Analyzer, perform one of the following:

Start the BizTalk Server Best Practices Analyzer from the Start menu: go to Start | Programs | Microsoft BizTalk Server Best Practices Analyzer.
Open Windows Explorer, navigate to the Best Practices Analyzer installation directory (by default, C:\Program Files\BizTalkBPA), and double-click on BizTalkBPA.exe.
Open a command prompt, change to the installation directory, and then enter BizTalkBPACmd.exe (a short scripted example appears at the end of this article).

The following steps need to be performed to do the analysis:

As soon as you start the BPA, it will check for updates.
You can decide whether or not to check for newer versions of the configuration.
If a newer version is found, you are able to download the latest updates.
The next step is to perform a scan by clicking on Start a scan.
After the scan starts, data will be gathered from the different information sources described earlier.
After the scan has completed, you can decide to view the report of the performed scan: click View a report of this Best Practices scan and the report will be generated.
After generation of the report, several tabs will appear:

Critical Issues
All Issues
Non-Default Settings
Recent Changes
Baseline
Informational Items

How it works...

When the BPA runs, it gathers information and evaluates it against best practice rules from the Microsoft product group and support. A report is presented to the user providing information on issues, non-default settings, changes, and so on. The report enables you to take action and apply the necessary changes to resolve identified issues. The BPA can then be run again to verify that the environment adheres to all the necessary best practices. This shows the value of the tool when assessing a deployed BizTalk environment before it becomes operational. Once BizTalk is operational, the MessageBox Viewer (MBV) has more value.

There's more...

The BPA is very useful and gives you information that helps you to tune BizTalk and keep it healthy. There are more tools that can help in sustaining a healthy environment overall. The Microsoft SQL Server 2008 R2 BPA is a diagnostic tool that provides information about a server and a Microsoft SQL Server 2008 or Microsoft SQL Server 2008 R2 instance installed on that server. It can be downloaded from http://www.microsoft.com/download/en/details.aspx?id=15289. There are a couple of analyzers provided by Microsoft that do a good job of helping you and the system engineer deliver a healthy, robust, and stable environment:

Best Practices Analyzer: http://technet.microsoft.com/en-us/library/dd759260.aspx
Microsoft Baseline Configuration Analyzer 2.0: http://www.microsoft.com/download/en/details.aspx?id=16475
Microsoft Baseline Security Analyzer 2.1.1: http://www.microsoft.com/download/en/details.aspx?id=19892
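For completeness, here is a minimal sketch of the command-line option mentioned in the How to do it section. It assumes only what the recipe already states: the default installation directory and the BizTalkBPACmd.exe entry point; no additional switches are assumed.

rem Run the BizTalk Best Practices Analyzer from the command line
rem (assumes the default install directory named in the Getting ready section)
cd /d "C:\Program Files\BizTalkBPA"
BizTalkBPACmd.exe

Scheduling a batch file like this with the Windows Task Scheduler is one way to re-assess a deployment periodically.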

Creating Views 3 Programmatically

Packt
21 Mar 2012
18 min read
(For more resources on Drupal, see here.)

Programming a view

Creating a view with a module is a convenient way to have a predefined view available with Drupal. As long as the module is installed and enabled, the view will be there to be used. If you have never created a module in Drupal, or even never written a line of Drupal code, you will still be able to create a simple view using this recipe.

Getting ready

Creating a module involves the creation of the following two files at a minimum:

An .info file that gives Drupal the information needed to add the module
A .module file that contains the PHP script

More complex modules will consist of more files, but those two are all we will need for now.

How to do it...

Carry out the following steps:

Create a new directory named _custom inside your contributed modules directory (so, probably sites/all/modules/_custom).
Create a subdirectory inside that directory; we will name it d7vr (Drupal 7 Views Recipes).
Open a new file with your editor and add the following lines:

; $Id:
name = Programmatic Views
description = Provides supplementary resources such as programmatic views
package = D7 Views Recipes
version = "7.x-1.0"
core = "7.x"
php = 5.2

Save the file as d7vrpv.info.
Open a new file with your editor and add the following code. Feel free to download this code from the author's web site rather than typing it, at http://theaccidentalcoder.com/content/drupal-7-views-cookbook

<?php
/**
 * Implements hook_views_api().
 */
function d7vrpv_views_api() {
  return array(
    'api' => 2,
    'path' => drupal_get_path('module', 'd7vrpv'),
  );
}

/**
 * Implements hook_views_default_views().
 */
function d7vrpv_views_default_views() {
  return d7vrpv_list_all_nodes();
}

/**
 * Begin view
 */
function d7vrpv_list_all_nodes() {
  /* View 'list_all_nodes' */
  $view = views_new_view();
  $view->name = 'list_all_nodes';
  $view->description = 'Provide a list of node titles, creation dates, owner and status';
  $view->tag = '';
  $view->view_php = '';
  $view->base_table = 'node';
  $view->is_cacheable = FALSE;
  $view->api_version = '3.0-alpha1';
  /* Edit this to TRUE to make a default view disabled initially */
  $view->disabled = FALSE;

  /* Display: Defaults */
  $handler = $view->new_display('default', 'Defaults', 'default');
  $handler->display->display_options['title'] = 'List All Nodes';
  $handler->display->display_options['access']['type'] = 'role';
  $handler->display->display_options['access']['role'] = array(
    '3' => '3',
  );
  $handler->display->display_options['cache']['type'] = 'none';
  $handler->display->display_options['exposed_form']['type'] = 'basic';
  $handler->display->display_options['pager']['type'] = 'full';
  $handler->display->display_options['pager']['options']['items_per_page'] = '15';
  $handler->display->display_options['pager']['options']['offset'] = '0';
  $handler->display->display_options['pager']['options']['id'] = '0';
  $handler->display->display_options['style_plugin'] = 'table';
  $handler->display->display_options['style_options']['columns'] = array(
    'title' => 'title',
    'type' => 'type',
    'created' => 'created',
    'name' => 'name',
    'status' => 'status',
  );
  $handler->display->display_options['style_options']['default'] = 'created';
  $handler->display->display_options['style_options']['info'] = array(
    'title' => array('sortable' => 1, 'align' => 'views-align-left', 'separator' => ''),
    'type' => array('sortable' => 1, 'align' => 'views-align-left', 'separator' => ''),
    'created' => array('sortable' => 1, 'align' => 'views-align-left', 'separator' => ''),
    'name' => array('sortable' => 1, 'align' => 'views-align-left', 'separator' => ''),
    'status' => array('sortable' => 1, 'align' => 'views-align-left', 'separator' => ''),
  );
  $handler->display->display_options['style_options']['override'] = 1;
  $handler->display->display_options['style_options']['sticky'] = 0;
  $handler->display->display_options['style_options']['order'] = 'desc';

  /* Header: Global: Text area */
  $handler->display->display_options['header']['area']['id'] = 'area';
  $handler->display->display_options['header']['area']['table'] = 'views';
  $handler->display->display_options['header']['area']['field'] = 'area';
  $handler->display->display_options['header']['area']['empty'] = TRUE;
  $handler->display->display_options['header']['area']['content'] = '<h2>Following is a list of all non-page nodes.</h2>';
  $handler->display->display_options['header']['area']['format'] = '3';

  /* Footer: Global: Text area */
  $handler->display->display_options['footer']['area']['id'] = 'area';
  $handler->display->display_options['footer']['area']['table'] = 'views';
  $handler->display->display_options['footer']['area']['field'] = 'area';
  $handler->display->display_options['footer']['area']['empty'] = TRUE;
  $handler->display->display_options['footer']['area']['content'] = '<small>This view is brought to you courtesy of the D7 Views Recipes module</small>';
  $handler->display->display_options['footer']['area']['format'] = '3';

  /* Field: Node: Title */
  $handler->display->display_options['fields']['title']['id'] = 'title';
  $handler->display->display_options['fields']['title']['table'] = 'node';
  $handler->display->display_options['fields']['title']['field'] = 'title';
  $handler->display->display_options['fields']['title']['alter']['alter_text'] = 0;
  $handler->display->display_options['fields']['title']['alter']['make_link'] = 0;
  $handler->display->display_options['fields']['title']['alter']['trim'] = 0;
  $handler->display->display_options['fields']['title']['alter']['word_boundary'] = 1;
  $handler->display->display_options['fields']['title']['alter']['ellipsis'] = 1;
  $handler->display->display_options['fields']['title']['alter']['strip_tags'] = 0;
  $handler->display->display_options['fields']['title']['alter']['html'] = 0;
  $handler->display->display_options['fields']['title']['hide_empty'] = 0;
  $handler->display->display_options['fields']['title']['empty_zero'] = 0;
  $handler->display->display_options['fields']['title']['link_to_node'] = 0;

  /* Field: Node: Type */
  $handler->display->display_options['fields']['type']['id'] = 'type';
  $handler->display->display_options['fields']['type']['table'] = 'node';
  $handler->display->display_options['fields']['type']['field'] = 'type';
  $handler->display->display_options['fields']['type']['alter']['alter_text'] = 0;
  $handler->display->display_options['fields']['type']['alter']['make_link'] = 0;
  $handler->display->display_options['fields']['type']['alter']['trim'] = 0;
  $handler->display->display_options['fields']['type']['alter']['word_boundary'] = 1;
  $handler->display->display_options['fields']['type']['alter']['ellipsis'] = 1;
  $handler->display->display_options['fields']['type']['alter']['strip_tags'] = 0;
  $handler->display->display_options['fields']['type']['alter']['html'] = 0;
  $handler->display->display_options['fields']['type']['hide_empty'] = 0;
  $handler->display->display_options['fields']['type']['empty_zero'] = 0;
  $handler->display->display_options['fields']['type']['link_to_node'] = 0;
  $handler->display->display_options['fields']['type']['machine_name'] = 0;

  /* Field: Node: Post date */
  $handler->display->display_options['fields']['created']['id'] = 'created';
  $handler->display->display_options['fields']['created']['table'] = 'node';
  $handler->display->display_options['fields']['created']['field'] = 'created';
  $handler->display->display_options['fields']['created']['alter']['alter_text'] = 0;
  $handler->display->display_options['fields']['created']['alter']['make_link'] = 0;
  $handler->display->display_options['fields']['created']['alter']['trim'] = 0;
  $handler->display->display_options['fields']['created']['alter']['word_boundary'] = 1;
  $handler->display->display_options['fields']['created']['alter']['ellipsis'] = 1;
  $handler->display->display_options['fields']['created']['alter']['strip_tags'] = 0;
  $handler->display->display_options['fields']['created']['alter']['html'] = 0;
  $handler->display->display_options['fields']['created']['hide_empty'] = 0;
  $handler->display->display_options['fields']['created']['empty_zero'] = 0;
  $handler->display->display_options['fields']['created']['date_format'] = 'custom';
  $handler->display->display_options['fields']['created']['custom_date_format'] = 'Y-m-d';

  /* Field: User: Name */
  $handler->display->display_options['fields']['name']['id'] = 'name';
  $handler->display->display_options['fields']['name']['table'] = 'users';
  $handler->display->display_options['fields']['name']['field'] = 'name';
  $handler->display->display_options['fields']['name']['label'] = 'Author';
  $handler->display->display_options['fields']['name']['alter']['alter_text'] = 0;
  $handler->display->display_options['fields']['name']['alter']['make_link'] = 0;
  $handler->display->display_options['fields']['name']['alter']['trim'] = 0;
  $handler->display->display_options['fields']['name']['alter']['word_boundary'] = 1;
  $handler->display->display_options['fields']['name']['alter']['ellipsis'] = 1;
  $handler->display->display_options['fields']['name']['alter']['strip_tags'] = 0;
  $handler->display->display_options['fields']['name']['alter']['html'] = 0;
  $handler->display->display_options['fields']['name']['hide_empty'] = 0;
  $handler->display->display_options['fields']['name']['empty_zero'] = 0;
  $handler->display->display_options['fields']['name']['link_to_user'] = 0;
  $handler->display->display_options['fields']['name']['overwrite_anonymous'] = 0;

  /* Field: Node: Published */
  $handler->display->display_options['fields']['status']['id'] = 'status';
  $handler->display->display_options['fields']['status']['table'] = 'node';
  $handler->display->display_options['fields']['status']['field'] = 'status';
  $handler->display->display_options['fields']['status']['alter']['alter_text'] = 0;
  $handler->display->display_options['fields']['status']['alter']['make_link'] = 0;
  $handler->display->display_options['fields']['status']['alter']['trim'] = 0;
  $handler->display->display_options['fields']['status']['alter']['word_boundary'] = 1;
  $handler->display->display_options['fields']['status']['alter']['ellipsis'] = 1;
  $handler->display->display_options['fields']['status']['alter']['strip_tags'] = 0;
  $handler->display->display_options['fields']['status']['alter']['html'] = 0;
  $handler->display->display_options['fields']['status']['hide_empty'] = 0;
  $handler->display->display_options['fields']['status']['empty_zero'] = 0;
  $handler->display->display_options['fields']['status']['type'] = 'true-false';
  $handler->display->display_options['fields']['status']['not'] = 0;

  /* Sort criterion: Node: Post date */
  $handler->display->display_options['sorts']['created']['id'] = 'created';
  $handler->display->display_options['sorts']['created']['table'] = 'node';
  $handler->display->display_options['sorts']['created']['field'] = 'created';
  $handler->display->display_options['sorts']['created']['order'] = 'DESC';

  /* Filter: Node: Type */
  $handler->display->display_options['filters']['type']['id'] = 'type';
  $handler->display->display_options['filters']['type']['table'] = 'node';
  $handler->display->display_options['filters']['type']['field'] = 'type';
  $handler->display->display_options['filters']['type']['operator'] = 'not in';
  $handler->display->display_options['filters']['type']['value'] = array(
    'page' => 'page',
  );

  /* Display: Page */
  $handler = $view->new_display('page', 'Page', 'page_1');
  $handler->display->display_options['path'] = 'list-all-nodes';

  $views[$view->name] = $view;
  return $views;
}
?>

Save the file as d7vrpv.module.
Navigate to the modules admin page at admin/modules.
Scroll down to the new module and activate it.
Navigate to the Views Admin page (admin/structure/views) to verify that the view appears in the list.
Finally, navigate to list-all-nodes to see the view.

How it works...

The module we have just created could have many other features associated with it, beyond simply a view; enabling the module will make those features and the view available, while disabling it will hide those same features and view. When compiling the list of installed modules, Drupal looks first in its own modules directory for .info files, and then in the site's modules directories. As can be deduced from the fact that we put our .info file in a second-level directory of sites/all/modules and it was found there, Drupal will traverse the modules directory tree looking for .info files.

We created a .info file that provided Drupal with the name and description of our module, its version, and the version of Drupal it is meant to work with. We saved the .info file as d7vrpv.info (Drupal 7 Views Recipes programmatic view); the name of the directory in which the module files appear (d7vr) has no bearing on the module itself.

The module file contains the code that will be executed, at least initially. Drupal does not "call" the module code in an active way. Instead, there are events that occur during Drupal's creation of a page, and modules can elect to register with Drupal to be notified of such events when they occur, so that the module can provide the code to be executed at that time; think of registering with a business to receive an e-mail in the event of a sale. Just as you are free to act or not while the sales go on regardless, so too Drupal continues whether or not the module decides to do something when given the chance.

Our module 'hooks' the views_api and views_default_views events in order to establish the fact that we have a view to offer. The latter hook tells the Views module which function in our code builds our view: d7vrpv_list_all_nodes(). The first thing that function does is create a view object by calling a function provided by the Views module. Having instantiated the new object, we then proceed to provide the information it needs, such as the name of the view, its description, and all the information that we would have selected through the Views UI had we used it.
As we are specifying the view options in code, we need to provide the information that is needed by each handler of the view functionality. The net effect of the code is that, once we have cleared the cache and enabled our module, Drupal includes it in its list of modules to poll during events. When we navigate to the Views Admin page, an event occurs in which any module wishing to include a view in the list on the admin screen does so, including ours. One of the things our module does is define a path for the page display of our view, which is then used to establish a callback. When that path, list-all-nodes, is requested, the function in our module is invoked, which in turn provides all the information necessary for our view to be rendered and presented.

There's more...

The details of the code provided to each handler are outside the scope of this book, but you don't really need to understand it all in order to use it. You can enable the Views Bulk Export module (it comes with Views), create a view using the Views UI in admin, and choose to Bulk Export it. Give the exporter the name of your new module and it will create a file and populate it with nearly all the code necessary for you.

Handling a view field

As you may have noticed in the preceding code that you typed or pasted, Views makes tremendous use of handlers. What is a handler? It is simply a script that performs a special task on one or more elements. Think of a house being built: the person who comes in to tape, mud, and sand the wallboard is a handler. In Views, one type of handler is the field handler, which handles any number of things, from providing settings options in the field configuration dialog, to facilitating the field being retrieved from the database if it is not part of the primary record, to rendering the data. We will create a field handler in this recipe that will add to the display of a zip code a string showing how many other nodes have the same zip code, and we will add some formatting options to it in the next recipe.

Getting ready

A handler lives inside a module, so we will create one:

Create a directory in your contributed modules path for this module.
Open a new text file in your editor and paste the following code into it:

; $Id:
name = Zip Code Handler
description = Provides a view handler to format a field as a zip code
package = D7 Views Recipes
; Handler
files[] = d7vrzch_handler_field_zip_code.inc
files[] = d7vrzch_views.inc
version = "7.x-1.0"
core = "7.x"
php = 5.2

Save the file as d7vrzch.info.
Create another text file and paste the following code into it:

<?php
/**
 * Implements hook_views_data_alter()
 */
function d7vrzch_field_views_data_alter(&$data, $field) {
  if (array_key_exists('field_data_field_zip_code', $data)) {
    $data['field_data_field_zip_code']['field_zip_code']['field']['handler'] = 'd7vrzch_handler_field_zip_code';
  }
}

Save the file as d7vrzch.views.inc.
Create another text file and paste the following into it:

<?php
/**
 * Implements hook_views_api().
 */
function d7vrzch_views_api() {
  return array(
    'api' => 3,
    'path' => drupal_get_path('module', 'd7vrzch'),
  );
}

Save the file as d7vrzch.module.

How to do it...

Carry out the following steps:

Create another text file and paste the following into it:

<?php
// $Id: $

/**
 * Field handler to format a zip code.
 *
 * @ingroup views_field_handlers
 */
class d7vrzch_handler_field_zip_code extends views_handler_field_field {
  function option_definition() {
    $options = parent::option_definition();
    $options['display_zip_totals'] = array(
      'contains' => array(
        'display_zip_totals' => array('default' => FALSE),
      )
    );
    return $options;
  }

  /**
   * Provide the configuration form for the field options.
   */
  function options_form(&$form, &$form_state) {
    parent::options_form($form, $form_state);
    $form['display_zip_totals'] = array(
      '#title' => t('Display Zip total'),
      '#description' => t('Appends in parentheses the number of nodes containing the same zip code'),
      '#type' => 'checkbox',
      '#default_value' => !empty($this->options['display_zip_totals']),
    );
  }

  function pre_render(&$values) {
    if (isset($this->view->build_info['summary']) || empty($values)) {
      return parent::pre_render($values);
    }
    static $entity_type_map;
    if (!empty($values)) {
      // Cache the entity type map for repeat usage.
      if (empty($entity_type_map)) {
        $entity_type_map = db_query('SELECT etid, type FROM {field_config_entity_type}')->fetchAllKeyed();
      }
      // Create an array mapping the Views values to their object types.
      $objects_by_type = array();
      foreach ($values as $key => $object) {
        // Derive the entity type. For some field types, etid might be empty.
        if (isset($object->{$this->aliases['etid']}) && isset($entity_type_map[$object->{$this->aliases['etid']}])) {
          $entity_type = $entity_type_map[$object->{$this->aliases['etid']}];
          $entity_id = $object->{$this->field_alias};
          $objects_by_type[$entity_type][$key] = $entity_id;
        }
      }
      // Load the objects.
      foreach ($objects_by_type as $entity_type => $oids) {
        $objects = entity_load($entity_type, $oids);
        foreach ($oids as $key => $entity_id) {
          $values[$key]->_field_cache[$this->field_alias] = array(
            'entity_type' => $entity_type,
            'object' => $objects[$entity_id],
          );
        }
      }
    }
  }

  function render($values) {
    $value = $values->_field_cache[$this->field_alias]['object']->{$this->definition['field_name']}['und'][0]['safe_value'];
    $newvalue = $value;
    if (!empty($this->options['display_zip_totals'])) {
      $result = db_query("SELECT count(*) AS recs FROM {field_data_field_zip_code} WHERE field_zip_code_value = :zip", array(':zip' => $value));
      foreach ($result as $item) {
        $newvalue .= ' (' . $item->recs . ')';
      }
    }
    return $newvalue;
  }
}

Save the file as d7vrzch_handler_field_zip_code.inc.
Navigate to admin/modules and enable the new module, which shows as the Zip Code Handler.
We will test the handler in a quick view. Navigate to admin/structure/views.
Click on the +Add new view link, enter test as the View name, check the box for description and enter Zip code handler test; clear the Create a page checkbox, and click on the Continue & edit button.
On the Views edit page, click on the add link in the Filter Criteria pane, check the box next to Content: Type, and click on the Add and configure filter criteria button.
In the Content: Type configuration box, select Home and click on the Apply button.
Click on the add link next to Fields, check the box next to Content: Zip code, and click on the Add and configure fields button.
Check the box at the bottom of the Content: Zip code configuration box titled Display Zip total and click on the Apply button.
Click on the Save button and see the result of our custom handler in the Live preview.

How it works...

The Views field handler is simply a set of functions that provide support for populating and formatting a field for Views, much in the way a printer driver does for the operating system.
We created a module in which our handler resides, and whenever that field is requested within a view, our handler will be invoked. We also added a display option to the configuration options for our field, which, when selected, takes each zip code value to be displayed, determines how many nodes have the same zip code, and appends the parenthesized total to the output. The functions in the views.inc and module files are very important: their result is that our custom handler file will be used for field_zip_code instead of the default handler used for entity text fields. In the next recipe, we will add zip code formatting options to our custom handler.
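To see how this routing generalizes, here is a hypothetical sketch (not part of the recipe) that points a different field at its own handler using the same alter hook. The field_phone_number field, the mymodule namespace, and the handler class name are all invented for illustration:

<?php
/**
 * Implements hook_field_views_data_alter().
 *
 * Hypothetical: route field_phone_number through a custom handler,
 * following the same pattern as d7vrzch above.
 */
function mymodule_field_views_data_alter(&$data, $field) {
  if (array_key_exists('field_data_field_phone_number', $data)) {
    $data['field_data_field_phone_number']['field_phone_number']['field']['handler'] = 'mymodule_handler_field_phone_number';
  }
}

The handler class named here would extend views_handler_field_field in its own .inc file, declared through files[] in the module's .info file, exactly as the zip code handler is.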
Customizing Look and Feel of UAG

Packt
22 Feb 2012
8 min read
(For more resources on Microsoft Forefront UAG, see here.)

Honey, I wouldn't change a thing!

We'll save the flattery for our spouses, and start by examining some key areas of interest: the things you might want, and are able, to change on a UAG implementation. Typically, the end user interface is comprised of the following:

The Endpoint Components Installation page
The Endpoint Detection page
The Login page
The Portal Frame
The Portal page
The Credentials Management page
The Error pages

There is also a Web Monitor, but it is typically only used by the administrator, so we won't delve into that. The UAG management console itself and the SSL-VPN/SSTP client-component user interface are also visual, but they are compiled code, so there's not much that can be done there. The elements of these pages that you might want to adjust are the graphics, layout, and text strings. Altering a piece of HTML or editing a GIF in Photoshop to make it look different may sound trivial, but there's actually more to it than that, and the supportability of your changes should definitely be questioned on every count. You wouldn't want your changes to disappear upon the next update to UAG, would you? Nor would you like the page to suddenly become all crooked because someone decided that he wants the RDP icon to have an animation from the Smurfs.

The UI pages

Anyone familiar with UAG will know of its folder structure and the many files that make up the code and logic applied throughout. For those less acquainted, however, we'll start with the two most important folders you need to know: InternalSite and PortalHomePage. InternalSite contains pages that are displayed to the user as part of the login and logout process, as well as various error pages. PortalHomePage contains the files that are part of the portal itself, shown to the user after logging in.

The portal layout comes in three different flavors, depending on the client that is accessing it. The most common one is the Regular portal, which happens to be the most polished version of the three, shown to all computers. The second is the Premium portal, which is a scaled-down version designed for phones that have advanced graphic capabilities, such as Windows Mobile phones. The third is the Limited portal, which is a text-based version of the portal, shown to phones that have limited or no graphic capabilities, such as the Nokia S60 and N95 handsets.

Regardless of the type, the majority of devices connecting to UAG will present a user-agent string in their request, and it is this string that determines the layout that UAG will use to render its pages and content. UAG takes advantage of this by allowing the administrator to choose between the various formats that are made available, on a per-application basis. The results are pretty cool and, by catering for most known platforms and form factors, provide users with the best possible experience. The screenshot in the original article illustrates an application that is enabled for the Premium portal, and how the portal and login pages would look on both a premium device and on a limited device.

Customizing the login and admin pages

The login and admin pages themselves are simple ASP pages, which contain a lot of code, as well as some text and visual elements.
The main files in InternalSite that may be of interest to you are the following:

Login.asp
LogoffMsg.asp
InstallAndDetect.asp
Validate.asp
PostValidate.asp
InternalError.asp

In addition, UAG keeps another version of some of the preceding files for ADFS, OTP, and OWA under similarly named folders. This means that if you have enabled the OWA theme on your portal and you wish to customize it, you should work with the files under the /InternalSite/OWA folder. Of course, there are many other files that partake in the flow of each process, but there is little need to touch either the above files or the others, as most of the appearance is controlled by a CSS template and text strings stored elsewhere. Certain requirements may even involve making significant changes to the layout of the pages and leave you with no option but to edit the core ASP files themselves, but be careful, as this introduces risk and is not technically supported. It is likely that these pages will change with future updates to UAG, and that may cause a conflict with the older code in your files. The result of mixing old and new code is unpredictable, to say the least.

The general appearance of the various admin pages is controlled by the file /InternalSite/CSS/template.css. This file contains about 80 different style elements, including references to some of the 50 or so images displayed in the portal pages, such as the gradient background, the footer, and command buttons, to name a few. The images themselves are stored in /InternalSite/Images. Both these folders have an OWA subfolder, which contains the CSS and images for the OWA theme. When editing the CSS, most of the style names will make sense, but if you are not sure, then why not copy the relevant ASP file and the CSS to your computer, so you can take a closer look with a visual editor to better understand the structure. If you do this, be careful not to make any changes that may alter the code in a damaging way, as this is easily done and can waste a lot of valuable time. A very useful piece of advice for checking tweaked code is to use Internet Explorer's integrated developer tools. In case you haven't noticed, a simple press of F12 on the keyboard gives you everything you need to get debugging. IE 9 and higher versions even pack a nifty trace module that allows you to perform low-level inspection of the client-server interaction, without the need for additional third-party tools.

We don't intend to devote this book to CSS, but one useful CSS declaration to be familiar with is display: none;, which hides any element it is applied to. For example, if you add this to the .button element, it will hide the Login button completely. A common task is altering the part of the page where you see the Application and Network Access Portal text displayed. The text string itself can be edited using the master language files, which we will discuss shortly. The background of that part of the page, however, is built with the files headertopl.gif, headertopm.gif, and headertopr.gif. The original page design is classic HTML: it places headertopl on the left, headertopr on the right, and repeats headertopm in between to fill the space. If you need to change it, you could simply design a similar layout and put the replacement image files in /InternalSite/Images/CustomUpdate.
Alternatively, you might choose to customize only the logo by copying the /InternalSite/Samples/logo.inc file into the /InternalSite/Inc/CustomUpdate folder, as this is where the HTML code that pertains to that area is located. Another thing worth noting is that if you create a custom CSS file, it takes effect immediately, and there is no need to do an activation, at least for the purposes of testing. The same applies to image file changes, but as a general rule you should always remember to activate when finished, as any new configurations or files need to be pushed into the TMG storage. Arrays are no exception to this rule either, and you should know that custom files are only propagated to array members during an activation, so in this scenario you do need to activate after each change. During development, you may copy the custom files to each member node manually to save time between activations, or better still, simply stop NLB on the other array members so that all client traffic is directed to the one you are working on. An equally important point: when you test changes to the code, the browser's cache or IIS itself may still retain files from a previous test or configuration, so if changes you have made do not appear the first time around, start by clearing your browser's cache, and even reset IIS, before assuming you messed up the code.

Customizing the portal

As we said earlier, the pages that make up a portal and its various flavors are under the PortalHomePage folder. These are all ASP.NET files (.ASPX), and the scope for making alterations here is very limited. However, the appearance is mostly controlled via the file /InternalSite/PortalHomePage/Standard.Master, which contains many visual parameters that you can change. For example, the DIV with ID content has a section pertaining to the side bar application list. You might customize the midTopSideBarCell width setting to make the bar wider or thinner. You can even hide it completely by adding style="display: none;" to the contentLeftSideBarCell table cell. As always, make sure you copy the master file to CustomUpdate and do not touch the original file; as with the CSS files, any changes you make take effect immediately.

Additional things that you can do with the portal include removing or adding buttons on the portal toolbar. For example, you might add a button that points to a help page describing your applications, or to a procedure for contacting your internal technical support in case of a problem with the site.
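To make the CustomUpdate approach concrete, here is a minimal sketch of a custom CSS override. The .button style and the display: none; trick come straight from the discussion above; the CustomUpdate location for the CSS file mirrors the convention described for images and include files, and should be verified against your own installation.

/* Sketch: /InternalSite/CSS/CustomUpdate/template.css (path assumed
   from the CustomUpdate convention described in this article) */

/* Hide the Login button completely, as discussed earlier. */
.button {
    display: none;
}

As with the image and include customizations, the change shows up immediately for testing, but remember to activate the configuration so it is pushed into TMG storage and propagated to any array members.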

Understanding Services in JBoss

Packt
07 Feb 2012
23 min read
(For more resources on JBoss, see here.)

Preparing JBoss Developer Studio

The examples in this article are based on a standard ESB application template that can be found under the Chapter3 directory within the sample downloads. We will modify this template application as we proceed through this chapter.

Time for action – opening the Chapter3 app

Follow these steps:

Click on the File menu and select Import.
Now choose Existing Projects into Workspace and select the folder where the book samples have been extracted.
Then click on Finish.
Now have a look at the jboss-esb.xml file. You can see that it has a single service and action, as defined in the following snippet:

<jbossesb parameterReloadSecs="5"
    xsi:schemaLocation="http://anonsvn.labs.jboss.com/labs/jbossesb/trunk/product/etc/schemas/xml/jbossesb-1.3.0.xsd
    http://anonsvn.jboss.org/repos/labs/labs/jbossesb/trunk/product/etc/schemas/xml/jbossesb-1.3.0.xsd">
  <providers>
    <jms-provider connection-factory="ConnectionFactory" name="JBossMQ">
      <jms-bus busid="chapter3GwChannel">
        <jms-message-filter dest-name="queue/chapter3_Request_gw" dest-type="QUEUE"/>
      </jms-bus>
      <jms-bus busid="chapter3EsbChannel">
        <jms-message-filter dest-name="queue/chapter3_Request_esb" dest-type="QUEUE"/>
      </jms-bus>
    </jms-provider>
  </providers>
  <services>
    <service category="Chapter3Sample" description="A template for Chapter3" name="Chapter3Service">
      <listeners>
        <jms-listener busidref="chapter3GwChannel" is-gateway="true" name="Chapter3GwListener"/>
        <jms-listener busidref="chapter3EsbChannel" name="Chapter3Listener"/>
      </listeners>
      <actions mep="OneWay">
        <action class="org.jboss.soa.esb.actions.SystemPrintln" name="PrintBefore">
          <property name="message"/>
          <property name="printfull" value="true"/>
        </action>
      </actions>
    </service>
  </services>
</jbossesb>

Examining the structure of ESB messages

A service is an implementation of a piece of business logic which exposes a well-defined service contract to consumers. The service will provide an abstract service contract which describes the functionality exposed by the service, and will exhibit the following characteristics:

Self contained: The implementation of the service is independent from the context of the consumers; any implementation changes will have no impact.
Loosely coupled: The consumer invokes the service indirectly, passing messages through the bus to the service endpoint. There is no direct connection between the service and its consumers.
Reusable: The service can be invoked by any consumer requiring the functionality exposed by the service. The provider is tied to neither a particular application nor process.

Services which adhere to these criteria will be capable of evolving and scaling without affecting any consumers of that service. The consumer no longer cares which implementation of the service is being invoked, nor where it is located, provided that the exposed service contract remains compatible.

Examining the message

The structure of the message, and how it can be manipulated, plays an important part in any ESB application as a result of the message-driven nature of the communication between service providers and consumers. The message is the envelope which contains all of the information relevant to a specific invocation of a service. All messages within JBoss ESB are implementations of the org.jboss.soa.esb.message.
Message interface, the major aspects of which are:

Header: Information concerning the identity, routing addresses, and correlation of the message
Context: Contextual information pertaining to the delivery of each message, such as the security context
Body: The payload and additional details as required by the service contract
Attachment: Additional information that may be referenced from within the payload
Properties: Information relating to the specific delivery of a message, usually transport specific (for example, the original JMS queue name)

Time for action – printing the message structure

Let us execute the Chapter3 sample application that was opened up at the beginning of this chapter. Follow these steps:

In JBoss Developer Studio, click Run and select Run As and Run on Server. Alternatively, you can press Alt + Shift + X, followed by R.
You can see the server runtime has been pre-selected. Choosing the Always use this server when running this project checkbox will always use this runtime, and this dialog will not appear again. Click Next.
A window with the project pre-configured to run on this server is shown. Ensure that we have only our project Chapter3 selected to the right-hand side. Click Finish.
The server runtime will be started up (if not already started) and the ESB file will be deployed to the server runtime.
Select the src folder and expand it till the SendJMSMessage.java file is displayed in the tree. Now click Run, select Run As and Java Application.

The entire ESB message contents will be printed in the console as follows:

INFO [STDOUT] Message structure:
INFO [STDOUT] [ message: [ JBOSS_XML ]
header: [ To: JMSEpr [ PortReference < <wsa:Address jms:localhost:1099#queue/chapter3_Request_esb/>, <wsa:ReferenceProperties jbossesb:java.naming.factory.initial : org.jnp.interfaces.NamingContextFactory/>, <wsa:ReferenceProperties jbossesb:java.naming.provider.url : localhost:1099/>, <wsa:ReferenceProperties jbossesb:java.naming.factory.url.pkgs : org.jnp.interfaces/>, <wsa:ReferenceProperties jbossesb:destination-type : queue/>, <wsa:ReferenceProperties jbossesb:destination-name : queue/chapter3_Request_esb/>, <wsa:ReferenceProperties jbossesb:specification-version : 1.1/>, <wsa:ReferenceProperties jbossesb:connection-factory : ConnectionFactory/>, <wsa:ReferenceProperties jbossesb:persistent : true/>, <wsa:ReferenceProperties jbossesb:acknowledge-mode : AUTO_ACKNOWLEDGE/>, <wsa:ReferenceProperties jbossesb:transacted : false/>, <wsa:ReferenceProperties jbossesb:type : urn:jboss/esb/epr/type/jms/> > ] MessageID: e694a6a5-6a30-45bf-8f6d-f48363219ccf RelatesTo: jms:correlationID#e694a6a5-6a30-45bf-8f6d-f48363219ccf ]
context: {}
body: [ objects: {org.jboss.soa.esb.message.defaultEntry=Chapter 3 says Hello!} ]
fault: [ ]
attachments: [ Named:{}, Unnamed:[] ]
properties: [ {org.jboss.soa.esb.message.transport.type=Deferred serialized value: 12d16a5, org.jboss.soa.esb.message.byte.size=2757, javax.jms.message.redelivered=false, org.jboss.soa.esb.gateway.original.queue.name=Deferred serialized value: 129bebb, org.jboss.soa.esb.message.source=Deferred serialized value: 1a8e795} ] ]

What just happened?

You have just deployed Chapter3.esb to the ESB runtime on JBoss Application Server 5.1. You executed a gateway client that posted a string to the bus. The server converted this message to an ESB message, and the complete structure was printed out. Take a moment to examine the output and understand the various parts of the ESB message.
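The test client, SendJMSMessage.java, ships with the book's sample code and is not listed in this excerpt. As a rough sketch of what such a gateway client does (the actual class in the samples may differ in detail), it simply drops a plain JMS message onto the gateway queue declared in jboss-esb.xml; the JNDI settings below are the JBoss AS 5.1 defaults already visible in the EPR dump above:

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class SendJMSMessageSketch {
    public static void main(String[] args) throws Exception {
        // JNDI settings for a local JBoss AS 5.1 instance (assumed defaults).
        Properties env = new Properties();
        env.put("java.naming.factory.initial", "org.jnp.interfaces.NamingContextFactory");
        env.put("java.naming.provider.url", "localhost:1099");
        env.put("java.naming.factory.url.pkgs", "org.jnp.interfaces");
        InitialContext ctx = new InitialContext(env);

        // Look up the connection factory and the gateway queue from jboss-esb.xml.
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("queue/chapter3_Request_gw");

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(queue);

        // The gateway listener wraps this payload in a full ESB message.
        producer.send(session.createObjectMessage("Chapter 3 says Hello!"));

        producer.close();
        session.close();
        connection.close();
    }
}

The gateway listener picks the JMS message up from queue/chapter3_Request_gw, wraps the payload in a complete ESB message, and forwards it onto the bus, which is exactly why the dump above shows the string under org.jboss.soa.esb.message.defaultEntry.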
Have a go hero – deploying applications

Steps 1 through 4 describe how to start the server and deploy our application from within JBoss Developer Studio. For the rest of this chapter, and throughout this book, you will be repeating these steps and will just be asked to deploy the application.

Message implementations

JBoss ESB provides two different implementations of the message interface: one which marshalls data into an XML format, and a second which uses Java serialization to create a binary representation of the message. Both of these implementations will only handle Java serializable objects by default; however, it is possible to extend the XML implementation to support additional object types. Message implementations are created indirectly through the org.jboss.soa.esb.message.format.MessageFactory class.

In general, any use of serializable objects can lead to a brittle application, one that is more tightly coupled between the message producer and consumer. The message implementations within JBoss ESB mitigate this by supporting a 'Just In Time' approach when accessing the data. Care must still be taken with what data is placed within the message; however, serialization/marshalling of these objects will only occur as and when required. Extending the ESB to provide alternative message implementations, and extending the current XML implementation to support additional types, is outside the scope of this book.

The body

This is the section of the message which contains the main payload information for the message, adhering to the contract exposed by the service. The payload should only consist of the data required by the service contract and should not rely on any service implementation details, as this would prevent the evolution or replacement of the service implementation at a future date. The types of data contained within the body are restricted only by the requirements imposed by the message implementation; in other words, the implementation must be able to serialize or marshall the contents as part of service invocation. The body consists of:

Main payload, accessed using the following methods:

public Object get();
public void add(final Object value);

Named objects, accessed using the following methods:

public Object get(final String name);
public void add(final String name, final Object value);

Time for action – examining the main payload

Let us create another action class that simply prints the message body. We will add this action to the sample application that was opened up at the beginning of this chapter.

Right-click on the src folder, choose New, and select Class.
Enter the Name as "MyAction", enter the Package as "org.jboss.soa.esb.samples.chapter3", and select the Superclass as "org.jboss.soa.esb.actions.AbstractActionLifecycle".
Click Finish. Add the following imports and the following body contents to the code:

import org.jboss.soa.esb.helpers.ConfigTree;
import org.jboss.soa.esb.message.Message;

protected ConfigTree _config;

public MyAction(ConfigTree config) {
    _config = config;
}

public Message displayMessage(Message message) throws Exception {
    System.out.println("&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&");
    System.out.println("Body: " + message.getBody().get());
    System.out.println("&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&");
    return message;
}

Click Save. Open the jboss-esb.xml file in Tree mode and expand it till Actions is displayed in the tree.
Select Actions, click Add | Custom Action. Enter the Name as "BodyPrinter" and choose the "MyAction" class and "displayMessage" process method.
Click Save and the application will be deployed. If the server was stopped, then deploy it using the Run menu and select Run As | Run on Server.
Once the application is deployed on the server, run SendJMSMessage.java by clicking Run | Run As | Java Application. The following can be seen displayed in the console output:

12:19:32,562 INFO [STDOUT] &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
12:19:32,562 INFO [STDOUT] Body: Chapter 3 says Hello!
12:19:32,562 INFO [STDOUT] &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&

What just happened?

You have just created your own action class that used the Message API to get the main payload of the message and print it to the console.

Have a go hero – additional body contents

Now add another miscellaneous SystemPrintln action after our BodyPrinter. Name it PrintAfter and make sure printfull is set to true. Modify the MyAction class to add additional named content using the getBody().add(name, object) method and see what gets printed on the console. Here is the actions section of the config file:

<actions mep="OneWay">
  <action class="org.jboss.soa.esb.actions.SystemPrintln" name="PrintBefore">
    <property name="message"/>
    <property name="printfull" value="true"/>
  </action>
  <action class="org.jboss.soa.esb.samples.chapter3.MyAction" name="BodyPrinter" process="displayMessage"/>
  <action class="org.jboss.soa.esb.actions.SystemPrintln" name="PrintAfter">
    <property name="message"/>
    <property name="printfull" value="true"/>
  </action>
</actions>

The following is the listing of the MyAction class's modified displayMessage method:

public Message displayMessage(Message message) throws Exception {
    System.out.println("&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&");
    System.out.println("Body: " + message.getBody().get());
    message.getBody().add("Something", "Unknown");
    System.out.println("&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&");
    return message;
}

The header

The message header contains the information relating to the identity, routing, and correlation of messages. This information is based on, and shares much in common with, the concepts defined in the W3C WS-Addressing specification. It is important to point out that many of these aspects are normally initialized automatically by other parts of the codebase; a solid understanding of these concepts will allow the developer to create composite services using more advanced topologies.

Routing information

Every time a message is sent within the ESB, it contains information which describes who sent the message, which service it should be routed to, and where any replies/faults should be sent once processing is complete. The creation of this information is the responsibility of the invoker and, once delivered, any changes made to this information from within the target service will be ignored by that service. The information in the header takes the form of Endpoint References (EPRs) containing a representation of the service address, often transport specific, and extensions which can contain relevant contextual information for that endpoint. This information should be treated as opaque by all parties except the party which was responsible for creating it. There are four EPRs included in the header; they are as follows:

To: This is the only mandatory EPR, representing the address of the service to which the message is being sent.
This will be initialized by ServiceInvoker with the details of the service chosen to receive the message.
From: This EPR represents the originator of the message, if present, and may be used as the address for responses if there is neither an explicit ReplyTo nor FaultTo set on the message.
ReplyTo: This EPR represents the endpoint to which all responses will be sent, if present, and may be used as the address for faults if there is no explicit FaultTo set on the message. This will normally be initialized by ServiceInvoker if a synchronous response is expected by the service consumer.
FaultTo: This EPR represents the endpoint to which all faults will be sent, if present.

When thinking about the routing information, it is important to view these details from the perspective of the service consumer, as the EPRs represent the wishes of the consumer and must be adhered to. If the service implementation involves more advanced topologies, like chaining and continuations, which we will discuss later in the chapter, then care must be taken to preserve these EPRs when messages are propagated to subsequent services.

Message identity and correlation

There are two parts of the header which are related to the identity of the message and its correlation with a preceding message. These are as follows:

MessageID: A unique reference which can be used to identify the message as it progresses through the ESB. The reference is represented by a Uniform Resource Name (URN), a specialized Uniform Resource Identifier (URI) which will represent the identity of the message within a specific namespace. The creator of the message may choose to associate it with an identity which is specific to the application context within which it is being used, in which case the URN should refer to a namespace which is also application-context specific. If no MessageID has been associated with the message, then the ESB will assign a unique identifier when it is first sent to a service.
RelatesTo: When sending a reply, this represents the unique reference of the message representing the request. This may be used to correlate the response message with the original request.

Service action

The action header is an optional, service-specific URN that may be used to further refine the processing of the message by a service provider or service consumer. The URN should refer to an application-specific namespace. There are no restrictions on how this header is to be used by the application including, if considered appropriate, ignoring its contents.

Time for action – examining the header

Let us modify MyAction to display some of the header information:

Open MyAction and edit the displayMessage method as follows:

public Message displayMessage(Message message) throws Exception {
    System.out.println("&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&");
    System.out.println("From: " + message.getHeader().getCall().getFrom());
    System.out.println("To: " + message.getHeader().getCall().getTo());
    System.out.println("MessageID: " + message.getHeader().getCall().getMessageID());
    System.out.println("&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&");
    return message;
}

Remove the PrintBefore and PrintAfter actions if they exist, and make sure that we have only the BodyPrinter action.
Click on Save. If the server was still running (and a small red button appears in the console window), you might notice the application gets redeployed by default. If this did not happen, then deploy the application using the Run menu and select Run As | Run on Server.
The following output will be displayed in the console:

INFO [EsbDeployment] Stopping 'Chapter3.esb'
INFO [EsbDeployment] Destroying 'Chapter3.esb'
WARN [ServiceMessageCounterLifecycleResource] Calling cleanup on existing service message counters for identity ID-7
INFO [QueueService] Queue[/queue/chapter3_Request_gw] stopped
INFO [QueueService] Queue[/queue/chapter3_Request_esb] stopped
INFO [QueueService] Queue[/queue/chapter3_Request_esb] started, fullSize=200000, pageSize=2000, downCacheSize=2000
INFO [QueueService] Queue[/queue/chapter3_Request_gw] started, fullSize=200000, pageSize=2000, downCacheSize=2000
INFO [EsbDeployment] Starting ESB Deployment 'Chapter3.esb'

Run SendJMSMessage.java by clicking Run | Run As | Java Application. The following messages will be printed in the console:

INFO [STDOUT] &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
INFO [STDOUT] From: null
INFO [STDOUT] To: JMSEpr [ PortReference < <wsa:Address jms:localhost:1099#queue/chapter3_Request_esb/>, <wsa:ReferenceProperties jbossesb:java.naming.factory.initial : org.jnp.interfaces.NamingContextFactory/>, <wsa:ReferenceProperties jbossesb:java.naming.provider.url : localhost:1099/>, <wsa:ReferenceProperties jbossesb:java.naming.factory.url.pkgs : org.jnp.interfaces/>, <wsa:ReferenceProperties jbossesb:destination-type : queue/>, <wsa:ReferenceProperties jbossesb:destination-name : queue/chapter3_Request_esb/>, <wsa:ReferenceProperties jbossesb:specification-version : 1.1/>, <wsa:ReferenceProperties jbossesb:connection-factory : ConnectionFactory/>, <wsa:ReferenceProperties jbossesb:persistent : true/>, <wsa:ReferenceProperties jbossesb:acknowledge-mode : AUTO_ACKNOWLEDGE/>, <wsa:ReferenceProperties jbossesb:transacted : false/>, <wsa:ReferenceProperties jbossesb:type : urn:jboss/esb/epr/type/jms/> > ]
INFO [STDOUT] MessageID: 46e57744-d0ac-4f01-ad78-b1f15a3335d1
INFO [STDOUT] &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&

What just happened?

We examined some of the header contents through the API. We printed the From, To, and MessageID from within our MyAction class.

Have a go hero – additional header contents

Now modify the MyAction class to print the Action, ReplyTo, RelatesTo, and FaultTo contents of the header to the console. Here is the listing of the modified method:

public Message displayMessage(Message message) throws Exception {
    System.out.println("&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&");
    System.out.println("From: " + message.getHeader().getCall().getFrom());
    System.out.println("To: " + message.getHeader().getCall().getTo());
    System.out.println("MessageID: " + message.getHeader().getCall().getMessageID());
    System.out.println("Action: " + message.getHeader().getCall().getAction());
    System.out.println("FaultTo: " + message.getHeader().getCall().getFaultTo());
    System.out.println("RelatesTo: " + message.getHeader().getCall().getRelatesTo());
    System.out.println("ReplyTo: " + message.getHeader().getCall().getReplyTo());
    System.out.println("&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&");
    return message;
}

The context

The message context is used to transport the active contextual information when the message is sent to the target service. This may include information such as the current security context, transactional information, or even context specific to the application. This contextual information is not considered to be part of the service contract and is assumed to change between successive message deliveries.
Where the message context really becomes important is when a service pipeline is invoked through an InVM transport, as this can allow the message to be passed by reference. When the transport passes the message to the target service it will create a copy of the message header and message context, allowing each to be updated in subsequent actions without affecting the invoked service. Have a go hero – printing message context Modify the MyAction class to print the context of the ESB message; obtain the context through the getContext() method. You will notice that the context is empty for our sample application as we currently have no security or transactional context attached to the message. Message validation The message format within JBoss ESB allows the consumer and producer to use any payload that suits the purpose of the service contract. No constraints are placed on this payload other than the fact that it must be possible to marshall the payload contents so that the messages can be transported between the consumer and producer. While this ability is useful for creating composite services, it can be a disadvantage when you need to design services that have an abstract contract, hide the details of the implementation, are loosely coupled, and can easily be reused. In order to encourage the loose coupling of services it is often advantageous to choose a payload that does not dictate implementation, for example XML. JBoss ESB provides support for enforcing the structure of XML payloads for request and response messages, through the XML schema language as defined through the W3C. An XML Schema Document (XSD) is an abstract, structural definition which can be used to formally describe an XML message and guarantee that a specific payload matches that definition through a process called validation. Enabling validation on a service is simply a matter of providing the schema associated with the request and/or response messages and specifying the validate attribute, as follows: <actions inXsd="/request.xsd" outXsd="/response.xsd" validate="true"> ...</actions> This will force the service pipeline to validate the request and response messages against the XSD files, if they are specified, with the request validation occurring before the first service action is executed and the response validation occurring immediately before the response message is sent to the consumer. If validation of the request or response message does fail then a MessageValidationException fault will be raised and sent to the consumer using the normal fault processing as defined in the MEPs and responses section. This exception can also be seen by enabling DEBUG logging through the mechanism supported by the server. Have a go hero – enabling validation Add a request.xsd or a response.xsd or both to your actions in the sample application provided. Enable validation and test the output. Configuring through the ConfigTree JBoss ESB handles the majority of its configuration through a hierarchical structure similar to the W3C DOM, namely, org.jboss.soa.esb.helpers.ConfigTree. Each node within the structure contains a name, a reference to the parent node, a set of named attributes, and references to all child nodes. This structure is used, directly and indirectly, within the implementation of the service pipeline and action processors, and will be required if you are intending to create your own action processors. 
The only exception to this is when using an annotated action class when the configuring of the action will be handled by the framework instead of programmatically. Configuring properties in the jboss-esb.xml file The ConfigTree instance passed to an action processor is a hierarchical representation of the properties as defined within the action definition of the jboss-esb.xml file. Each property defined within an action may be interpreted as a name/value pair or as hierarchical content to be parsed by the action. For example the following: <action ....> <!-- name/value property --> <property name="propertyName" value="propertyValue"/> <!-- Hierarchical property --><property name="propertyName"> <hierarchicalProperty attr="value"> <inner name="myName" random="randomValue"/> </hierarchicalProperty> </property></action> This will result in the following ConfigTree structure being passed to the action: Traversing the ConfigTree hierarchy Traversing the hierarchy is simply a matter of using the following methods to obtain access to the parent or child nodes: public ConfigTree getParent() ;public ConfigTree[] getAllChildren() ;public ConfigTree[] getChildren(String name) ;public ConfigTree getFirstChild(String name) ; Accessing attributes Attributes are usually accessed by querying the current ConfigTree instance for the value associated with the required name, using the following methods: public String getAttribute(String name) ;public String getAttribute(String name, String defaultValue) ;public long getLongAttribute(String name, long defaultValue) ;public float getFloatAttribute(String name, float defaultValue) ;public boolean getBooleanAttribute(String name, boolean defaultValue) ;public String getRequiredAttribute(String name) throws ConfigurationException ; It is also possible to obtain the number of attributes, names of all the attributes, or the set of key/value pairs using the following methods: public int attributeCount() ;public Set<String> getAttributeNames() ;public List<KeyValuePair> attributesAsList() ; Time for action – examining configuration properties Let us add some configuration properties to our MyAction. We will make the & and the number of times it needs to be printed as configurable properties. Follow these steps: Add two members to the MyAction class: public String SYMBOL = "&";public int COUNT = 48; Modify the constructor as follows: _config = config;String symbol = _config.getAttribute("symbol");if (symbol != null) { SYMBOL = symbol;}String count = _config.getAttribute("count");if (count != null) { COUNT = Integer.parseInt(count);} Add a printLine() method: private void printLine() { StringBuffer line = new StringBuffer(COUNT); for (int i = 0; i < COUNT; i++) { line.append(SYMBOL); } System.out.println(line);} Modify the printMessage() method as shown in the following snippet: printLine();System.out.println("Body: " + message.getBody().get());printLine();return message; Edit the jboss-esb.xml file and select the action, BodyPrinter. Add two properties symbol as * and count as 50: Click on Save or press Ctrl + S. Deploy the application using the Run menu and select Run As | Run on Server. Run SendJMSMessage.java by clicking Run, select Run As and Java Application. INFO [STDOUT] **************************************************INFO [STDOUT] Body: Chapter 3 says Hello!INFO [STDOUT] ************************************************** What just happened? You just added two properties to the MyAction class. 
You also retrieved these properties from the ConfigTree and used them. Have a go hero – additional configuration properties Experiment with the other API methods. Write a hierarchical property and see how it can be retrieved.
Common API in Liferay Portal Systems Development

Packt
01 Feb 2012
11 min read
(For more resources on Liferay, see here.) User management The portal has defined user management with a set of entities, such as, User, Contact, Address, EmailAddress, Phone, Website, and Ticket, and so on at /portal/service.xml. In the following section, we're going to address the User entity, its association, and relationship. Models and services The following figure depicts these entities and their relationships. The entity User has a one-to-one association with the entity Contact, which may have many contacts as children. And the entity Contact has a one-to-one association with the entity Account, which may have many accounts as children. The entity Contact can have a many-to-many association with the entities Address, EmailAddress, Phone, Website, and Ticket. Logically, the entities Address, EmailAddress, Phone, Website, and Ticket may have a many-to-many association with the other entities, such as Group, Organization, and UserGroup as shown in the following image: Services The following table shows user-related service interfaces, extensions, utilities, wrappers, and their main methods: Interface Extension Utility/Wrapper Main methods UserService, UserLocalService PersistedModelLocalService User(Local)ServiceUtil, User(Local)ServiceWrapper add*, authenticate*, check*, decrypt*, delete*, get*, has*, search, unset*, update*, and so on. ContactService, ContactLocalService persistedmodellocalservice> Contact(Local)ServiceUtil, Contact(Local)ServiceWrapper add*, create*, delete*, get*, update*, dynamicQuery, and so on. AccountService, AccountLocalService Account(Local)ServiceUtil, Account(Local)ServiceWrapper add*, create*, delete*, get*, update*, dynamicQuery, and so on. AddressService, AddressLocalService Address(Local)ServiceUtil, Address(Local)ServiceWrapper add*, create*, delete*, get*, update*, dynamicQuery, and so on. EmailAddressService, EmailAddressLocalService PersistedModelLocalService Address(Local)ServiceUtil, Address(Local)ServiceWrapper add*, create*, delete*, get*, update*, dynamicQuery, and so on. PhoneService, PhoneLocalService PersistedModelLocalService Phone(Local)ServiceUtil, Phone(Local)ServiceWrapper add*, create*, delete*, get*, update*, dynamicQuery, and so on. WebsiteService, WebsiteLocalService PersistedModelLocalService Website(Local)ServiceUtil, Website(Local)ServiceWrapper add*, create*, delete*, get*, update*, dynamicQuery, and so on. TicketLocalService PersistedModelLocalService TicketLocalServiceUtil, TicketLocalServiceWrapper add*, create*, delete*, get*, update*, dynamicQuery, and so on.   Relationships The portal also defined many-to-many relationships between User and Group, User and Organization, User and Team, User and UserGroup, as shown in the following code: <column name="groups" type="Collection" entity="Group" mapping-table="Users_Groups" /> <column name="userGroups" type="Collection" entity="UserGroup" mapping-table="Users_UserGroups" /> In particular, you will be able to find a similar definition at /portal/service.xml. Sample portal service portlet The portal provides a sample portal service plugin called sample-portal-service-portlet (refer to the plugin details at /portlets/sample-portal-service-portlet). The following is the code snippet: List organizations = OrganizationServiceUtil.getUserOrganizations( themeDisplay.getUserId()); // add your logic The previous code shows how to consume Liferay services through regular Java calls. 
These services include com.liferay.portal.service.OrganizationServiceUtil and the model involves com.liferay.portal.model.Organization. Similarly, you can use other services, for example, com.liferay.portal.service.UserServiceUtil and com.liferay.portal.service.GroupServiceUtil; and models, for example, com.liferay.portal.model.User, com.liferay.portal.model.Group. Of course, you can find other services and models—you will find services located at the com. liferay.portal.service package in the /portal-service/src folder. In the same way, you will find models located at the com.liferay.portal.model package in the /portal-service/src folder. What's the difference between *LocalServiceUtil and *ServiceUtil? The sign * represents models, for example, Organization, User, Group, and so on. Generally speaking, *Service is the remote service interface that defines the service methods available to remote code. *ServiceUtil has an additional permission check, since this method might be called as a remote service. *ServiceUtil is a facade class that combines the service locator with the actual call to the service *Service. While *LocalService is the internal service interface,*LocalServiceUtil is a facade class that combines the service locator with the actual call to the service *LocalService. *Service has a PermissionChecker in each method, and *LocalService usually doesn't have the same. Authorization Authorization is a process of finding out if the user, once identified, is permitted to have access to a resource. The portal implemented authorization by assigning permissions via roles and checking permissions, and this is called Role-Based Access Control (RBAC). The following figure depicts an overview of authorization. A user can be a member of Group, UserGroup, Organization, or Team. And a user or a group of users, such as Group, UserGroup, or Organization can be a member of Role. And the entity Role can have many ResourcePermission entities associated with it, while the entity ResourcePermission may contain many ResourceAction entities, as shown in the following diagram: The following table shows the entities Role, ResourcePermission, and ResourceAction: Interface Extension Wrapper/SOAP Main methods Role RoleModel, PersistedModel RoleWrapper, RoleSoap clone, compareTo, get*, set*, toCacheModel, toEscapedModel, and so on. ResourceAction ResourceActionModel, PersistedModel ResourceActionWrapper, ResourceActionSoap clone, compareTo, get*, set*, toCacheModel, toEscapedModel, and so on. ResourcePermission ResourcePermissionModel, PersistedModel ResourcePermissionWrapper, ResourcePermissionSoap clone, compareTo, get*, set*, toCacheModel, toEscapedModel, and so on. In addition, the portal specifies role constants in the class RoleConstants. 
The entity ResourceAction gets specified with the columns name, actionId, and bitwiseValue as follows: <column name="name" type="String" /> <column name="actionId" type="String" /> <column name="bitwiseValue" type="long" /> The entity ResourcePermission gets specified with the columns name, scope, primKey, roleId, and actionIds as follows: <column name="name" type="String" /> <column name="scope" type="int" /> <column name="primKey" type="String" /> <column name="roleId" type="long" /> <column name="ownerId" type="long" /> <column name="actionIds" type="long" /> In addition, the portal specified resource permission constants in the class ResourcePermissionConstants Password policy The portal implements enterprise password policies and user account lockout using the entities PasswordPolicy and PasswordPolicyRel, as shown in the following table: Interface Extension Wrapper/Soap Description PasswordPolicy PasswordPolicyModel, PersistedModel PasswordPolicyWrapper, PasswordPolicySoap Columns: name, description, minAge, minAlphanumeric, minLength, minLowerCase, minNumbers, minSymbols, minUpperCase, lockout, maxFailure, lockoutDuration, and so on. PasswordPolicyRel PasswordPolicyRelModel, PersistedModel PasswordPolicyRelWrapper, PasswordPolicyRelSoap Columns: passwordPolicyId, classNameId, and classPK. Ability to associate the entity PasswordPolicy with other entities.   Passwords toolkit The portal has defined the following properties related to the passwords toolkit in portal.properties: passwords.toolkit= com.liferay.portal.security.pwd.PasswordPolicyToolkit passwords.passwordpolicytoolkit.generator=dynamic passwords.passwordpolicytoolkit.static=iheartliferay The property passwords.toolkit defines a class name that extends com.liferay.portal.security.pwd.BasicToolkit, which is called to generate and validate passwords. If you choose to use com.liferay.portal.security.pwd.PasswordPolicyToolkit as your password toolkit, you can choose either static or dynamic password generation. Static is set through the property passwords.passwordpolicytoolkit.static and dynamic uses the class com.liferay.util.PwdGenerator to generate the password. If you are using LDAP password syntax checking, you will also have to use the static generator, so that you can guarantee that passwords obey their rules. The passwords' toolkits get addressed in detail in the following table: Class Interface Utility Property Main methods DigesterImpl Digester DigesterUtil passwords.digest.encoding digest, digestBase64, digestHex, digestRaw, and so on. Base64 None None None decode, encode, fromURLSafe, objectToString, stringToObject, toURLSafe, and so on. PwdEncryptor None None passwords.encryption.algorithm encrypt, default types: MD2, MD5, NONE, SHA, SHA-256, SHA-384, SSHA, UFC-CRYPT, and so on .   Authentication Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be. The portal defines the class called PwdAuthenticator for authentication, as shown in the following code: public static boolean authenticate( String login, String clearTextPassword, String currentEncryptedPassword) { String encryptedPassword = PwdEncryptor.encrypt( clearTextPassword, currentEncryptedPassword); if (currentEncryptedPassword.equals(encryptedPassword)) { return true; } } As you can see, it encrypts the clear text password first into the variable encryptedPassword. It then tests whether the variable currentEncryptedPassword has the same value as that of the variable encryptedPassword or not. 
The classes UserLocalServiceImpl (the method authenticate) and EditUserAction (the method updateUser) call the class PwdAuthenticator for authentication. A Message Authentication Code (MAC) is a short piece of information used to authenticate a message. The portal supports MAC through the following properties: auth.mac.allow=false auth.mac.algorithm=MD5 auth.mac.shared.key= To use authentication with MAC, simply post to a URL as follows: It passes the MAC in the password field. Make sure that the MAC gets URL encoded, since it might contain characters not allowed in a URL. Authentication with MAC also requires that you set the following property in system-ext.properties: com.liferay.util.servlet.SessionParameters=false As shown in the previous code, it encrypts session parameters, so that browsers can't remember them. Authentication pipeline The portal provides the authentication pipeline framework for authentication, as shown in the following code: auth.pipeline.pre=com.liferay.portal.security.auth.LDAPAuth auth.pipeline.post= auth.pipeline.enable.liferay.check=true As you can see, the property auth.pipeline.enable.liferay.check is set to true to enable password checking by the internal portal authentication. If it is set to false, essentially, password checking is delegated to the authenticators configured in the auth.pipeline.pre and auth.pipeline.post settings. The interface com.liferay.portal.security.auth.Authenticator defines the constant values that should be used as return code from the classes implementing the interface. If authentication is successful, it returns SUCCESS; if the user exists but the passwords doesn't match, then it returns FAILURE. If the user doesn't exist in the system, it returns DNE. Constants get defined in the interface Authenticator. As shown in the following table, the available authenticator is com.liferay.portal.security.auth.LDAPAuth: Class Extension Involved properties Main Methods PasswordPolicyToolkit BasicToolkit passwords.passwordpolicytoolkit.charset.lowercase, passwords.passwordpolicytoolkit.charset.numbers, passwords.passwordpolicytoolkit.charset.symbols, passwords.passwordpolicytoolkit.charset.uppercase, passwords.passwordpolicytoolkit.generator, passwords.passwordpolicytoolkit.static generate, validate RegExpToolkit BasicToolkit passwords.regexptoolkit.pattern, passwords.regexptoolkit.charset, passwords.regexptoolkit.length generate, validate PwdToolkitUtil None passwords.toolkit Generate, validate PwdGenerator None None getPassword. getPinNumber   Authentication token The portal provides the interface com.liferay.portal.security.auth.AuthToken for the authentication token as follows: auth.token.check.enabled=true auth.token.impl= com.liferay.portal.security.auth.SessionAuthToken As shown in the previous code, the property auth.token.check.enabled is set to true to enable authentication token security checks. The checks can be disabled for specific actions via the property auth.token.ignore.actions or for specific portlets via the init parameter check-auth-token in portlet.xml. The property auth.token.impl is set to the authentication token class. This class must implement the interface AuthToken. The class SessionAuthToken is used to prevent CSRF (Cross-Site Request Forgery) attacks. 
The following table shows the interface AuthToken and its implementation: Class Interface Properties Main Methods LDAPAuth Authenticator ldap.auth.method, ldap.referral, ldap.auth.password.encryption.algorithm, ldap.base.dn, ldap.error.user.lockout, ldap.error.password.expired, ldap.import.user.password.enabled, ldap.base.provider.url, auth.pipeline.enable.liferay.check, ldap.auth.required authenticateByEmailAddress, authenticateByScreenName, authenticateByUserId   JAAS Java Authentication and Authorization Service (JAAS) is a Java security framework for user-centric security to augment the Java code-based security. The portal has specified a set of properties for JAAS as follows: portal.jaas.enable=false portal.jaas.auth.type=userId portal.impersonation.enable=true The property portal.jaas.enable is set to false to disable JAAS security checks. Disabling JAAS would speed up login. Note that JAAS must be disabled, if administrators are able to impersonate other users. JAAS can authenticate users based on their e-mail address, screen name, user ID, or login, as determined by the property company.security.auth.type. By default, the class com.liferay.portal.security.jaas.PortalLoginModule loads the correct JAAS login module, based on what application server or servlet container the portal is deployed on. You can set a JAAS implementation class to override this behavior. The following table shows this class and its associations: Class Interface Properties Main methods AuthTokenImpl AuthToken auth.token.impl check, getToken AuthTokenWrapper AuthToken None check, getToken AuthTokenUtil None None check, getToken SessionAuthToken AuthToken auth.token.shared.secret check, getToken   As you have noticed, the classes com.liferay.portal.kernel.security.jaas, PortalLoginModule, and com.liferay.portal.security.jaas.PortalLoginModule, implement the interface LoginModule, configured by the property portal.jaas.impl. As shown in the following table, the portal has provided different login module implementation for different application servers or servlet containers: Class Interface/Extension Package Main methods ProtectedPrincipal Principal com.liferay.portal.kernel.servlet getName, equals, hasCode, toString PortalPrincipal ProtectedPrincipal com.liferay.portal.kernel.security.jaas PortalPrincipal PortalRole PortalPrincipal com.liferay.portal.kernel.security.jaas PortalRole PortalGroup PortalPrincipal, java.security.acl.Group com.liferay.portal.kernel.security.jaas addMember, isMember, members, removeMember PortalLoginModule javax.security.auth.spi.LoginModule com.liferay.portal.kernel.security.jaas, com.liferay.portal.security.jaas abort, commit, initialize, login, logout  
Ext JS 4: Working with the Grid Component

Packt
11 Jan 2012
10 min read
(For more resources on JavaScript, see here.)

Grid panel

The grid panel is one of the most-used components when developing an application, and Ext JS 4 provides some great improvements related to this component. The Ext JS 4 grid panel renders different HTML than the Ext JS 3 grid did. Sencha calls this new feature Intelligent Rendering. Ext JS 3 used to create the whole supporting structure for every feature, even if a particular grid used none of them; for a simple grid, all that unused markup was simply wasted. Ext JS 4 now renders only the features the grid uses, minimizing the generated markup and boosting performance. Before we examine the grid's new features and enhancements, let's take a look at how to implement a simple grid in Ext JS 4:

Ext.create('Ext.grid.Panel', {
    store: Ext.create('Ext.data.ArrayStore', {
        fields: [
            {name: 'book'},
            {name: 'author'}
        ],
        data: [['Ext JS 4: First Look','Loiane Groner']]
    }),
    columns: [{
        text: 'Book',
        flex: 1,
        sortable: false,
        dataIndex: 'book'
    },{
        text: 'Author',
        width: 100,
        sortable: true,
        dataIndex: 'author'
    }],
    height: 80,
    width: 300,
    title: 'Simple Grid',
    renderTo: Ext.getBody()
});

As you can see in the preceding code, the two main parts of the grid are the store and the columns declarations. Note, as well, that the names of the store and model fields always have to match the column's dataIndex (if you want to display the column in the grid). So far, nothing has changed: the way we used to declare a simple grid in Ext JS 3 is the same way we do it in Ext JS 4. However, there are some changes related to plugins and the new features property. We are going to take a closer look at those in this section. Let's dive into the changes!

Columns

Ext JS 4 organizes all the column classes into a single package—the Ext.grid.column package. We will explain how to use each column type with an example.
But first, we need to declare a Model and a Store to represent and load the data:

Ext.define('Book', {
    extend: 'Ext.data.Model',
    fields: [
        {name: 'book'},
        {name: 'topic', type: 'string'},
        {name: 'version', type: 'string'},
        {name: 'released', type: 'boolean'},
        {name: 'releasedDate', type: 'date'},
        {name: 'value', type: 'number'}
    ]
});

var store = Ext.create('Ext.data.ArrayStore', {
    model: 'Book',
    data: [
        ['Ext JS 4: First Look','Ext JS','4',false,null,0],
        ['Learning Ext JS 3.2','Ext JS','3.2',true,'2010/10/01',40.49],
        ['Ext JS 3.0 Cookbook','Ext JS','3',true,'2009/10/01',44.99],
        ['Learning Ext JS','Ext JS','2.x',true,'2008/11/01',35.99]
    ]
});

Now, we need to declare a grid:

Ext.create('Ext.grid.Panel', {
    store: store,
    width: 550,
    title: 'Ext JS Books',
    renderTo: 'grid-example',
    selModel: Ext.create('Ext.selection.CheckboxModel'), //1
    columns: [
        Ext.create('Ext.grid.RowNumberer'), //2
        {
            text: 'Book', //3
            flex: 1,
            dataIndex: 'book'
        },{
            text: 'Category', //4
            xtype: 'templatecolumn',
            width: 100,
            tpl: '{topic} {version}'
        },{
            text: 'Already Released?', //5
            xtype: 'booleancolumn',
            width: 100,
            dataIndex: 'released',
            trueText: 'Yes',
            falseText: 'No'
        },{
            text: 'Released Date', //6
            xtype: 'datecolumn',
            width: 100,
            dataIndex: 'releasedDate',
            format: 'm-Y'
        },{
            text: 'Price', //7
            xtype: 'numbercolumn',
            width: 80,
            dataIndex: 'value',
            renderer: Ext.util.Format.usMoney
        },{
            xtype: 'actioncolumn', //8
            width: 50,
            items: [{
                icon: 'images/edit.png',
                tooltip: 'Edit',
                handler: function(grid, rowIndex, colIndex) {
                    var rec = grid.getStore().getAt(rowIndex);
                    Ext.MessageBox.alert('Edit', rec.get('book'));
                }
            },{
                icon: 'images/delete.gif',
                tooltip: 'Delete',
                handler: function(grid, rowIndex, colIndex) {
                    var rec = grid.getStore().getAt(rowIndex);
                    Ext.MessageBox.alert('Delete', rec.get('book'));
                }
            }]
        }
    ]
});

The preceding code outputs the following grid: The first column is declared as selModel, which, in this example, is going to render a checkbox, so we can select some rows from the grid. To add this column to a grid, simply declare the selModel (also known as sm in Ext JS 3) as the checkbox selection model, as highlighted in the code (comment 1 in the code). The second column that we declared is the RowNumberer column. This column automatically adds a row number to the grid. In the third column (with text: 'Book'), we did not specify a column type; this means the column will display the data itself as a string. In the fourth column, we declared a column with xtype as templatecolumn. This column will display the data from the store, specified by an XTemplate, as declared in the tpl property. In this example, we are saying we want to display the topic (name of the technology) and its version. The fifth column is declared as booleancolumn. This column displays a true or false value. But, if we do not want to display true or false in the grid, we can specify the values that we want displayed. In this example, we displayed the value as Yes (for true values) and No (for false values), as declared in the trueText and falseText properties. The sixth column we declared as datecolumn, which is used to display dates. We can also declare the date format we want displayed. In this example, we want to display only the month and the year. The format follows the same rules as PHP date formats. The seventh column we declared as numbercolumn. This column is used to display numbers, such as a quantity, money, and so on.
If we want to display the number in a particular format, we can use one of the Ext JS renderers or create a customized one. And the last column we declared is the actioncolumn. In this column, we can display icons that are going to execute an action, such as delete or edit. We declare the icons we want to display in the items property.

Feature support

In Ext JS 3, when we wanted to add new functionality to a grid, we used to create a plugin or extend the GridPanel class. There was no default way to do it. Ext JS 4 introduces the Ext.grid.feature.Feature class, which contains common methods and properties to create a plugin. Inside the Ext.grid.feature package, we will find seven classes: AbstractSummary, Chunking, Feature, Grouping, GroupingSummary, RowBody, and Summary. A feature is very simple to use—we need to add it to the features declaration of the grid:

features: [{
    groupHeaderTpl: 'Publisher: {name}',
    ftype: 'groupingsummary'
}]

Let's take a look at how to use some of these native grid features.

Ext.grid.feature.Grouping

Grouping rows in Ext JS 4 has changed. Now, Grouping is a feature and can be applied to a grid through the features property. The following code displays a grid grouped by book topic:

Ext.define('Book', {
    extend: 'Ext.data.Model',
    fields: ['name', 'topic']
});

var Books = Ext.create('Ext.data.Store', {
    model: 'Book',
    groupField: 'topic',
    data: [{
        name: 'Learning Ext JS',
        topic: 'Ext JS'
    },{
        name: 'Learning Ext JS 3.2',
        topic: 'Ext JS'
    },{
        name: 'Ext JS 3.0 Cookbook',
        topic: 'Ext JS'
    },{
        name: 'Expert PHP 5 Tools',
        topic: 'PHP'
    },{
        name: 'NetBeans IDE 7 Cookbook',
        topic: 'Java'
    },{
        name: 'iReport 3.7',
        topic: 'Java'
    },{
        name: 'Python Multimedia',
        topic: 'Python'
    },{
        name: 'NHibernate 3.0 Cookbook',
        topic: '.NET'
    },{
        name: 'ASP.NET MVC 2 Cookbook',
        topic: '.NET'
    }]
});

Ext.create('Ext.grid.Panel', {
    renderTo: Ext.getBody(),
    frame: true,
    store: Books,
    width: 350,
    height: 400,
    title: 'Books',
    features: [Ext.create('Ext.grid.feature.Grouping', {
        groupHeaderTpl: 'topic: {name} ({rows.length} Book{[values.rows.length > 1 ? "s" : ""]})'
    })],
    columns: [{
        text: 'Name',
        flex: 1,
        dataIndex: 'name'
    },{
        text: 'Topic',
        flex: 1,
        dataIndex: 'topic'
    }]
});

In the groupHeaderTpl attribute, we declared a template to be displayed in the grouping row. We are going to display one of the following customized strings, depending on the number of books belonging to the topic:

topic: {name} ({rows.length} Book)
topic: {name} ({rows.length} Books)

The string comprises the topic name ({name}) and the count of books for the topic ({rows.length}). In Ext JS 3, we still had to declare a grouping field in the store; but, instead of a Grouping feature, we used to declare a GroupingView, as follows:

view: new Ext.grid.GroupingView({
    forceFit: true,
    groupTextTpl: '{text} ({[values.rs.length]} {[values.rs.length > 1 ? "Books" : "Book"]})'
})

If we execute the grouping grid, we will get the following output:
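The grouping does not have to be fixed at declaration time. As a minimal sketch (assuming the Books store declared above; group() and clearGrouping() are Ext JS 4 store methods, but treat this as an illustration rather than part of the original example), you can switch or remove the grouping at runtime:

// Regroup the store by a different field;
// the Grouping feature picks up the change automatically.
Books.group('name');

// Restore the original grouping.
Books.group('topic');

// Remove grouping altogether; the group headers disappear.
Books.clearGrouping();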
Let's change the preceding example to use the GroupingSummary feature: Ext.create('Ext.grid.Panel', { renderTo: Ext.getBody(), frame: true, store: Books, width: 350, height: 400, title: 'Books', features: [{ groupHeaderTpl: 'Topic: {name}', ftype: 'groupingsummary' }], columns: [{ text: 'Name', flex: 1, dataIndex: 'name', summaryType: 'count', summaryRenderer: function(value){ return Ext.String.format('{0} book{1}', value, value !== 1 ? 's' : ''); } },{ text: 'Topic', flex: 1, dataIndex: 'topic' }] }); We highlighted two pieces in the preceding code. The first line is the feature declaration: in the previous example (Grouping) we created the feature using the Ext.create declaration. But if we do not want to explicitly create the feature every time we declare, we can use the ftype property, which is groupingsummary in this example. The groupingsummary that we added to the grid's name column is in the second line of highlighted code. We declared a summaryType property and set its value as count. Declaring the summaryType as count means we want to display the number of books in that particular topic/category; it is going to count how many records we have for a particular category in the grid. It is very similar to the count of the PL/SQL language. Other summary types we can declare are: sum, min, max, average (these are self-explanatory). In this example, we want to customize the text that will be displayed in the summary, so we are going to use the summaryRenderer function. We need to pass a value argument to it, and the value is the count of the name column. Then, we are going to return a customized string that is going to display the count (token {0}) and the string book or books, depending on the count (if it is more than 1 we add s at the end of the string book). Ext.String.format is a function that allows you to define a tokenized string and pass an arbitrary number of arguments to replace the tokens. Each token must be unique and must increment in the format {0}, {1}, and so on. The preceding code will output the following grid: Ext.grid.feature.Summary The GroupingSummary feature adds a row at the bottom of each grouping. The Summary feature adds a row at the bottom of the grid to display summary information. The property configuration is very similar to that for GroupingSummary, because both classes are subclasses of AbstractSummary (a class that provides common properties and methods for summary features). Ext.create('Ext.grid.Panel', { renderTo: Ext.getBody(), frame: true, store: Books, width: 350, height: 300, title: 'Books', features: [{ ftype: 'summary' }], columns: [{ text: 'Name', flex: 1, dataIndex: 'name', summaryType: 'count', summaryRenderer: function(value){ return Ext.String.format('{0} book{1}', value, value !== 1 ? 's' : ''); } },{ text: 'Topic', flex: 1, dataIndex: 'topic' }] }); The only difference from the GroupingSummary feature is the feature declaration itself. The summayType and summaryRenderer properties work in a similar way. The preceding code will output the following grid: Ext.grid.feature.RowBody The rowbody feature adds a new tr->td->div in the bottom of the row that we can use to display additional information. 
Here is how to use it:

Ext.create('Ext.grid.Panel', {
    renderTo: Ext.getBody(),
    frame: true,
    store: Books,
    width: 350,
    height: 300,
    title: 'Books',
    features: [{
        ftype: 'rowbody',
        getAdditionalData: function(data, idx, record, orig) {
            return {
                rowBody: Ext.String.format('->topic: {0}', data.topic)
            };
        }
    }, {
        ftype: 'rowwrap'
    }],
    columns: [{
        text: 'Name',
        flex: 1,
        dataIndex: 'name'
    }]
});

In the preceding code, we are not only displaying the name of the book; we are using the rowbody to display the topic of the book as well. The first step is to declare the rowbody feature. One very important thing to be noted is that rowbody will be initially hidden, unless you override the getAdditionalData method. If we execute the preceding code, we will get the following output:
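Because getAdditionalData receives the underlying record, the row body is not limited to a single store field. The following fragment is a hedged variation of the feature declaration above (the combined message format is our own invention); it drops into the same grid and renders two fields in the row body:

features: [{
    ftype: 'rowbody',
    getAdditionalData: function(data, idx, record, orig) {
        // record.get() reads values straight from the Model,
        // so any combination of fields can be formatted here.
        return {
            rowBody: Ext.String.format('{0} (filed under {1})',
                record.get('name'), record.get('topic'))
        };
    }
}, {
    ftype: 'rowwrap'
}]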
Ext JS 4: Working with Tree and Form Components

Packt
11 Jan 2012
6 min read
Tree panel

The tree component is much simpler in Ext JS 4. Like the grid, it is also a subclass of Ext.panel.Table. This means we can add most of the grid's functionality to the tree as well. Let's start by declaring a simple tree in Ext JS 3:

new Ext.tree.TreePanel({
    renderTo: 'tree-example',
    title: 'Simple Tree',
    width: 200,
    rootVisible: false,
    root: new Ext.tree.AsyncTreeNode({
        expanded: true,
        children: [
            { text: "Menu Option 1", leaf: true },
            { text: "Menu Option 2", expanded: true, children: [
                { text: "Sub Menu Option 2.1", leaf: true },
                { text: "Sub Menu Option 2.2", leaf: true }
            ] },
            { text: "Menu Option 3", leaf: true }
        ]
    })
});

Now, let's see how to declare the same tree in Ext JS 4:

Ext.create('Ext.tree.Panel', {
    title: 'Simple Tree',
    width: 200,
    store: Ext.create('Ext.data.TreeStore', {
        root: {
            expanded: true,
            children: [
                { text: "Menu Option 1", leaf: true },
                { text: "Menu Option 2", expanded: true, children: [
                    { text: "Sub Menu Option 2.1", leaf: true },
                    { text: "Sub Menu Option 2.2", leaf: true }
                ] },
                { text: "Menu Option 3", leaf: true }
            ]
        }
    }),
    rootVisible: false,
    renderTo: 'tree-example'
});

In Ext JS 4, we also have the title and width properties and the div where the tree is going to be rendered, plus a store config. The store config is a new element for the tree. If we run both snippets, we will have the same output, which is the following tree:

If we take a look at the data package, we will see three files related to the tree: NodeInterface, Tree, and TreeStore. NodeInterface applies a set of methods to the prototype of a record to decorate it with a Node API. The Tree class is used as a container for a series of nodes, and TreeStore is a store implementation used by a Tree. The good thing about having TreeStore is that we can use its features, such as proxy and reader, as we do with any other Store in Ext JS 4.

Drag-and-drop and sorting

The drag-and-drop feature is very useful for rearranging the order of the nodes in the tree. Adding it is very simple. We need to add the following code to the tree declaration:

Ext.create('Ext.tree.Panel', {
    store: store,
    viewConfig: {
        plugins: { ptype: 'treeviewdragdrop' }
    },
    //other properties
});

And how do we handle drag-and-drop in the store? We do it in the same way as we handled the editing plugin on the grid, using a Writer:

var store = Ext.create('Ext.data.TreeStore', {
    proxy: {
        type: 'ajax',
        api: {
            read: '../data/drag-drop.json',
            create: 'create.php'
        }
    },
    writer: {
        type: 'json',
        writeAllFields: true,
        encode: false
    },
    autoSync: true
});

In the earlier versions of Ext JS 4, the autoSync config option does not work. Another way of synchronizing the Store with the server is adding a listener to the Store instead of the autoSync config option, as follows:

listeners: {
    move: function( node, oldParent, newParent, index, options ) {
        this.sync();
    }
}

And, to add the sorting feature to the tree, we simply need to configure the sorters property in the TreeStore, as follows:

Ext.create('Ext.data.TreeStore', {
    folderSort: true,
    sorters: [{
        property: 'text',
        direction: 'ASC'
    }]
});

Check tree

To implement a check tree, we simply need to make a few changes in the data that we are going to apply to the tree. We need to add a property called checked to each node, with a true or false value; true indicates the node is checked, and false, otherwise.
For this example, we will use the following JSON code:

[{
    "text": "Cartesian",
    "cls": "folder",
    "expanded": true,
    "children": [{
        "text": "Bar",
        "leaf": true,
        "checked": true
    },{
        "text": "Column",
        "leaf": true,
        "checked": true
    },{
        "text": "Line",
        "leaf": true,
        "checked": false
    }]
},{
    "text": "Gauge",
    "leaf": true,
    "checked": false
},{
    "text": "Pie",
    "leaf": true,
    "checked": true
}]

And as we can see, the code is the same as that for a simple tree:

var store = Ext.create('Ext.data.TreeStore', {
    proxy: {
        type: 'ajax',
        url: 'data/check-nodes.json'
    },
    sorters: [{
        property: 'leaf',
        direction: 'ASC'
    }, {
        property: 'text',
        direction: 'ASC'
    }]
});

Ext.create('Ext.tree.Panel', {
    store: store,
    rootVisible: false,
    useArrows: true,
    frame: true,
    title: 'Charts I have studied',
    renderTo: 'tree-example',
    width: 200,
    height: 250
});

The preceding code will output the following tree:

Tree grid

In Ext JS 3, the client-side JavaScript component Tree Grid was an extension, part of the ux package. In Ext JS 4, this component is part of the native API and is no longer an extension. To implement a tree grid, we use the tree component as well; the only difference is that we declare some columns inside the tree. This is the benefit of Tree being a subclass of Ext.panel.Table, the same superclass as Grid. First, we will declare a Model and a Store to represent the data we are going to display in the tree grid. We will then load the tree grid:

Ext.define('Book', {
    extend: 'Ext.data.Model',
    fields: [
        {name: 'book', type: 'string'},
        {name: 'pages', type: 'string'}
    ]
});

var store = Ext.create('Ext.data.TreeStore', {
    model: 'Book',
    proxy: {
        type: 'ajax',
        url: 'data/treegrid.json'
    },
    folderSort: true
});

So far there is nothing new. We declared the variable store as we would for any grid, except that this one is a TreeStore. The code to implement the tree grid is declared as follows:

Ext.create('Ext.tree.Panel', {
    title: 'Books',
    width: 500,
    height: 300,
    renderTo: Ext.getBody(),
    collapsible: true,
    useArrows: true,
    rootVisible: false,
    store: store,
    multiSelect: true,
    singleExpand: true,
    columns: [{
        xtype: 'treecolumn',
        text: 'Book',
        flex: 2,
        sortable: true,
        dataIndex: 'book'
    },{
        text: 'Pages',
        flex: 1,
        dataIndex: 'pages',
        sortable: true
    }]
});

The most important part of the code is highlighted—the columns declaration. The columns property is an array of Ext.grid.column.Column objects, just as we declare in a grid. The only thing we have to pay attention to is the column type of the first column, that is, treecolumn; this way we know which column has to render the tree nodes into the tree grid. We also configured some other properties. collapsible is a Boolean property; if set to true, it allows the panel to be collapsed and expanded. useArrows is also a Boolean property, which indicates whether the arrow icons will be visible in the tree (expand/collapse icons). The property rootVisible indicates whether we want to display the root of the tree, which is a simple period (.). The property singleExpand indicates whether we want to expand a single node at a time, and the multiSelect property indicates whether we want to select more than one node at once. The preceding code will output the following tree grid:
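One thing the check tree example does not show is how to read the selections back. The following sketch assumes the same store as the check tree above; checkchange and getChecked() are part of the Ext JS 4 tree panel API, but the handler bodies are purely illustrative:

var tree = Ext.create('Ext.tree.Panel', {
    store: store,
    rootVisible: false,
    useArrows: true,
    title: 'Charts I have studied',
    renderTo: 'tree-example',
    width: 200,
    height: 250,
    listeners: {
        // Fired every time a checkbox is toggled.
        checkchange: function(node, checked) {
            console.log(node.get('text') + ' is now ' +
                (checked ? 'checked' : 'unchecked'));
        }
    }
});

// Later, collect every checked node in one call.
var names = [];
Ext.Array.each(tree.getChecked(), function(node) {
    names.push(node.get('text'));
});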
Building Location-aware Web Applications with MongoDB and PHP

Packt
06 Jan 2012
14 min read
(For more resources on PHP and MongoDB, see here.)

A geolocation primer

The term geolocation refers to the act of locating the geographic position of a person, a place, or any object of interest. The geographic position of the object is determined mainly by its latitude and longitude; sometimes its height above sea level is also taken into account. In this section, we are going to learn about the different techniques that location-based applications use to determine a user's location. You may skip this section if you are already familiar with them, or if you just cannot wait to get started coding!

Methods to determine location

There are several ways to locate the geographic position of a computing device. Let's briefly learn about the most effective ones among them:

Global Positioning System (GPS): Nowadays, tech-savvy people carry GPS-enabled smartphones in their pockets. Devices like these act as GPS receivers; they constantly exchange information with GPS satellites orbiting the Earth and calculate their geographic position. This process is known as trilateration. This is perhaps the most accurate way to determine location, as of today.

Cellphone tracking: Each cellphone has a Cell ID assigned to it that uniquely identifies it in a particular cellular network. In a process known as cellular triangulation, three base stations (cellphone towers) are used to identify the latitude and longitude of the cellphone identified by the Cell ID. This method is more accurate in urban areas, where there are more cellphone towers close to each other, than in rural areas.

IP address: Internet service providers are given blocks of IP addresses based on a country/city/region. When a user visits a website, the website can take a look at his IP address and consult a database that stores location data against IP addresses (either an internal database or one provided by a third-party service) to get the location of the user. The accuracy of this approach depends on the accuracy of the database itself. Also, if the user is behind a proxy server, the application will see the IP address of the proxy server, which could be located in a different region than the user.

Wi-Fi MAC address tracking: A Wi-Fi access point has a MAC (Media Access Control) address assigned to it, which is globally unique. Some location-based services use this to identify the location of the Wi-Fi router, and therefore, the location of users on that Wi-Fi LAN. In principle, it works in the same way IP address-based geolocation does. Google has an API that gives location information (latitude, longitude, and so on) when provided with a MAC address.

If you are curious to learn more about how geolocation works, How Stuff Works has a comprehensive article on it available at http://electronics.howstuffworks.com/everyday-tech/location-tracking.htm.

Detecting the location of a web page visitor

When building a location-aware web application, the first part of the problem to be solved is getting the location of the user visiting the web page. We covered geolocation techniques in the previous section; now it is time to see them in action.

The W3C Geolocation API

We are going to use the W3C Geolocation API for locating the visitors to our web page. The W3C Geolocation API provides a high-level interface for web developers to implement geolocation features in an application. The API takes care of detecting the location using one or more methods (GPS, Cell ID, IP address).
The developers do not have to worry about what is going on under the hood; they only need to focus on the geographic information returned by the API! You can read the whole specification online at http://www.w3.org/TR/ geolocation-API/. Browsers that support geolocation The following table lists the browsers that support the W3C Geolocation API: Browser Version Google Chrome 5.0+ Mozilla Firefox 3.5+ Internet Explorer 9.0+ Safari 5.0+ Opera 10.6+ iPhone 3.1+ Android 2.0+ Blackberry 6.0+ Make sure you use one of these browsers when you try the practical examples in this article. Time for action – detecting location with W3C API In this section, we are going to build a web page that detects the location of a visitor using the Geolocation API. The API will detect the latitude and longitude of the user who loads the page in his browser. We are going use that information on a map, rendered dynamically using the Google Maps API: Fire up your text editor and create a new HTML file named location.html. Put the following code in it: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xml_lang="en" lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> <link rel="stylesheet" href="styles.css"/> <style type="text/css" media="screen"> div#map { width:450px; height: 400px; } </style> <title>Locating your position</title> </head> <body> <div id="contentarea"> <div id="innercontentarea"> <h2>Locating your position</h2> <div id="map"></div> </div> </div> <script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?sensor=false"> </script> <script type="text/javascript" src="geolocation.js"> </script> </body> </html> Create another file named geolocation.js and put the following JavaScript code in it: var mapContainer = document.getElementById('map');var map;function init() { //Google map settings (zoom level, map type etc.) var mapOptions = {zoom: 16, disableDefaultUI: true, mapTypeId: google.maps.MapTypeId.ROADMAP}; //map will be drawn inside the mapContainer map = new google.maps.Map(mapContainer, mapOptions); detectLocation();}function detectLocation(){ var options = { enableHighAccuracy: true, maximumAge: 1000, timeout: 30000}; //check if the browser supports geolocation if (window.navigator.geolocation) { //get current position window.navigator.geolocation.getCurrentPosition( drawLocationOnMap, handleGeoloacteError, options); } else { alert("Sorry, your browser doesn't seem to support geolocation :-("); }}//callback function of getCurrentPosition(), pinpoints location//on Google mapfunction drawLocationOnMap(position) { //get latitude/longitude from Position object var lat = position.coords.latitude; var lon = position.coords.longitude; var msg = "You are here: Latitude "+lat+", Longitude "+lon; //mark current location on Google map var pos = new google.maps.LatLng(lat, lon); var infoBox = new google.maps.InfoWindow({map: map, position:pos, content: msg}); map.setCenter(pos); return;}function handleGeoloacteError() { alert("Sorry, couldn't get your geolocation :-(");}window.onload = init; Load the location.html page in your browser. When the browser asks for permission to allow the page to access your location, click Yes/OK/Allow:   (Move the mouse over the image to enlarge.)   Once you allow the page to access your location, it renders a map that shows your current location on it, along with the geographic coordinates: What just happened? 
We built a web page and added JavaScript code that detects the latitude and longitude of the user who loads the page in his browser. The API needs the user's permission to get his geographic information. So when the page loads, it prompts the user to specify whether or not he will allow the page to get his location. If the user agrees, the JavaScript code executes and gets his geographic coordinates using the W3C Geolocation API. Then it renders a small map using the Google Maps API, and highlights the user's location on the map.

The Geolocation object

The Geolocation object implements the W3C Geolocation API. The JavaScript engine uses this object to obtain the geographic information of the computer or phone on which the browser is running. Geolocation is a property of the Browser object (window.navigator), accessed as window.navigator.geolocation. In our example, we detect if the browser has geolocation capabilities by accessing this object, and notify the user if the browser fails the test:

//check if the browser supports geolocation
if (window.navigator.geolocation) {
    window.navigator.geolocation.getCurrentPosition(
        drawLocationOnMap, handleGeoloacteError, options);
} else {
    alert("Sorry, your browser doesn't seem to support geolocation.");
}

The getCurrentPosition() method

The location information is obtained by invoking the getCurrentPosition() method on the Geolocation object.

getCurrentPosition(callbackOnSuccess, [callbackOnFailure, options])

The argument callbackOnSuccess is a reference to a callback function. It is executed when the getCurrentPosition() method successfully determines the geolocation. This is a mandatory argument. callbackOnFailure is an optional argument, a callback function for handling failure to get the geolocation. options represents the PositionOptions object, which specifies optional configuration parameters for the method. The PositionOptions object has the following properties:

enableHighAccuracy: Tells the API to try its best to get the exact current position. It is set to false by default. When set to true, the API response tends to be slower.
maximumAge: If API responses are cached, this setting specifies that the API will not use cached responses older than maximumAge milliseconds.
timeout: The timeout value in milliseconds to receive the API response.

In our example, we used the drawLocationOnMap() method as the callbackOnSuccess function, which draws a map and pinpoints the location on it (we will walk through it shortly). The handleGeoloacteError() method notifies the user of any error while getting the position:

window.navigator.geolocation.getCurrentPosition(
    drawLocationOnMap, handleGeoloacteError, options);

Drawing the map using the Google Maps API

The Google Maps API is a popular JavaScript API for drawing maps on a web page. This API has methods to highlight objects on the rendered map. We can access the API methods by adding the following script tag in the web page (as we did in the location.html file):

<script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?sensor=false"></script>

If you are on a GPS-enabled device, set the sensor parameter to true, as follows:

<script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?sensor=true"></script>

When the script is loaded, we can initiate the map drawing by instantiating the google.maps.Map object. The Map object takes a DOM object as its first parameter; the map will be rendered inside this DOM node. It also takes an optional JSON object that specifies configurations for the map (zoom level, map type, and so on):
It also takes an optional JSON object that specifies configurations for the map (zoom level, map type, and so on): var mapContainer = document.getElementById('map');var mapOptions = {zoom: 16, disableDefaultUI: true, mapTypeId: google.maps.MapTypeId.ROADMAP};map = new google.maps.Map(mapContainer, mapOptions); Now, let's focus on the drawLocationOnMap() function in the geolocation.js file, which is the callback function of the getCurrentPosition() method . As we know, this method gets called when the W3C API successfully locates the position; it receives a Position object as its argument. This object holds all the geolocation data returned by the API. The Position object holds a reference to the Coordinates object (accessed by the property coords). The Coordinates object contains geographical coordinates such as latitude, longitude, altitude, and so on of the location: function drawLocationOnMap(position) { var lat = position.coords.latitude; var lon = position.coords.longitude; var msg = "You are here: Latitude "+lat+", Longitude "+lon; ……………………………………………………………………………………………………………………………………………………………} After we get the latitude and longitude values of the coordinate, we set it as the center of the map. We also display an information box with a message saying, You are here on the map! function drawLocationOnMap(position) { var lat = position.coords.latitude; var lon = position.coords.longitude; var msg = "You are here: Latitude "+lat+", Longitude "+lon; var pos = new google.maps.LatLng(lat, lon); var infoBox = new google.maps.InfoWindow({map: map, position:pos, content: msg}); map.setCenter(pos); return;} Get to know Google Maps API We are going to use the Google Maps API in the upcoming examples as well. You might consider familiarizing yourself with it by reading some of its online documentation at http://code.google.com/apis/maps/ documentation/javascript/basics.html. Geospatial indexing We can now turn our attention to the main topic of this article—geospatial indexing . A geospatial index is a special kind of index, designed specifically with location queries in mind, so you can perform queries like "Give me the closest n objects to my location". Geospatial indexing essentially turns your collection into a two-dimensional map. Each point of interest on that map (each document in the collection) is assigned a special value named geohash. Geohashing divides the coordinate system into hierarchical buckets of grids; the whole map gets divided into smaller quadrants. When you look for objects nearest to a point (x,y) on the map, MongoDB calculates the geohash of (x,y) and returns the points with the same geohash. I am not going to delve into much detail here on how it works, but if you are interested, I recommend you read MongoDB in Flatland (found at http://www.snailinaturtleneck.com/blog/2011/06/08/mongo-in-flatland/), an elaborate yet simple demonstration of how geospatial indexing works in MongoDB. Indexes are generally applied on fields to make field lookups faster. Time for action – creating geospatial indexes Let's see how we can build the geospatial index on a MongoDB collection: Launch the mongo interactive shell. Switch to a new database namespace called geolocation: $ ./mongodb/bin/mongoMongoDB shell version: 1.8.1connecting to: test> use geolocationswitched to db geolocation> Insert a few documents in a collection named ?map. 
Each document must contain an embedded document with two fields, latitude and longitude:

> db.map.insert({coordinate: {latitude: 23.2342987, longitude: 90.20348}})
> db.map.insert({coordinate: {latitude: 23.3459835, longitude: 90.92348}})
> db.map.insert({coordinate: {latitude: 23.6743521, longitude: 90.30458}})

Create the geospatial index for the map collection by issuing the following command:

> db.map.ensureIndex({coordinate: '2d'})

Enter the next command to check if the index was created:

> db.system.indexes.find()
{ "name" : "_id_", "ns" : "geolocation.map", "key" : { "_id" : 1 }, "v" : 0 }
{ "_id" : ObjectId("4e46af48ffd7d5fd0a4d1e41"), "ns" : "geolocation.map", "key" : { "coordinate" : "2d" }, "name" : "coordinate_2d" }

What just happened?

We created a collection named map in a database named geolocation. We manually inserted documents into the collection; each document contains some random latitude and longitude values in an embedded document named coordinate:

> db.map.findOne()
{
    "_id" : ObjectId("4e46ae9bffd7d5fd0a4d1e3e"),
    "coordinate" : {
        "latitude" : 23.2342987,
        "longitude" : 90.20348
    }
}

After that, we built the geospatial index on the latitude/longitude pairs by calling the ensureIndex() method on the collection:

db.map.ensureIndex({coordinate: "2d"})

Next, we invoked the system.indexes.find() method, which lists the indexes in the database. The index we created should be in that list:

> db.system.indexes.find()
{ "name" : "_id_", "ns" : "geolocation.map", "key" : { "_id" : 1 }, "v" : 0 }
{ "_id" : ObjectId("4e46af48ffd7d5fd0a4d1e41"), "ns" : "geolocation.map", "key" : { "coordinate" : "2d" }, "name" : "coordinate_2d" }

Geospatial indexing – Important things to know

There are a few things you must know about geospatial indexing:

There can be only one geospatial index for a MongoDB collection; you cannot create a second one.
The index must be created on an embedded document or an array field of the document. If you build the index on an array field, the first two elements of the array will be considered the (x,y) coordinate:

> db.map.insert({coordinate: [23.3459835, 90.92348]})
> db.map.ensureIndex({coordinate: "2d"})

Ordering is important when you are storing coordinates. If you store them in the order (y,x) rather than (x,y), you will have to query the collection with (y,x).

Use arrays to store coordinates

When storing coordinates in a geospatially indexed field, arrays are preferable to embedded objects. This is because an array preserves the order of items in it. No matter what programming language you are using to interact with MongoDB, this comes in very handy when you do queries.
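With the index in place, the payoff is proximity queries. Here is a brief sketch in the mongo shell, using the sample documents inserted above (the query point itself is arbitrary); the $near operator returns documents sorted by distance from the given point:

> use geolocation
> db.map.find({coordinate: {$near: [23.24, 90.21]}}).limit(2)

The limit() cursor method caps how many neighbours come back; without it, $near returns at most 100 documents by default.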
qooxdoo: Working with Layouts

Packt
27 Dec 2011
16 min read
(For more resources on this topic, see here.) qooxdoo uses the generic terminology of graphical user interfaces, so it is very easy to understand the concepts involved. The basic building block in qooxdoo is termed a widget. Each widget (GUI component) is a subclass of the Widget class. A widget can also act as a container to hold more widgets. Wherever possible, grouping widgets to form a reusable component or custom widget is a good idea. This allows you to maintain consistency across your application and also helps you build the application quicker than usual. It also increases maintainability, as you need to fix a defect in only one place. qooxdoo also provides a set of containers to carry widgets, along with public methods to manage them. Let's start with the framework's class hierarchy:

Base classes for widgets
The qooxdoo framework abstracts the common functionalities required by all the widgets into a few base classes, so that they can be reused by any class through object inheritance. Let's start with these base classes.

qx.core.Object
Object is the base class for all other qooxdoo classes, either directly or indirectly. The qx.core.Object class implements most of the common functionalities, such as object management, logging, event handling, object-oriented features, and so on. A class can extend the qx.core.Object class to get all the functionalities defined in this class. When you want to add any functionality to your class, just inherit the Object class and add the extra functionality in the subclass. The major functionalities of the Object class are explained in the sections that follow.

Object management
The Object class provides the following methods for object management, such as creation, destruction, and so on:

base(): This method calls the base class method
dispose(): This method disposes or destroys the object
isDisposed(): This method returns a true value if the object is disposed
toString(): This method returns the object in string format
toHashCode(): This method returns the hash code of the object

Event handling
The Object class provides the following methods for event creation, event firing, event listening, and so on:

addListener(): This method adds a listener on the event target and returns the ID of the listener
addListenerOnce(): This method adds a listener that listens only to the first occurrence of the event
dispatchEvent(): This method dispatches the event
fireDataEvent(): This method fires a data event
fireEvent(): This method fires an event
removeListener(): This method removes the listener
removeListenerById(): This method removes the listener by its ID, as returned by addListener()

Logging
The Object class provides the following methods to log messages at different levels:

warn(): Logs the message at warning level
info(): Logs the message at information level
error(): Logs the message at error level
debug(): Logs the message at debugging level
trace(): Logs the message at tracing level

Also, the Object class provides the setter and getter methods for properties, and so on.

qx.ui.core.LayoutItem
LayoutItem is the topmost class in the widget hierarchy. You can place only layout items in a layout manager. LayoutItem is an abstract class. The LayoutItem class mainly provides properties, such as height, width, margins, shrinking, growing, and many more, for the item to be drawn on the screen. It also provides a set of public methods to alter these properties, as the short sketch below shows.
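The following is a minimal sketch exercising both base classes on a single widget: the sizing and margin properties come from LayoutItem, while the listener and logging calls come from qx.core.Object (every widget inherits both). The concrete pixel values and the "appear" reaction are arbitrary choices for illustration:

// a widget is also a LayoutItem: alter its layout properties
var widget = new qx.ui.core.Widget();
widget.set({ width: 200, height: 100, margin: 10 });

// a widget is also a qx.core.Object: add a listener and log from it
widget.addListener("appear", function(e) {
  this.info("widget appeared on screen"); // info() logging inherited from Object
}, widget);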
Check the API documentation for the full set of class information.

qx.ui.core.Widget
Next in the class hierarchy is the Widget class, which is the base class for all the GUI components. Widget is the superclass for all the individual GUI components, such as button, text field, combobox, container, and so on, as shown in the class hierarchy diagram. There are different kinds of widgets, such as containers, menus, toolbars, form items, and so on; each kind of widget is defined in a different namespace. We will see all the different namespaces or packages, one by one, in this article. A widget consists of at least three HTML elements. The container element, which is added to the parent widget, has two child elements: the decoration element and the content element. The decoration element decorates the widget. It has a lower z-index and contains markup to render the widget's background and border styles, using an implementation of the qx.ui.decoration.IDecorator interface. The content element is positioned inside the container element, respecting the configured padding, and contains the real widget element.

Widget properties
Common widget properties include:

Visibility: This property controls the visibility of the widget. The possible values for this property are:
visible: Makes the widget visible on screen.
hidden: Hides the widget, but the widget's space will still be occupied in the parent widget's layout. This is similar to the CSS style visibility:hidden.
exclude: Hides the widget and removes it from the parent widget's layout, but the widget is still a child of its parent widget. This is similar to the CSS style display:none.
The methods to modify this property are show(), hide(), and exclude(). The methods to check the status are isVisible(), isHidden(), and isExcluded().

Tooltip: This property displays the tooltip when the cursor is pointing at the widget. The tooltip information consists of toolTipText and toolTipIcon. The different methods available to alter this property are:
setToolTip()/getToolTip(): Sets or returns the qx.ui.tooltip.ToolTip instance. The default value is null.
setToolTipIcon()/getToolTipIcon(): Sets or returns the URL for the icon. The default value is null.
setToolTipText()/getToolTipText(): Sets or returns the string text. It also supports HTML markup. The default value is null.

Text color: The textColor property sets the foreground text color of the widget. The possible values for this property are any color or null.

Padding: This property is a shorthand group property for paddingTop, paddingRight, paddingBottom, and paddingLeft of the widget. The available methods are setPadding() and resetPadding(), which set the values for the top, right, bottom, and left padding, in that order. If any values are missing, the opposite side's value will be taken for that side. Set/get methods for each individual padding side are also available.

Tab index: This property controls the traversal of widgets on the Tab key press. Possible values for this property are any integer or null. The traversal order is from lower value to higher value. By default, the tab index of the widgets is set in the order in which they are added to the container. If you want to provide a custom traversal order, set the tab indexes accordingly. The available methods are setTabIndex() and getTabIndex(). These methods, respectively, set and return an integer value (0 to 32000) or null.

Font: The font property defines the font for the widget. The possible value is either a font name defined in the theme, an instance of qx.bom.Font, or null.
The available methods are:

setFont(): Sets the font
getFont(): Retrieves the font
initFont(): Initializes the font
resetFont(): Resets the font

Enabled: This property enables or disables the widget for user input. Possible values are true or false (Boolean). The default value is true. The widget invokes all the input events only if it is in the enabled state. In the disabled state, the widget is grayed out and no user input is allowed. The only events invoked in the disabled state are mouseOver and mouseOut. In the disabled state, the tab index and widget focus are ignored; the tab traversal focus goes to the next enabled widget. setEnabled()/getEnabled() are the methods to set or get the Boolean value, respectively.

Selectable: This property says whether the widget contents are selectable. When a widget contains text data and the property is true, native browser selection can be used to select the contents. Possible values are true or false. The default value is false. setSelectable(), getSelectable(), initSelectable(), resetSelectable(), and toggleSelectable() are the methods available to modify the selectable property.

Appearance: This property controls the style of the element and identifies the theme for the widget. Possible values are any string defined in the theme; the default value is widget. setAppearance(), getAppearance(), initAppearance(), and resetAppearance() are the methods to alter the appearance.

Cursor: This property specifies which type of cursor to display on mouse over the widget. The possible values are any valid CSS2 cursor name defined by W3C (any string) and null. The default value is null. Some of the W3C-defined cursor names are default, wait, text, help, pointer, crosshair, move, n-resize, ne-resize, e-resize, se-resize, s-resize, sw-resize, w-resize, and nw-resize. setCursor(), getCursor(), resetCursor(), and initCursor() are the methods available to alter the cursor property.

qx.application
The starting point for a qooxdoo application is to write a custom application class by inheriting one of the qooxdoo application classes in the qx.application namespace or package. Similar to the main method in Java, a qooxdoo application also starts from the main method of the custom application class. qooxdoo supports three different kinds of applications:

Standalone: Uses the application root to build full-blown, standalone qooxdoo applications.
Inline: Uses the page root to build traditional web page-based applications, which are embedded into isles in a classic HTML page.
Native: This class is for applications that do not involve qooxdoo's GUI toolkit. Typically, they only make use of the IO (AJAX) and BOM functionality (for example, to manipulate the existing DOM).

Whenever a user creates an application with the Python script, a custom application class gets generated with a default main method. Let's see the custom application class generated for our Team Twitter application. After generation, the main function code is edited to add functionality to communicate with the RPC server and say "hello" to the qooxdoo world.
The following code is the content of the Application.js class file with an RPC call to communicate with the server:

/**
 * This is the main application class of your custom application "teamtwitter"
 */
qx.Class.define("teamtwitter.Application",
{
  extend : qx.application.Standalone,

  members :
  {
    /**
     * This method contains the initial application code and gets
     * called during startup of the application
     * @lint ignoreDeprecated(alert)
     */
    main : function()
    {
      // Call super class
      this.base(arguments);

      // Enable logging in debug variant
      if (qx.core.Variant.isSet("qx.debug", "on"))
      {
        // support native logging capabilities, e.g. Firebug for Firefox
        qx.log.appender.Native;
        // support additional cross-browser console. Press F7 to toggle visibility
        qx.log.appender.Console;
      }

      /* Below is your actual application code... */

      // Create a button
      var button1 = new qx.ui.form.Button("First Button", "teamtwitter/test.png");

      // Document is the application root
      var doc = this.getRoot();

      // Add button to document at fixed coordinates
      doc.add(button1, {left: 100, top: 50});

      // Add an event listener
      button1.addListener("execute", function(e)
      {
        var rpc = new qx.io.remote.Rpc();
        rpc.setCrossDomain(false);
        rpc.setTimeout(1000);
        var host = window.location.host;
        var proto = window.location.protocol;
        var webURL = proto + "//" + host + "/teamtwitter/.qxrpc";
        rpc.setUrl(webURL);
        rpc.setServiceName("qooxdoo.test");
        rpc.callAsync(function(result, ex, id)
        {
          if (ex == null) {
            alert(result);
          } else {
            alert("Async(" + id + ") exception: " + ex);
          }
        }, "echo", "Hello to qooxdoo World!");
      });
    }
  }
});

We've had an overview of the class hierarchy of the qooxdoo framework and got to know the base classes for the widgets. Now, we have an idea of the core functionalities available to the widgets, the core properties of the widgets, and the methods to manage those properties. We have also seen what an application class looks like in the qooxdoo framework. Now, it is time to learn about the containers.

Containers
A container is a kind of widget. It holds multiple widgets and exposes public methods to manage its child widgets. One can configure a layout manager for the container to position all the child widgets inside it. qooxdoo provides different containers for different purposes. Let's check the different containers provided by the qooxdoo framework and understand the purpose of each one. Once you understand the purpose of each container, you can select the right one when you design your application.

Scroll
Whenever the content widget's size (width and height) is larger than the container's size, the Scroll container automatically provides vertical, or horizontal, or both scroll bars. You have to set the Scroll container's size carefully to make it work properly. The Scroll container is most commonly used if the application screen size is large. The Scroll container has a fixed layout and can hold a single child, so there is no need to configure a layout for this container. The following code snippet demonstrates how to use the Scroll container:

// create scroll container
var scroll = new qx.ui.container.Scroll().set({
  width: 300,
  height: 200
});

// adding a widget with larger width and height than the scroll
scroll.add(new qx.ui.core.Widget().set({
  width: 600,
  minWidth: 600,
  height: 400,
  minHeight: 400
}));

// add to the root widget
this.getRoot().add(scroll);

The GUI look for the preceding code is as follows:

Stack
The Stack container puts each new widget on top of the previously added ones.
This container displays only the topmost widget. The Stack container is useful if there is a set of tasks to be carried out in a flow; an application user can work on each user interface, one by one, in order. The following code snippet demonstrates how to use the Stack container:

// create stack container
var stack = new qx.ui.container.Stack();

// add some children
stack.add(new qx.ui.core.Widget().set({backgroundColor: "red"}));
stack.add(new qx.ui.core.Widget().set({backgroundColor: "green"}));
stack.add(new qx.ui.core.Widget().set({backgroundColor: "blue"}));

this.getRoot().add(stack);

The GUI look for the preceding code is as follows:

Resizer
Resizer is a container that offers the flexibility of resizing at runtime. This container should be used only if you want to allow the application user to dynamically resize the container. The following code snippet demonstrates how to use the Resizer container:

var resizer = new qx.ui.container.Resizer().set({
  marginTop: 50,
  marginLeft: 50,
  width: 200,
  height: 100
});
resizer.setLayout(new qx.ui.layout.HBox());

var label = new qx.ui.basic.Label("Resize me <br>I'm resizable");
label.setRich(true);
resizer.add(label);

this.getRoot().add(resizer);

The GUI look for the preceding code is as follows:

Composite
This is a generic container. If you do not need any specific features, such as resizing at runtime, stacking, or scrolling, but just want a container, you can use this one. It is one of the most commonly used containers. The following code snippet demonstrates the Composite container usage. A horizontal layout is configured for the Composite container. A label and a text field are added to the container; the horizontal layout manager places them side by side:

// create the composite
var composite = new qx.ui.container.Composite();

// configure a layout
composite.setLayout(new qx.ui.layout.HBox());

// add some child widgets
composite.add(new qx.ui.basic.Label("Enter Text: "));
composite.add(new qx.ui.form.TextField());

// add to the root widget
this.getRoot().add(composite);

The GUI look for the preceding code is as follows:

Window
Window is a container that has all the familiar window features, such as minimize, maximize, restore, and close. The icons for these operations appear in the top-right corner. Different themes can be set to get the look and feel of a native window within the browser. This container is best used when an application requires a Multiple Document Interface (MDI) or Single Document Interface (SDI). The following code snippet demonstrates window creation and display:

var win = new qx.ui.window.Window("First Window");
win.setWidth(300);
win.setHeight(200);

// neglecting minimize button
win.setShowMinimize(false);

this.getRoot().add(win, {left: 20, top: 20});
win.open();

The GUI look for the preceding code is as follows:

TabView
The TabView container allows you to display multiple tabs, but only one tab is active at a time. The TabView container simplifies the GUI by avoiding expansive content spreading over a single scrolling page. Instead, the TabView container provides tab title buttons to navigate to the other tabs. You can group related fields into each tab and try to avoid scrolling by keeping the most-used tab as the first, active tab. Application users can move to the other tabs if required. TabView is the best example of stack container usage: it stacks all pages one over the other and displays one page at a time. Each page has a button at the top, in a button bar, to allow switching pages.
TabView allows positioning the button bar on the top, bottom, left, or right. TabView also allows adding pages dynamically; a scroll bar appears when the page buttons exceed the available size. The following code snippet demonstrates the usage of TabView:

var tabView = new qx.ui.tabview.TabView();

// create a page
var page1 = new qx.ui.tabview.Page("Layout", "icon/16/apps/utilities-terminal.png");

// add page to tabview
tabView.add(page1);

var page2 = new qx.ui.tabview.Page("Notes", "icon/16/apps/utilities-notes.png");
page2.setLayout(new qx.ui.layout.VBox());
page2.add(new qx.ui.basic.Label("Notes..."));
tabView.add(page2);

var page3 = new qx.ui.tabview.Page("Calculator", "icon/16/apps/utilities-calculator.png");
tabView.add(page3);

this.getRoot().add(tabView, {edge: 0});

The GUI look for the preceding code is as follows:

GroupBox
GroupBox groups a set of form widgets and shows an effective visualization with the use of a legend, which supports text and icons to describe the group. As with any container, you can configure a layout manager and add any number of form widgets to the GroupBox. Additionally, it is possible to use checkboxes or radio buttons within the legend. This allows you to provide group functionalities, such as selecting or unselecting all the options in the group. This feature is most important for complex forms with multiple choices. The following code snippet demonstrates the usage of GroupBox:

// group box
var grpBox = new qx.ui.groupbox.GroupBox("I am a box");
this.getRoot().add(grpBox, {left: 20, top: 70});

// radio group box
var rGrpBox = new qx.ui.groupbox.RadioGroupBox("I am a box");
rGrpBox.setLayout(new qx.ui.layout.VBox(4));
rGrpBox.add(new qx.ui.form.RadioButton("Option1"));
rGrpBox.add(new qx.ui.form.RadioButton("Option2"));
this.getRoot().add(rGrpBox, {left: 160, top: 70});

// check group box
var cGrpBox = new qx.ui.groupbox.CheckGroupBox("I am a box");
this.getRoot().add(cGrpBox, {left: 300, top: 70});

The GUI look for the preceding code is as follows:

We got to know the different containers available in the qooxdoo framework. Each container provides a particular functionality. Based on the information displayed on the GUI, you should choose the right container for better usability of the application. Containers are the outermost widgets in the GUI. Once you decide on the containers for your user interface, the next thing to do is to configure the layout manager for each container. The layout manager places the child widgets in the container on the basis of the configured layout manager's policies, as the short sketch below shows. Now, it's time to learn how to place and arrange widgets inside the container, that is, how to lay out the container.
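As a quick recap, here is a minimal sketch that combines the two ideas: a container (Composite) plus a layout manager (VBox) that positions the children according to its policy. The spacing value and the child widgets are arbitrary choices for illustration:

// a Composite laid out vertically; VBox(8) adds 8px spacing between children
var box = new qx.ui.container.Composite(new qx.ui.layout.VBox(8));
box.add(new qx.ui.basic.Label("Name:"));
box.add(new qx.ui.form.TextField());

// the root is itself a container; placement hints go in the second argument
this.getRoot().add(box, {left: 20, top: 20});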
Ajax: Basic Utilities

Packt
20 Dec 2011
8 min read
(For more resources on PHP Ajax, see here.)

Validating a form using Ajax
The main idea of Ajax is to get data from the server in real time without reloading the whole page. In this task we will build a simple form with validation using Ajax.

Getting ready
As a JavaScript library is used in this task, we will choose jQuery. We will download it (if we haven't done so already) and include it in our page. We need to prepare some dummy PHP code to retrieve the validation results. In this example, let's name it inputValidation.php. We are just checking for the existence of a param variable. If this variable is present in the GET request, we confirm the validation and send an OK status back to the page:

<?php
$result = array();
if(isset($_GET["param"])){
  $result["status"] = "OK";
  $result["message"] = "Input is valid!";
} else {
  $result["status"] = "ERROR";
  $result["message"] = "Input IS NOT valid!";
}
echo json_encode($result);
?>

How to do it...
Let's start with the basic HTML structure. We will define a form with three input boxes and one text area. Of course, it is placed in the <body> tag:

<body>
  <h1>Validating form using Ajax</h1>
  <form class="simpleValidation">
    <div class="fieldRow">
      <label>Title *</label>
      <input type="text" id="title" name="title" class="required" />
    </div>
    <div class="fieldRow">
      <label>Url</label>
      <input type="text" id="url" name="url" value="http://" />
    </div>
    <div class="fieldRow">
      <label>Labels</label>
      <input type="text" id="labels" name="labels" />
    </div>
    <div class="fieldRow">
      <label>Text *</label>
      <textarea id="textarea" class="required"></textarea>
    </div>
    <div class="fieldRow">
      <input type="submit" id="formSubmitter" value="Submit" disabled="disabled" />
    </div>
  </form>
</body>

For visual confirmation of the valid input, we will define CSS styles:

<style>
label{ width:70px; float:left; }
form{ width:320px; }
input, textarea{ width:200px; border:1px solid black; float:right; padding:5px; }
input[type=submit] { cursor:pointer; background-color:green; color:#FFF; }
input[disabled=disabled], input[disabled] { background-color:#d1d1d1; }
.fieldRow { margin:10px 10px; overflow:hidden; }
.failed { border: 1px solid red; }
</style>

Now, it is time to include jQuery and its functionality:

<script src="js/jquery-1.4.4.js"></script>
<script>
var ajaxValidation = function(object){
  var $this = $(object);
  var param = $this.attr('name');
  var value = $this.val();
  $.get("ajax/inputValidation.php",
    {'param':param, 'value':value },
    function(data) {
      if(data.status=="OK") validateRequiredInputs();
      else $this.addClass('failed');
    },"json");
}
var validateRequiredInputs = function (){
  var numberOfMissingInputs = 0;
  $('.required').each(function(index){
    var $item = $(this);
    var itemValue = $item.val();
    if(itemValue.length) {
      $item.removeClass('failed');
    } else {
      $item.addClass('failed');
      numberOfMissingInputs++;
    }
  });
  var $submitButton = $('#formSubmitter');
  if(numberOfMissingInputs > 0){
    $submitButton.attr("disabled", true);
  } else {
    $submitButton.removeAttr('disabled');
  }
}
</script>

We will also initialize the document ready function:

<script>
$(document).ready(function(){
  var timerId = 0;
  $('.required').keyup(function() {
    var $input = $(this); // capture the element; 'this' changes inside setTimeout
    clearTimeout(timerId);
    timerId = setTimeout(function(){
      ajaxValidation($input);
    }, 200);
  });
});
</script>

When everything is ready, our result is as follows:

How it works...
We created a simple form with three input boxes and one text area. Objects with the class required are automatically validated after each keyup event by calling the ajaxValidation function.
Our keyup functionality also includes the setTimeout function to prevent unnecessary calls if the user is still typing. The validation is based on two steps:

Validation of the actual input box: We are passing the inserted text to ajax/inputValidation.php via Ajax. If the response from the server is not OK, we mark this input box as failed. If the response is OK, we proceed to the second step.
Checking the other required fields in our form. When there is no failed input box left in the form, we enable the submit button.

There's more...
Validation in this example is really basic. We were just checking whether the response status from the server is OK. We will probably never meet a validation of a required field like we have here. In such a case, it's better to use the length property directly on the client side instead of bothering the server with a lot of requests, simply to check whether the required field is empty or filled. This task was just a demonstration of the basic validation method. It would be nice to extend it with regular expressions on the server side to directly check whether the URL is well-formed or the title already exists in our database, and let the user know what the problem is and how he/she can fix it.

Creating an autosuggest control
This recipe will show us how to create an autosuggest control. This functionality is very useful when we need to search within huge amounts of data. The basic functionality is to display a list of suggested data based on the text in the input box.

Getting ready
We can start with the dummy PHP page that will serve as a data source. When we call this script with the GET method and the variable string, it will return the list of records (names) that include the selected string:

<?php
$string = $_GET["string"];
$arr = array(
  "Adam",
  "Eva",
  "Milan",
  "Rajesh",
  "Roshan",
  // ...
  "Michael",
  "Romeo"
);
function filter($var){
  global $string;
  if(!empty($string))
    return strstr($var, $string);
}
$filteredArray = array_filter($arr, "filter");
$result = "";
foreach ($filteredArray as $key => $value){
  $row = "<li>".str_replace($string, "<strong>".$string."</strong>", $value)."</li>";
  $result .= $row;
}
echo $result;
?>

How to do it...
As always, we will start with HTML. We will define the form with one input box and an unordered list, datalistPlaceHolder:

<h1>Dynamic Dropdown</h1>
<form class="simpleValidation">
  <div class="fieldRow">
    <label>Skype name:</label>
    <div class="ajaxDropdownPlaceHolder">
      <input type="text" id="name" name="name" class="ajaxDropdown" autocomplete="OFF" />
      <ul class="datalistPlaceHolder"></ul>
    </div>
  </div>
</form>

When the HTML is ready, we will play with CSS:

<style>
label { width:80px; float:left; padding:4px; }
form{ width:320px; }
input, textarea{ width:200px; border:1px solid black; border-radius: 5px; float:right; padding:5px; }
input[type=submit] { cursor:pointer; background-color:green; color:#FFF; }
input[disabled=disabled] { background-color:#d1d1d1; }
.fieldRow { margin:10px 10px; overflow:hidden; }
.validationFailed { border: 1px solid red; }
.validationPassed { border: 1px solid green; }
.datalistPlaceHolder { width:200px; border:1px solid black; border-radius: 5px; float:right; padding:5px; display:none; }
ul.datalistPlaceHolder li { list-style: none; cursor:pointer; padding:4px; }
ul.datalistPlaceHolder li:hover { color:#FFF; background-color:#000; }
</style>

Now the real fun begins.
We will include the jQuery library and define our keyup events:

<script src="js/jquery-1.4.4.js"></script>
<script>
var timerId;
var ajaxDropdownInit = function(){
  $('.ajaxDropdown').keyup(function() {
    var string = $(this).val();
    clearTimeout(timerId);
    timerId = setTimeout(function(){
      $.get("ajax/dropDownList.php", {'string':string}, function(data) {
        if(data)
          $('.datalistPlaceHolder').show().html(data);
        else
          $('.datalistPlaceHolder').hide();
      });
    }, 500);
  });
}
</script>

When everything is set, we will call the ajaxDropdownInit function within the document ready function:

<script>
$(document).ready(function(){
  ajaxDropdownInit();
});
</script>

Our autosuggest control is ready. The following screenshot shows the output:

How it works...
The autosuggest control in this recipe is based on the input box and the list of items in datalistPlaceHolder. After each keyup event of the input box, datalistPlaceHolder loads the list of items from ajax/dropDownList.php via the Ajax function defined in ajaxDropdownInit. A good feature of this recipe is the timerId variable which, when used with the setTimeout method, allows us to send the request to the server only when we have stopped typing (in our case, for 500 milliseconds). It may not look so important, but it saves a lot of resources. We do not want to wait for the response for "M" typed in the input box when we have already typed in "Milan". Instead of 5 requests (150 milliseconds each), we have just one. Multiply it, for example, by 10,000 users per day and the effect is huge.

There's more...
We always need to keep in mind the format of the response from the server. In this recipe it is an HTML fragment, but it could equally be plain JSON:

[{ 'id':'1', 'contactName':'Milan' },...,{ 'id':'99', 'contactName':'Milan (office)' }]

Using JSON objects in JavaScript is not always useful from the performance point of view. Let's imagine we have 5000 contacts in one JSON file. It may take a while to build the HTML from 5000 objects on the client but, if we build the HTML on the server and return it inside the JSON object, the response will look as follows:

[{
  "status": "100",
  "responseMessage": "Everything is ok! :)",
  "data": "<li><h2><a href=\"#1\">Milan</a></h2></li>
           <li><h2><a href=\"#2\">Milan2</a></h2></li>
           <li><h2><a href=\"#3\">Milan3</a></h2></li>"
}]

In this case, we have the complete data as ready-made HTML and there is no need for any client-side logic to build a simple list of items.
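If you go the HTML-in-JSON route, the client-side handler stays almost as small as before. The following is a minimal sketch, assuming a hypothetical ajax/contactList.php endpoint that returns the status/data structure shown above; the endpoint name and the "100" status code are illustrative, and string is the value captured from the input box as in the recipe's keyup handler:

<script>
$.get("ajax/contactList.php", {'string': string}, function(response) {
  var result = response[0]; // the array wraps a single result object
  if (result.status == "100") {
    // the server already built the <li> markup, so we just inject it
    $('.datalistPlaceHolder').show().html(result.data);
  } else {
    $('.datalistPlaceHolder').hide();
  }
}, "json");
</script>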