
How-To Tutorials - CMS & E-Commerce

830 Articles

Breaching Wireless Security

Packt
01 Jul 2013
5 min read
(For more resources related to this topic, see here.)

Different types of attacks

We will now discuss each of these attacks briefly. Probing and discovery attacks are carried out by sending out probes and looking for wireless networks. We have used several tools for discovery so far, but they have all been passive in how they gather information. A passive probing tool can detect the SSID of a network even when it is cloaked, as we have shown with the Kismet tool. With active probing, we send out probes that contain the SSID; this type of probing will not discover a hidden or cloaked SSID. An active probing tool for this is NetStumbler (www.netstumbler.com). With an active probe, the tool sends out probes and elicits responses from the access points to gather information. It is very difficult to prevent an attacker from gathering information about our wireless access points, because an access point has to be available for connection; the most we can do is cloak or hide the SSID.

The next step an attacker will carry out is surveillance of the network. This is the technique we used with Kismet, airodump-ng, and ssidsniff. An example of the output of the Kismet tool is shown in the next screenshot. All three of these tools are passive, so they do not probe the network for information; they just capture it from the wireless frequency received from the network. Each of these tools can discover the hidden SSID of a network.

Once the attacker has discovered the target network, they will move to the surveillance step and attempt to gather more information about the target. For this, we can again use any of the three tools previously mentioned. The information an attacker is looking for is as follows:

- Whether or not the network is protected
- The encryption level used
- The signal strength and the GPS coordinates

When attackers scan a network, they are looking for an "easy" target. This is the motive of most attackers: they want an easy way in, and they almost always target the weakest link.

The next step an attacker will typically pursue is Denial of Service (DoS); unfortunately, this is one area we really cannot do much about. A wireless signal can be jammed using simple and inexpensive tools, so if an attacker wants to perform a DoS attack, there is not much we can do to prevent it. We will not spend any more time on this attack.

The next attack method is one that is shared between the "wired" network world and the wireless world. Masquerading, or spoofing as it is sometimes called, involves impersonating an authorized client on a network. One of the protection mechanisms we have within our wireless networks is the capability to restrict or filter a client based on their Media Access Control (MAC) address. This address belongs to the network card itself; it is how data is delivered on our networks. There are a number of ways to change the MAC address: there are dedicated tools, and we can also change it from the command line in Linux. The simplest way to change our MAC address is to use the macchanger tool. An example of how to use this tool to change an address is shown in the next screenshot.

In the Windows world, we can do it another way, but it involves editing the registry, which might be too difficult for some of you.
The hardware address is in the registry; you can find it by searching for the term wireless within the registry. An example of this registry entry is shown in the following screenshot.

The last category of attacks we will cover here is the rogue access point. This attack takes advantage of the fact that every wireless access point transmits at a particular power level. For this attack, we create an access point with more power than the access point we are masquerading as; this results in a stronger signal being received by the client software. When would anyone take a three-bar signal over a five-bar signal? The answer is: never, and that is why the attack is so powerful. An attacker can set up a rogue access point, and there is no way for most clients to tell whether the access point is real or not. There really is nothing you can do to stop this attack effectively, which is why it is a common attack in areas that have a public hotspot. We do, however, have a recommended mechanism you can use to help mitigate the impact of this type of attack. If you look at the example shown in the next screenshot, can you identify which one of the access points with the same name is the correct one? This is an example of what most clients see when they are using Windows. From this list, there is no way of knowing which one of the access points is the real one.

Summary

In this article we covered, albeit briefly, the steps that an attacker typically uses when preparing for an attack.

Resources for Article:

Further resources on this subject:
- Tips and Tricks on BackTrack 4 [Article]
- BackTrack Forensics [Article]
- BackTrack 5: Attacking the Client [Article]


Installing and Configuring Drupal Commerce

Packt
28 Jun 2013
8 min read
(For more resources related to this topic, see here.)

Installing Drupal Commerce to an existing Drupal 7 website

There are two approaches to installing Drupal Commerce; this recipe covers installing Drupal Commerce on an existing Drupal 7 website.

Getting started

You will need to download Drupal Commerce from http://drupal.org/project/commerce. Download the most recent recommended release that matches your Drupal 7 website's core version.

You will also require the following modules to allow Drupal Commerce to function:

- Ctools: http://drupal.org/project/ctools
- Entity API: http://drupal.org/project/entity
- Views: http://drupal.org/project/views
- Rules: http://drupal.org/project/rules
- Address Field: http://drupal.org/project/addressfield

How to do it...

Now that you're ready, install Drupal Commerce by performing the following steps:

1. Install the modules that Drupal Commerce depends on, by copying the preceding module files into your Drupal site's modules directory, sites/all/modules.
2. Install Drupal Commerce's modules next, by copying the files into the sites/all/modules directory, so that they appear in the sites/all/modules/commerce directory.
3. Enable the newly installed Drupal Commerce module in your Drupal site's administration panel (example.com/admin/modules if you've installed Drupal Commerce at example.com), under the Modules navigation option, by ensuring the checkbox to the left-hand side of the module name is checked.

Now that Drupal Commerce is installed, a new menu option will appear in the administration navigation at the top of your screen when you are logged in as a user with administration permissions. You may need to clear the cache to see this; navigate to Configuration | Development | Performance in the administration panel to do so.

How it works...

Drupal Commerce depends on a number of other Drupal modules to function, and by installing and enabling these in your website's administration panel you're on your way to getting your Drupal Commerce store off the ground. You can also install the Drupal Commerce modules via Drush (the Drupal Shell). For more information on Drush, see http://drupal.org/project/drush.

Installing Drupal Commerce with Commerce Kickstart 2

Drupal Commerce requires quite a number of modules, and doing a basic installation can be quite time-consuming, which is where Commerce Kickstart 2 comes in. It packages Drupal 7 core and all of the necessary modules. Using Commerce Kickstart 2 is a good idea if you are building a Drupal Commerce website from scratch and don't already have Drupal core installed.

Getting started

Download Commerce Kickstart 2 from its drupal.org project page at http://drupal.org/project/commerce_kickstart.

How to do it...

Once you have decompressed the Commerce Kickstart 2 files to the location you want to install Drupal Commerce in, perform the following steps.

Visit the given location in your web browser. For this example, it is assumed that your website is at example.com, so visit this address in your web browser. You'll see that you are presented with a welcome screen as shown in the following screenshot.

Click the Let's Get Started button underneath this, and the installer moves to the next configuration option. Next, your server's requirements are checked to ensure Drupal can run in this environment. In the preceding screenshot you can see some common problems that prevent Drupal from installing.
In particular, ensure that you create the /sites/default/files directory in your Drupal installation and ensure it has permissions that allow Drupal to write to it (this is where your website's images and files are stored). You will also need to copy the /sites/default/default.settings.php file to /sites/default/settings.php before you can start. Make sure this file is writeable by Drupal too (you'll secure it after installation is complete).

Once these problems have been resolved, refresh the page and you will be taken to the Set up database screen. Enter the database username, password, and database name you want to use with Drupal, and click on Save and continue.

The next step is the Install profile section, which can take some time as Drupal Commerce is installed for you. There's nothing for you to do here; just wait for installation to complete! You can now safely remove write permissions for the settings.php file in the /sites/default directory of your Drupal Commerce installation.

The next step is Configure site. Enter the name of your new store and your e-mail address here, and provide a username and password for your Drupal Commerce administrator account. Don't forget to make a note of these as you'll need them to access your website later! Below these options, you can specify the country of your server and the default time zone. These are usually picked up from your server itself, but you may want to change them.

Click on the Save and continue button to progress; the next step is Configure store. Here you can set your Default store country field (if it's different from your server settings) and opt to install Drupal Commerce's demo, which includes sample content and a sample Drupal Commerce theme too.

Further down on this screen, you're presented with more options. By checking the Do you want to be able to translate the interface of your store? field, Drupal Commerce provides you with the ability to translate your website for customers of different languages (for this simple store installation, leave this set to No). Finally, you can set the Default store currency field you wish to use, and whether you want Commerce Kickstart to set up a sales tax rule for your store (select whichever is more appropriate for your store, or leave it set to No sample tax rate for now).

Click on Create and finish at the bottom of the screen. If you chose to install the demo store in the previous screen, you will have to wait as it is added for you. There are now options to allow Drupal to check for updates automatically, and to receive e-mails about security updates. Leave these both checked to help you stay on top of keeping your Drupal website secure and up-to-date. Wait as Commerce Kickstart installs everything Drupal Commerce requires to run.

That's it! Your Drupal Commerce store is now up and running thanks to Commerce Kickstart 2.

How it works...

The Commerce Kickstart package includes Drupal 7 core and the Drupal Commerce module. By packaging these together, installation and initial configuration of your Drupal Commerce store are made much easier!

Creating your first product

Now that you've installed Drupal Commerce, you can start to add products to display to customers and start making money. In this recipe you will learn how to add a basic product to your Drupal Commerce store.
Getting started

Log in to your Drupal Commerce store's administration panel, and navigate to Products | Add a product. If you haven't already, navigate to Site settings | Modules and ensure that the Commerce Kickstart Menu module is enabled for your store. Note that the sample products from the Commerce Kickstart installation are displayed there.

How to do it...

To get started adding a product to your store, click on the Add product button and follow these steps:

1. Click on Product display. A product display groups multiple related product variations together for display on the frontend of your website.
2. Fill in the form that appears, entering a suitable Title, using the Body field for the product's description, and filling in the SKU (stock keeping unit; a unique reference for this product) and Price fields. Ensure that the Status field is set to Active. You can also optionally upload an image for the product here.
3. Optionally, you can assign the product to one of the pre-existing categories in the Product catalog tab underneath these fields, as well as give it a URL in the URL path settings tab.
4. Click on the Save product button, and you've now created a basic product in your store.
5. To view the product on the frontend of your store, navigate to the category listings if you imported Drupal Commerce's demo data, or else return to the Products menu and click on the name of the product in the Title column. You'll now see your product on the frontend of your Drupal Commerce store.

How it works...

In Drupal Commerce, a product can represent several things:

- A single product for sale (for example, a one-size-fits-all t-shirt)
- A variation of a product (for example, a medium-size t-shirt)
- An item that is not necessarily a purchase as such (for example, a donation to a charity)
- An intangible product which the site allows reservations for (for example, an event booking)

Product displays (for example, a blue t-shirt) are used to group product variations (for example, a medium-sized blue t-shirt and a large-sized blue t-shirt) and display them on your website to customers. So, depending on the needs of your Drupal Commerce website, products may be displayed on unique pages, or multiple products might be grouped onto one page as a product display.
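If you later need to create many products at once (for example, when migrating an existing catalogue), the same result can be achieved in code rather than through the form. The following is a minimal, hypothetical sketch assuming the standard Drupal Commerce 1.x API (commerce_product_new() and commerce_product_save()) and the default commerce_price field; the product type, field names, and price format should be checked against your own installation.

<?php
// Minimal sketch: create a basic product programmatically (Drupal Commerce 1.x).
// 'EXAMPLE-SKU-001' and 'Example T-Shirt' are placeholder values.
$product = commerce_product_new('product');        // default product type
$product->sku   = 'EXAMPLE-SKU-001';
$product->title = 'Example T-Shirt';
$product->uid   = 1;                               // owned by the admin user
// Prices are stored in minor units plus a currency code (1999 = $19.99).
$product->commerce_price[LANGUAGE_NONE][0]['amount'] = 1999;
$product->commerce_price[LANGUAGE_NONE][0]['currency_code'] = 'USD';
commerce_product_save($product);

A product created this way still needs a product display to be visible on the frontend, just as in the steps above.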


Responsive Design with Media Queries

Packt
19 Jun 2013
6 min read
(For more resources related to this topic, see here.) Web design for a multimedia web world As noted in the introduction to this article, recent times have seen an explosion in the variety of media through which people interact with websites, particularly the way smart phones and tablets are defining the browsing experience more and more. Moreover, as noted, a web page design that is appropriate may be necessary for a wide-screen experience but is often inappropriate, overly cluttered, or just plain dysfunctional on a tiny screen. The solution is Media Queries—a new element of CSS stylesheets introduced with CSS3. But before we examine new media features in CSS3, it will be helpful to understand the basic evolutionary path that led to the development of CSS3 Media Queries. That background will be useful both in getting our heads around the concepts involved and because in the crazy Wild West state of browsing environments these days (with emerging and yet-unresolved standards conflicts), designing for the widest range of media requires combining new CSS3 Media Queries with older CSS Media detection tools. We'll see how this plays out in real life near the end of this article, when we examine particular challenges of creating Media Queries that can detect, for example, an Apple iPhone. How Media Queries work Let's look at an example. If you open the Boston Globe (newspaper) site (http://www.bostonglobe.com/) in a browser window the width of a laptop, you'll see a three-column page layout (go ahead, I'll wait while you check; or just take a look at the following example). The three-column layout works well in laptops. But in a smaller viewport, the design adjusts to present content in two columns, as shown in the following screenshot: The two-column layout is the same HTML page as the three-column layout. And the content of both pages (text, images, media, and so on) is the same. The crew at the Globe do not have to build a separate home page for tablets or smartphones. But a media query has linked a different CSS file that displays in narrower viewports. A short history of Media Queries Stepping back in time a bit, the current (pre-CSS3) version of CSS could already detect media, and enable different stylesheets depending on the media. Moreover, Dreamweaver CS6 (also CS5.5, CS5, and previous versions) provided very nice, intuitive support for these features. The way this works in Dreamweaver is that when you click the Attach Style Sheet icon at the bottom of the CSS Styles panel (with a web page open in Dreamweaver's Document window), the Attach External Style Sheet dialog appears. The Media popup in the dialog allows you to attach a stylesheet specifically designed for print, aural (to be read out loud by the reader software), Braille, handheld devices, and other "traditional" output options, as well as newer CSS3-based options. The handheld option, shown in the following screenshot, was available before CSS3: So, to summarize the evolutionary path, detecting media and providing a custom style for that media is not new to HTML5 and its companion CSS3, and there is support for those features in Dreamweaver CS6. Detecting and synchronizing styles with defined media has been available in Dreamweaver. However, what is relatively new is the ability to detect and supply defined stylesheets for specific screen sizes. And that new feature opens the door to new levels of customized page design for specific media. HTML5, CSS3, and Media Queries With HTML5 and CSS3, Media Queries have been expanded. 
We can now define all kinds of criteria for selecting a stylesheet to apply to a viewing environment, including orientation (whether a mobile phone, tablet, and so on is held in the portrait [up-down] or landscape [sideways] view), whether the device displays color, the shape of the viewing area, and—of most value—the width and height of the viewing area. All these options present a multitude of possibilities for creating custom stylesheets for different viewing environments. In fact, they open up a ridiculously large array of possibilities. But for most designers, simply creating three appropriate stylesheets, one for laptop/desktop viewing, one for mobile phones, and one for tablets, is sufficient. In order to define the criteria for which stylesheet will be applied in a given environment, HTML5 and CSS3 allow us to use if-then statements. So, for example, if we are assigning a stylesheet to tablets, we might specify that if the width of the viewing area is greater than that of a cell phone, but smaller than that of a laptop screen, we want the tablet stylesheet to be applied.

Styling for mobile devices and tablets

While a full exploration of the aesthetic dimensions of creating styles for different media is beyond the scope of our mission in this book, it is worth noting a few basic "dos and don'ts" vis-à-vis styling for mobile devices. I'll be back with more detailed advice on mobile styling later in this article, but in a word, the challenge is: simplify. In general, this means applying many or all of the following adjustments to your pages:

- Smaller margins
- Larger (more readable) type
- Much less complex backgrounds; no image backgrounds
- No sidebars or floated content (content around which other content wraps)
- Often, no containers that define page width

Design advice online: If you search for "css for mobile devices" online, you'll find thousands of articles with different perspectives and advice on designing web pages that can be easily accessed with handheld devices.

Media Queries versus jQuery Mobile and apps

Before moving to the technical dimension of building pages with responsive design using Media Queries, let me briefly compare and contrast Media Queries with the two other options available for displaying content differently on fullscreen and mobile devices. One option is an app. Apps (short for applications) are full-blown computer programs created in a high-level programming language. Dreamweaver CS6 includes new tools to connect with and generate apps through the online PhoneGap resources. The second option is a jQuery Mobile site. jQuery Mobile sites are based on JavaScript. But, as we'll see later in this book, you don't need to know JavaScript to build jQuery Mobile sites. The main difference between jQuery Mobile sites and Media Query sites with mobile-friendly designs is that jQuery Mobile sites require different content, while Media Query sites simply repackage the same content with different stylesheets.

Which approach should you use, Media Queries or jQuery Mobile? That is a judgment call. What I can advise here is that Media Queries provide the easiest way to create and maintain a mobile version of your site.


Choosing your shipping method

Packt
19 Jun 2013
9 min read
(For more resources related to this topic, see here.)

Getting ready

To view and edit our shipping methods we must first navigate to System | Configuration | Shipping Methods. Remember, our Current Configuration Scope field is important, as shipping methods can be set on a per-website scope basis. There are many shipping methods available by default, but the main generic methods are Flat Rate, Table Rates, and Free Shipping. By default, Magento comes with the Flat Rate method enabled. We are going to start off by disabling this shipping method.

Be careful when disabling shipping methods: if we leave our Magento installation without any active shipping methods, then no orders can be placed and the customer will be presented with this error in the checkout: Sorry, no quotes are available for this order at this time. Likewise, manual orders placed through the administration panel will receive the same error.

How to do it...

1. To disable our Flat Rate method, navigate to its configuration options in System | Configuration | Shipping Methods | Flat Rate, choose Enabled as No, and click on Save. The following screenshot highlights our current configuration scope and disabled Flat Rate method.
2. Next we need to configure our Table Rates method: click on the Table Rates tab and set Enabled to Yes, within Title enter National Delivery, and within Method Name enter Shipping. Finally, for the Condition option select Weight vs. Destination (all the other information can be left as default as it will not affect our pricing for this scenario).
3. To upload our spreadsheet for our new Table Rates method we need to first change our scope (shipping rates imported via a .csv file are always entered at a website view level). To do this, select Main Website (this wording can differ depending on System | Manage Stores settings) from our Current Configuration Scope field. The following screenshot shows the change in input fields when our configuration scope has changed.
4. Click on the Export CSV button and we should start downloading a blank .csv file (or, if there are rates already, it will give us our active rates).
5. Next we will populate our spreadsheet with the following information (shown in the screenshot) so that we can ship to anywhere in the USA.
6. After finishing our spreadsheet we can now import it: with our Current Configuration Scope field set to our website view, click on the Choose File/Browse button and upload it. Once the browser has uploaded the file we can click on Save.
7. Next we are going to configure our Free Shipping method to run alongside our Table Rates method. To start with, switch back to our Default Config scope and then click on the Free Shipping tab.
8. Within this tab we will set Enabled to Yes and Minimum Order Amount to 50. We can leave the other options as default.

How it works...

The following is a brief explanation of each of our main shipping methods.

Flat Rate

The Flat Rate method allows us to specify a fixed shipping charge to be applied either per item or per order. The Flat Rate method also allows us to specify a handling fee—a percentage or fixed amount surcharge on top of the flat rate fee. With this method we can also specify which countries we wish to make this shipping method applicable for (dependent solely on the customer's shipping address details). Unlike the Table Rates method, you cannot specify multiple flat rates for any given region of a country, nor can you specify flat rates individually per country.
Table Rates

The Table Rates method uses a spreadsheet of data to increase the flexibility of our shipping charges by allowing us to apply different prices to our orders depending on the criteria we specify in the spreadsheet. Along with the liberty to specify which countries this method is applicable for, and the option to apply a handling fee, the Table Rates method also allows us to choose from a variety of shopping cart conditions. The condition we select affects the data that we can import via the spreadsheet.

Inside this spreadsheet we can specify hundreds of rows of countries along with their specific states or Zip/Postal Codes. Each row has a condition such as weight (and above) and also a specific price. If a shopping cart matches the criteria entered on any of the rows, the shipping price will be taken from that row and applied to the cart. In our example we have used Weight vs. Destination; there are two other conditions that come with a default Magento installation and could be used to calculate the shipping:

- Price vs. Destination: This condition takes into account the Order Subtotal (and above) amount in whichever currency is currently set for the store
- # of Items vs. Destination: This condition calculates the shipping cost based on the # of Items (and above) within the customer's basket

Free Shipping

The Free Shipping method is one of the simplest and most commonly used of all the methods that come with a default Magento installation. One of the best ways to increase the conversion rate of your Magento store is to offer your customers free shipping, and Magento allows you to do this with its Free Shipping method. Selecting the countries that this method is applicable for and inputting a minimum order amount as the criteria will enable this method in the checkout for any matching shopping cart. Unfortunately, you cannot specify regions of a country within this method (although you can still offer a free shipping solution through table rates and promotional rules).

Our configuration

As mentioned previously, the Table Rates method provides us with three types of conditions. In our example we created a table rate spreadsheet that relies on the weight information of our products to work out the shipping price.

Magento's default Free Shipping method is one of the most popular and useful shipping methods, and its most important configuration option is Minimum Order Amount. Setting this value to 50 tells Magento that any shopping cart with a subtotal greater than $50 should offer the Free Shipping method for the customer to use; we can see this demonstrated in the following screenshot.

The Enabled option is a standard feature among nearly all shipping method extensions. Whenever we wish to enable or disable a shipping method, all we need to do is set it to Yes to enable it or No to disable it.

Once we have configured our Table Rates extension, Magento will use the values entered by our customer and try to match them against our imported data. In our case, if a customer has ordered a product weighing 2.5 kg and they live anywhere in the USA, they will be presented with our $6.99 price. However, a drawback of our example is that if they live outside of the USA, our shipping method will not be available.

The .csv file for our Weight vs. Destination spreadsheet is slightly different to the spreadsheet used for the other Table Rates conditions.
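As an illustration only, a hypothetical Weight vs. Destination rates file matching the example above might look like the following sketch; treat the blank CSV you export from your own store as the authoritative template, since the exact header text and column order come from that file.

"Country","Region/State","Zip/Postal Code","Weight (and above)","Shipping Price"
"USA","*","*","0","6.99"
"USA","*","*","10","12.99"

Here the asterisk is a wildcard meaning "any", each weight value acts as a "from" weight, and the second row (10 kg and above at $12.99) is purely an invented extra tier to show how additional rows work; see the key points after this example.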
Because each condition expects different columns, it is important to make sure that if we change our condition, we export a fresh spreadsheet with the correct column information.

One very important point when editing our shipping spreadsheets is the format of the file—programs such as Microsoft Excel sometimes save in an incompatible format. It is recommended to use the free, downloadable Open Office suite to edit any of Magento's spreadsheets, as it saves the file in a compatible format. We can download Open Office from www.openoffice.org. If there is no alternative but to use Microsoft Excel, then we must ensure we save as CSV for Windows or alternatively CSV (Comma Delimited).

A few key points when editing the Table Rates spreadsheet:

- The * (asterisk) is a wildcard—similar to saying ANY
- Weight (and above) is really a FROM weight and will set the price UNTIL the next row value that is higher than itself (for the matching Country, Region/State, and Zip/Postal Code)—the downside of this is that you cannot set a maximum weight limit
- The Country column takes three-letter codes—ISO 3166-1 alpha-3 codes
- The Zip/Postal Code column takes either a full USA ZIP code or a full postal code
- The Region/State column takes all two-letter state codes from the USA or any other codes that are available in the drop-down select menus for regions on the checkout pages of Magento

One final note is that we can run as many shipping methods as we like at the same time—just as we did with our Free Shipping method and our Table Rates method.

There's more...

For more information on setting up the many shipping methods that are available within Magento, please see the following link: http://innoexts.com/magento-shipping-methods

We can also enable and disable shipping methods on a per-website view basis, so, for example, we could disable a shipping method for our French store.

Disabling Free Shipping for the French website

If we wanted to disable our Free Shipping method for just our French store, we could change our Current Configuration Scope field to our French website view and then perform the following steps:

1. Navigate to System | Configuration | Shipping Methods and click on the Free Shipping tab.
2. Uncheck Use Default next to the Enabled option, set Enabled to No, and then click on Save Config.

We can see that Magento normally defaults all of our settings to the Default Config scope; by unchecking the Use Default checkbox we can edit our method for our chosen store view.

Summary

This article explored the differences between the Flat Rate, Table Rates, and Free Shipping methods, and showed how to disable a shipping method and configure Table Rates.

Resources for Article:

Further resources on this subject:
- Magento Performance Optimization [Article]
- Magento: Exploring Themes [Article]
- Getting Started with Magento Development [Article]


Magento Fundamentals for Developers

Packt
11 Jun 2013
13 min read
(For more resources related to this topic, see here.)

Zend Framework – the base of Magento

As you probably know, Magento is the most powerful e-commerce platform in the market; what you might not know about Magento is that it is also an object-oriented (OO) PHP framework developed on top of Zend Framework. Zend's official site describes the framework as:

Zend Framework 2 is an open source framework for developing web applications and services using PHP 5.3+. Zend Framework 2 uses 100% object-oriented code and utilises most of the new features of PHP 5.3, namely namespaces, late static binding, lambda functions and closures. The component structure of Zend Framework 2 is unique; each component is designed with few dependencies on other components. ZF2 follows the SOLID object oriented design principle. This loosely coupled architecture allows developers to use whichever components they want. We call this a "use-at-will" design.

But what is Zend Framework exactly? Zend Framework is an OO framework developed in PHP that implements the Model-View-Controller (MVC) paradigm. When Varien, now Magento Inc., started developing Magento, it decided to build it on top of Zend because of the following components:

- Zend_Cache
- Zend_Acl
- Zend_DB
- Zend_Pdf
- Zend_Currency
- Zend_Date
- Zend_Soap
- Zend_Http

In total, Magento uses around 15 different Zend components. The Varien library directly extends several of the Zend components mentioned previously; for example, Varien_Cache_Core extends Zend_Cache_Core.

Using Zend Framework, Magento was built with the following principles in mind:

- Maintainability: Achieved by using code pools to keep the core code separate from local customizations and third-party modules
- Upgradability: Magento's modularity allows extensions and third-party modules to be updated independently from the rest of the system
- Flexibility: Allows seamless customization and simplifies the development of new features

Although using Zend Framework, or even understanding it, is not a requirement for developing with Magento, having at least a basic understanding of the Zend components, their usage, and their interaction can be invaluable when we start digging deeper into the core of Magento. You can learn more about Zend Framework at http://framework.zend.com/.

Magento folder structure

Magento's folder structure is slightly different from other MVC applications; let's take a look at the directory tree, and each directory and its function:

- app: This folder is the core of Magento and is subdivided into three important directories:
  - code: This contains all our application code, divided into three code pools: core, community, and local
  - design: This contains all the templates and layouts for our application
  - locale: This contains all the translation and e-mail template files used for the store
- js: This contains all the JavaScript libraries that are used in Magento
- media: This contains all the images and media files for our products and CMS pages, as well as the product image cache
- lib: This contains all the third-party libraries used in Magento such as Zend and PEAR, as well as the custom libraries developed by Magento, which reside under the Varien and Mage directories
- skin: This contains all CSS code, images, and JavaScript files used by the corresponding theme
- var: This contains our temporary data such as cache files, index lock files, sessions, import/export files, and, in the case of the Enterprise edition, the full page cache folders

Magento is a modular system.
This means that the application, including the core, is divided into smaller modules. For this reason, the folder structure plays a key role in the organization of each module; a typical Magento module folder structure would look something like the following figure. Let's review each folder in more detail:

- Block: This folder contains blocks in Magento, which form an additional layer of logic between the controllers and views
- controllers: controllers folders are formed by actions that process web server requests
- Controller: The classes in this folder are meant to be abstract classes, extended by the controller classes under the controllers folder
- etc: Here we can find the module-specific configuration in the form of XML files such as config.xml and system.xml
- Helper: This folder contains auxiliary classes that encapsulate common module functionality and make it available to classes of the same module and to other modules' classes as well
- Model: This folder contains models that support the controllers in the module for interacting with data
- sql: This folder contains the installation and upgrade files for each specific module

As we will see later on in this article, Magento makes heavy use of factory names and factory methods. This is why the folder structure is so important.

Modular architecture

Rather than being one large application, Magento is built from smaller modules, each adding specific functionality to Magento. One of the advantages of this approach is the ability to enable and disable specific module functionality with ease, as well as add new functionality by adding new modules.

Autoloader

Magento is a huge framework, composed of close to 30,000 files. Requiring every single file when the application starts would make it incredibly slow and heavy. For this reason, Magento makes use of an autoloader class to find the required files each time a factory method is called.

So, what exactly is an autoloader? PHP5 includes a function called __autoload(). When instantiating a class, the __autoload() function is automatically called; inside this function, custom logic is defined to parse the class name and load the required file. Let's take a closer look at the Magento bootstrap code located at app/Mage.php:

…
Mage::register('original_include_path', get_include_path());
if (defined('COMPILER_INCLUDE_PATH')) {
    $appPath = COMPILER_INCLUDE_PATH;
    set_include_path($appPath . PS . Mage::registry('original_include_path'));
    include_once "Mage_Core_functions.php";
    include_once "Varien_Autoload.php";
} else {
    /**
     * Set include path
     */
    $paths[] = BP . DS . 'app' . DS . 'code' . DS . 'local';
    $paths[] = BP . DS . 'app' . DS . 'code' . DS . 'community';
    $paths[] = BP . DS . 'app' . DS . 'code' . DS . 'core';
    $paths[] = BP . DS . 'lib';
    $appPath = implode(PS, $paths);
    set_include_path($appPath . PS . Mage::registry('original_include_path'));
    include_once "Mage/Core/functions.php";
    include_once "Varien/Autoload.php";
}
Varien_Autoload::register();

The bootstrap file takes care of defining the include paths and initializing the Varien autoloader, which will in turn register its own autoload function as the default function to call. Let's take a look under the hood and see what the Varien autoload function is doing:

/**
 * Load class source code
 *
 * @param string $class
 */
public function autoload($class)
{
    if ($this->_collectClasses) {
        $this->_arrLoadedClasses[self::$_scope][] = $class;
    }
    if ($this->_isIncludePathDefined) {
        $classFile = COMPILER_INCLUDE_PATH . DIRECTORY_SEPARATOR . $class;
    } else {
        $classFile = str_replace(' ', DIRECTORY_SEPARATOR, ucwords(str_replace('_', ' ', $class)));
    }
    $classFile .= '.php';
    //echo $classFile;die();
    return include $classFile;
}

The autoload function takes a single parameter called $class, which is an alias provided by the factory method. This alias is processed to generate a matching class name that is then included. As we mentioned before, Magento's directory structure is important because Magento derives its class names from the directory structure. This convention is the core principle behind the factory methods that we will be reviewing later on in this article.

Code pools

As we mentioned before, inside our app/code folder the application code is divided into three different directories known as code pools. They are as follows:

- core: This is where the Magento core modules that provide the base functionality reside. The golden rule among Magento developers is that you should never, under any circumstances, modify any files under the core code pool.
- community: This is the location where third-party modules are placed. They are either provided by third parties or installed through Magento Connect.
- local: This is where all the modules and code developed specifically for this instance of Magento reside.

The code pools identify where the module came from and in which order the pools should be searched. If we take another look at the Mage.php bootstrap file, we can see the order in which code pools are loaded:

$paths[] = BP . DS . 'app' . DS . 'code' . DS . 'local';
$paths[] = BP . DS . 'app' . DS . 'code' . DS . 'community';
$paths[] = BP . DS . 'app' . DS . 'code' . DS . 'core';
$paths[] = BP . DS . 'lib';

This means that for each class request, Magento will look in local, then community, then core, and finally inside the lib folder. This also leads to an interesting behavior that can easily be used for overriding core and community classes, by just copying the directory structure and matching the class name. Needless to say, this is a terrible practice, but it is still useful to know about, just in case you someday have to take care of a project that exploits this behavior.

Routing and request flow

Before going into more detail about the different components that form part of Magento, it is important that we understand how these components interact and how Magento processes requests coming from the web server. As with any other PHP application, we have a single file as an entry point for every request; in the case of Magento this file is index.php, which is in charge of loading the Mage.php bootstrap class and starting the request cycle. The request then goes through the following steps:

1. The web server receives the request and Magento is instantiated by calling the bootstrap file, Mage.php.
2. The frontend controller is instantiated and initialized; during this controller initialization Magento searches for the web routes and instantiates them.
3. Magento then iterates through each of the routers and calls match(). The match method is responsible for processing the URL and determining the corresponding controller and action.
4. Magento then instantiates the matching controller and calls the corresponding action.

Routers are especially important in this process. The Router objects are used by the frontend controller to match a requested URL (route) to a module controller and action.
By default, Magento comes with the following routers:

- Mage_Core_Controller_Varien_Router_Admin
- Mage_Core_Controller_Varien_Router_Standard
- Mage_Core_Controller_Varien_Router_Default

The action controller will then load and render the layout, which in turn will load the corresponding blocks, models, and templates.

Let's analyze how Magento will handle a request to a category page; we will use http://localhost/catalog/category/view/id/10 as an example. Magento URIs are comprised of three parts: /FrontName/ControllerName/ActionName. This means that for our example URL, the breakdown would be as follows:

- FrontName: catalog
- ControllerName: category
- ActionName: view

If I take a look at the Magento router class, I can see the Mage_Core_Controller_Varien_Router_Standard match function:

public function match(Zend_Controller_Request_Http $request)
{
    …
    $path = trim($request->getPathInfo(), '/');
    if ($path) {
        $p = explode('/', $path);
    } else {
        $p = explode('/', $this->_getDefaultPath());
    }
    …
}

From the preceding code, we can see that the first thing the router tries to do is to parse the URI into an array. Based on our example URL, the corresponding array would be something like the following code snippet:

$p = Array(
    [0] => catalog
    [1] => category
    [2] => view
)

The next part of the function will first try to check if the request has the module name specified; if not, then it tries to determine the module name based on the first element of our array. And if a module name can't be provided, then the function will return false. Let's take a look at that part of the code:

// get module name
if ($request->getModuleName()) {
    $module = $request->getModuleName();
} else {
    if (!empty($p[0])) {
        $module = $p[0];
    } else {
        $module = $this->getFront()->getDefault('module');
        $request->setAlias(Mage_Core_Model_Url_Rewrite::REWRITE_REQUEST_PATH_ALIAS, '');
    }
}
if (!$module) {
    if (Mage::app()->getStore()->isAdmin()) {
        $module = 'admin';
    } else {
        return false;
    }
}

Next, the match function will iterate through each of the available modules and try to match the controller and action, using the following code:

…
foreach ($modules as $realModule) {
    $request->setRouteName($this->getRouteByFrontName($module));

    // get controller name
    if ($request->getControllerName()) {
        $controller = $request->getControllerName();
    } else {
        if (!empty($p[1])) {
            $controller = $p[1];
        } else {
            $controller = $front->getDefault('controller');
            $request->setAlias(
                Mage_Core_Model_Url_Rewrite::REWRITE_REQUEST_PATH_ALIAS,
                ltrim($request->getOriginalPathInfo(), '/')
            );
        }
    }

    // get action name
    if (empty($action)) {
        if ($request->getActionName()) {
            $action = $request->getActionName();
        } else {
            $action = !empty($p[2]) ? $p[2] : $front->getDefault('action');
        }
    }

    // checking if this place should be secure
    $this->_checkShouldBeSecure($request, '/'.$module.'/'.$controller.'/'.$action);

    $controllerClassName = $this->_validateControllerClassName($realModule, $controller);
    if (!$controllerClassName) {
        continue;
    }

    // instantiate controller class
    $controllerInstance = Mage::getControllerInstance($controllerClassName, $request, $front->getResponse());

    if (!$controllerInstance->hasAction($action)) {
        continue;
    }

    $found = true;
    break;
}
...

Now that looks like an awful lot of code, so let's break it down even further. The first part of the loop will check if the request has a controller name; if it is not set, it will check our parameter array's ($p) second value and try to determine the controller name, and then it will try to do the same for the action name.
If we got this far in the loop, we should have a module name, a controller name, and an action name, which Magento will now use to try and get a matching controller class name by calling the following function:

$controllerClassName = $this->_validateControllerClassName($realModule, $controller);

This function will not only generate a matching class name but will also validate its existence; in our example this function should return Mage_Catalog_CategoryController.

Since we now have a valid class name, we can proceed to instantiate our controller object. If you were paying attention up to this point, you have probably noticed that we haven't done anything with our action yet, and that's precisely the next step in our loop. Our newly instantiated controller comes with a very handy function called hasAction(); in essence, what this function does is call a PHP function called is_callable(), which will check if our current controller has a public function matching the action name; in our case this will be viewAction(). The reason behind this elaborate matching process and the use of a foreach loop is that it is possible for several modules to use the same FrontName.

Now, http://localhost/catalog/category/view/id/10 is not a very user-friendly URL; fortunately, Magento has its own URL rewrite system that allows us to use http://localhost/books.html. Let's dig a little deeper into the URL rewrite system and see how Magento gets the controller and action names from our URL alias. Inside our Varien/Front.php controller dispatch function, Magento will call:

Mage::getModel('core/url_rewrite')->rewrite();

Before actually looking into the inner workings of the rewrite function, let's take a look at the structure of the core/url_rewrite model:

Array (
    ["url_rewrite_id"] => "10"
    ["store_id"] => "1"
    ["category_id"] => "10"
    ["product_id"] => NULL
    ["id_path"] => "category/10"
    ["request_path"] => "books.html"
    ["target_path"] => "catalog/category/view/id/10"
    ["is_system"] => "1"
    ["options"] => NULL
    ["description"] => NULL
)

As we can see, the rewrite model is comprised of several properties, but only two of them are of particular interest to us: request_path and target_path. Simply put, the job of the rewrite module is to replace the request object's path information with the matching value of target_path.
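To make the naming convention concrete, here is a small standalone sketch. It is not Magento code itself, just an illustration of the same str_replace()/ucwords() transformation used by Varien_Autoload above, plus the /FrontName/ControllerName/ActionName split described earlier; the class and URI used are examples only.

<?php
// Illustration: how a class name maps to a file path, and how a URI splits
// into its three routing parts. Mirrors the logic quoted above, not the
// actual Magento implementation.
function classToPath($class)
{
    // Mage_Catalog_Model_Product -> Mage/Catalog/Model/Product.php
    return str_replace(' ', DIRECTORY_SEPARATOR, ucwords(str_replace('_', ' ', $class))) . '.php';
}

echo classToPath('Mage_Catalog_Model_Product') . PHP_EOL;
// Prints: Mage/Catalog/Model/Product.php
// The include path then resolves this against app/code/local,
// app/code/community, app/code/core, and lib, in that order.

list($frontName, $controllerName, $actionName) = explode('/', trim('catalog/category/view/id/10', '/'));
echo "$frontName / $controllerName / $actionName" . PHP_EOL;
// Prints: catalog / category / view

Placing a file with the same relative path under app/code/local is exactly the override behavior described in the Code pools section: convenient to know about, even though it is bad practice.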


Welcoming your Visitors: Creating Attractive Home Pages and Overview Pages

Packt
31 May 2013
25 min read
(For more resources related to this topic, see here.) Up to now, you've set up the home page and category overview pages using the default options. But you may have noticed that Joomla offers dozens of options for these page types. Changing these options can completely alter the way content is presented. In fact, different settings can create very different looking pages. To effectively welcome your visitors and entice them to read your valuable content, we'll create a better home page and effective category pages. In the following screenshots, you'll see the page types we're talking about. The basic layout of both home pages and overview pages is similar. On the left-hand side is the example home page in the default Joomla installation, on the right-hand side is an example category overview page found via the About Joomla menu (Using Joomla | Using Extensions | Components | Content Component | Article Category Blog): Why do you need overview pages, anyway? Typically, Joomla will lead your site visitor to a category content in three steps. Between the Main Menu and the actual content, there's a secondary page to show category contents. You can see how this works in the following set of screenshots: A visitor clicks on a menu link. They are taken to an overview page with article previews inviting them to click Read more links. They click to read the full article. As you can see, what's on the home page and the overview pages (and how it's presented) is vitally important to your site. It's the teaser texts, images, and hyperlinks on these pages that offer your visitors a first glimpse of the actual content. Of course, people don't always arrive at your site via the home page. Search engine results might take them directly to any page— including overview pages. One more reason to make those pages as enticing as you can! Overview page, landing page, secondary home page? Joomla doesn't have a name for overview pages. Among web builders they're also known as start pages, category pages, department pages, or landing pages. Whatever you like to call it, it's the same thing: a navigational page that provides an overview of site categories. In this book we'll call them category overview pages. Creating the perfect home – mastering home page layout By default, the homepage of any Joomla site is set up to display the following items: One introductory article text over the full width of the mainbody Three intro texts in three columns As we haven't changed any of the homepage layout settings up to now, the example site homepage has the same layout. This default setup is suited for many types of content-rich sites. But you're certainly not limited to displaying this one particular combination of intro texts and links in the central part of the home page (the "mainbody", as it is called in Joomla). There's a vast amount of choices on how to display content on the home page, and what to display. Changing the way the home page is arranged It's your client on the phone, telling you that—happy as they are with their new site—some CORBA staff members find the home page layout too distracting. They don't like the newspaper look that displays the content columns in different widths. Would you be so kind as to tone things down a little? If you could quickly show them an alternative layout, that would be fine. You hang up and dive into the homepage settings. Time for action – rearranging the layout of articles on the home page You decide to rearrange the items on the home page. 
Let's say you want a maximum of two intro texts, both in just one column. Apart from this, you would like to show a few hyperlinks to other articles that could be of interest to visitors browsing the home page. You may wonder where Joomla stores the home page settings. As we've seen in previous chapters, menu link settings often determine Joomla's page output—and this also holds for the Home link in the main menu. This menu link is of a specific Menu Item Type, Featured Articles. To change the appearance of the home page, we'll customize the Home menu link settings. Navigate to Menus | Main Menu. In the Menu Manager, click on Home to enter the screen where you can edit the menu link settings. Click the Advanced Options tab. In the Layout Options section, the current settings are shown as follows: These are the "magic numbers" that determine the page lay-out. There's 1 leading article (which means it's displayed in full width), intro articles are shown in 3 columns, and there are 0 links to articles. Change the values as follows: set # Leading Articles to 0, # Intro Articles to 2, # Columns to 1, and # Links to 4. This way just two articles will be shown in a single column and the rest of the featured articles is displayed as a set of hyperlinks. Save your changes and click on View site to see the changes on the frontend. There are now two full-width intro texts. Although you have set # Links to 4, beneath the intro texts only two article links are displayed. This is because up to now only four articles have been assigned to the home page. If you'll assign more articles to the home page, this list will grow to a maximum of four hyperlinks. What just happened? The settings of any menu item allow you to influence the look of the hyperlink's destination page. By default, the Joomla Home link of the main menu is of the Featured Articles Menu Item Type. In this case, you've tweaked the layout options of the Featured Articles menu link to change the home page mainbody. The magic numbers of the Layout Options section are really powerful as different values can completely change the way the page content is displayed. Have a go hero – tweak home page layout options Joomla offers you dozens of settings to customize the home page layout. Navigate to Menus | Main Menu | Home and click the Advanced Options tab to have a look at the different option panels, such as Layout Options. First, you will probably want to set Pagination to Hide. That way, you'll hide the pagination links (< Start Prev Next Last >) that Joomla displays when there are more articles available than can be shown on the home page as intro texts. In our example, the pagination links allow the visitor to navigate to a "second home page", displaying the intro texts of the two hyperlinks in the More articles ... list. Showing pagination links on a home page seems suited for weblog home pages, where visitors expect to be able to browse through possibly long lists of blog entries. On most other types of sites, web users aren't likely to expect multi-page home pages. The options for the Home link (or any other Featured Articles Menu Item Type) allow you to also control exactly what details are shown for every article on the home page. Through the menu link settings you determine whether or not you want to show the author name, publication date, the category name, and much more. These article display settings in the menu link overrule the general settings found through Content | Article Manager | Options. 
For a full overview of all options available for the Featured Articles Menu Item Type. Adding items to the home page In the More Articles … hyperlink list at the bottom of your home page, two hyperlinks are shown. That's because only four articles are set to display on the home page. To add a couple of articles, navigate to Content | Article Manager. Add any article by clicking on the white star in the Status column to the left-hand side of the article title. The grey star changes to an orange star. In the following example, we've selected a News item (CORBA Magazine Looking for Authors) to be Featured on the homepage: Want to see what this looks like up front? Just click on View Site. The new home page item is shown at the top. All other featured items are now positioned one position lower than before. You'll notice that the Hideous Still Lifes intro text has disappeared as this featured item has now slid down one position, to the list with article hyperlinks. This list now contains three articles instead of two. Another way to add articles to the home page Adding items to the home page takes just a few clicks in the Article Manager Status column. You can also add an individual article to the home page through a setting in the Edit Article screen: under Details, select Featured: Yes. Controlling the order of home page items manually Now that you've reorganized your home page layout, you'll probably want some control over the order of the home page items. To manually set the order, first edit the Home menu link. Click Advanced Options and under Layout Options, make sure Category Order is set to No order: Click Save & Close and go to Content | Featured Articles and set the order as desired. First, set the value of the Sort Order By select box (by default it shows Title) to Ordering. Now you can change the articles order by clicking the three vertical squares to the left-hand side of any article title and dragging the article to another position. The intro texts and links on the home page will now be displayed in the order they have in the Featured Articles screen: What's the use of the Featured Articles screen? In the Featured Articles screen, you can't—as you might have expected—assign items to the Featured status. As you've seen, you can assign articles to the Featured status in the Article Manager (or in the article editing screen). You'll probably use the Featured Articles screen if you want to manually control the order of home page items, or if you want a quick overview of all featured articles. Apart from this, the Featured Articles screen allows you to publish, delete, or archive featured articles—but you can just as easily use the Article Manager for that too. Setting a criteria to automatically order home page items Having full manual control over the order of home page items can be convenient when you have a fixed set of content items that you want to show up on the home page, for example, when you have a corporate site and want to always show your company profile, an introduction to your products, and a link to a page with your address and contact details. However, when your site is frequently updated with new content, you'll probably want Joomla to automatically arrange the home page items to a certain ordering criteria. Again, you can customize this behavior by editing the Home link in the main menu. Its Layout Options allow you to choose from a wide range of ordering methods. 
Time for action – show the most recent items first The visitors of the CORBA site will probably expect to see the most recently added items on the top of the page. Let's set the Layout Options to organize things accordingly. Navigate to Menus | Main Menu and click the Home link to edit its settings. Under the Advanced Options tab, you'll find the Layout Options offering several ordering options for featured articles. Make sure Category Order is set to No order, to avoid that specific category order settings overruling the article settings you choose. In the Article Order drop-down list, choose Most recent first. As the Date for ordering, select Create Date. When ordering your articles by date, you'll probably want to display the creation date for every article. Navigate to the Article Options panel of the menu link and make sure Show Create Date is set to Show. Click on Save and click on View Site. Now the most recent items are shown first on the home page: What just happened? You've told Joomla to put the most recently added items first on the home page. If you want, you can check this by opening a featured article, changing its created date to Today, and saving your changes; this article will immediately be displayed at the top in the home page layout. If you prefer to order home page items in another way (for example, alphabetically by title), you can do this by selecting the appropriate Article Order settings of the home page menu item (the Home link in the Main Menu). The Featured Articles Menu Item Type – an overview of all options You've seen that the Home menu is a link of the Featured Articles Menu Item Type. When adding or editing a Featured Articles menu link, you'll see there are are six expandable options panels available under the Advanced Options tab, offering a huge number of customization settings. Below you'll find a complete reference of all available options. Dozens of dazzling options – isn't that a bit too much? You've seen them before and now they turn up again, those seemingly endless lists of options. Maybe you find this abundance discouraging. Is it really necessary to check thirty or forty options to create just one menu link? Luckily, that's not how it works. You get fine results when you stick to the default settings. But if you want to tweak the way pages are displayed, it is worthwhile to experiment with the different options. See which settings fit your site best; in your day-to-day web building routine you'll probably stick to those. Layout Options Under Layout Options of the Featured Articles Menu Item Type, you find the main settings affecting the layout and arrangement of home page items. Select Categories   By default, the home page displays Featured Articles from all article categories. You can, however, control exactly from which categories featured articles should be shown. For example, you might want to display only featured articles from the News category on the home page, and featured articles from another category on another Featured Articles page, introducing another category. You'll see an example of this in the section Creating more than one page containing featured articles later in this article. # Leading Articles   Enter the number of leading articles you want to display, that is, intro texts displayed across the entire width of the mainbody. # Intro Articles The number of article intro texts that you want to show in two or more columns. 
# Columns   Specify the number of columns; over how many columns should the # Intro Articles be distributed? # Links   The number of hyperlinks to other articles (shown below Leading or Intro Articles) Multi Column Order   Should intro texts in multiple columns be sorted from left to right (across) or from top to bottom (down)? Category Order   Do you want to organize the items on the page by category title? You might want to do this when you have many items on your home page and you want your visitor to understand the category structure behind this. If you want to order by category, set Show Category (see Article Options explained in the next table) to show; that way, the visitor can see that the articles are grouped by category. The following Category Order options are available: No Order: If you select this option, the items are displayed in the order you set in the Article Order field (the next option under Layout Options). Title Alphabetical: Organizes categories alphabetically by title. Title Reverse Alphabetical: Organizes categories reverse-alphabetically by title. Category Manager Order: Organizes categories according to the order in the Category Manager and orders the category contents according to the Article Order (which you can specify below). Article Order   You can order the items within the featured articles page by date, alphabetically by Author name or Title, Most hits, and so on. If you choose Featured Articles Order, then the items appear in the order they have on the Content | Featured Articles screen. This last option gives you full manual control over the order of items on the page. Note: the Article Order is applied only after the Category Order. Article Order only has effect if you choose No Order in the Category Order box. Date for Ordering   If you've set the Article Order to Most Recent First or Oldest First, select the date for ordering: Created, Modified, or Published. Pagination   Auto: When there are more items available than it can fit the first page, Joomla automatically adds pagination links (<<Start <Previous 1 2 3 Next> End>>). On the home page, in many cases, you'll probably want to set Pagination to Hide. Pagination Results   If pagination links are shown, Joomla can also display the Pagination Results, the total number of pages (as in Page 1 of 3). Article Options The Article Options influence how articles are displayed on the Featured Articles page. For many extras you can select Show, Hide, Use Global (which means: use the settings chosen under Article Manager | Options), or Use Article Settings (use the settings chosen in the option panels of the individual articles). The Article Options are similar to the options you can set in the general preferences for articles (Article Manager | Options. Here, you can depart from the general settings for the articles and make different choices for this particular menu item. Show Title Display article titles or not? It's hard to find a reason to select Hide. Linked Titles Should the title of the article be a hyperlink to the full article? By default this option is set to Yes. This is better for usability reasons, because your visitor can just click on the article title to read a full article (instead of just on a Read more link). It is also better because search engines love links that clearly define the destination (which article titles should do). 
Show Intro Text After the visitor has clicked on a Read more link, do you want them to see a page with just the rest of the article text (select No) or the full article including the intro text (select Yes)? Position of Article Info The Article Info consists of the article details, such as the publication date, author name, and so on. If these details are set to be displayed, do you want to display them above the article, below the article, or split (partly above the article and partly below it)? Show Category Select Show if you want to show the category name below the article title. Joomla will display the category (as shown in the following screenshot: Category: Club Meetings). Link Category If the Category title is shown, should it be a hyperlink to the category? In most cases it's a good idea to select Yes here: this provides visitors with a link to category contents with every article. Show Parent Select Show if you want to show the name of the main category (the parent category of the current article category) below the article title. This will look as follows: Link Parent Just like the Category title, the title of the parent category can be made a link to an overview page of the main category contents. Show Author, Link Author, Show Create Date, Show Modify Date, Show Publish Date Do you want to show the author name (and make it a link to a page with other articles by the same author), the creation date, the date the article was last updated, and/or the date on which the article was first published? By default, many of these options are set to Show. You may want to choose Hide if you've got a small site or a site that isn't regularly updated. In that case you probably don't want to broadcast when your articles were written or who wrote them. Show Navigation Select Show if want to display navigation links between articles. Show Voting Should readers be able to rate articles (assign one to five stars to an article)? Show "Read more" Do you want a Read more link to appear below an article intro text? You'll probably want to leave this set to Yes, but if the title of the article is a hyperlink, a Read more link can be omitted. Although Joomla displays the Read more link by default, many web builders just make the article title clickable and omit a separate Read more link. Show Title with Read More It's a good idea to display the article title as part of the Read more text, as this will make the link text more meaningful for both search engines and ordinary visitors. Show Icons Joomla can show a set of special function icons with any article. These allow the visitor to print the article, or to e-mail it. Do you want to display these options as icons or text links? Show Print Icon, Show Email Icon Show or hide the special function icons? It's often better to altogether hide these extras. Your visitors may want to use the print function, but any modern browser offers a print function with better page formatting options. Show Hits Should the number of hits per article (the number of times it's been displayed) be shown? Show Unauthorized Links Do you want to show hyperlinks to articles that are only accessible to registered users, or hide these articles completely? The Article Options listed previously allow you to show or hide all kinds of details, such as Author, Create Date, and Modify Date. In the following image, you can see the result when most options are set to Show. Obviously, this results in too much "detail clutter". 
On a website that's maintained by just one or a few authors, or a website that isn't updated regularly, you might want to hide author and date details. On a home page you'll probably also want to hide all of the special function icons (set Icons, Print Icon, and Email Icon to Hide). It's unlikely that visitors want to print or e-mail parts of your home page content. In the following image, all extras are hidden, which leaves much more room for actual content in the same space.

Integration Options
The Integration Options are only relevant when you use news feeds (RSS feeds) on your website. Show Feed Link The Show Feed Link option allows you to show or hide an RSS Feed Link. This will display a feed icon in the address bar of the web browser. For each feed item show This option allows you to control what to show in the news feed; the intro text of each article, or the full article.

Link Type Options
The Link Type Options allow you to set the display of the menu link to this page (in this case the Home link). Link Title Attribute Here you can add a description that is displayed when the mouse cursor hovers over the menu link to this page. Link CSS Style Only relevant if you are familiar with CSS and want to apply a custom CSS style to this specific menu link. If you've added a specific style in the CSS stylesheet, in this box you should fill in the name of that special style. Joomla will adjust the HTML and add the CSS style to the current menu Home link, as follows: <a class="specialstyle" href="/index.php">Home</a> Link Image Should an image be shown next to the Home link in the Main Menu? Menu images (icons) can make a menu more attractive and easier to scan. Following is one of countless examples from the web: Add Menu Title When you use a Link Image, should the menu link text be displayed next to it? Select No only if you want a completely graphical menu, using just icons.

Page Display Options
Under Page Display Options, you'll find some options to customize page headings and an option to control the general design of the page. Browser Page Title An HTML page contains a title tag. This doesn't appear on the page itself, but it is displayed in the title bar of the browser. By default, Joomla will use the menu item title as the title tag. Here, you can overrule this default title. Show Page Heading Here you can determine if a page heading appears at the top of the page (that is, in the mainbody). By default, this option is set to No. Select Yes to show the Menu Item Title as the Page Heading. Page Heading If you want to customize the Page Heading (instead of using the default Menu Item Title as the heading text), enter a text here. Page Class This is only relevant if you want to get more control over the page design: font size, colors, and so on. Using the Page Class field, you add a suffix to the name of all CSS styles used on this page. To use this feature, you have to know your way around in CSS.
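To make the Link CSS Style and Page Class options more concrete, here is a minimal CSS sketch. It assumes you entered specialstyle as the Link CSS Style (matching the HTML shown above) and " news" (with a leading space) as the Page Class, so that Joomla adds news as a separate class; it also assumes the default blog-featured container class, which can differ per template, so treat the selectors as an illustration rather than a reference:

/* style the Home menu link that received the custom Link CSS Style */
a.specialstyle {
    font-weight: bold;
}

/* with a Page Class of " news", the featured articles container gets an
   extra class, so you can restyle this one page without touching others */
.blog-featured.news h2 {
    color: #336699;
}

The options themselves don't change any styling; they only add class names to the generated HTML, and it's your template's stylesheet that decides what those classes look like.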
Metadata Options
The Metadata Options allow you to add description and keywords to describe the web page's content. Meta Description, Meta Keywords, Robots, Secure Metadata information is used by search engines. You can add an article description, meta keywords, and enter instructions for Robots (web search engine spiders), and select whether this link should use a specified security protocol.

Module Assignment for this Menu Item tab
Click the Module Assignment for this Menu Item tab to see links to all modules that are assigned to the current menu item. Modules in Joomla are always assigned to one or more menu items. When the visitor clicks a menu link, a page is displayed consisting of (among other things) specific module blocks. This overview of (links to) assigned modules makes it easier for administrators to jump directly from the menu item to all related modules and change their settings.

Creating more than one page containing featured articles
By default, the Featured Articles Menu Item Type is used only once on your site. All articles that have the Featured status are shown on the homepage. This is because the Home link in the Main Menu is created using the Featured Articles Menu Item Type. However, you can create as many Featured Articles pages as you like, each one showing featured articles from different categories. Let's say you want to create a page called "News Highlights", containing featured articles only from the News category. To do this, create a new menu link of the Featured Articles Menu Item Type and instead of All Categories select only the News category: The output would be a separate featured articles page containing news highlights. To avoid the same featured news showing up on both the homepage and the News Highlights page, you would probably want to change the home page settings (currently set to show all categories) and get it to display featured articles from all categories except for the News category.

Another type of home page: using a single article
So far you've used Joomla's Featured Articles layout for your site's home page. But what if you want a completely different home page layout? That's easily achieved, since Joomla allows you to set any menu item as the default page.

Time for action – creating a different home page
Let's not use the Featured Articles layout, and instead create a simple home page that only shows one single, full article: Navigate to Menus | Main Menu. As you can see, there's a star in the Home column of the Home link. This indicates that this is the default page; the visitor will see this page in the mainbody when accessing your site. In this example we'll select the Mission Statement menu item as the new home page. Locate this menu item in the list and click on the grey star in the Home column. Clicking the grey star will turn it orange, indicating this is now the default page. Click on View Site. The results are shown in the following screenshot. An ordinary article is now the home page: If you want to update the Main Menu to reflect these changes, you can hide the existing Home link, which is still pointing to the "old" homepage. To do this, in the Menu Manager you would click on the Unpublish item icon next to the Home link and rename the existing Mission Statement menu link to Home.

What just happened?
You've changed the default view of the home page to a fresh look, showing one article. Of course, you can dress up such a basic home page any way you like. For some sites (a simple "brochure site" presenting a small company or a project), this may be a good solution. The consequence of this approach is, of course, that the Featured status (that you can set in the Article Manager and in the article edit screen) no longer determines what's published on the home page.
Introduction to Citrix XenDesktop
Packt
29 May 2013
(For more resources related to this topic, see here.) Configuring the XenDesktop policies Now that the XenDesktop infrastructure has been configured, it's time to activate and populate the VDI policies. This is an extremely important part of the implementation process, because with these policies you will regulate the resource use and assignments, and you will also improve the general virtual desktops performance. Getting ready All the policies will be applied to the deployed virtual desktop instances and the assigned users, so you need an already existing XenDesktop infrastructure on which you will enable and use the configuration rules. How to do it... In this recipe we will explain the configuration for the user and machine policies offered by Citrix XenDesktop. Perform the following steps: Connect to the XenDesktop Director machine with domain administrative credentials, then navigate to Start | All Programs | Citrix and run the Desktop Studio. On the left-hand side menu expand the HDX Policy section and select the Machines link. Click on the New button to create a new policy container, or select the default unfiltered policies and click on Edit to modify them. In the first case, you have to assign a descriptive name to the created policy. In the Categories menu, click on the following sections and configure the values for the policies that will be applied to the clients, in terms of network flow optimization and resource usage monitoring: The ICA section ICA listener connection timeout: Insert a value in milliseconds; default is 12000. ICA listener port number: This is the TCP/IP port number on which the ICA protocol will try to establish the connection. The default value is 1494. The Auto Client Reconnect subsection Auto client reconnect: (Values Allowed or Prohibited) Specify whether or not to automatically reconnect in case of a broken connection from a client. Auto client reconnect authentication: (Values Do not require authentication or Require authentication) Decide whether to let the Citrix infrastructure ask you for the credentials each time you have to reperform the login operation. Auto client reconnect logging: (Values Do Not Log auto-reconnect events or Log auto-reconnect events) This policy enables or disables the logging activities in the system log for the reconnection process. In case of active autoclient reconnect, you should also activate its logging. End User Monitoring subsection ICA round trip calculation: (Values Enabled or Disabled) This decides whether or not to enable the calculation of the ICA network traffic time. ICA round trip calculation interval: Insert the time interval in seconds for the period of the round trip calculation. ICA round trip calculations for idle connections: (Values Enabled or Disabled) Decide whether to enable the round trip calculation for connections that are not performing traffic. Enable this policy only if necessary. The Graphics subsection Display memory limit: Configure the maximum value in KB to assign it to the video buffer for a session. Display mode degrade preference: (Values Degrade color depth first or Degrade resolution first) Configure a parameter to lower the resolution or the color quality in case of graphic memory overflow. Dynamic Windows Preview: (Values Enabled or Disabled) With this policy you have the ability to turn on or turn off the high-level preview of the windows open on the screen. 
Image caching: (Values Enabled or Disabled) With this parameter you can cache images on the client to obtain a faster response. Notify user when display mode is degraded: (Values Enabled or Disabled) In case of degraded connections you can display a pop up to send a notification to the involved users. Queueing and tossing: (Values Enabled or Disabled) By enabling this policy you can stop the processing of the images that are replaced by other pictures. In presence of slow or WAN network connections, you should create a separate policy group which will reduce the display memory size, configure the degrade color depth policy, activate the image caching, and remove the advanced Windows graphical features. The Keep Alive subsection ICA keep alive timeout: Insert a value in seconds to configure the keep alive timeout for the ICA connections. ICA keep alives: (Values Do not send ICA keep alive messages or Send ICA keep alive messages) Configure whether or not to send keep-alive signals for the running sessions. The Multimedia subsection Windows Media Redirection: (Values Allowed or Prohibited) Decide whether or not to redirect the multimedia execution on the Citrix server(s) and then stream it to the clients. Windows Media Redirection Buffer Size: Insert a value in seconds for the buffer used to deliver multimedia contents to the clients. Windows Media Redirection Buffer Size Use: (Values Enabled or Disabled) This policy decides whether or not to let you use the previously configured media buffer size. The Multi-Stream Connections subsection Audio UDP Port Range: Specify a port range for the UDP connections used to stream audio data. The default range is 16500 to 16509. Multi-Port Policy: This policy configures the traffic shaping to implement the quality of service (QoS). You have to specify from two to four ports and assign them a priority level. Multi-Stream: (Values Enabled or Disabled) Decide whether or not to activate the previously configured multistream ports. You have to enable this policy to activate the port configuration in the Multi-Port Policy. The Session Reliability subsection Session reliability connections: (Values Allowed or Prohibited) By enabling this policy you allow the sessions to remain active in case of network problems. Session reliability port number: Specify the port used by ICA to check the reliability of incoming connections. The default port is 2598. Session reliability timeout: Specify a value in seconds used by the session reliability manager component to wait for a client reconnection. You cannot enable the ICA keep alives policy if the policies under the Session Reliability subsection have been activated. The Virtual Desktop Agent Settings section Controller Registration Port: Specify the port used by Virtual Desktop Agent on the client to register with the Desktop Controller. The default value is 80. Changing this port number will require you to also modify the port on the controller machine by running the following command: <BrokerInstallationPath>BrokerService.exe / VdaPort <newPort> Controller SIDs: Specify a single controller SID or a list of them used by Virtual Desktop Agent for registration procedures. Controllers: Specify a single or a set of Desktop Controllers in the form of FQDN, used by Virtual Desktop Agent for registration procedures. Site GUID: Specify the XenDesktop unique site identifier used by Virtual Desktop Agent for registration procedures. 
In presence of more than one Desktop Controller, you should create multiple VDA policies with different controllers for a load-balanced infrastructure.   The CPU Usage Monitoring subsection Enable Monitoring: (Values Allowed or Prohibited) With this policy you can enable or disable the monitoring for the CPU usage. Monitoring Period: Insert a value in seconds to configure the time period to run the CPU usage recalculation. Threshold: Configure a percentage value to activate the high CPU usage alert. The default value is 95 percent. Enable the CPU Usage Monitoring policies in order to better troubleshoot machine load issues. After configuring, click on the OK button to save the modifications. On the left-hand side menu, click on the Users policy link in the HDX Policy section. Click on the New button to create a new policy container, or select the default unfiltered policies and click on Edit to modify them. In the first case, you have to assign a descriptive name to the created policy. In the Categories menu click on the following sections and configure the associated values: The ICA section Client clipboard redirection: (Values Allowed or Prohibited) This policy permits you to decide whether or not to enable the use of the client clipboard in the XenDesktop session, and to perform copy and paste operations from the physical device to the remote Citrix session. The active clipboard redirection could be a security issue; be sure about its activation! The Flash Redirection subsection Flash acceleration: (Values Enabled or Disabled) This policy permits you to redirect the Flash rendering activities to the client. This is possible only with the legacy mode. Enable this policy to have a better user experience for the Flash contents. Flash backwards compatibility: (Values Enabled or Disabled) With this policy you can decide whether or not to activate the compatibility of older versions of Citrix Receiver with the most recent Citrix Flash policies and features. Flash default behavior: (Values Enable Flash acceleration, Disable Flash acceleration, or Block Flash player) This policy regulates the use of the Adobe Flash technology, respectively enabling the most recent Citrix for Flash features (including the client-side processing), permitting only server-side processed contents, or blocking any Flash content. Flash event logging: (Values Enabled or Disabled) Decide whether or not to create system logs for the Adobe Flash events. Flash intelligent fallback: (Values Enabled or Disabled) This policy, if enabled, is able to activate the server-side Flash content processing when the client side is not required. The Flash Redirection features have been strongly improved starting from XenDesktop Version 5.5. The Audio subsection Audio over UDP Real-time transport: (Values Enabled or Disabled) With this policy you can decide which protocols to transmit the audio packets, RTP/UDP (policy enabled) or TCP (policy disabled). The choice depends on the kind of audio traffic to transmit. UDP is better in terms of performance and bandwidth consumption. Audio quality: (Values Low, Medium, or High) This parameter depends on a comparison between the quality of the network connections and the audio level, and they respectively cover the low-speed connections, optimized for speech and high-definition audio cases. Client audio redirection: (Values Allowed or Prohibited) Allowing or prohibiting this policy permits applications to use the audio device on the client's machine(s). 
Client microphone redirection: (Values Allowed or Prohibited ) This policy permits you to map client microphone devices to use within a desktop session. Try to reduce the network and load impact of the multimedia components and devices where the high user experience is not required. The Bandwidth subsection Audio redirection bandwidth limit: Insert a value in kilobits per second (Kbps) to set the maximum bandwidth assigned to the playing and recording audio activities. Audio redirection bandwidth limit percent: Insert a maximum percentage value to play and record audio. Client USB device redirection bandwidth limit: Insert a value in Kbps to set the maximum bandwidth assigned to USB devices redirection. Client USB device redirection bandwidth limit percent: Insert a maximum percentage value for USB devices redirection. Clipboard redirection bandwidth limit: Insert a value in Kbps to set the maximum bandwidth assigned to the clipboard traffic from the local client to the remote session. Clipboard redirection bandwidth limit percent: Insert a maximum percentage value for the clipboard traffic from the local client to the remote session. COM port redirection bandwidth limit: Insert a value in Kbps to set the maximum bandwidth assigned to the client COM port redirected traffic. COM port redirection bandwidth limit percent: Insert a maximum percentage value for the client COM port redirected traffic. File redirection bandwidth limit: Insert a value in Kbps to set the maximum bandwidth assigned to client drives redirection. File redirection bandwidth limit percent: Insert a maximum percentage value for client drives redirection. HDX MediaStream Multimedia Acceleration bandwidth limit: Insert a value in Kbps to set the maximum bandwidth assigned to the multimedia content redirected through the HDX MediaStream acceleration. HDX MediaStream Multimedia Acceleration bandwidth limit percent: Insert a maximum percentage value for the multimedia content redirected through the HDX MediaStream acceleration. Overall session bandwidth limit: Specify a value in Kbps for the total bandwidth assigned to the client sessions. In presence of both bandwidth limit and bandwidth limit percent enabled policies, the most restrictive value will be used. The Desktop UI subsection Aero Redirection: (Values Enabled or Disabled) This policy decides whether or not to activate the redirection of the Windows Aero graphical feature to the client device. If Aero has been disabled, this policy has no value. Aero Redirection Graphics Quality: (Values High, Medium, Low, and Lossless) If Aero has been enabled, you can configure its graphics level. Desktop wallpaper: (Values Allowed or Prohibited) Through this policy you can decide whether or not to permit the users having the desktop wallpaper in your session. Disable this policy if you want to standardize your desktop deployment. Menu animation: (Values Allowed or Prohibited) This policy permits you to decide whether or not to have the animated menu of the Microsoft operating systems. The choice depends on what kind of performances you need for your desktops. View window contents while dragging: (Values Allowed or Prohibited) This policy gives you the ability to see the entire window contents during the drag-and-drop activities between windows, if enabled. Otherwise you'll see only the window's border. Enabling the Aero redirection will have impact only on the LAN-based connection; on WAN, Aero will not be redirected by default. 
The File Redirection subsection Auto connect client drives: (Values Enabled or Disabled) With this policy the local drives of your client will or will not be automatically connected at logon time. Client drive redirection: (Values Allowed or Prohibited) The drive redirection policy allows you to decide whether it is permitted or not to save files locally on the client machine drives. Client fixed drives: (Values Allowed or Prohibited) This policy decides whether or not to permit you to read data from and save information to the fixed drives of your client machine. Client floppy drives: (Values Allowed or Prohibited) This policy decides whether or not to permit you to read data from and save information to the floppy drives of your client machine. This should be allowed only in presence of an existing floppy drive, otherwise it has no value to your infrastructure. Client network drives: (Values Allowed or Prohibited) With this policy you have the capability of mapping the remote network drives from your client. Client optical drives: (Values Allowed or Prohibited) With this policy you can enable or disable the access to the optical client drives, such as CD-ROM or DVD-ROM. Client removable drives: (Values Allowed or Prohibited) This policy allows or prohibits you to map, read, and save removable drives from your client, such as USB keys. Preserve client drive letters: (Values Enabled or Disabled) Enabling this policy offers you the possibility of maintaining the client drive letters when mapping them in the remote session, whenever possible. Read-only client drive access: (Values Enabled or Disabled) Enabling this policy will not permit you to access the mapped client drivers in write mode. By default, this policy is disabled to permit the full drive access. To reduce the impact on the client security, you should enable it. You can always modify it when necessary. These are powerful policies for regulating the access to the physical storage resources. You should configure them to be consistent with your company security policies. The Multi-Stream connections subsection Multi-Stream: (Values Enabled or Disabled) As seen earlier for the machine section, this policy enables or disables the multistreamed traffic for specific users. The Port Redirection subsection Auto connect client COM ports: (Values Enabled or Disabled) If enabled, this policy automatically maps the client COM ports. Auto connect client LPT ports: (Values Enabled or Disabled) This policy, if enabled, autoconnects the client LPT ports. Client COM port redirection: (Values Allowed or Prohibited) This policy configures the COM port redirection between the client and the remote session. Client LPT port redirection: (Values Allowed or Prohibited) This policy configures the LPT port redirection between the client and the remote session. You have to enable only the necessary ports, so disable the policies for the missing COM or LPT ports. The Session Limits subsection Disconnected session timer: (Values Enabled or Disabled) This policy enables or disables the counter used to migrate from a locked workstation to a logged off session. For security reasons, you should enable the automatic logoff of the idle sessions. Disconnected session timer interval: Insert a value in minutes, which will be used as a counter reference value to log off locked workstations. Set this parameter based on a real inactivity time for your company employees. 
Session connection to timer: (Values Enabled or Disabled) This policy will or will not use a timer to measure the duration of active connections from clients to the remote sessions. The Time Zone Control subsection Use local time of client: (Values Use server time zone or Use client time zone) With this policy you can decide whether to use the time settings from your client or from the server. XenDesktop uses the user session's time zone. The USB Devices subsection Client USB device redirection: (Values Allowed or Prohibited) With this important policy you can permit or prohibit USB drives redirection. Client USB device redirection rules: Through this policy you can generate rules for specific USB devices and vendors, in order to filter or not; and if yes, what types of external devices mapping. The Visual Display subsection Max Frame Per Second: Insert a value, in terms of frames per second, which will define the number of frames sent from the virtual desktop to the user client. This parameter could dramatically impact the network performance, so be careful about it and your network connection. The Server Session Settings section Single Sign-On: (Values Enabled or Disabled) This policy decides whether to turn on or turn off the SSO for the user sessions. Single Sign-On central store: Specify the SSO store server to which the user will connect for the logon operations, in the form of a UNC path. The Virtual Desktop Agent Settings section The HDX3DPro subsection EnableLossLess: (Values Allowed or Prohibited) This policy permits or prohibits the use of a lossless codec. HDX3DPro Quality Settings: Specify two values, Minimum Quality and Maximum Quality (from 0 to 100), as HDX 3D Pro quality levels. In the absence of a valid HDX 3D Pro license, this policy has no effect. The ICA Latency Monitoring subsection Enable Monitoring: (Values Allowed or Prohibited) This rule will or will not monitor the ICA latency problems. Monitoring Period: Define a value in seconds to run the ICA latency monitor. Threshold: Insert a threshold value in milliseconds to check if the ICA latency has arrived to the highest level. The Profile Load Time Monitoring subsection Enable Monitoring: (Values Allowed or Prohibited) With this policy you can monitor the time required to load a user profile. Threshold: Specify a value in seconds to activate the trigger for the high profile loading time event. These are important policies to troubleshoot performance issues in the profile loading activities, especially referred to the centralized profiles. After configuring click on the OK button to save the modifications. For both the edited policy categories (Machines and Users), click on the Edit button, select the Filters tab, and add one or more of the following filters: Access Control: (Mode: Allow or Deny, Connection Type: With Access Gateway or Without Access Gateway) Insert the parameters for the type of connection to which you are applying the policies, using or not using Citrix Access Gateway. Branch Repeater: (Values Connections with Branch Repeater or Connections without Branch Repeater) This policy decides whether or not to apply the policies to the connection that passes or doesn't pass through a configured Citrix Branch Repeater. Client IP Address: (Mode: Allow or Deny) Specify a client IP address to which you are allowing or denying the policy application. Client Name: (Mode: Allow or Deny) Specify a client name to which you are allowing or denying the policy application. 
Desktop Group: (Mode: Allow or Deny) Select from the drop-down list an existing desktop or application group to which you are applying or not applying the configured policies. Desktop Type: (Mode: Allow or Deny) This policy decides to allow or deny the policy application to the existing deployed resources (Private Desktop or Shared Desktop, Private Applications or Shared Applications). Organizational Unit: (Mode: Allow or Deny) Browse for an existing domain OU to which you are applying or not applying the configured policies. Tag: (Mode: Allow or Deny) This policy decides to allow or deny the application of the policies to specific tags applied to the desktops. User or Group: (Mode: Allow or Deny) Browse for existing domain users and groups to which you are applying or not applying the configured policies. For the machine section, you'll only have the desktop group, desktop type, organizational unit, and tag categories of filters. After completing this, click on the OK button to save the changed filters. How it works... The Citrix XenDesktop policies work at two different levels of components, machines and users, and for each of them you can apply a set of filters to decide when and where to permit or not to permit the policy utilization. These configurations should be strongly oriented to the performance and security optimization, so the best practices to apply is to generate different sets of policies and specifically apply them to different kinds of virtual desktops, clients, and users. The following is the explanation of the previously applied configurations: Machines policy level: These kinds of policies apply at the machine level, trying to regulate and optimize the session management, and the multimedia resources redirection. With this group of settings you are able to configure the standard ICA port to listen, and the relative connection timeouts. It's possible to decide whether or not to automatically reconnect a client in case of broken connections. Enabling Auto client reconnect policy could be right in some cases, especially when you have interrupted an important working session, but on the other hand, you could not have calculated waste of resources, because the Citrix broker could run a new session in the presence of issues with the session cookies. With the ICA round trip policies, you can monitor and measure the response time taken by the users for the operations. This data permits you to understand the responsiveness of your Citrix infrastructure. In case it allows you to apply remediation to the configuration, especially for the policies that involve graphics components, you can size the display memory and the image caching area, or turn on or off specific Windows advanced graphical features, such as the Dynamic Windows Preview (DWP). With the queuing and tossing policy active, you could have problems of lost frames when reproducing animations. The Windows media redirection policy optimizes the reproduction of multimedia objects; by applying a correct sizing to its buffer size you should obtain evident improvements in the streaming and reproduction operations. So, you should consider disabling this policy, demanding the processing of audio and video to the clients only when you can see no particular benefits. 
Another important feature offered by these policies is the QoS implementation; you can enable the multistream connection configurations and apply the traffic priority levels to them, permitting to give precedence and more bandwidth to the traffic that is considered more critical than others. The Multi-Stream policy for the QoS can be considered a less powerful alternative to Citrix Branch Repeater. As the last part of this section, the Virtual Desktop Agent Settings section permits you to restrict the access to only pre-configured resources, such as specific Desktop Controllers. Users policy level: Combined with the machines policies we have the users policies. These policies apply settings from a user session perspective, so you can configure, for instance, processing the Adobe Flash contents, deciding whether or not to activate the compatibility with the oldest version of this software, and whether to elaborate the Flash multimedia objects on the user's clients or on the Citrix servers. Moreover, you can configure the audio settings, such as audio and microphone client redirection (in the sense of using the local device resources), the desktop settings (Aero parameters, desktop wallpapers, and so on), or the HDX protocol quality settings. Be careful when applying policies for the desktop graphical settings. To optimize the information transmission for the desktops the bandwidth policy is extremely important; by this you can assign, in the form of maximum Kbps or percentage, the values for the traffic types such as audio, USB, clipboard, COM and LPT ports, and file redirection. These configurations require a good analysis of the traffic levels and their priorities within your organization. The last great configuration is the redirection of the client drives to the remote Citrix sessions; in fact, you can activate the mount (automatic or not) and the users rights (read only or read/write) on the client drives, removable or not, such as CD-ROM or DVD-ROM, removable USB devices, and fixed drives as the client device operating system root. This option gives you the flexibility to transfer information from the local device to the XenDesktop instance through the use of properly configured Virtual Desktop Agent. This last device policy could make your infrastructure more secure, thanks to the use of the USB device redirection rules; through it, in fact, you could only permit the use of USB keys approved by your company, prohibiting any other nonpolicy-compliant device. The granularity of the policy application is granted by the configuration of the filters; by using these-you can apply the policies to specific clients, desktop or application groups, or domain users and groups. In this way you can create different policies with different configurations, and apply them to specific areas of your company, without generalizing and overriding settings. There's more... To verify the effective running of the policies applied to your VDI infrastructure, there's a tool called Citrix Group Policy Modeling Wizard inside the HDX Policy section, which performs this task. This tool performs a simulation for the policy applications, returning a report with the current configuration. This is something similar to Microsoft Windows Domain Group Policy Results. The simulations apply to one or all the domain controllers configured within your domain, being able to test the application for a specific user or computer object, including organizational units containing them. 
Moreover, you can apply filters based on the client IP address, the client name, the type of machine (private or shared desktop, private or shared application), or you can apply the simulation to a specific desktop group. In the Advanced Options section you can simulate slow network connections and/or loopback processing (that is, applying policies based only on the computer object's location, instead of on both the user and computer object locations) for a configured XenDesktop site. After running the policy application test, you can check the results by right-clicking on the generated report name and selecting the View Report option. This tool is extremely powerful when you have to troubleshoot unexpected behaviors of your desktop instances or user rights caused by the application of incorrect policies.

Summary
In this article we discussed the configuration of the XenDesktop infrastructure policies.

Resources for Article:
Further resources on this subject: Linux Thin Client: Considering the Network [Article] Designing a XenApp 6 Farm [Article] Getting Started with XenApp 6 [Article]
Magento Performance Optimization
Packt
24 May 2013
(For more resources related to this topic, see here.)

Using the Magento caching system
A cache is a system that stores data so that future requests for that data can be served faster. Having a cache is definitely a good thing, but the caching system of Magento is not super effective.

How to do it...
Let's begin with cache enabling, even if most users are well aware of this one. Go to your backend console and then go to System | Cache Management. By default, all caches are enabled, but some have a negative impact. You have to disable caches for the following items:
Collections Data
EAV types and attributes
Web Services Configuration
The following table shows the improvement made due to the previous settings, that is, by disabling the selected caches: Another little win, 200 milliseconds, just enough to fulfill the promise made in the previous recipes.

How it works...
A cache is a system that stores data so that future requests for that data can be served faster. A web cache stores copies of documents passing through it, and subsequent requests may be satisfied from the cache if a set of conditions exists. There are many hypotheses out there to explain this weird optimization. The main one is that the Magento core has to parse the cache and check in MySQL to compare updated data, and this causes a huge delay. In fact, by allowing Magento to do these kinds of operations, we don't use the full resources of our systems.

Using a memory-based filesystem for caching
We can easily say that the slowest component of a computer is its hard drive. Moreover, the Magento caching system makes massive use of this component. It would be amazing if we could store the Magento cache files directly inside the memory.

How to do it...
Open a new console on your Unix server and mount a tmpfs (memory-based) filesystem on Magento's var/cache directory (see the sketch at the end of this recipe). The path is based on a common installation of Apache with Magento; pay attention to your configuration when typing this command. You have to repeat this command every time the server starts up, or you can automate it by adding a matching entry to your /etc/fstab file. All the caching mechanisms of Magento will now work with a memory-based filesystem instead of the classical filesystem.

How it works...
This newly created filesystem is intended to appear as a mounted filesystem, but it actually lives in RAM. Of course, the access time of this kind of filesystem is extremely fast in comparison with a classical hard drive. However, all files updated or created are temporary because of the nature of this filesystem. Nothing will be written to the hard drive, and if you reboot, everything will be lost. If you plan to reboot your server, you have to save the volatile files to your hard drive, unmount the memory-based filesystem, and then copy the saved data from tmpfs back into the cache folder. With the fstab entry in place, the folder will be remounted automatically after the reboot.
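As a rough sketch of the two steps above (the mount command and the matching /etc/fstab line), assuming Magento lives under /var/www/magento and that a 512 MB cache is enough; both the path and the size are assumptions, so adjust them to your own setup:

# one-off mount of a RAM-backed filesystem over Magento's cache folder
mount -t tmpfs -o size=512m tmpfs /var/www/magento/var/cache

# equivalent /etc/fstab entry, so the mount is recreated at boot
tmpfs  /var/www/magento/var/cache  tmpfs  defaults,size=512m  0  0

Keep in mind that anything already stored in var/cache is hidden once the tmpfs is mounted on top of it, so it is safest to clear the old cache contents first.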
Clustering
If you have successfully applied all the techniques and your Magento is still slow, it means that you are a very prosperous online retailer and it's time to leave the comfortable world where there is a single server. To keep your customers satisfied, you have to invest in hardware; the tweaking time is now over.

How to do it...
If you own a single server, you can begin by separating your database onto a dedicated server. In order to do this, you have to invest in another server and install MySQL on it (or get your host to do it for you), and then extract your database from your first server and import it to your new server. Magento stays on your first server; you only have to modify the database connection. Go to app/etc/local.xml and modify the connection settings (the host, username, password, and dbname values) to fit the new server parameters. As simple as that. You now use a dedicated database server and improve your store performance.

The second step in clustering our environment could be using a CDN for our images, CSS, and scripts. A CDN is an independent server optimized for delivering static content such as images, CSS, and scripts. In this way, our web server can focus on running Magento and the CDN can focus on delivering static content. The good news is that Magento has native support for this. In your Magento backend, navigate to System | Configuration | General | Web | Unsecure. If you still have CSS and JavaScript compressed from the previous recipes, you just have to copy your media directory from your main server to your CDN server. If that's not the case anymore, you have to modify the Base Skin URL field and the Base JavaScript URL field. Also, if for some reason you use the secure URL for that kind of content, don't forget to apply the changes to the secure part as well.

How it works...
That's a very good start. Let's summarize it. We were using a single server for all requests, and now, depending on the request, we use three different servers. The first one handles all the Magento work for building pages, the second one handles the data-related operations, and the last one serves static content. With this kind of architecture, each server can focus on only one purpose.

Summary
This article helped you learn about Magento's built-in caching system for serving frequently requested data faster. It also showed that, because Magento makes massive use of the hard drive for caching, you can speed things up by using your available RAM instead. Finally, it explained how to configure a set of loosely connected servers working together to handle more and more customers.

Resources for Article:
Further resources on this subject: Magento: Exploring Themes [Article] Getting Started with Magento Development [Article] Integrating Facebook with Magento [Article]
Creating a website with Artisteer
Packt
30 Apr 2013
(For more resources related to this topic, see here.) Layout The first thing that we should set up while designing a new website is its width. If you are interested in creating web pages, you probably have a monitor with a large widescreen and good resolution. But we have to remember that not all of your visitors will have such good hardware. All the templates generated by Artisteer are centered, and almost all modern browsers enable you to freely zoom the page. It's far better to let some of your visitors enlarge the site than to make the rest of them use the horizontal scroll bar while reading. The resolution you choose will depend on the target audience of your site. Usually, private computers have better parameters than the typical PCs used for just office work in companies. So if you design a site that you know will be viewed mostly by private individuals, you can choose a slightly wider layout than you might for a typical business site. But you cannot forget that many nonbusiness websites, such as community sites, are often accessed from offices. So what is the answer? In my opinion, a layout with a width of 1,000 pixels is still a good choice for most of the cases. Such width ensures that the site will be displayed correctly on a pretty old, but still commonly used, nonwide 17'' monitor. (The typical resolution for this hardware is 1,024 x 768 and such a layout will fill the whole screen.) As more and more users have now started using computers that are equipped with a far better screen, you can consider increasing the resolution slightly, to, for example, 1,150 pixels. Remember that not every user will visit your site using a desktop. Many laptops, and especially netbooks and tablets, don't have wide screens either. Remember that the width of the page must be a little lower than the total resolution of the screen. You should reserve some space for the vertical scrollbar. We are going to set up the width of our project traditionally to 1,000 pixels. To do this, click on the Layout tab on the ribbon, and next to the Sheet Width button. Choose 1000 pixels from the available options on the list. The Sheet Options window is divided into two areas: on the left you can choose from the values expressed in pixels, while on the right, as a percentage. The percentage value means that the page doesn't have a fixed width, but it will change according to the parameters of the screen it is displayed on (according to the chosen percentage value). Designing layouts with the width defined in percentage might seem to be a great idea; and indeed, this technique, when properly used, can lead to great results. But you have to remember, that in such a case, all page elements have to be similarly prepared in order, to be able to adapt to the dynamically changing width of the site. It is far simpler to achieve good results for the layout with fixed values (expressed in pixels). It is a common rule while working with Artisteer that after clicking on a button on the ribbon, you get the list containing the most commonly used standard values. If you need a custom value, however, you can click on the button located at the bottom of the list to go to a window where you can freely set up and choose the required value. For example, while choosing the width of a layout, clicking on the More Sheet Widths... button (located just under the list) will lead you to a window where you can set up the required width with an accuracy up to 1 pixel. 
We can set the required value in three ways: We can click on the up and down arrows that are located on the right side of the field. We can move the mouse cursor on the field and use the slider that appears. We can click on the field. The text cursor will appear. Then we can type the required value using the keyboard. For me, this is the most comfortable way, especially since the slider's minimal progress is more than 1. Panel mode versus windows mode If you look carefully at the displayed windows, on the bottom-right corner you will see a panel mode button. This button switches Artisteer's interface between panel mode and windows mode. In the windows mode, the advanced settings are displayed in windows. In the panel mode, the advanced settings are displayed on the side panel located on the right side of Artisteer's window. If you are using a wide screen, you may find the panel mode to be more comfortable. Its advantage is that the side panel doesn't cover anything on your project, so you have a better view to observe the changes. Such a change is persistent and if you switch to the panel mode, all the advanced settings will be displayed in the right panel, as long as you decide to go back into the windows mode. To reverse, find and click on the icon located in the top-right corner of the side panel (just next to the x button that closes the panel). Summary This article has covered some features exclusive to Artisteer. It has also explained a brief process of how to create stunning templates for websites using Artisteer. Resources for Article : Further resources on this subject: Creating and Using Templates with Cacti 0.8 [Article] Using Templates to Display Channel Content in ExpressionEngine [Article] Working with Templates in Apache Roller 4.0 [Article]

The NGINX HTTP Server

Packt
18 Apr 2013
28 min read
(For more resources related to this topic, see here.) NGINX's architecture NGINX consists of a single master process and multiple worker processes. Each of these is single-threaded and designed to handle thousands of connections simultaneously. The worker process is where most of the action takes place, as this is the component that handles client requests. NGINX makes use of the operating system's event mechanism to respond quickly to these requests. The NGINX master process is responsible for reading the configuration, handling sockets, spawning workers, opening log files, and compiling embedded Perl scripts. The master process is the one that responds to administrative requests via signals. The NGINX worker process runs in a tight event loop to handle incoming connections. Each NGINX module is built into the worker, so that any request processing, filtering, handling of proxy connections, and much more is done within the worker process. Due to this worker model, the operating system can handle each process separately and schedule the processes to run optimally on each processor core. If there are any processes that would block a worker, such as disk I/O, more workers than cores can be configured to handle the load. There are also a small number of helper processes that the NGINX master process spawns to handle dedicated tasks. Among these are the cache loader and cache manager processes. The cache loader is responsible for preparing the metadata for worker processes to use the cache. The cache manager process is responsible for checking cache items and expiring invalid ones. NGINX is built in a modular fashion. The master process provides the foundation upon which each module may perform its function. Each protocol and handler is implemented as its own module. The individual modules are chained together into a pipeline to handle connections and process requests. After a request is handled, it is then passed on to a series of filters, in which the response is processed. One of these filters is responsible for processing subrequests, one of NGINX's most powerful features. Subrequests are how NGINX can return the results of a request that differs from the URI that the client sent. Depending on the configuration, they may be multiply nested and call other subrequests. Filters can collect the responses from multiple subrequests and combine them into one response to the client. The response is then finalized and sent to the client. Along the way, multiple modules come into play. See http://www.aosabook.org/en/nginx.html for a detailed explanation of NGINX internals. We will be exploring the http module and a few helper modules in the remainder of this article. The HTTP core module The http module is NGINX's central module, which handles all interactions with clients over HTTP. We will have a look at the directives in the rest of this section, again divided by type. The server The server directive starts a new context. We have already seen examples of its usage throughout the book so far. One aspect that has not yet been examined in-depth is the concept of a default server. A default server in NGINX means that it is the first server defined in a particular configuration with the same listen IP address and port as another server. A default server may also be denoted by the default_server parameter to the listen directive. 
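For instance, a minimal sketch of that parameter (the address and the catch-all server_name here are placeholders, not taken from the book's examples):

server {
    # explicitly mark this virtual host as the default for 127.0.0.1:80,
    # regardless of where it appears in the configuration
    listen 127.0.0.1:80 default_server;
    server_name _;    # placeholder catch-all name
}

With default_server present, the server's position in the configuration no longer determines which one acts as the default.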
The default server is useful to define a set of common directives that will then be reused for subsequent servers listening on the same IP address and port:

server {
    listen 127.0.0.1:80;
    server_name default.example.com;
    server_name_in_redirect on;
}

server {
    listen 127.0.0.1:80;
    server_name www.example.com;
}

In this example, the www.example.com server will have the server_name_in_redirect directive set to on, just like the default.example.com server. Note that this would also work if both servers had no listen directive, since they would still both match the same IP address and port number (that of the default value for listen, which is *:80). Inheritance, though, is not guaranteed. There are only a few directives that are inherited, and which ones they are changes over time.

A better use for the default server is to handle any request that comes in on that IP address and port and does not have a Host header. If you do not want the default server to handle requests without a Host header, it is possible to define an empty server_name directive. This server will then match those requests.

server {
    server_name "";
}

The following table summarizes the directives relating to server:

Table: HTTP server directives
- port_in_redirect: Determines whether or not the port will be specified in a redirect issued by NGINX.
- server: Creates a new configuration context, defining a virtual host. The listen directive specifies the IP address(es) and port(s); the server_name directive lists the Host header values that this context matches.
- server_name: Configures the names that a virtual host may respond to.
- server_name_in_redirect: Activates using the first value of the server_name directive in any redirect issued by NGINX within this context.
- server_tokens: Disables sending the NGINX version string in error messages and the Server response header (default value is on).

Logging

NGINX has a very flexible logging model. Each level of configuration may have an access log. In addition, more than one access log may be specified per level, each with a different log_format. The log_format directive allows you to specify exactly what will be logged, and needs to be defined within the http section. The path to the log file itself may contain variables, so that you can build a dynamic configuration. The following example describes how this can be put into practice:

http {
    log_format vhost '$host $remote_addr - $remote_user [$time_local] '
        '"$request" $status $body_bytes_sent '
        '"$http_referer" "$http_user_agent"';
    log_format downloads '$time_iso8601 $host $remote_addr '
        '"$request" $status $body_bytes_sent $request_time';
    open_log_file_cache max=1000 inactive=60s;
    access_log logs/access.log;
    server {
        server_name ~^(www.)?(.+)$;
        access_log logs/combined.log vhost;
        access_log logs/$2/access.log;
        location /downloads {
            access_log logs/downloads.log downloads;
        }
    }
}

The following table describes the directives used in the preceding code:

Table: HTTP logging directives
- access_log: Describes where and how access logs are to be written. The first parameter is a path to the file where the logs are to be stored; variables may be used in constructing the path, and the special value off disables the access log. An optional second parameter indicates the log_format that will be used to write the logs; if no second parameter is configured, the predefined combined format is used. An optional third parameter indicates the size of the buffer if write buffering should be used to record the logs. If write buffering is used, this size cannot exceed the size of the atomic disk write for that filesystem. If this third parameter is gzip, then the buffered logs will be compressed on-the-fly, provided that the nginx binary was built with the zlib library. A final flush parameter indicates the maximum length of time buffered log data may remain in memory before being flushed to disk.
- log_format: Specifies which fields should appear in the log file and what format they should take. See the next table for a description of the log-specific variables.
- log_not_found: Disables reporting of 404 errors in the error log (default value is on).
- log_subrequest: Enables logging of subrequests in the access log (default value is off).
- open_log_file_cache: Stores a cache of open file descriptors used in access_logs with a variable in the path. The parameters used are: max (the maximum number of file descriptors present in the cache), inactive (NGINX will wait this amount of time for something to be written to this log before its file descriptor is closed), min_uses (the file descriptor has to be used this number of times within the inactive period in order to remain open), valid (NGINX will check this often to see if the file descriptor still matches a file with the same name), and off (disables the cache).
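The off value mentioned for access_log above is handy on its own. As a small sketch (the health-check location is only an illustration, not one of the book's examples), logging can be switched off for requests that would otherwise clutter the log, such as frequent monitoring polls:

server {
    access_log logs/access.log combined;

    location = /healthcheck {
        # assumption: a monitoring system polls this URI every few seconds
        access_log off;
        return 200;
    }
}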
In the following example, log entries will be compressed at a gzip level of 4. The buffer size is the default of 64 KB and will be flushed to disk at least every minute.

access_log /var/log/nginx/access.log.gz combined gzip=4 flush=1m;

Note that when specifying gzip, the log_format parameter is not optional. The default combined log_format is constructed like this:

log_format combined '$remote_addr - $remote_user [$time_local] '
    '"$request" $status $body_bytes_sent '
    '"$http_referer" "$http_user_agent"';

As you can see, line breaks may be used to improve readability. They do not affect the log_format itself. Any variables may be used in the log_format directive. The variables in the following table that are marked with an asterisk (*) are specific to logging and may only be used in the log_format directive. The others may be used elsewhere in the configuration as well.

Table: Log format variables
- $body_bytes_sent: The number of bytes sent to the client, excluding the response header.
- $bytes_sent: The number of bytes sent to the client.
- $connection: A serial number, used to identify unique connections.
- $connection_requests: The number of requests made through a particular connection.
- $msec: The time in seconds, with millisecond resolution.
- $pipe *: Indicates if the request was pipelined (p) or not (.).
- $request_length *: The length of the request, including the HTTP method, URI, HTTP protocol, header, and request body.
- $request_time: The request processing time, with millisecond resolution, from the first byte received from the client to the last byte sent to the client.
- $status: The response status.
- $time_iso8601 *: Local time in ISO8601 format.
- $time_local *: Local time in common log format (%d/%b/%Y:%H:%M:%S %z).

In this section, we have focused solely on access_log and how it can be configured. You can also configure NGINX to log errors.

Finding files

In order for NGINX to respond to a request, it passes it to a content handler, determined by the configuration of the location directive. The unconditional content handlers are tried first: perl, proxy_pass, flv, mp4, and so on.
If none of these is a match, the request is passed to one of the following, in order: random index, index, autoindex, gzip_static, static. Requests with a trailing slash are handled by one of the index handlers. If gzip is not activated, then the static module handles the request. How these modules find the appropriate file or directory on the filesystem is determined by a combination of certain directives. The root directive is best defined in a default server directive, or at least outside of a specific location directive, so that it will be valid for the whole server: server { root /home/customer/html; location / { index index.html index.htm; } location /downloads { autoindex on; } } In the preceding example any files to be served are found under the root /home/customer/html. If the client entered just the domain name, NGINX will try to serve index.html. If that file does not exist, then NGINX will serve index.htm. When a user enters the /downloads URI in their browser, they will be presented with a directory listing in HTML format. This makes it easy for users to access sites hosting software that they would like to download. NGINX will automatically rewrite the URI of a directory so that the trailing slash is present, and then issue an HTTP redirect. NGINX appends the URI to the root to find the file to deliver to the client. If this file does not exist, the client receives a 404 Not Found error message. If you don't want the error message to be returned to the client, one alternative is to try to deliver a file from different filesystem locations, falling back to a generic page, if none of those options are available. The try_files directive can be used as follows: location / { try_files $uri $uri/ backups/$uri /generic-not-found.html; } As a security precaution, NGINX can check the path to a file it's about to deliver, and if part of the path to the file contains a symbolic link, it returns an error message to the client: server { root /home/customer/html; disable_symlinks if_not_owner from=$document_root; } In the preceding example, NGINX will return a "Permission Denied" error if a symlink is found after /home/customer/html, and that symlink and the file it points to do not both belong to the same user ID. The following table summarizes these directives: Table: HTTP file-path directives Directive Explanation disable_symlinks Determines if NGINX should perform a symbolic link check on the path to a file before delivering it to the client. The following parameters are recognized: off : Disables checking for symlinks (default) on: If any part of a path is a symlink, access is denied if_not_owner: If any part of a path contains a symlink in which the link and the referent have different owners, access to the file is denied from=part: When specified, the path up to part is not checked for symlinks, everything afterward is according to either the on or if_not_owner parameter root Sets the path to the document root. Files are found by appending the URI to the value of this directive. try_files Tests the existence of files given as parameters. If none of the previous files are found, the last entry is used as a fallback, so ensure that this path or named location exists, or is set to return a status code indicated by  =<status code>. Name resolution If logical names instead of IP addresses are used in an upstream or *_pass directive, NGINX will by default use the operating system's resolver to get the IP address, which is what it really needs to connect to that server. 
This will happen only once, the first time the upstream is requested, and won't work at all if a variable is used in the *_pass directive. It is possible, though, to configure a separate resolver for NGINX to use. By doing this, you can override the TTL returned by DNS, as well as use variables in the *_pass directives.

server {
    resolver 192.168.100.2 valid=300s;
}

Table: Name resolution directives
- resolver: Configures one or more name servers to be used to resolve upstream server names into IP addresses. An optional valid parameter overrides the TTL of the domain name record.

In order to get NGINX to resolve an IP address anew, place the logical name into a variable. When NGINX resolves that variable, it implicitly makes a DNS look-up to find the IP address. For this to work, a resolver directive must be configured:

server {
    resolver 192.168.100.2;
    location / {
        set $backend upstream.example.com;
        proxy_pass http://$backend;
    }
}

Of course, by relying on DNS to find an upstream, you are dependent on the resolver always being available. When the resolver is not reachable, a gateway error occurs. In order to make the client wait time as short as possible, the resolver_timeout parameter should be set low. The gateway error can then be handled by an error_page designed for that purpose.

server {
    resolver 192.168.100.2;
    resolver_timeout 3s;
    error_page 504 /gateway-timeout.html;
    location / {
        proxy_pass http://upstream.example.com;
    }
}

Client interaction

There are a number of ways in which NGINX can interact with clients. This can range from attributes of the connection itself (IP address, timeouts, keepalive, and so on) to content negotiation headers. The directives listed in the following table describe how to set various headers and response codes to get the clients to request the correct page or serve up that page from its own cache:

Table: HTTP client interaction directives
- default_type: Sets the default MIME type of a response. This comes into play if the MIME type of the file cannot be matched to one of those specified by the types directive.
- error_page: Defines a URI to be served when an error-level response code is encountered. Adding an = parameter allows the response code to be changed. If the argument to this parameter is left empty, the response code will be taken from the URI, which must in this case be served by an upstream server of some sort.
- etag: Disables automatically generating the ETag response header for static resources (default is on).
- if_modified_since: Controls how the modification time of a response is compared to the value of the If-Modified-Since request header: off (the If-Modified-Since header is ignored), exact (an exact match is made; the default), or before (the modification time of the response is less than or equal to the value of the If-Modified-Since header).
- ignore_invalid_headers: Disables ignoring headers with invalid names (default is on). A valid name is composed of ASCII letters, numbers, the hyphen, and possibly the underscore (controlled by the underscores_in_headers directive).
- merge_slashes: Disables the removal of multiple slashes. The default value of on means that NGINX will compress two or more / characters into one.
- recursive_error_pages: Enables doing more than one redirect using the error_page directive (default is off).
- types: Sets up a map of MIME types to file name extensions. NGINX ships with a conf/mime.types file that contains most MIME type mappings.
Using include to load this file should be sufficient for most purposes. underscores_in_headers Enables the use of the underscore character in client request headers. If left at the default value off , evaluation of such headers is subject to the value of the ignore_invalid_headers directive. The error_page directive is one of NGINX's most flexible. Using this directive, we may serve any page when an error condition presents. This page could be on the local machine, but could also be a dynamic page produced by an application server, and could even be a page on a completely different site. http { # a generic error page to handle any server-level errors error_page 500 501 502 503 504 share/examples/nginx/50x.html; server { server_name www.example.com; root /home/customer/html; # for any files not found, the page located at # /home/customer/html/404.html will be delivered error_page 404 /404.html; location / { # any server-level errors for this host will be directed # to a custom application handler error_page 500 501 502 503 504 = @error_handler; } location /microsite { # for any non-existent files under the /microsite URI, # the client will be shown a foreign page error_page 404 http://microsite.example.com/404.html; } # the named location containing the custom error handler location @error_handler { # we set the default type here to ensure the browser # displays the error page correctly default_type text/html; proxy_pass http://127.0.0.1:8080; } } } Using limits to prevent abuse We build and host websites because we want users to visit them. We want our websites to always be available for legitimate access. This means that we may have to take measures to limit access to abusive users. We may define "abusive" to mean anything from one request per second to a number of connections from the same IP address. Abuse can also take the form of a DDOS (distributed denial-of-service) attack, where bots running on multiple machines around the world all try to access the site as many times as possible at the same time. In this section, we will explore methods to counter each type of abuse to ensure that our websites are available. First, let's take a look at the different configuration directives that will help us achieve our goal: Table: HTTP limits directives Directive Explanation limit_conn Specifies a shared memory zone (configured with limit_conn_zone) and the maximum number of connections that are allowed per key value. limit_conn_log_level When NGINX limits a connection due to the limit_conn directive, this directive specifies at which log level that limitation is reported. limit_conn_zone Specifies the key to be limited in limit_conn as the first parameter. The second parameter, zone, indicates the name of the shared memory zone used to store the key and current number of connections per key and the size of that zone (name:size). limit_rate Limits the rate (in bytes per second) at which clients can download content. The rate limit works on a connection level, meaning that a single client could increase their throughput by opening multiple connections. limit_rate_after Starts the limit_rate after this number of bytes have been transferred. limit_req Sets a limit with bursting capability on the number of requests for a specific key in a shared memory store (configured with limit_req_zone). The burst can be specified with the second parameter. If there shouldn't be a delay in between requests up to the burst, a third parameter nodelay needs to be configured. 
limit_req_log_level When NGINX limits the number of requests due to the limit_req directive, this directive specifies at which log level that limitation is reported. A delay is logged at a level one less than the one indicated here. limit_req_zone Specifies the key to be limited in limit_req as the first parameter. The second parameter, zone, indicates the name of the shared memory zone used to store the key and current number of requests per key and the size of that zone ( name:size). The third parameter, rate, configures the number of requests per second (r/s) or per minute (r/m) before the limit is imposed. max_ranges Sets the maximum number of ranges allowed in a byte-range request. Specifying 0 disables byte-range support. Here we limit access to 10 connections per unique IP address. This should be enough for normal browsing, as modern browsers open two to three connections per host. Keep in mind, though, that any users behind a proxy will all appear to come from the same address. So observe the logs for error code 503 (Service Unavailable), meaning that this limit has come into effect: http { limit_conn_zone $binary_remote_addr zone=connections:10m; limit_conn_log_level notice; server { limit_conn connections 10; } } Limiting access based on a rate looks almost the same, but works a bit differently. When limiting how many pages per unit of time a user may request, NGINX will insert a delay after the first page request, up to a burst. This may or may not be what you want, so NGINX offers the possibility to remove this delay with the nodelay parameter: http { limit_req_zone $binary_remote_addr zone=requests:10m rate=1r/s; limit_req_log_level warn; server { limit_req zone=requests burst=10 nodelay; } } Using $binary_remote_addr We use the $binary_remote_addr variable in the preceding example to know exactly how much space storing an IP address will take. This variable takes 32 bytes on 32-bit platforms and 64 bytes on 64-bit platforms. So the 10m zone we configured previously is capable of holding up to 320,000 states on 32-bit platforms or 160,000 states on 64-bit platforms. We can also limit the bandwidth per client. This way we can ensure that a few clients don't take up all the available bandwidth. One caveat, though: the limit_rate directive works on a connection basis. 
A single client that is allowed to open multiple connections will still be able to get around this limit:

location /downloads {
    limit_rate 500k;
}

Alternatively, we can allow a kind of bursting to freely download smaller files, but make sure that larger ones are limited:

location /downloads {
    limit_rate_after 1m;
    limit_rate 500k;
}

Combining these different rate limitations enables us to create a configuration that is very flexible as to how and where clients are limited:

http {
    limit_conn_zone $binary_remote_addr zone=ips:10m;
    limit_conn_zone $server_name zone=servers:10m;
    limit_req_zone $binary_remote_addr zone=requests:10m rate=1r/s;
    limit_conn_log_level notice;
    limit_req_log_level warn;
    reset_timedout_connection on;
    server {
        # these limits apply to the whole virtual server
        limit_conn ips 10;
        # only 1000 simultaneous connections to the same server_name
        limit_conn servers 1000;
        location /search {
            # here we want only the /search URL to be rate-limited
            limit_req zone=requests burst=3 nodelay;
        }
        location /downloads {
            # limit each client (by IP) to one connection here, so the
            # bandwidth limit cannot be bypassed by opening more connections
            limit_conn ips 1;
            limit_rate_after 1m;
            limit_rate 500k;
        }
    }
}

Restricting access

In the previous section, we explored ways to limit abusive access to websites running under NGINX. Now we will take a look at ways to restrict access to a whole website or certain parts of it. Access restriction can take two forms here: restricting to a certain set of IP addresses, or restricting to a certain set of users. These two methods can also be combined, to satisfy requirements where some users can access the website either from a certain set of IP addresses or if they are able to authenticate with a valid username and password. The following directives will help us achieve these goals:

Table: HTTP access module directives
- allow: Allows access from this IP address, network, or all.
- auth_basic: Enables authentication using HTTP Basic Authentication. The parameter string is used as the realm name. If the special value off is used, the auth_basic value of the parent configuration level is negated.
- auth_basic_user_file: Indicates the location of a file of username:password:comment tuples used to authenticate users. The password field needs to be encrypted with the crypt algorithm. The comment field is optional.
- deny: Denies access from this IP address, network, or all.
- satisfy: Allows access if all or any of the preceding directives grant access. The default value all indicates that a user must come from a specific network address and enter the correct password.

To restrict access to clients coming from a certain set of IP addresses, the allow and deny directives can be used as follows:

location /stats {
    allow 127.0.0.1;
    deny all;
}

This configuration will allow access to the /stats URI from the localhost only. To restrict access to authenticated users, the auth_basic and auth_basic_user_file directives are used as follows:

server {
    server_name restricted.example.com;
    auth_basic "restricted";
    auth_basic_user_file conf/htpasswd;
}

Any user wanting to access restricted.example.com would need to provide credentials matching those in the htpasswd file located in the conf directory of NGINX's root.
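Because auth_basic is inherited by nested configuration contexts, the off value described in the preceding table can be used to re-open part of an otherwise protected site. The following is only a small sketch of that idea (the /public location is an illustration, not part of the book's examples):

server {
    server_name restricted.example.com;
    auth_basic "restricted";
    auth_basic_user_file conf/htpasswd;

    location /public/ {
        # negate the inherited auth_basic for this location only
        auth_basic off;
    }
}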
The entries in the htpasswd file can be generated using any available tool that uses the standard UNIX crypt() function. For example, the following Ruby script will generate a file of the appropriate format:

#!/usr/bin/env ruby
# set up the command-line options
require 'optparse'
OptionParser.new do |o|
  o.on('-f FILE') { |file| $file = file }
  o.on('-u', "--username USER") { |u| $user = u }
  o.on('-p', "--password PASS") { |p| $pass = p }
  o.on('-c', "--comment COMM (optional)") { |c| $comm = c }
  o.on('-h') { puts o; exit }
  o.parse!
  if $user.nil? or $pass.nil?
    puts o; exit
  end
end

# initialize an array of ASCII characters to be used for the salt
ascii = ('a'..'z').to_a + ('A'..'Z').to_a + ('0'..'9').to_a + [ ".", "/" ]

$lines = []
begin
  # read in the current http auth file
  File.open($file) do |f|
    f.lines.each { |l| $lines << l }
  end
rescue Errno::ENOENT
  # if the file doesn't exist (first use), initialize the array
  $lines = ["#{$user}:#{$pass}\n"]
end

# remove the user from the current list, since this is the one we're editing
$lines.map! do |line|
  unless line =~ /#{$user}:/
    line
  end
end

# generate a crypt()ed password
pass = $pass.crypt(ascii[rand(64)] + ascii[rand(64)])

# if there's a comment, insert it
if $comm
  $lines << "#{$user}:#{pass}:#{$comm}\n"
else
  $lines << "#{$user}:#{pass}\n"
end

# write out the new file, creating it if necessary
File.open($file, File::RDWR|File::CREAT) do |f|
  $lines.each { |l| f << l }
end

Save this file as http_auth_basic.rb and give it a filename (-f), a user (-u), and a password (-p), and it will generate entries appropriate for use in NGINX's auth_basic_user_file directive:

$ ./http_auth_basic.rb -f htpasswd -u testuser -p 123456

To handle scenarios where a username and password should only be entered if the client is not coming from a certain set of IP addresses, NGINX has the satisfy directive. The any parameter is used here for this either/or scenario:

server {
    server_name intranet.example.com;
    location / {
        auth_basic "intranet: please login";
        auth_basic_user_file conf/htpasswd-intranet;
        allow 192.168.40.0/24;
        allow 192.168.50.0/24;
        deny all;
        satisfy any;
    }
}

If, instead, the requirement is for a configuration in which the user must come from a certain IP address and provide authentication, the all parameter is the default. So, we omit the satisfy directive itself and include only allow, deny, auth_basic, and auth_basic_user_file:

server {
    server_name stage.example.com;
    location / {
        auth_basic "staging server";
        auth_basic_user_file conf/htpasswd-stage;
        allow 192.168.40.0/24;
        allow 192.168.50.0/24;
        deny all;
    }
}

Streaming media files

NGINX is capable of serving certain video media types. The flv and mp4 modules, included in the base distribution, can perform what is called pseudo-streaming. This means that NGINX will seek to a certain location in the video file, as indicated by the start request parameter. In order to use the pseudo-streaming capabilities, the corresponding module needs to be included at compile time: --with-http_flv_module for Flash Video (FLV) files and/or --with-http_mp4_module for H.264/AAC files. The following directives will then become available for configuration:

Table: HTTP streaming directives
- flv: Activates the flv module for this location.
- mp4: Activates the mp4 module for this location.
- mp4_buffer_size: Sets the initial buffer size for delivering MP4 files.
- mp4_max_buffer_size: Sets the maximum size of the buffer used to process MP4 metadata.
Activating FLV pseudo-streaming for a location is as simple as just including the flv keyword: location /videos { flv; } There are more options for MP4 pseudo-streaming, as the H.264 format includes metadata that needs to be parsed. Seeking is available once the "moov atom" has been parsed by the player. So to optimize performance, ensure that the metadata is at the beginning of the file. If an error message such as the following shows up in the logs, the mp4_max_buffer_size needs to be increased: mp4 moov atom is too large mp4_max_buffer_size can be increased as follows: location /videos { mp4; mp4_buffer_size 1m; mp4_max_buffer_size 20m; } Predefined variables NGINX makes constructing configurations based on the values of variables easy. Not only can you instantiate your own variables by using the set or map directives, but there are also predefined variables used within NGINX. They are optimized for quick evaluation and the values are cached for the lifetime of a request. You can use any of them as a key in an if statement, or pass them on to a proxy. A number of them may prove useful if you define your own log file format. If you try to redefine any of them, though, you will get an error message as follows: <timestamp> [emerg] <master pid>#0: the duplicate "<variable_name>" variable in <path-to-configuration-file>:<line-number> They are also not made for macro expansion in the configuration—they are mostly used at run time. Summary In this article, we have explored a number of directives used to make NGINX serve files over HTTP. Not only does the http module provide this functionality, but there are also a number of helper modules that are essential to the normal operation of NGINX. These helper modules are enabled by default. Combining the directives of these various modules enables us to build a configuration that meets our needs. We explored how NGINX finds files based on the URI requested. We examined how different directives control how the HTTP server interacts with the client, and how the error_page directive can be used to serve a number of needs. Limiting access based on bandwidth usage, request rate, and number of connections is all possible. We saw, too, how we can restrict access based on either IP address or through requiring authentication. We explored how to use NGINX's logging capabilities to capture just the information we want. Pseudo-streaming was examined briefly, as well. NGINX provides us with a number of variables that we can use to construct our configurations. Resources for Article : Further resources on this subject: Nginx HTTP Server FAQs [Article] Nginx Web Services: Configuration and Implementation [Article] Using Nginx as a Reverse Proxy [Article]

Learning to Fly with Force.com

Packt
17 Apr 2013
20 min read
(For more resources related to this topic, see here.) What is cloud computing? If you have been in the IT industry for some time, you probably know what cloud means. For the rest, it is used as a metaphor for the worldwide network or the Internet. Computing normally indicates the use of computer hardware and software. Combining these two terms, we get a simple definition—use of computer resources over the Internet (as a service). In other words, when the computing is delegated to resources available over the Internet, we get what is called cloud computing. As Wikipedia defines it: Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet). Still confused? A simple example will help clarify it. Say you are managing the IT department of an organization, where you are responsible for purchasing hardware and software (licenses) for your employees and making sure they have the right resources to do their jobs. Whenever there is a new hire, you need to go through all the purchase formalities once again to get your user the necessary resources. Soon this turns out to be a nightmare of managing all your software licenses! Now, what if you could find an alternative where you host an application on the Web, which your users can access through their browsers and interact with it? You are freed from maintaining individual licenses and maintaining high-end hardware at the user machines. Voila, we just discovered cloud computing! Cloud computing is the logical conclusion drawn from observing the drawbacks of in-house solutions. The trend is now picking up and is quickly replacing the onpremise software application delivery models that are accompanied with high costs of managing data centers, hardware, and software. All users pay for is the quantum of the services that they use. That is why it's sometimes also known as utility-based computing, as the corresponding payment is resource usage based. Chances are that even before you ever heard of this term, you had been using it unknowingly. Have you ever used hosted e-mail services such as Yahoo, Hotmail, or Gmail where you accessed all of their services through the browser instead of an e-mail client on your computer? Now that is a typical example of cloud computing. Anything that is offered as a service (aaS) is usually considered in the realm of cloud computing. Everything in the cloud means no hardware, no software, so no maintenance and that is what the biggest advantage is. Different types of services that are most prominently delivered on the cloud are as follows: Infrastructure as a service (IaaS) Platform as a service (PaaS) Software as a service (SaaS) Infrastructure as a service (IaaS) Sometimes referred to hardware as a service, infrastructure as a service offers the IT infrastructure, which includes servers, routers, storages, firewalls, computing resources, and so on, in physical or virtualized forms as a service. Users can subscribe to these services and pay on the basis of need and usage. The key player in this domain is Amazon.com, with EC2 and S3 as examples of typical IaaS. Elastic Cloud Computing (EC2) is a web service that provides resizable computing capacity in the cloud. Computing resources can be scaled up or down within minutes, allowing users to pay for the actual capacity being used. 
Similarly, S3 is an online storage web service offered by Amazon, which provides 99.999999999 percent durability and 99.99 percent availability of objects over a given year, and stores arbitrary objects (computer files) of up to 5 terabytes in size!

Platform as a service (PaaS)

PaaS provides the infrastructure for the development of software applications. Accessed over the cloud, it sits between IaaS and SaaS, where it hides the complexities of dealing with the underlying hardware and software. It is an application-centric approach that allows developers to focus more on business applications rather than infrastructure-level issues. Developers no longer have to worry about server upgrades, scalability, load balancing, service availability, and other infrastructure hassles, as these are delegated to the platform vendors. PaaS allows the development of custom applications by providing the appropriate building blocks and the necessary infrastructure as a service.

An excellent example in this category is the Force.com platform, which is a game changer in the aaS space, especially in the PaaS domain. It exposes a proprietary application development platform, which is woven around a relational database. It stands at a higher level than another key player in this domain, Google App Engine, which supports scalable web application development in Java and Python on the appropriate application server stack, but does not provide proprietary components or building blocks as robust as those of Force.com. Another popular choice (or perhaps not) is Microsoft's application platform, called Windows Azure, which can be used to build websites (developed in ASP.NET, PHP, or Node.js), provision virtual machines, and provide cloud services (containers of hosted applications).

A limitation of applications built on these platforms is the quota limits, or the strategy to prohibit the monopolization of shared resources in the multitenant environment. Some developers see this as a restriction that only lets them build applications with limited capability, but we reckon it is an opportunity to build highly efficient solutions that work within governor limits while still maintaining the sanctity of the business process. Specifically for the Force.com platform, some people consider the shortage of skilled resources a possible limitation, but we think the learning curve on this platform is manageable, and an experienced developer can pick up the proprietary languages pretty quickly, with an average ramp-up time spanning anywhere from 15 to 30 days!

Software as a service (SaaS)

At the opposite end from IaaS is SaaS. Business applications are offered as services over the Internet to users who don't have to go through the complex custom application development and implementation cycles. They also don't invest upfront in IT infrastructure or maintain their software with regular upgrades. All this is taken care of by the SaaS vendors. These business applications normally provide customization capabilities to accommodate specific business needs, such as user interfaces, business workflows, and so on. Some good examples in this category are the Salesforce.com CRM system and Google Apps services.

What is Force.com?

Force.com is a natural progression from Salesforce.com, which started as a sales force automation system offered as a service (SaaS). The need to go beyond the initially offered customizable CRM application and develop custom solutions resulted in a radical shift of the cloud delivery model from SaaS to PaaS.
The technology that powers Salesforce CRM, whose design fulfills all the prerequisites of being a cloud application, is now available for developing enterprise-level applications. An independent study of the Force.com platform concluded that, compared to a traditional Java-based application development platform, development with Force.com is almost five times faster, with about a 40 percent smaller overall project cost and better quality, thanks to rapid prototyping during requirement gathering (a benefit of the declarative aspect of Force.com development) and less testing due to proven code reuse.

What empowers Force.com?

Why is Force.com application development so successful? Primarily because of its key architectural features, discussed in the following sections.

Multitenancy

Multitenancy is a concept that is the opposite of single-tenancy. In cloud computing jargon, a customer or an organization is referred to as a tenant. The various downsides and cost inefficiencies of single-tenant models are overcome by the multitenant model. A multitenant application caters to multiple organizations, each working in its own isolated virtual environment called an org, and sharing a single physical instance and version of the application hosted on the Force.com infrastructure. It is isolated because, although the infrastructure is shared, every customer's data, customizations, and code remain secure and insulated from other customers.

Multitenant applications run on a single physical instance and version of the application, providing the same robust infrastructure to all their customers. This also means freedom from upfront costs, ongoing upgrades, and maintenance costs. The test methods written by customers on their respective orgs ensure more than 75 percent code coverage and thus help Salesforce.com in the regression testing of Force.com upgrades, releases, and patches. The same is difficult to even visualize with in-house software application development.

Metadata

What drives the multitenant applications on Force.com? Nothing else but the metadata-driven architecture of the platform! Think about the following:

- The platform allows all tenants to coexist at the same time
- Tenants can extend the standard common object model without affecting others
- Tenants' data is kept isolated from others in a shared database
- The platform customizes the interface and business logic without disrupting the services for others
- The platform's codebase can be upgraded to offer new features without affecting the tenants' customizations
- The platform scales up with rising demands and new customers

To meet all the listed challenges, Force.com has been built upon a metadata-driven architecture, where the runtime engine generates application components from the metadata. All customizations to the standard platform for each tenant are stored in the form of metadata, thus keeping the core Force.com application and the client customizations distinctly separate and making it possible to upgrade the core without affecting the metadata. The core Force.com application comprises the application data and the metadata describing the base application, thus forming three layers sitting on top of each other in a common database, with the runtime engine interpreting all of these and rendering the final output in the client browser.
As metadata is a virtual representation of the application components and customizations of the standard platform, the statically compiled Force.com application's runtime engine is highly optimized for dynamic metadata access and advanced caching techniques to produce remarkable application response times. Understanding the Force.com stack A white paper giving an excellent explanation of the Force.com stack has been published. It describes various layers of technologies and services that make up the platform. We will also cover it here briefly. The application stack is shown in the following diagram: Infrastructure as a service Infrastructure is the first layer of the stack on top of which other services function. It acts as the foundation for securely and reliably delivering the cloud applications developed by the customers as well as the core Salesforce CRM applications. It powers more than 200 million transactions per day and more than 1.5 million subscribers. The highly managed data centers provide unparalleled redundancy with near-real-time replication, world class security at physical, network, host, data transmission, and database levels, and excellent design to scale both vertically and horizontally. Database as a service The powerful and reliable data persistence layer in the Force.com stack is known as the Force.com database. It sits on top of the infrastructure and provides the majority of the Force.com platform capabilities. The declarative web interface allows user to create objects and fields generating the native application UI around them. Users can also define relationships between objects, create validation rules to ensure data integrity, track history on certain fields, create formula fields to logically derive new data values, create fine-grained security access with the point and click operations, and all of this without writing a single line of code or even worrying about the database backup, tuning, upgrade, and scalability issues! As compared with the relational database, it is similar in the sense that the object (a data instance) and fields are analogous to tables and columns, and Force.com relationships are similar to the referential integrity constraints in a relation DB. But unlike physically separate tables with dedicated storage, Force.com objects are maintained as a set of metadata interpreted on the fly by the runtime engine and all of the application data is stored in a set of a few large database tables. This data is represented as virtual records based on the interpretation of tenants' customizations stored as metadata. Integration as a service Integration as a service utilizes the underlying Force.com database layer and provides the platform's integration capabilities through the open-standards-based web services API. In today's world, most organizations have their applications developed on disparate platforms, which have to work in conjunction to correctly represent and support their internal business processes. Customers' existing applications can connect with Force.com through the SOAP or REST web services to access data and create mashups to combine data from multiple sources. The Force.com platform also allows native applications to integrate with third-party web services through callouts to include information from external systems in organizations' business processes. 
These integration capabilities of the platform through API (for example, Bulk API, Chatter API, Metadata API, Apex REST API, Apex SOAP API, Streaming API, and so on) can be used by developers to build custom integration solutions to both produce and consume web services. Accordingly, it's been leveraged by many third parties such as Informatica, Cast Iron, Talend, and so on, to create prepackaged connectors for applications and systems such as Outlook, Lotus Notes, SAP, Oracle Financials, and so on. It also allows clouds such as Facebook, Google, and Amazon to talk to each other and build useful mashups. The integration ability is the key for developing mobile applications for various device platforms, which solely rely on the web services exposed by the Force.com platform. Logic as a service A development platform has to have the capability to create business processes involving complex logic. The Force.com platform oversimplifies this task to automate a company's business processes and requirements. The platform logic features can be utilized by both developers and business analysts to build smart database applications that help increase user productivity, improve data quality, automate manual processes, and adapt quickly to changing requirements. The platform allows creating the business logic either through a declarative interface in the form of workflow rules, approval processes, required and unique fields, formula fields, validation rules, or in an advanced form by writing triggers and classes in the platform's programming language—Apex—to achieve greater levels of flexibility, which help define any kind of functionality and business requirement that otherwise may not be possible through the point and click operations. User interface as a service The user interface of platform applications can be created and customized by either of the two approaches. The Force.com builder application, an interface based on point-and-click/drag-and-drop, allows users to build page layouts that are interpreted from the data model and validation rules with user defined customizations, define custom application components, create application navigation structures through tabs, and define customizable reports and user-specific views. For more complex pages and tighter control over the presentation layer, a platform allows users to build custom user interfaces through a technology called Visualforce (VF), which is based on the XML markup tags. The custom VF pages may or may not adopt the standard look and feel based on the stylesheet applied and present data returned from the controller or the logic layer in the structured format. The Visualforce interfaces are either public, private, or a mix of the two. Private interfaces require users to log in to the system before they can access resources, whereas public interfaces, called sites, can be made available on the Internet to anonymous users. Development as a service This a set of features that allow developers to utilize traditional practices for building cloud applications. 
These features include the following: Force.com Metadata API: Lets developers push changes directly into the XML files describing the organization's customizations and acts as an alternative to platform's interface to manage applications IDE (Integrated Development Environment): A powerful client application built on the Eclipse platform, allowing programmers to code, compile, test, package, and deploy applications A development sandbox: A separate application environment for development, quality assurance, and training of programmers Code Share: A service for users around the globe to collaborate on development, testing, and deployment of the cloud applications Force.com also allows online browser based development providing code assist functionality, repository search, debugging, and so on, thus eliminating the need of a local machine specific IDE. DaaS expands the Cloud Computing development process to include external tools such as integrated development environments, source control systems, and batch scripts to facilitate developments and deployments. Force.com AppExchange This is a cloud marketplace (accessible at http://appexchange.salesforce.com/) that helps commercial application vendors to publish their custom development applications as packages and then reach out to potential customers who can install them on their orgs with merely a button click through the web interface, without going through the hassles of software installation and configuration. Here, you may find good apps that provide functionality, that are not available in Salesforce, or which may require some heavy duty custom development if carried out on-premises! Introduction to governor limits Any introduction to Force.com is incomplete without a mention of governor limits. By nature, all multitenant architecture based applications such as Force.com have to have a mechanism that does not allow the code to abuse the shared resources so that other tenants in the infrastructure remain unaffected. In the Force.com world, it is the Apex runtime engine that takes care of such malicious code by enforcing runtime limits (called governor limits) in almost all areas of programming on the Force.com platform. If these governor limits had not been in place, even the simplest code, such as an endless loop, would consume enough resources to disrupt the service to the other users of the system, as they all share the same physical infrastructure. The concept of governor limits is not just limited to Force.com, but extends to all SaaS/PaaS applications, such as Google App Engine, and is critical for making the cloud-based development platform stable. This concept may prove to be very painful for some people, but there is a key logic to it. The platform enforces the best practices so that the application is practically usable and makes an optimal usage of resources, keeping the code well under governor limits. So the longer you work on Force.com, the more you become familiar with these limits, the more stable your code becomes over time, and the easier it becomes to work around these limits. In one of the forthcoming chapters, we will discover how to work with these governor limits and not against them, and also talk about ways to work around them, if required. Salesforce environments An environment is a set of resources, physical or logical, that let users build, test, deploy, and use applications. 
In the traditional development model, one would expect to have application servers, web servers, databases, and their costly provisioning and configuration. In the Force.com paradigm, all that's needed is a computer and an Internet connection to immediately get started building and testing a SaaS application.

An environment, that is, a virtual or logical instance of the Force.com infrastructure and platform, is also called an organization or just an org, and is provisioned in the cloud on demand. It has the following characteristics:

- Used for development, testing, and/or production
- Contains data and customizations
- Based on the edition containing specific functionality, objects, storage, and limits
- Certain restricted functionalities, such as the multicurrency feature (which is not available by default), can be enabled on demand
- All environments are accessible through a web browser

There are broadly three types of environments available for developing, testing, and deploying applications:

- Production environments: The Salesforce.com environments that have active paying users accessing business-critical data.
- Development environments: These environments are used strictly for developing and testing applications with data that is not business critical, without affecting the production environment. Developer environments are of two types:
  - Developer Edition: This is a free, full-featured copy of the Enterprise Edition, with less storage and fewer users. It allows users to create packaged applications suitable for any Salesforce production environment. It can be of two types:
    - Regular Developer Edition: This is a regular DE org whose sign-up is free, and a user can register for any number of DE orgs. This is suitable when you want to develop managed packages for distribution through AppExchange or Trialforce, when you are working with an edition where a sandbox is not available, or if you just want to explore the Force.com platform for free.
    - Partner Developer Edition: This is a regular DE org but with more storage, features, and licenses. This is suitable when you expect a larger team to work on the application and need a bigger environment to test it against a larger real-life dataset. Note that this org can only be created by Salesforce consulting partners or Force.com ISVs.
  - Sandbox: This is a nearly identical copy of the production environment, available to Enterprise or Unlimited Edition customers, and can contain data and/or customizations. This is suitable when developing applications for production environments only, with no plans to distribute applications commercially through AppExchange or Trialforce, or when you want to test beta-managed packages. Note that sandboxes are completely isolated from your Salesforce production organization, so operations you perform in your sandboxes do not affect your Salesforce production organization, and vice versa. Types of sandboxes are as follows:
    - Full copy sandbox: A nearly identical copy of the production environment, including data and customizations
    - Configuration-only sandbox: Contains only configurations and not data from the production environment
    - Developer sandbox: Same as the Configuration-only sandbox but with less storage
- Test environments: These can be either production or developer environments, used specifically for testing application functionality before deploying to production or releasing to customers. These environments are suitable when you want to test applications in production-like environments with more users and storage to run real-life tests.

Summary

This article talked about the basic concepts of cloud computing. The key takeaways from this article are the explanations of the different types of cloud-based services, such as IaaS, SaaS, and PaaS. We introduced the Force.com platform and its key architectural features, such as multitenancy and the metadata-driven architecture, that power the platform. We briefly covered the application stack (technology and services layers) that makes up the Force.com platform. We gave an overview of governor limits without going into too much detail about their use. We discussed situations where adopting cloud computing may be beneficial. We also discussed the guidelines that help you decide whether or not your software project should be developed on the Force.com platform. Last, but not least, we discussed the various environments available to developers and business users, and their characteristics and usage.

Resources for Article:

Further resources on this subject:
- Monitoring and Responding to Windows Intune Alerts [Article]
- Sharing a Mind Map: Using the Best of Mobile and Web Features [Article]
- Force.com: Data Management [Article]

Liferay, its Installation and setup

Packt
15 Apr 2013
7 min read
(For more resources related to this topic, see here.)

Overview of portals

Well, to understand more about what portals are, let me throw some familiar words at you. Have you used, heard of, or seen iGoogle, the Yahoo! home page, or MSN? If the answer is yes, then you have been using portals already. All these websites have two things in common:

- A common dashboard
- Information from various sources shown on a single page, giving a uniform experience

For example, on iGoogle, you can have a gadget showing the weather in Chicago, another gadget to play your favorite game of Sudoku, and a third one to read news from around the globe, everything on the same page without you knowing that all of these are served from different websites! That is what a portal is all about. So, a portal (or web portal) can be thought of as a website that shows, presents, displays, or brings together information or data from various sources and gives the user a uniform browsing experience. The small chunks of information that form the web page are given different names such as gadgets or widgets, portlets or dashlets.

Introduction to Liferay

Now that you have some basic idea about what portals are, let us revisit the initial statement I made about Liferay. Liferay is an open source portal solution. If you want to create a portal, you can use Liferay to do this. It is written in Java. It is an open source solution, which means the source code is freely available to everyone and people can modify and distribute it. With Liferay you can create basic intranet sites with minimal tweaking. You can also go for a full-fledged enterprise banking portal website with programming, and heavy customizations and integrations. Besides the powerful portal capabilities, Liferay also provides the following:

- Awesome enterprise and web content management capabilities
- Robust document management, which supports protocols such as CMIS and WebDAV
- Good social collaboration features

Liferay is backed by a solid and active community, whose members are ever eager to help. Sounds good? So what are we waiting for? Let's take a look at Liferay and its features.

Installation and setup

In four easy steps, you can install Liferay and run it on your system.

Step 1 – Prerequisites

Before we go and start our Liferay download, we need to check if we have the requirements for the installation. They are as follows:

- Memory: 2 GB (minimum), 4 GB (recommended).
- Disk space: Around 5 GB of free space should be more than enough for the exercises mentioned in the book.
- The exercises performed in this book are done on Windows XP, so you can use the same or any subsequent version of Windows OS. Although Liferay can be run on Mac OS X and Linux, how to set it up on them is beyond the scope of this book.
- The MySQL database should be installed. As with the OS, Liferay can be run on most of the major databases out there in the market. Liferay is shipped with the Hypersonic database by default for demo purposes, which should not be used for a production environment.
- Unzip tools such as gzip or 7-Zip.

Step 2 – Downloading Liferay

You can download the latest stable version of Liferay from https://www.liferay.com/downloads/liferay-portal/available-releases. Liferay comes in the following two versions:

- Enterprise Edition: This version is not free and you would have to purchase it. This version has undergone rigorous testing cycles to make sure that all the features are bug free, providing the necessary support and patches.
- Community Edition: This is a free downloadable version that has all the features but no enterprise support provided.

Liferay is supported by a lot of open source application servers, and the folks at Liferay have made it easy for end users by packaging everything as a bundle. What this means is that if you are asked to have Liferay installed in a JBoss application server, you can just go to the URL previously mentioned and select the Liferay-JBoss bundle to download, which gives you the JBoss application server with Liferay already installed. We will download the Community Edition of the Liferay-Tomcat bundle, which has Liferay preinstalled in the Tomcat server. The stable version at the time of writing this book was Liferay 6.1 GA2. As shown in the following screenshot, just click on Download after making sure that you have selected Liferay bundled with Tomcat, and save the ZIP file at an appropriate location:

Step 3 – Starting the server

After you have downloaded the bundle, extract it to the location of your choice on your machine. You will see a folder named liferay-portal-6.1.1-ce-ga2. The latter part of the name can change based on the version that you download. Let us take a moment to have a look at the folder structure as shown in the following screenshot:

The liferay-portal-6.1.1-ce-ga2 folder is what we will refer to as LIFERAY_HOME. This folder contains the server, which in our case is tomcat-7.0.27. Let's refer to this folder as SERVER_HOME.

Liferay is created using Java, so to run Liferay we need the Java Runtime Environment (JRE). The Liferay bundle is shipped with a JRE by default (as you can see inside our SERVER_HOME), so if you are running a Windows OS, you can directly start and run Liferay. If you are using any other OS, you need to set the JAVA_HOME environment variable.

Navigate to SERVER_HOME/webapps. This is where all the web applications are deployed. Delete everything in this folder except marketplace-portlet and ROOT. Now go to SERVER_HOME/bin and double-click on startup.bat, since we are using Windows OS. This will bring up a console showing the server startup. Wait till you see the Server Startup message in the console, after which you can access Liferay from the browser.

Step 4 – Doing necessary first-time configurations

Once the server is up, open your favorite browser and type in http://localhost:8080. You will be shown a screen that performs basic configurations, such as changing the database and name of your portal, deciding what the admin name and e-mail address should be, or changing the default locale. This is a new feature introduced in Liferay 6.1 to ease the first-time setup, which in previous versions had to be done using a property file. Go ahead and change the name of the portal, the administrator username, and the e-mail address. Keep the locale as it is.

As I stated earlier, Liferay is shipped with a default Hypersonic database which is normally used for demo purposes. You can change it to MySQL if you want, by selecting the database type from the drop-down list presented, and typing in the necessary JDBC details. I have created a database in MySQL by the name Portal Starter; hence my JDBC URL would contain that. You can create a blank database in MySQL and accordingly change the JDBC URL. Once you are done making your changes, click on the Finish Configuration button as shown in the following screenshot:

This will open up a screen, which will show the path where this configuration is saved.
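Before we look at what Liferay does behind the scenes with this configuration, a quick aside for readers who are not on Windows: although Mac OS X and Linux setup is otherwise out of scope for this book, the equivalent of Step 3 on those systems would look roughly like the following shell session. This is only a sketch; the JDK path shown is an example and depends on your own installation.

    export JAVA_HOME=/usr/lib/jvm/java-6-openjdk   # example path; point this at your own JDK or JRE
    cd liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin
    ./startup.sh                                   # the bundle ships startup.sh alongside startup.bat
    tail -f ../logs/catalina.out                   # watch here for the server startup message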
What Liferay does behind the scenes is create a property file named portal-setup-wizard.properties and put all the configuration in it. This, as I said earlier, had to be created manually in previous versions of Liferay. Clicking on the Go to my portal button on this screen will take the user to the Terms of Use page. Agree to the terms and proceed further. A screen will be shown to change the password for the admin user that you specified earlier in the Basic Configuration screen. After you change the password, you will be presented with a screen to select a password reminder question. Select a question from the drop-down list or create your own, set the password reminder, and move on. And that's it! You can now see the home page of Liferay, and you are done setting up your very first Liferay instance.

Summary

So, we just gained a quick understanding of portals and Liferay, and walked through its installation and setup so that you can set up Liferay on your local machine.

Resources for Article:

Further resources on this subject:
- Vaadin Portlets in Liferay User Interface Development [Article]
- Setting up and Configuring a Liferay Portal [Article]
- User Interface in Production [Article]

Advanced Performance Strategies

Packt
12 Apr 2013
6 min read
(For more resources related to this topic, see here.)

General tips

Before diving into some advanced strategies for improving performance and scalability, let's briefly recap some of the general performance tips already spread across the book:

- When mapping your entity classes for Hibernate Search, use the optional elements of the @Field annotation to strip unnecessary bloat from your Lucene indexes:
  - If you are definitely not using index-time boosting, then there is no reason to store the information needed to make this possible. Set the norms element to Norms.NO.
  - By default, the information needed for a projection-based query is not stored unless you set the store element to Store.YES or Store.COMPRESS. If you had projection-based queries that are no longer being used, then remove this element as part of the cleanup.
- Use conditional indexing and partial indexing to reduce the size of Lucene indexes.
- Rely on filters to narrow your results at the Lucene level, rather than using a WHERE clause at the database query level.
- Experiment with projection-based queries wherever possible, to reduce or eliminate the need for database calls. Be aware that with advanced database caching, the benefits might not always justify the added complexity.
- Test various index manager options, such as trying the near-real-time index manager or the async worker execution mode.

Running applications in a cluster

Making modern Java applications scale in a production environment usually involves running them in a cluster of server instances. Hibernate Search is perfectly at home in a clustered environment, and offers multiple approaches for configuring a solution.

Simple clusters

The most straightforward approach requires very little Hibernate Search configuration. Just set up a file server for hosting your Lucene indexes and make it available to every server instance in your cluster (for example, NFS, Samba, and so on):

A simple cluster with multiple server nodes using a common Lucene index on a shared drive

Each application instance in the cluster uses the default index manager, and the usual filesystem directory provider. In this arrangement, all of the server nodes are true peers. They each read from the same Lucene index, and no matter which node performs an update, that node is responsible for the write. To prevent corruption, Hibernate Search depends on simultaneous writes being blocked by the locking strategy (that is, either "simple" or "native"). Recall that the "near-real-time" index manager is explicitly incompatible with a clustered environment.

The advantage of this approach is two-fold. First and foremost is simplicity: the only steps involved are setting up a filesystem share, and pointing each application instance's directory provider to the same location. Secondly, this approach ensures that Lucene updates are instantly visible to all the nodes in the cluster. However, a serious downside is that this approach can only scale so far. Very small clusters may work fine, but larger numbers of nodes trying to simultaneously access the same shared files will eventually lead to lock contention. Also, the file server on which the Lucene indexes are hosted is a single point of failure. If the file share goes down, then your search functionality breaks catastrophically and instantly across the entire cluster.

Master-slave clusters

When your scalability needs outgrow the limitations of a simple cluster, Hibernate Search offers more advanced models to consider.
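Before going further, it may help to see roughly what the simple shared-index arrangement described above amounts to in configuration. The following is only a sketch: the property names assume a Hibernate Search 4.x setup and the share path is an example, so check them against the version you are actually running. Every node in the cluster would point at the same mounted location:

    # hibernate.properties (or the equivalent persistence.xml properties) on every node
    hibernate.search.default.directory_provider = filesystem
    hibernate.search.default.indexBase = /mnt/shared/lucene-indexes
    # "simple" or "native" locking blocks simultaneous writes to the shared index
    hibernate.search.default.locking_strategy = native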
The common element among them is the idea of a master node being responsible for all Lucene write operations. Clusters may also include any number of slave nodes. Slave nodes may still initiate Lucene updates, and the application code can't really tell the difference. However, under the covers, slave nodes delegate that work to be actually performed by the master node.

Directory providers

In a master-slave cluster, there is still an "overall master" Lucene index, which logically stands apart from all of the nodes. This may be filesystem-based, just as it is with a simple cluster. However, it may instead be based on JBoss Infinispan (http://www.jboss.org/infinispan), an open source in-memory NoSQL datastore sponsored by the same company that principally sponsors Hibernate development:

- In a filesystem-based approach, all nodes keep their own local copies of the Lucene indexes. The master node actually performs updates on the overall master indexes, and all of the nodes periodically read from that overall master to refresh their local copies.
- In an Infinispan-based approach, the nodes all read from the Infinispan index (although it is still recommended to delegate writes to a master node). Therefore, the nodes do not need to maintain their own local index copies. In reality, because Infinispan is a distributed datastore, portions of the index will reside on each node anyway. However, it is still best to visualize the overall index as a separate entity.

Worker backends

There are two available mechanisms by which slave nodes delegate write operations to the master node:

- A JMS message queue provider creates a queue, and slave nodes send messages to this queue with details about Lucene update requests. The master node monitors this queue, retrieves the messages, and actually performs the update operations.
- You may instead replace JMS with JGroups (http://www.jgroups.org), an open source multicast communication system for Java applications. This has the advantage of being faster and more immediate. Messages are received in real-time, synchronously rather than asynchronously. However, JMS messages are generally persisted to a disk while awaiting retrieval, and therefore can be recovered and processed later, in the event of an application crash. If you are using JGroups and the master node goes offline, then all the update requests sent by slave nodes during that outage period will be lost. To fully recover, you would likely need to reindex your Lucene indexes manually.

A master-slave cluster using a directory provider based on filesystem or Infinispan, and a worker based on JMS or JGroups. Note that when using Infinispan, nodes do not need their own separate index copies.

Summary

In this article, we explored the options for running applications in multi-node server clusters, to spread out and handle user requests in a distributed fashion. We also learned how to use sharding to help make our Lucene indexes faster and more manageable.

Resources for Article:

Further resources on this subject:
- Integrating Spring Framework with Hibernate ORM Framework: Part 1 [Article]
- Developing Applications with JBoss and Hibernate: Part 1 [Article]
- Hibernate Types [Article]

Improving Performance with Parallel Programming

Packt
12 Apr 2013
11 min read
(For more resources related to this topic, see here.)

Parallelizing processing with pmap

The easiest way to parallelize data is to take a loop we already have and handle each item in it in a thread. That is essentially what pmap does. If we replace a call to map with pmap, it takes each call to the function argument and executes it in a thread pool. pmap is not completely lazy, but it's not completely strict, either: it stays just ahead of the output consumed. So if the output is never used, it won't be fully realized.

For this recipe, we'll calculate the Mandelbrot set. Each point in the output takes enough time that this is a good candidate to parallelize. We can just swap map for pmap and immediately see a speed-up.

How to do it...

The Mandelbrot set can be found by looking for the points that don't quickly settle on a value when passed repeatedly through the formula that defines the set.

We need a function that takes a point and the maximum number of iterations to try, and returns the iteration that it escapes on. That just means that the value gets above 4.

    (defn get-escape-point
      [scaled-x scaled-y max-iterations]
      (loop [x 0, y 0, iteration 0]
        (let [x2 (* x x), y2 (* y y)]
          (if (and (< (+ x2 y2) 4)
                   (< iteration max-iterations))
            (recur (+ (- x2 y2) scaled-x)
                   (+ (* 2 x y) scaled-y)
                   (inc iteration))
            iteration))))

The scaled points are the pixel points in the output, scaled to relative positions in the Mandelbrot set. Here are the functions that handle the scaling. Along with a particular x-y coordinate in the output, they're given the range of the set and the number of pixels in each direction.

    (defn scale-to
      ([pixel maximum [lower upper]]
       (+ (* (/ pixel maximum)
             (Math/abs (- upper lower)))
          lower)))

    (defn scale-point
      ([pixel-x pixel-y max-x max-y set-range]
       [(scale-to pixel-x max-x (:x set-range))
        (scale-to pixel-y max-y (:y set-range))]))

The function output-points returns a sequence of x, y values for each of the pixels in the final output.

    (defn output-points
      ([max-x max-y]
       (let [range-y (range max-y)]
         (mapcat (fn [x] (map #(vector x %) range-y))
                 (range max-x)))))

For each output pixel, we need to scale it to a location in the range of the Mandelbrot set and then get the escape point for that location.

    (defn mandelbrot-pixel
      ([max-x max-y max-iterations set-range]
       (partial mandelbrot-pixel max-x max-y max-iterations set-range))
      ([max-x max-y max-iterations set-range [pixel-x pixel-y]]
       (let [[x y] (scale-point pixel-x pixel-y max-x max-y set-range)]
         (get-escape-point x y max-iterations))))

At this point, we can simply map mandelbrot-pixel over the results of output-points. We'll also pass in the function to use (map or pmap).

    (defn mandelbrot
      ([mapper max-iterations max-x max-y set-range]
       (doall
         (mapper (mandelbrot-pixel max-x max-y max-iterations set-range)
                 (output-points max-x max-y)))))

Finally, we have to define the range that the Mandelbrot set covers.

    (def mandelbrot-range {:x [-2.5, 1.0], :y [-1.0, 1.0]})

How do these two compare? A lot depends on the parameters we pass them.
    user=> (def m (time (mandelbrot map 500 1000 1000 mandelbrot-range)))
    "Elapsed time: 28981.112 msecs"
    #'user/m
    user=> (def m (time (mandelbrot pmap 500 1000 1000 mandelbrot-range)))
    "Elapsed time: 34205.122 msecs"
    #'user/m
    user=> (def m (time (mandelbrot map 1000 1000 1000 mandelbrot-range)))
    "Elapsed time: 85308.706 msecs"
    #'user/m
    user=> (def m (time (mandelbrot pmap 1000 1000 1000 mandelbrot-range)))
    "Elapsed time: 49067.584 msecs"
    #'user/m

Refer to the following chart:

If we only iterate at most 500 times for each point, it's slightly faster to use map and work sequentially. However, if we iterate 1,000 times for each point, pmap is faster.

How it works...

This shows that parallelization is a balancing act. If each separate work item is small, the overhead of creating the threads, coordinating them, and passing data back and forth takes more time than doing the work itself. However, when each thread has enough to do to make it worth it, we can get nice speed-ups just by using pmap.

Behind the scenes, pmap takes each item and uses future to run it in a thread pool. It forces only a couple more items than you have processors, so it keeps your machine busy, without generating more work or data than you need.

There's more...

For an in-depth, excellent discussion of the nuts and bolts of pmap, along with pointers about things to watch out for, see David Liebke's talk, From Concurrency to Parallelism (http://blip.tv/clojure/david-liebke-from-concurrency-to-parallelism-4663526).

See also

- The Partitioning Monte Carlo simulations for better pmap performance recipe

Parallelizing processing with Incanter

One of Incanter's nice features is that it uses the Parallel Colt Java library (http://sourceforge.net/projects/parallelcolt/) to actually handle its processing, so when you use a lot of the matrix, statistical, or other functions, they're automatically executed on multiple threads. For this, we'll revisit the Virginia housing-unit census data and fit it to a linear regression.

Getting ready

We'll need to add Incanter to our list of dependencies in our Leiningen project.clj file:

    :dependencies [[org.clojure/clojure "1.5.0"]
                   [incanter "1.3.0"]]

We'll also need to pull those libraries into our REPL or script:

    (use '(incanter core datasets io optimize charts stats))

We can use the following filename:

    (def data-file "data/all_160_in_51.P35.csv")

How to do it...

For this recipe, we'll extract the data to analyze and perform the linear regression. We'll then graph the data afterwards.

First, we'll read in the data and pull the population and housing unit columns into their own matrix.

    (def data
      (to-matrix
        (sel (read-dataset data-file :header true)
             :cols [:POP100 :HU100])))

From this matrix, we can bind the population and the housing unit data to their own names.

    (def population (sel data :cols 0))
    (def housing-units (sel data :cols 1))

Now that we have those, we can use Incanter to fit the data.

    (def lm (linear-model housing-units population))

Incanter makes it so easy that it's hard not to take a look at the data graphically as well.

    (def plot (scatter-plot population housing-units :legend true))
    (add-lines plot population (:fitted lm))
    (view plot)

Here we can see that the graph of housing units against population makes a very straight line:

How it works…

Under the covers, Incanter takes the data matrix and partitions it into chunks. It then spreads those over the available CPUs to speed up processing. Of course, we don't have to worry about this. That's part of what makes Incanter so powerful.
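Although the recipe stops at the plot, the map returned by linear-model also carries the fitted coefficients and goodness-of-fit figures. As a small aside (assuming the keys exposed by Incanter 1.3's linear-model, which is worth double-checking against the version you use), you might inspect the fit like this:

    ;; intercept and slope of the fitted line
    (:coefs lm)
    ;; proportion of variance explained by the model
    (:r-square lm)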
Partitioning Monte Carlo simulations for better pmap performance

In the Parallelizing processing with pmap recipe, we found that while using pmap is easy enough, knowing when to use it is more complicated. Processing each task in the collection has to take enough time to make the costs of threading, coordinating processing, and communicating the data worth it. Otherwise, the program will spend more time concerned with how (parallelization) and not enough time with what (the task). The way to get around this is to make sure that pmap has enough to do at each step that it parallelizes. The easiest way to do that is to partition the input collection into chunks and run pmap on groups of the input.

For this recipe, we'll use Monte Carlo methods to approximate pi. We'll compare a serial version against a naïve parallel version against a version that uses parallelization and partitions.

Getting ready

We'll use Criterium to handle benchmarking, so we'll need to include it as a dependency in our Leiningen project.clj file, shown as follows:

    :dependencies [[org.clojure/clojure "1.5.0"]
                   [criterium "0.3.0"]]

We'll use these dependencies and the java.lang.Math class in our script or REPL.

    (use 'criterium.core)
    (import [java.lang Math])

How to do it…

To implement this, we'll define some core functions and then implement a Monte Carlo method for estimating pi that uses pmap.

We need to define the functions necessary for the simulation. We'll have one that generates a random two-dimensional point that will fall somewhere in the unit square.

    (defn rand-point [] [(rand) (rand)])

Now, we need a function to return a point's distance from the origin.

    (defn center-dist [[x y]]
      (Math/sqrt (+ (* x x) (* y y))))

Next, we'll define a function that takes a number of points to process, and creates that many random points. It will return the number of points that fall inside a circle.

    (defn count-in-circle [n]
      (->> (repeatedly n rand-point)
           (map center-dist)
           (filter #(<= % 1.0))
           count))

That simplifies our definition of the base (serial) version. This calls count-in-circle to get the proportion of random points in a unit square that fall inside a circle. It multiplies this by 4, which should approximate pi.

    (defn mc-pi [n]
      (* 4.0 (/ (count-in-circle n) n)))

We'll use a different approach for the simple pmap version. The function that we'll parallelize will take a point and return 1 if it's in the circle, or 0 if not. Then we can add those up to find the number in the circle.

    (defn in-circle-flag [p]
      (if (<= (center-dist p) 1.0) 1 0))

    (defn mc-pi-pmap [n]
      (let [in-circle (->> (repeatedly n rand-point)
                           (pmap in-circle-flag)
                           (reduce + 0))]
        (* 4.0 (/ in-circle n))))

For the version that chunks the input, we'll do something different again. Instead of creating the sequence of random points and partitioning that, we'll have a sequence that tells how large each partition should be and have pmap walk across that, calling count-in-circle. This means that creating the larger sequences is also parallelized.

    (defn mc-pi-part
      ([n] (mc-pi-part 512 n))
      ([chunk-size n]
       (let [step (int (Math/floor (float (/ n chunk-size))))
             remainder (mod n chunk-size)
             parts (lazy-seq (cons remainder (repeat step chunk-size)))
             in-circle (reduce + 0 (pmap count-in-circle parts))]
         (* 4.0 (/ in-circle n)))))

Now, how do these work? We'll bind our parameters to names, and then we'll run one set of benchmarks before we look at a table of all of them. We'll discuss the results in the next section.
    user=> (def chunk-size 4096)
    #'user/chunk-size
    user=> (def input-size 1000000)
    #'user/input-size
    user=> (quick-bench (mc-pi input-size))
    WARNING: Final GC required 4.001679309213317 % of runtime
    Evaluation count : 6 in 6 samples of 1 calls.
    Execution time mean : 634.387833 ms
    Execution time std-deviation : 33.222001 ms
    Execution time lower quantile : 606.122000 ms ( 2.5%)
    Execution time upper quantile : 677.273125 ms (97.5%)
    nil

Here's all the information in the form of a table:

    Function     Input Size   Chunk Size   Mean        Std Dev.    GC Time
    mc-pi        1,000,000    NA           634.39 ms   33.22 ms    4.0%
    mc-pi-pmap   1,000,000    NA           1.92 sec    888.52 ms   2.60%
    mc-pi-part   1,000,000    4,096        455.94 ms   4.19 ms     8.75%

Here's a chart with the same information:

How it works…

There are a couple of things we should talk about here. Primarily, we'll need to look at chunking the inputs for pmap, but we should also discuss Monte Carlo methods.

Estimating with Monte Carlo simulations

Monte Carlo simulations work by throwing random data at a problem that is fundamentally deterministic, but where it's practically infeasible to attempt a more straightforward solution. Calculating pi is one example of this. By randomly filling in points in a unit square, pi/4 will be approximately the ratio of points that fall within a circle centered on 0, 0. The more random points that we use, the better the approximation.

I should note that this makes a good demonstration of Monte Carlo methods, but it's a terrible way to calculate pi. It tends to be both slower and less accurate than other methods. Although not good for this task, Monte Carlo methods have been used for designing heat shields, simulating pollution, ray tracing, financial option pricing, evaluating business or financial products, and many, many more things. For a more in-depth discussion, Wikipedia has a good introduction to Monte Carlo methods at http://en.wikipedia.org/wiki/Monte_Carlo_method.

Chunking data for pmap

The table we saw earlier makes it clear that partitioning helped: the partitioned version took just 72 percent of the time that the serial version did, while the naïve parallel version took more than three times longer. Based on the standard deviations, the results were also more consistent.

The speed-up is because each thread is able to spend longer on each task. There is a performance penalty to spreading the work over multiple threads. Context switching (that is, switching between threads) costs time, and coordinating between threads does as well. But we expect to be able to make up that time, and more, by doing more things at once. However, if each task itself doesn't take long enough, then the benefit won't outweigh the costs. Chunking the input, which effectively creates larger individual tasks for each thread, gets around this by giving each thread more to do, and thereby spending less time context switching and coordinating, relative to the overall time spent running.
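The chunking trick in mc-pi-part can also be captured as a small general-purpose helper. The following is just a sketch to illustrate the idea, not part of the recipe's code, and the name pmap-chunked is made up for this example; it hands each thread a group of items so that the per-task work outweighs the coordination overhead:

    (defn pmap-chunked
      "Like pmap, but processes coll in chunks of n items per thread."
      [n f coll]
      (->> (partition-all n coll)
           (pmap (fn [chunk] (doall (map f chunk))))
           (apply concat)))

    ;; For example, the naive mc-pi-pmap body could sum its flags over chunks:
    ;; (reduce + 0 (pmap-chunked 4096 in-circle-flag (repeatedly n rand-point)))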

Getting Started with Impressive Presentations

Packt
25 Mar 2013
8 min read
(For more resources related to this topic, see here.)

What is impress.js?

impress.js is a presentation framework built upon the powerful CSS3 transformations and transitions in modern web browsers. Bartek Szopka is the creator of this amazing framework. According to the creator, the idea came to him while he was playing with CSS transformations. Prezi.com was the source that inspired him.

On w3.org, we have the following mentioned about CSS transforms:

CSS transforms allows elements styled with CSS to be transformed in two-dimensional or three-dimensional space

For more information on CSS transformations, for those who are interested, visit http://www.w3.org/TR/css3-transforms/.

Creating presentations with impress.js is not a difficult task once you get used to the basics of the framework. Slides in impress.js presentations are called steps, and they go beyond the conventional presentation style. We can have multiple steps visible at the same time with different dimensions and effects. impress.js step designs are built upon HTML. This means we can create unlimited effects, and the only limitation is your imagination.

Built-in features

impress.js comes with advanced support for most CSS transformations. We can combine these features to provide more advanced visualizations in modern browsers. These features are as follows:

- Positioning: Elements can be placed in certain areas of the browser window, enabling us to move between slides.
- Scaling: Elements can be scaled up or scaled down to show an overview or a detailed view of elements.
- Rotating: Elements can be rotated across any given axis.
- Working on 3D space: Presentations are not limited to 2D space. All the previously mentioned effects can be applied to 3D space with the z axis.

We will see shortly how these features map onto the markup of individual steps.

Beyond presentations with impress.js

This framework was created to build online presentations with awesome effects using the power of CSS and JavaScript. Bartek, who is the creator of this framework, mentions that it has been used for various purposes beyond the original intention. Here are some of the most common usages of the impress.js framework:

- Creating presentations
- Portfolios
- Sliders
- Single page websites

A list of demos containing various types of impress.js presentations can be found at https://github.com/bartaz/impress.js/wiki/Examples-and-demos.

Why is it important?

You must be wondering why we need to care about such a framework when we have quality presentation programs such as PowerPoint. The most important thing we need to look at is the license for impress.js. Since it is licensed under MIT and GPL, we can even change the source code to customize the framework according to our needs. Also, most modern browsers support CSS transformations, allowing you to use impress.js and eliminating the platform dependency of presentation programs.

Both desktop-based presentations and online presentations are equally good at presenting information to the audience, but online presentations with impress.js provide a slight advantage over desktop-based presentations in terms of usability. The following are some of the drawbacks of desktop program generated presentations, compared to impress.js presentations:

- Desktop presentations require presentation creation software or a presentation viewer. Therefore, it's difficult to get the same output on different operating systems.
- Desktop presentations use standard slide-based techniques with a common template, while impress.js presentation slides can be designed in a wide range of ways.
- Modifications are difficult in desktop-based presentations, since they require presentation creation software. impress.js presentations can be changed instantly by modifying the HTML content with a simple text editor.

Creating presentations is not just about filling our slides with a lot of information and animations. It is a creative process that needs to be planned carefully. Best practices tell us that we should keep the slides as simple as possible, with very limited information, letting the presenter do the detailed explanations. Let's see how we can use impress.js to work with some well-known presentation design guidelines.

Presentation outline

The audience does not have any idea about the things you are going to present prior to the start of the presentation. If your presentation is not up to standard, the audience will wonder how many boring slides are to come and what the contents are going to be. Hence, it's better to provide a preliminary slide with the outline of your presentation. A limited number of slides and their proper placement will allow us to create a perfect outline of the presentation. Steps in impress.js presentations are placed in 3D space and each slide is positioned relative to the others. Generally, we will not have an idea of how the slides are placed when the presentation is on screen. By using the scaling feature of impress.js, we can create an additional step that zooms out to show an overview of the presentation.

Using bullet points

People prefer to read the most important points of an article rather than huge chunks of text. It's wise to put these brief points on the slides and let the details come through your presenting skills. Since impress.js slides are created with HTML, you can easily use bullet points and various types of designs for them using CSS. You can also create each point as a separate step, allowing you to use different styles for each point.

Animations

We cannot keep the audience interested just by scrolling down the presentation slides. Presentations need to be interactive, and animations are great for getting the attention of the audience. Generally, we use animations for slide transitions. Even though presentation tools provide advanced animations, it's our responsibility to choose the animations wisely. impress.js provides animation effects for moving, rotating, and scaling step transitions. We have to make sure they are used with purpose. Explaining the life cycle of a product or project is an excellent scenario for using rotation animations. So choose the type of animation that suits your presentation contents and topic.

Using themes

Most people like to make the design of their presentation as cool as possible. Sometimes they get carried away and choose from the best themes available in the presentation tool. Themes provided by tools are predefined and designed to suit general purposes. Your presentation might be unique, and choosing an existing theme can ruin that uniqueness. The best practice is to create your own themes for your presentations. impress.js does not come with built-in themes, so there is no other option than to create a new theme from scratch. impress.js steps are different from each other, unlike standard presentation slides, so you have the freedom to create a theme or design for each of the steps just by using some simple HTML and CSS code.

Apart from the previous points, we can use typography, images, and videos to create better designs for impress.js presentations.
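To make the positioning, scaling, and rotating features described earlier a little more concrete, here is a minimal sketch of how steps are typically marked up in impress.js. The coordinates, rotation, and content are arbitrary examples; the next section covers how to download the library and wire up this markup so that it actually runs:

    <div id="impress">
        <!-- The first step sits at the origin of the presentation canvas -->
        <div class="step" data-x="0" data-y="0">
            <h1>Presentation outline</h1>
        </div>
        <!-- This step is moved, rotated, and scaled up relative to the first one -->
        <div class="step" data-x="1200" data-y="600" data-rotate="90" data-scale="3">
            <ul>
                <li>First key point</li>
                <li>Second key point</li>
            </ul>
        </div>
    </div>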
We have covered the background and the importance of impress.js. Now we can move on to creating real presentations using the framework throughout the next few sections.

Downloading and configuring impress.js

You can obtain a copy of the impress.js library by downloading it from the GitHub page at https://github.com/bartaz/impress.js/. The downloaded .zip file contains an example demo and the necessary styles in addition to the impress.js file. Extract the .zip file onto your hard drive and load index.html in the browser to see impress.js in action. The folder structure of the downloaded .zip file is as given in the following screenshot:

Configuring impress.js is something you should be able to do quite easily. I'll walk you through the configuration process. First, we have to include the impress.js file in the HTML file. It is recommended that you load this file as late as possible in your document. Create a basic HTML page using the following code:

    <!doctype html>
    <html lang="en">
    <head>
        <title>impress.js</title>
    </head>
    <body>
        <script src="js/impress.js"></script>
    </body>
    </html>

We have linked the impress.js file just before the closing body tag to make sure it is loaded after all the elements in our document. Then we need to initialize the impress library to make the presentations work. We can place the following code after the impress.js file to initialize any existing presentation in the document which is compatible with the impress library:

    <script>impress().init();</script>

Now that we have set up the impress.js library, we can create our impressive presentation.

Summary

In this article we looked at the background of the impress.js framework and how it was created. Then we talked about the importance of impress.js in creating web-based presentations and the various types of usage beyond presentations. Finally, we obtained a copy of the framework from the official GitHub page and completed the setup.

Resources for Article:

Further resources on this subject:
- 3D Animation Techniques with XNA Game Studio 4.0 [Article]
- Enhancing Your Math Teaching using Moodle 1.9: Part 1 [Article]
- Your First Page with PHP-Nuke [Article]