
How-To Tutorials - Web Development

Mailbox Database Management

Packt
19 Aug 2013
10 min read
Determining the average mailbox size per database

PowerShell is very flexible and gives you the ability to generate very detailed reports. When generating mailbox database statistics, we can utilize data returned from multiple cmdlets provided by the Exchange Management Shell. This section will show you an example of this, and you will learn how to calculate the average mailbox size per database using PowerShell.

How to do it...

To determine the average mailbox size for a given database, use the following one-liner:

```powershell
Get-MailboxStatistics -Database DB1 |
  ForEach-Object {$_.TotalItemSize.Value.ToMB()} |
  Measure-Object -Average |
  Select-Object -ExpandProperty Average
```

How it works...

Calculating an average is as simple as performing some basic math, but PowerShell lets us do it quickly with the Measure-Object cmdlet. The example uses the Get-MailboxStatistics cmdlet to retrieve all the mailboxes in the DB1 database. We then loop through each one, retrieving only the TotalItemSize property, and inside the ForEach-Object script block we convert the total item size to megabytes. The result for each mailbox can then be averaged using the Measure-Object cmdlet. At the end of the command, the Select-Object cmdlet retrieves only the value of the Average property.

The number returned is the average across every mailbox in the database: regular mailboxes, archive mailboxes, and even mailboxes that have been disconnected. If you want to be more specific, you can filter out those mailboxes after running the Get-MailboxStatistics cmdlet:

```powershell
Get-MailboxStatistics -Database DB1 |
  Where-Object {!$_.DisconnectDate -and !$_.IsArchive} |
  ForEach-Object {$_.TotalItemSize.Value.ToMB()} |
  Measure-Object -Average |
  Select-Object -ExpandProperty Average
```

Notice that we have added the Where-Object cmdlet to filter out any mailboxes that have a DisconnectDate defined or whose IsArchive property is $true.

Another thing you may want to do is round the average. Let's say the DB1 database contained 42 mailboxes and the total size of all those mailboxes was around 100 megabytes. The value returned from the preceding command would look something like 2.39393939393939. Rarely are all those extra decimal places of any use. Here are a couple of ways to make the output a little cleaner:

```powershell
$MBAvg = Get-MailboxStatistics -Database DB1 |
  ForEach-Object {$_.TotalItemSize.Value.ToMB()} |
  Measure-Object -Average |
  Select-Object -ExpandProperty Average

[Math]::Round($MBAvg,2)
```

This time, we stored the result of the one-liner in the $MBAvg variable. We then use the Round method of the Math class in the .NET Framework to round the value, specifying that the result should contain only two decimal places. Based on the previous information, the result of the preceding command would be 2.39.

We can also use string formatting to specify the number of decimal places:

```powershell
[PS] "{0:n2}" -f $MBAvg
2.39
```

Keep in mind that this command returns a string, so if you need to be able to sort on the value, cast it to a double:

```powershell
[PS] [double]("{0:n2}" -f $MBAvg)
2.39
```

The -f format operator is documented in PowerShell's help system in about_operators.
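If you run this calculation often, the steps above can be wrapped in a small helper function. The following is a minimal sketch of our own built from the preceding one-liners; Get-AverageMailboxSize is our own name, not a built-in cmdlet:

```powershell
# Hypothetical helper assembled from the one-liners above;
# Get-AverageMailboxSize is not a built-in cmdlet.
function Get-AverageMailboxSize {
    param([string]$Database)

    $avg = Get-MailboxStatistics -Database $Database |
        Where-Object {!$_.DisconnectDate -and !$_.IsArchive} |
        ForEach-Object {$_.TotalItemSize.Value.ToMB()} |
        Measure-Object -Average |
        Select-Object -ExpandProperty Average

    # Round to two decimal places, as shown above.
    [Math]::Round($avg, 2)
}

# Usage:
Get-AverageMailboxSize -Database DB1
```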
There's more...

The previous examples have only shown how to determine the average mailbox size for a single database. To determine this information for all mailbox databases, we can use the following code (save it to a file called size.ps1):

```powershell
foreach($DB in Get-MailboxDatabase) {
  Get-MailboxStatistics -Database $DB |
    ForEach-Object {$_.TotalItemSize.Value.ToMB()} |
    Measure-Object -Average |
    Select-Object @{n="Name";e={$DB.Name}},
      @{n="AvgMailboxSize";e={[Math]::Round($_.Average,2)}} |
    Sort-Object AvgMailboxSize -Desc
}
```

This example is very similar to the one we looked at previously. The difference is that, this time, we are running our one-liner inside a foreach loop, once for every mailbox database in the organization. When each mailbox database has been processed, we sort the output based on the AvgMailboxSize property.

Restoring data from a recovery database

When it comes to recovering data from a failed database, you have several options, depending on what kind of backup product you are using and how you have deployed Exchange 2013. The ideal method for enabling redundancy is to use a DAG, which will replicate your mailbox databases to one or more servers and provide automatic failover in the event of a disaster. However, you may need to pull old data out of a database restored from a backup. In this section, we will take a look at how you can create a recovery database and restore data from it using the Exchange Management Shell.

How to do it...

First, restore the failed database using the steps required by your current backup solution. For this example, let's say that we have restored the DB1 database file to E:\Recovery\DB1 and the database has been brought to a clean shutdown state. We can use the following steps to create a recovery database and restore mailbox data:

1. Create a recovery database using the New-MailboxDatabase cmdlet:

```powershell
New-MailboxDatabase -Name RecoveryDB `
  -EdbFilePath E:\Recovery\DB1\DB1.edb `
  -LogFolderPath E:\Recovery\DB1 `
  -Recovery `
  -Server MBX1
```

When you run the preceding command, you will see a warning that the recovery database was created using the existing database file.

2. The next step is to check the state of the database and then mount it:

```powershell
Eseutil /mh .\DB1.edb
Eseutil /R E02 /D
Mount-Database -Identity RecoveryDB
```

3. Next, query the recovery database for all mailboxes that reside in the database RecoveryDB:

```powershell
Get-MailboxStatistics -Database RecoveryDB | fl DisplayName,MailboxGUID,LegacyDN
```

4. Lastly, use the New-MailboxRestoreRequest cmdlet to restore the data from the recovery database for a single mailbox:

```powershell
New-MailboxRestoreRequest -SourceDatabase RecoveryDB `
  -SourceStoreMailbox "Joe Smith" `
  -TargetMailbox joe.smith
```

When running the eseutil commands, make sure you are in the folder where the restored mailbox database and logs are placed.

How it works...

When you restore the database file from your backup application, you may need to ensure that the database is in a clean shutdown state. For example, if you are using Windows Server Backup, you will need to use the Eseutil.exe database utility to replay any uncommitted logs into the database to bring it into a clean shutdown state.

Once the data is restored, we can create a recovery database using the New-MailboxDatabase cmdlet, as shown in the first step. Notice that when we ran the command we used several parameters. First, we specified the path to the EDB file and the log files, both of which are in the same location where we restored the files.
We also used the -Recovery switch parameter to specify that this is a special type of database that will be used only for restoring data and should not host production mailboxes. Finally, we specified which mailbox server the database should be hosted on using the -Server parameter. Make sure to run the New-MailboxDatabase cmdlet from the mailbox server that you are specifying in the -Server parameter, and then mount the database using the Mount-Database cmdlet.

The last step is to restore data from one or more mailboxes. As we saw in the previous example, New-MailboxRestoreRequest is the tool to use for this task. This cmdlet was introduced in Exchange 2010 SP1, so if you have used this process in the past, the procedure is the same in Exchange 2013.

There's more...

When you run the New-MailboxRestoreRequest cmdlet, you need to specify the identity of the mailbox you wish to restore using the -SourceStoreMailbox parameter. There are three possible values you can use to provide this information: DisplayName, MailboxGuid, and LegacyDN. To retrieve these values, you can use the Get-MailboxStatistics cmdlet once the recovery database is online and mounted:

```powershell
Get-MailboxStatistics -Database RecoveryDB | fl DisplayName,MailboxGUID,LegacyDN
```

Here we have specified that we want to retrieve all three of these values for each mailbox in the RecoveryDB database.

Understanding target mailbox identity

When restoring data with the New-MailboxRestoreRequest cmdlet, you also need to provide a value for the -TargetMailbox parameter. The mailbox needs to exist before you run this command. If you are restoring data from a backup for an existing mailbox that has not changed since the backup was taken, you can simply provide the typical identity values for a mailbox for this parameter. If you want to restore data to a mailbox that was not the original source of the data, you need to use the -AllowLegacyDNMismatch switch parameter. This is useful if you are restoring data to another user's mailbox, or if you've recreated the mailbox since the backup was taken.

Learning about other useful parameters

The New-MailboxRestoreRequest cmdlet can be used to granularly control how data is restored out of a mailbox. The following parameters may be useful to customize the behavior of your restores:

- ConflictResolutionOption: Specifies the action to take if multiple matching messages exist in the target mailbox. The possible values are KeepSourceItem, KeepLatestItem, or KeepAll. If no value is specified, KeepSourceItem is used by default.
- ExcludeDumpster: Use this switch parameter to indicate that the dumpster should not be included in the restore.
- SourceRootFolder: Use this parameter to restore data only from a root folder of a mailbox.
- TargetIsArchive: Use this switch parameter to perform a mailbox restore to a mailbox archive.
- TargetRootFolder: Use this parameter to restore data to a specific folder in the root of the target mailbox. If no value is provided, the data is restored and merged into the existing folders; if they do not exist, they will be created in the target mailbox.

These are just a few of the useful parameters that can be used with this cmdlet, but there are more. For a complete list of all the available parameters and full details on each one, run Get-Help New-MailboxRestoreRequest -Detailed.
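To make these options concrete, here is a sketch combining several of them in a single restore. All the parameters come from the list above, but the folder name "Recovered Items" is our own illustrative choice:

```powershell
# A sketch combining several optional parameters: restore Joe Smith's data
# into a separate folder of the target mailbox, keep the newest copy of any
# duplicate messages, and skip the dumpster. "Recovered Items" is an
# illustrative folder name, not a required value.
New-MailboxRestoreRequest -SourceDatabase RecoveryDB `
  -SourceStoreMailbox "Joe Smith" `
  -TargetMailbox joe.smith `
  -TargetRootFolder "Recovered Items" `
  -ConflictResolutionOption KeepLatestItem `
  -ExcludeDumpster
```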
Understanding mailbox restore request cmdlets

In addition to the New-MailboxRestoreRequest cmdlet, there is an entire cmdlet set for mailbox restore requests. The remaining cmdlets are outlined as follows:

- Get-MailboxRestoreRequest: Provides a detailed status of mailbox restore requests
- Remove-MailboxRestoreRequest: Removes fully or partially completed restore requests
- Resume-MailboxRestoreRequest: Resumes a restore request that was suspended or failed
- Set-MailboxRestoreRequest: Can be used to change the restore request options after the request has been created
- Suspend-MailboxRestoreRequest: Suspends a restore request any time after the request was created but before it reaches the status of Completed

For complete details and examples for each of these cmdlets, use the Get-Help cmdlet with the appropriate cmdlet name and the -Full switch parameter.

Taking it a step further

Let's say that you have restored your database from backup, you have created a recovery database, and now you need to restore each mailbox in the backup to the corresponding target mailboxes that are currently online. We can use the following script to accomplish this:

```powershell
$mailboxes = Get-MailboxStatistics -Database RecoveryDB
foreach($mailbox in $mailboxes) {
  New-MailboxRestoreRequest -SourceDatabase RecoveryDB `
    -SourceStoreMailbox $mailbox.DisplayName `
    -TargetMailbox $mailbox.DisplayName
}
```

Here you can see that we first use the Get-MailboxStatistics cmdlet to retrieve all the mailboxes in the recovery database and store the results in the $mailboxes variable. We then loop through each mailbox and restore the data to the original mailbox. You can track the status of these restores using the Get-MailboxRestoreRequest cmdlet and the Get-MailboxRestoreRequestStatistics cmdlet.

Summary

In this article, we covered a small but appetizing part of mailbox database management: determining the average mailbox size per database and restoring data from a recovery database.


Installing Magento

Packt
19 Aug 2013
22 min read
Installing Magento locally

Whether you're working on a Windows computer, a Mac, or a Linux machine, you will soon notice that it comes in handy to have a local Magento test environment available. Magento is a complex system, and besides doing regular tasks, such as adding products and other content, you should never apply changes directly to your live store. When you're working on your own, a local test system is easy to set up, and it gives you the possibility to test changes without any risk. When you're working in a team, it makes sense to have a test environment running on your own server or at a hosting provider. Here, we'll start by explaining how to set up your local test system.

Requirements

Before we jump into action, it's good to have a closer look at Magento's requirements. What do you need to run it? All up-to-date requirements for Magento can be found at http://www.magentocommerce.com/system-requirements, but that may be a bit overwhelming if you are just a beginner. Let's break it up into the most essential parts:

- Operating system: Linux. Magento runs best on Linux, as offered by most hosting companies. Don't worry about your local test environment, as that will run on Windows or Mac as well. But for your live store you should go for a Linux solution, because running a live store on anything other than Linux is not supported.
- Web server: Apache. Magento runs on Versions 1.3.x, 2.0.x, and 2.2.x of this very popular web server. As of Version 1.7 of Magento Community and Version 1.12 of Magento Enterprise, the Nginx web server is compatible as well.
- Programming language: PHP. Magento has been developed using PHP, a very popular programming language; major open source solutions such as WordPress and Joomla! have also been built with it. Use Versions 5.2.13 - 5.3.15. Do not use PHP 4 anymore, nor PHP 5.4 yet!
- PHP extensions: Magento requires a number of extensions on top of PHP itself: PDO_MySQL, mcrypt, hash, simplexml, GD, DOM, Iconv, and Curl. Besides that, you also need to be able to switch off "safe mode". Don't have a clue about all of this? Don't worry: a host offering Magento services already takes care of it, and for your local environment there are only a few additional steps to take. We'll get there in a minute.
- Database: MySQL. MySQL is the database where Magento will store all the data for your store. Use Version 4.1.20 or (preferably) newer.

As you can see, even in a simplified format, there are quite a few things that need to be taken care of. Magento hosting is not as simple as hosting for a small WordPress or Joomla! website, currently the most popular open source solutions for creating a regular site. The requirements are higher, and you cannot expect to host your store for only a couple of dollars per month. If you do, your online store may still work, but it is likely that you'll run into performance issues. Be careful with the cheapest hosting solutions: although Magento may work, you'll soon be consuming more server resources than they offer. Go for a dedicated server or a managed VPS (Virtual Private Server), and definitely for a host that advertises support for Magento.

Time for action – installing Magento on a Windows machine

We'll speak more about Magento hosting later on.
Let's first download and install the package on a local Windows machine. Are you a Mac user? Don't worry, we'll give instructions for Mac users later on. Note that the following instructions are written for Windows users, but contain valuable information for Mac users as well. Perform the following steps to install Magento on your Windows computer:

1. Download the Magento installation package. Head over to http://www.magentocommerce.com/download and download the package you need. For a Windows user, the full ZIP package is almost always the most convenient one. In our situation, Version 1.7.0.2 is the latest one, but be aware that this will change over time as newer versions are released. You will need to create a (free) account to download the software. This account will also be helpful later on: it gives you access to the Magento support forums, so make sure to store your login details somewhere.

2. If you're a beginner, it is handy to have some sample data in your store. Magento offers a download package containing sample data on the same page, so download that as well. Note that for a production environment you would never install the sample data, but for a test system like the local installation we're doing here, it's a good idea to use it. The sample data will create a few items and customers in your store, which will make the learning process easier. Did you notice the links to Magento Go at every download link? Magento Go is Magento's online platform, which you can use out of the box, without doing any installation at all. However, in the remaining part of this article, we assume that you are going to set up your own environment and want to have full control over your store.

3. Next, you need a web server, so that you can run your website locally, on your own machine. On Windows machines, XAMPP is an easy-to-use all-in-one solution. Download the installer version via http://www.apachefriends.org/en/xampp-windows.html. XAMPP is also available for Mac and Linux.

4. Once downloaded, run the executable to start the installation process. You might receive some security warnings that you have to accept, especially when you're using Windows Vista, 7, or 8. Because of this, it's best to install XAMPP directly in the root of your hard drive, C:\xampp in most cases.

5. Once the installation has finished, the software asks if you'd like to start the Control Panel. If you do so, you'll see a number of services that have not been started yet. At a minimum, start Apache (the web server) and MySQL (the database server) by clicking their Start buttons. Now you're running your own web server on your local computer. Be aware that generally this web server will not be accessible to the outside world; it's running on your local machine, just for testing purposes.

6. Before doing the next step, verify that your web server is actually running by browsing to http://localhost or http://127.0.0.1. No result? If you're on a Windows computer, first reboot your machine. Next, check in the XAMPP control panel whether the Apache service is running. If it isn't, try to start it and pay attention to the error messages that appear. Need more help?
Start with the help available on XAMPP's website at http://www.apachefriends.org/en/faq-xampp-windows.html. Can't start the Apache service? Check if any other application is using ports 80 and 443; the XAMPP control panel will give you more information. One application that you should stop before starting XAMPP, for instance, is Skype. It's also possible to change this setting in Skype by navigating to Tools | Options | Advanced | Connections. Change the port number to something else, for instance port 8080, then close and restart Skype. This prevents the two from interfering with each other in the future.

7. The next thing that needs to be done is installing Magento on top of XAMPP. But before we do so, we first have to change a few settings. Open the following Windows file: C:\Windows\System32\drivers\etc\hosts. Make sure to open your editor using administrator rights, otherwise you will not be able to save your changes. Add the following line to the hosts file:

```
127.0.0.1 www.localhost.com
```

This is needed because Magento will not work correctly on a localhost without this setting. You may use a different name, but the general rule is that at least one dot must be used in the local domain name. Note that every hosts file will look a bit different. Also, your security software or Windows security settings may prevent you from making changes to this file, so make sure you have the appropriate rights to change and save its contents.

Do you need a text editor? There are lots of possibilities when it comes to editing text for the web, as long as you use a "plain text" editor. Something like Microsoft Word isn't suitable, because it will add a lot of unwanted code to your files! For very simple tasks like the one above, even Notepad will work, but you'll soon notice that it is much more convenient to use an editor that helps you structure and format your files. Personally, I can recommend the free Notepad++ for Windows users, which is available in lots of different languages: http://notepad-plus-plus.org. Mac users can have a look at Coda (http://panic.com/coda/) or TextWrangler (http://www.barebones.com/products/textwrangler/).

8. Unzip the downloaded Magento package and put all the files in a subfolder of your XAMPP installation, for instance C:\xampp\htdocs\magento.

9. Now, go to www.localhost.com/magento to check that the installation screen of Magento is visible. But do not start the installation process yet!

10. Before you start the installation, first create a MySQL database. To do this, use a second browser tab and navigate to localhost | phpMyAdmin. By default the user is root, and without a password you should be able to continue without logging in. Click on Databases and create a database with a name of your choice. Write it down, as you will need it during the Magento installation. After creating the database you may close the browser tab.

11. It's finally time to start the installation process. Go back to the installation screen of Magento, accept the license agreement, and click on Continue.

12. Next, set your country, Time Zone, and Default Currency. Working with multiple currencies will be addressed later on.

13. The next screen is actually the most important one of the installation process, and this is where most beginners go wrong because they do not know what values to use.
Using XAMPP this is an easy task: fill in your Database Name and User Name (root), and do not forget to check the Skip Base URL Validation Before the Next Step box, otherwise your installation might fail. In this same form there are some fields that you can use to immediately improve the security level of your Magento setup. On a local test environment that isn't necessary, so we'll pay attention to those settings later on, when we discuss installing Magento at a hosting provider. Note that the Use Secure URLs option should remain unchecked for a local installation like the one we're doing here.

14. In the last step (yes, really!), just fill out your personal data and choose a username and password. Since you're working locally, you do not have to create a complicated, unique password now. But you know what we mean, right? Doing a live installation at a hosting provider requires a good, strong password! You do not have to fill in the Encryption Key field; Magento will do that for you.

15. In the final screen, make a note of the Encryption Key value that was generated. You might need it in the future whenever you upgrade your Magento store to a newer software version.

What just happened?

Congratulations! You just installed Magento for the very first time! Summarizing, you just:

- Downloaded and installed XAMPP
- Changed your Windows hosts file
- Created a MySQL database using phpMyAdmin
- Installed Magento

I'm on a Mac; what should I do?

The steps are a bit different if you're using a Mac; we'll use Mac OS X 10.8 as an example. In our experience, MAMP is a bit easier to work with on a Mac than XAMPP. You can find the MAMP software at http://www.mamp.info/en/downloads/index.html, and its documentation at http://documentation.mamp.info/en/mamp/installation.

The good thing about MAMP is that it is easy to install, with very few configuration changes. It will not conflict with any Apache installation already running on your Mac, in case you have one. And it's easy to delete as well: just removing the Mamp folder from your Applications folder is sufficient to delete MAMP and all local websites running on it.

Once you've downloaded the package, it will be in the Downloads folder of your Mac. If you are running Mac OS X 10.8, you first need to set the correct security settings to install MAMP. (You can find out which version of Mac OS X you have via the Apple menu in the top-left corner of your screen.) Go to the Apple menu and select System Preferences, then select the Security & Privacy icon in the first row. Click the padlock, enter your admin password, and select the Anywhere radio button in the Allow applications downloaded from: section. This is necessary because it will not be possible to run the MAMP installer you downloaded without it.

Open the image you've downloaded and simply move the Mamp folder to your Applications folder. That's all. Now that you have MAMP installed on your system, you may launch MAMP.app (located at Applications | Mamp | Mamp.app). While you're editing your MAMP settings, MAMP might prompt you for an administrator password. This is required because it needs to run two processes: httpd (Apache) and mysqld (MySQL).
Depending on the settings you choose for those processes, you may or may not need to enter your password. Once you open MAMP, click on the Preferences button, and then on Ports. The default MAMP ports are 8888 for Apache and 8889 for MySQL. If you use this configuration, you will not be asked for your password, but you will need to include the port number in the URL when using it (http://localhost:8888). You may change this by setting the Apache port to 80, for which you'll probably have to enter your administrator password. If you have placed your Magento installation in the Shop folder, it is advisable to call your Magento installation through the URL http://127.0.0.1:8888/shop/ instead of http://localhost:8888/shop/. The reason for this is that Magento may require dots in the URL.

The last thing you need to do is visit the Apache tab, where you'll need to set a document root. This is where all of the files for your local web server are going to be stored. An example of a document root is Users | Username | Sites. To start the Apache and MySQL servers, simply click on Start Servers from the main MAMP screen. After the MAMP servers start, the MAMP start page should open in your web browser. If it doesn't, click on Open start page in the MAMP window. From there, select phpMyAdmin. In phpMyAdmin, you can create a database and start the Magento installation procedure, just like we did when installing Magento on a Windows machine. See the Time for action – installing Magento on a Windows machine section, point 8, to continue the installation of Magento. Of course, you now need to put the Magento files in your Mamp folder, instead of the Windows path mentioned in that procedure.

In some cases, it is necessary to change the Read & Write permissions of your Magento folder before you can use Magento on a Mac. To do that, right-click on the Magento folder and select the Get Info option. At the bottom of the resulting screen, you will see the folder permissions. If you have trouble running Magento, set all of these to Read & Write.

Installing Magento at a hosting service

There are thousands of hosting providers, with as many different hosting setups. The difficulty of explaining the installation of Magento at a commonly used hosting service is that the procedure differs from provider to provider, depending on the tools they use for their services. There are providers, for instance, that use Plesk, DirectAdmin, or cPanel. Although these user environments differ from each other, the basic steps always remain the same:

1. Check the requirements of Magento (there's more information on this topic at the beginning of this article).
2. Upload the Magento installation files using an FTP tool, for instance the free FileZilla (http://filezilla-project.org).
3. Create a database. This step differs slightly per hosting provider, but often a tool such as phpMyAdmin is used. Ask your hosting provider if you're in doubt about this step. You will need the database name, database user, password, and the name of the database server.
4. Browse to your domain and run the Magento installation process, which is the same as we saw earlier in this article.

How to choose a Magento hosting provider

One important thing we haven't discussed yet is selecting a hosting provider that is capable of running your online store. We already mentioned that you should not expect performance for a couple of dollars per month.
Magento will often still run at a cheap hosting service, but the performance is frequently very poor. So pay attention to your choices here and make sure you make the right decision. Of course, everything depends on the expectations for your online store: you should not aim for top performance if all you expect to do during your first few years is 10,000 dollars of revenue per year. Admittedly, it's not always possible to create a detailed estimate of the revenue you may expect. So, let's see what you should pay attention to:

- Does the hosting provider mention Magento on its website? Or maybe they even offer special Magento hosting packages? If so, you can be sure that Magento will technically run. There are even hosting providers for which Magento hosting is their speciality.
- Are you serious about your future Magento store? Then ask for references! Clients already running Magento at this hosting provider can tell you more about the performance and customer support levels. Sometimes a hosting provider also offers an optimized demo store, which you can check out to see how it performs.
- Ask if the hosting provider has Magento experts working for them and, if so, how many. Especially in the case of large, high-traffic stores, it is important to be able to hire the knowledge you need.
- Do not forget to check online forums and do some research about the provider. Be aware, though, that you will find negative customer experiences about almost every hosting provider.
- Are you just searching for a hosting provider to play around with Magento? In that case any cheap hosting provider will do, although your Magento store could be very slow. Take, for instance, Hostgator (http://hostgator.com), which offers small hosting plans for a couple of U.S. dollars per month. A lot of hosts also offer a free trial period, which you can use to test the performance.

Installatron

Can't this all be done a bit more easily? Yes. If your host offers a service named Installatron, and it includes Magento, your installation process becomes a lot easier; we could almost call it a "one-click" installation procedure. Check if your hosting provider is offering the latest Magento version; this may not always be the case! You may of course ask your (future) hosting provider whether they offer Installatron with their hosting packages. The example shown here is from Simple Helix (http://simplehelix.com), a well-known provider specializing in Magento hosting solutions.

Time for action – installing Magento using Installatron

The following short procedure shows the steps you need to take to install Magento using Installatron:

1. First, locate the Installatron Applications Installer icon in the administration panel of your hosting provider. Normally this is very easy to find, just after logging in.
2. Within Installatron Applications Installer, click on the Applications Browser option.
3. Inside Applications Browser, you'll see a list of CMS solutions and webshop software that you can install. Generally, Magento can be found in the e-Commerce and Business group.
4. Click on Magento, and after that click on the Install this application button.
5. The next screen is the setup wizard for installing Magento. It lists a bunch of default settings, such as the admin username, database settings, and the like. We recommend changing as little as possible for your first installation. You should pick the right location to install, though!
In our example, we will choose the test directory on www.boostingecommerce.com. Note that for this installation we've chosen to install the Magento sample data, which will help us in explaining the Magento software. That's fine if you're installing for learning purposes, but for a store that is meant to be your live shop, it's better to start completely empty.

6. In the second part of the installation form, there are a few fields that you have to pay attention to: switch off automatic updates, set database management to automatic, and choose a secure administrator password.
7. Click on the Install button when you are done reviewing the form. Installatron will now begin installing Magento. You will receive an e-mail when Installatron is ready; it contains the URL you just installed and the login credentials for your new Magento shop.

That's all! Our just-installed test environment is available at http://www.boostingecommerce.com/test.

How to test the minimum requirements

If your host isn't offering Installatron and you would like to install Magento, how will you know whether it's possible? In other words, will Magento run? You can simply try to install and run Magento, but it's better to check the minimum requirements before going that route. You can use the following method to test whether your hosting provider meets all the requirements needed to run Magento.

First, create a text file using your favorite editor and name it phpinfo.php. The contents of the file should be:

```php
<?php phpinfo(); ?>
```

Save and upload this file to the root folder of your hosting environment, using an FTP tool such as FileZilla. Next, open your browser and go to http://yourdomain.com/phpinfo.php (using your own domain name, of course). A page listing your PHP version and all available modules will be shown. Note that our XAMPP installation is using PHP 5.4.7, and as we mentioned earlier, Magento isn't compatible with this PHP version yet. So what about that? Well, XAMPP simply ships with a recent stable release of PHP; although it is officially not supported, in most cases your Magento test environment will run fine. Using this result, we can check for any PHP module that is missing: just go through the list at the beginning of this article and verify that everything that is needed is available and enabled.
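If you prefer not to scan the phpinfo() output by eye, you can also test for the required extensions directly. The following is a minimal sketch of our own (not part of the official requirements page); upload it next to phpinfo.php and browse to it in the same way:

```php
<?php
// Check for the PHP extensions Magento needs, as listed in the
// requirements section at the beginning of this article.
$required = array('pdo_mysql', 'mcrypt', 'hash', 'simplexml',
                  'gd', 'dom', 'iconv', 'curl');
foreach ($required as $extension) {
    echo $extension . ': '
        . (extension_loaded($extension) ? 'OK' : 'MISSING')
        . "<br />\n";
}
?>
```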
What is SSL and do I need it?

SSL (Secure Sockets Layer) is the standard for secure transactions on the web. You'll recognize it by websites running on https:// instead of http://. To use it, you need to buy an SSL certificate and add it to your hosting environment. Some hosting providers offer this as a service, whereas others just point to third parties offering SSL certificates, such as RapidSSL (https://www.rapidssl.com) or VeriSign (http://www.verisign.com), currently owned by Symantec. A complete set of instructions on using SSL is beyond the scope of this article, but it is good to know when you'll need to pay attention to it. There can be two reasons to use an SSL certificate:

- You are accepting payments directly on your website and may even be storing credit card information. In such a case, make sure that you are securing your store by using SSL. On the other hand, if you are only using third parties to accept payments, such as Google Checkout or PayPal, you do not have to worry about this part: the transaction is done at the (secure part of the) website of your payment service provider, and in that case you do not need to offer SSL.
- There's another reason that makes SSL interesting for all shop owners: trust. Regular shoppers know that https:// connections are secure and might feel just a bit more comfortable closing the sale with you. It might seem a small thing, but getting a new customer to trust you is an essential step of the online purchase process.

Summary

In this article we've gone through several different ways to install Magento. We looked at doing it locally on your own machine using XAMPP or MAMP, and at using a hosting provider to bring your store online. When working with a hosting provider, the Installatron tool makes the Magento installation very easy.


Selecting Elements

Packt
16 Aug 2013
17 min read
Understanding the DOM

One of the most powerful aspects of jQuery is its ability to make selecting elements in the DOM easy. The DOM serves as the interface between JavaScript and a web page; it provides a representation of the source HTML as a network of objects rather than as plain text. This network takes the form of a family tree of elements on the page. When we refer to the relationships that elements have with one another, we use the same terminology that we use when referring to family relationships: parents, children, and so on. A simple example can help us understand how the family tree metaphor applies to a document:

```html
<html>
  <head>
    <title>the title</title>
  </head>
  <body>
    <div>
      <p>This is a paragraph.</p>
      <p>This is another paragraph.</p>
      <p>This is yet another paragraph.</p>
    </div>
  </body>
</html>
```

Here, <html> is the ancestor of all the other elements; in other words, all the other elements are descendants of <html>. The <head> and <body> elements are not only descendants, but children of <html> as well. Likewise, in addition to being the ancestor of <head> and <body>, <html> is also their parent. The <p> elements are children (and descendants) of <div>, descendants of <body> and <html>, and siblings of each other. To help visualize the family tree structure of the DOM, we can use a number of software tools, such as the Firebug plugin for Firefox or the Web Inspector in Safari or Chrome. With this tree of elements at our disposal, we'll be able to use jQuery to efficiently locate any set of elements on the page. Our tools to achieve this are jQuery selectors and traversal methods.

Using the $() function

The resulting set of elements from jQuery's selectors and methods is always represented by a jQuery object. Such a jQuery object is very easy to work with when we want to actually do something with the things that we find on a page. We can easily bind events to these objects and add slick effects to them, as well as chain multiple modifications or effects together. Note that jQuery objects are different from regular DOM elements or node lists, and as such do not necessarily provide the same methods and properties for some tasks. In order to create a new jQuery object, we use the $() function. This function typically accepts a CSS selector as its sole parameter and serves as a factory returning a new jQuery object pointing to the corresponding elements on the page. Just about anything that can be used in a stylesheet can also be passed as a string to this function, allowing us to apply jQuery methods to the matched set of elements.

Making jQuery play well with other JavaScript libraries

In jQuery, the dollar sign ($) is simply an alias for jQuery. Because a $() function is very common in JavaScript libraries, conflicts could arise if more than one of these libraries were being used on a given page. We can avoid such conflicts by replacing every instance of $ with jQuery in our custom jQuery code.
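jQuery also ships a helper for exactly this situation. Here is a minimal sketch; the paragraph selector and the class name emphasized are our own illustrative choices:

```javascript
// Relinquish jQuery's control of the $ alias so another library can use it.
jQuery.noConflict();

jQuery(document).ready(function() {
  // From here on, write jQuery instead of $.
  // 'emphasized' is an illustrative class name.
  jQuery('p').addClass('emphasized');
});
```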
The three primary building blocks of selectors are tag name, ID, and class. They can be used either on their own or in combination with others. The following simple examples illustrate how these three selectors appear in code:

- Tag name: CSS p { }, jQuery $('p'). This selects all paragraphs in the document.
- ID: CSS #some-id { }, jQuery $('#some-id'). This selects the single element in the document that has an ID of some-id.
- Class: CSS .some-class { }, jQuery $('.some-class'). This selects all elements in the document that have a class of some-class.

When we call methods of a jQuery object, the elements referred to by the selector we passed to $() are looped through automatically and implicitly. Therefore, we can usually avoid the explicit iteration, such as a for loop, that is so often required in DOM scripting. Now that we have covered the basics, we're ready to start exploring some more powerful uses of selectors.

CSS selectors

The jQuery library supports nearly all the selectors included in CSS specifications 1 through 3, as outlined on the World Wide Web Consortium's site: http://www.w3.org/Style/CSS/specs. This support allows developers to enhance their websites without worrying about which browsers might not understand more advanced selectors, as long as the browsers have JavaScript enabled.

Progressive Enhancement

Responsible jQuery developers should always apply the concepts of progressive enhancement and graceful degradation to their code, ensuring that a page will render as accurately, even if not as beautifully, with JavaScript disabled as it does with JavaScript turned on. We will continue to explore these concepts throughout the article. More information on progressive enhancement can be found at http://en.wikipedia.org/wiki/Progressive_enhancement.

To begin learning how jQuery works with CSS selectors, we'll use a structure that appears on many websites, often for navigation: the nested unordered list:

```html
<ul id="selected-plays">
  <li>Comedies
    <ul>
      <li><a href="/asyoulikeit/">As You Like It</a></li>
      <li>All's Well That Ends Well</li>
      <li>A Midsummer Night's Dream</li>
      <li>Twelfth Night</li>
    </ul>
  </li>
  <li>Tragedies
    <ul>
      <li><a href="hamlet.pdf">Hamlet</a></li>
      <li>Macbeth</li>
      <li>Romeo and Juliet</li>
    </ul>
  </li>
  <li>Histories
    <ul>
      <li>Henry IV (<a href="mailto:[email protected]">email</a>)
        <ul>
          <li>Part I</li>
          <li>Part II</li>
        </ul>
      </li>
      <li><a href="http://www.shakespeare.co.uk/henryv.htm">Henry V</a></li>
      <li>Richard II</li>
    </ul>
  </li>
</ul>
```

Notice that the first <ul> has an ID of selected-plays, but none of the <li> tags have a class associated with them. Without any styles applied, the nested list appears as we would expect it to: a set of bulleted items arranged vertically and indented according to their level.

Styling list-item levels

Let's suppose that we want the top-level items (Comedies, Tragedies, and Histories), and only the top-level items, to be arranged horizontally. We can start by defining a horizontal class in the stylesheet:

```css
.horizontal {
  float: left;
  list-style: none;
  margin: 10px;
}
```

The horizontal class floats the element to the left-hand side of the one following it, removes the bullet from it if it's a list item, and adds a 10-pixel margin on all sides of it. Rather than attaching the horizontal class directly in our HTML, we'll add it dynamically to the top-level list items only, to demonstrate jQuery's use of selectors:

```javascript
$(document).ready(function() {
  $('#selected-plays > li').addClass('horizontal');
});
```

Listing 2.1

We begin the jQuery code by calling $(document).ready(), which runs the function passed to it once the DOM has been loaded, but not before. The second line uses the child combinator (>) to add the horizontal class to the top-level items only. In effect, the selector inside the $() function is saying, "Find each list item (li) that is a child (>) of the element with an ID of selected-plays (#selected-plays)".
With the class now applied, the rules defined for that class in the stylesheet take effect, which in this case means that the top-level list items are arranged horizontally rather than vertically.

Styling all the other items, those that are not in the top level, can be done in a number of ways. Since we have already applied the horizontal class to the top-level items, one way to select all sub-level items is to use a negation pseudo-class to identify all list items that do not have a class of horizontal. Note the addition of the third line of code:

```javascript
$(document).ready(function() {
  $('#selected-plays > li').addClass('horizontal');
  $('#selected-plays li:not(.horizontal)').addClass('sub-level');
});
```

Listing 2.2

This time we are selecting every list item (<li>) that:

- Is a descendant of the element with an ID of selected-plays (#selected-plays)
- Does not have a class of horizontal (:not(.horizontal))

When we add the sub-level class to these items, they receive the shaded background defined in the stylesheet:

```css
.sub-level {
  background: #ccc;
}
```

Attribute selectors

Attribute selectors are a particularly helpful subset of CSS selectors. They allow us to specify an element by one of its HTML attributes, such as a link's title attribute or an image's alt attribute. For example, to select all images that have an alt attribute, we write the following:

```javascript
$('img[alt]')
```

Styling links

Attribute selectors accept a wildcard syntax inspired by regular expressions for identifying the value at the beginning (^) or end ($) of a string. They can also take an asterisk (*) to indicate the value at an arbitrary position within a string or an exclamation mark (!) to indicate a negated value. Let's say we want to have different styles for different types of links. We first define the styles in our stylesheet:

```css
a {
  color: #00c;
}
a.mailto {
  background: url(images/email.png) no-repeat right top;
  padding-right: 18px;
}
a.pdflink {
  background: url(images/pdf.png) no-repeat right top;
  padding-right: 18px;
}
a.henrylink {
  background-color: #fff;
  padding: 2px;
  border: 1px solid #000;
}
```

Then, we add the three classes (mailto, pdflink, and henrylink) to the appropriate links using jQuery. To add a class for all e-mail links, we construct a selector that looks for all anchor elements (a) with an href attribute ([href]) that begins with mailto: (^="mailto:"), as follows:

```javascript
$(document).ready(function() {
  $('a[href^="mailto:"]').addClass('mailto');
});
```

Listing 2.3

Because of the rules defined in the page's stylesheet, an envelope image appears after each mailto: link on the page. To add a class for all the links to PDF files, we use the dollar sign rather than the caret symbol. This is because we're selecting links with an href attribute that ends with .pdf:

```javascript
$(document).ready(function() {
  $('a[href^="mailto:"]').addClass('mailto');
  $('a[href$=".pdf"]').addClass('pdflink');
});
```

Listing 2.4

The stylesheet rule for the newly added pdflink class causes an Adobe Acrobat icon to appear after each link to a PDF document. Attribute selectors can be combined as well.
We can, for example, add the class henrylink to all links with an href value that both starts with http and contains henry anywhere:

```javascript
$(document).ready(function() {
  $('a[href^="mailto:"]').addClass('mailto');
  $('a[href$=".pdf"]').addClass('pdflink');
  $('a[href^="http"][href*="henry"]').addClass('henrylink');
});
```

Listing 2.5

With the three classes applied to the three types of links, we should see the PDF icon to the right-hand side of the Hamlet link, the envelope icon next to the email link, and the white background and black border around the Henry V link.

Custom selectors

To the wide variety of CSS selectors, jQuery adds its own custom selectors. These custom selectors enhance the already impressive capabilities of CSS selectors to locate page elements in new ways.

Performance note: when possible, jQuery uses the native DOM selector engine of the browser to find elements. This extremely fast approach is not possible when custom jQuery selectors are used. For this reason, it is recommended to avoid frequent use of custom selectors when a native option is available and performance is very important.

Most of the custom selectors allow us to choose one or more elements from a collection of elements that we have already found. The custom selector syntax is the same as the CSS pseudo-class syntax, where the selector starts with a colon (:). For example, to select the second item from a set of <div> elements with a class of horizontal, we write this:

```javascript
$('div.horizontal:eq(1)')
```

Note that :eq(1) selects the second item in the set because JavaScript array numbering is zero-based, meaning that it starts with zero. In contrast, CSS is one-based, so a CSS selector such as $('div:nth-child(1)') would select all div elements that are the first child of their parent. Because it can be difficult to remember which selectors are zero-based and which are one-based, we should consult the jQuery API documentation at http://api.jquery.com/category/selectors/ when in doubt.

Styling alternate rows

Two very useful custom selectors in the jQuery library are :odd and :even. Let's take a look at how we can use one of them for basic table striping, given the following tables:

```html
<h2>Shakespeare's Plays</h2>
<table>
  <tr>
    <td>As You Like It</td>
    <td>Comedy</td>
    <td></td>
  </tr>
  <tr>
    <td>All's Well that Ends Well</td>
    <td>Comedy</td>
    <td>1601</td>
  </tr>
  <tr>
    <td>Hamlet</td>
    <td>Tragedy</td>
    <td>1604</td>
  </tr>
  <tr>
    <td>Macbeth</td>
    <td>Tragedy</td>
    <td>1606</td>
  </tr>
  <tr>
    <td>Romeo and Juliet</td>
    <td>Tragedy</td>
    <td>1595</td>
  </tr>
  <tr>
    <td>Henry IV, Part I</td>
    <td>History</td>
    <td>1596</td>
  </tr>
  <tr>
    <td>Henry V</td>
    <td>History</td>
    <td>1599</td>
  </tr>
</table>

<h2>Shakespeare's Sonnets</h2>
<table>
  <tr>
    <td>The Fair Youth</td>
    <td>1–126</td>
  </tr>
  <tr>
    <td>The Dark Lady</td>
    <td>127–152</td>
  </tr>
  <tr>
    <td>The Rival Poet</td>
    <td>78–86</td>
  </tr>
</table>
```

With minimal styles applied from our stylesheet, these headings and tables appear quite plain. The table has a solid white background, with no styling separating one row from the next. Now we can add a style to the stylesheet for all the table rows and use an alt class for the odd rows:

```css
tr {
  background-color: #fff;
}
.alt {
  background-color: #ccc;
}
```

Finally, we write our jQuery code, attaching the class to the odd-numbered table rows (<tr> tags):

```javascript
$(document).ready(function() {
  $('tr:even').addClass('alt');
});
```

Listing 2.6

But wait! Why use the :even selector for odd-numbered rows?
Well, just as with the :eq() selector, the :even and :odd selectors use JavaScript's native zero-based numbering. Therefore, the first row counts as zero (even) and the second row counts as one (odd), and so on. With this in mind, our simple bit of code stripes the alternate rows of each table.

Note that for the second table, this result may not be what we intend. Since the last row in the Plays table has the alternate gray background, the first row in the Sonnets table has the plain white background. One way to avoid this type of problem is to use the :nth-child() selector instead, which counts an element's position relative to its parent element rather than relative to all the elements selected so far. This selector can take a number, odd, or even as its argument:

```javascript
$(document).ready(function() {
  $('tr:nth-child(odd)').addClass('alt');
});
```

Listing 2.7

As before, note that :nth-child() is the only jQuery selector that is one-based. To achieve the same row striping as we did earlier, except with consistent behavior for the second table, we need to use odd rather than even as the argument. With this selector in place, both tables are striped consistently.

Finding elements based on textual content

For one final custom-selector touch, let's suppose for some reason we want to highlight any table cell that refers to one of the Henry plays. All we have to do, after adding a class to the stylesheet to make the text bold and italicized (.highlight {font-weight: bold; font-style: italic;}), is add a line to our jQuery code using the :contains() selector:

```javascript
$(document).ready(function() {
  $('tr:nth-child(odd)').addClass('alt');
  $('td:contains(Henry)').addClass('highlight');
});
```

Listing 2.8

Now we have a striped table with the Henry plays prominently featured. It's important to note that the :contains() selector is case sensitive; using $('td:contains(henry)') instead, without the uppercase "H", would select no cells. Admittedly, there are ways to achieve the row striping and text highlighting without jQuery, or any client-side programming for that matter. Nevertheless, jQuery, along with CSS, is a great alternative for this type of styling in cases where the content is generated dynamically and we don't have access to either the HTML or the server-side code.

Form selectors

The capabilities of custom selectors are not limited to locating elements based on their position. For example, when working with forms, jQuery's custom selectors and complementary CSS3 selectors can make short work of selecting just the elements we need. The following list describes a handful of these form selectors:

- :input: Input, text area, select, and button elements
- :button: Button elements and input elements with a type attribute equal to button
- :enabled: Form elements that are enabled
- :disabled: Form elements that are disabled
- :checked: Radio buttons or checkboxes that are checked
- :selected: Option elements that are selected

As with the other selectors, form selectors can be combined for greater specificity. We can, for example, select all checked radio buttons (but not checkboxes) with $('input[type="radio"]:checked'), or select all password inputs and disabled text inputs with $('input[type="password"], input[type="text"]:disabled'). Even with custom selectors, we can use the same basic principles of CSS to build the list of matched elements.
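As a quick illustration of these form selectors in action, here is a minimal sketch of our own; the form ID terms-form and its markup (checkboxes plus a button element) are illustrative assumptions, not taken from the text:

```javascript
$(document).ready(function() {
  // Hypothetical form ID; adjust to your own markup.
  // Start with the form's button disabled.
  $('#terms-form :button').prop('disabled', true);

  // Re-evaluate whenever any checkbox in the form changes.
  $('#terms-form input:checkbox').change(function() {
    var anyChecked = $('#terms-form input:checked').length > 0;
    $('#terms-form :button').prop('disabled', !anyChecked);
  });
});
```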
Summary

With the techniques that we have covered in this article, we should now be able to locate sets of elements on the page in a variety of ways. In particular, we learned how to style top-level and sub-level items in a nested list by using basic CSS selectors, how to apply different styles to different types of links by using attribute selectors, how to add rudimentary striping to a table by using either the custom jQuery selectors :odd and :even or the advanced CSS selector :nth-child(), and how to highlight text within certain table cells by chaining jQuery methods.

Themes, the look and feel, and logos

Packt
14 Aug 2013
9 min read
(For more resources related to this topic, see here.) Themes Themes are sets of predefined styles and can be used to personalize the look and feel of Confluence. Themes can be applied to the entire site and individual spaces. Some themes add extra functionality to Confluence or change the layout significantly. Confluence 5 comes with two themes installed, and an administrator can install new themes as add-ons via the Administration Console. Atlassian is planning on merging the Documentation Theme with the Default Theme . As this is not the case in Confluence 5 yet, we will discuss them both as they have some different features. Keep in mind that, at some point, the Documentation Theme will be removed from Confluence. To change the global Confluence theme, perform the following steps: Browse to the Administration console (Administration | Confluence Admin). Select Themes from the left-hand side menu. All the installed themes will be present. Select the appropriate radio button to select a theme. Choose Confirm to change the theme: Space administrators can also decide on a different theme for their own spaces. Spaces with their own theme selections—and therefore not using the global look and feel—won't be affected if a Confluence Administrator changes the global default theme. To change a space theme, perform the following steps: Go to any page in the space. Select Space Tools in the sidebar. (If you are not using the default theme, select Browse | Space Admin.) Select Look and Feel, followed by Theme. Select the theme you want to apply to the space. Click on Confirm. The Default Theme As the name implies, this is the default theme shipped with Confluence. This Default Theme got a complete overhaul in Confluence 5 and looks as shown in the following screenshot: The Default Theme provides every space with a sidebar, containing useful links and navigation help throughout the current space. With the sidebar, you can quickly change from browsing pages to blog post or vice versa. The sidebar also allows important space content to be added as a link for quicker access, and displays the children of the current page for easy navigation. You can collapse or expand the sidebar. Click-and-drag the border, or use the keyboard shortcut: [. If the sidebar is collapsed, you can still access the sidebar options. Configuring the theme The default theme doesn't have any global configuration available, but a space administrator can make some space-specific changes to the theme's sidebar. Perform the following steps to change the space details: Go to any page in the relevant space. Select Configure sidebar in the space sidebar. Click on the edit icon next to the space title. A pop up will show where you can change the space title and logo (as shown in the preceding screenshot, indicated as 1). Click on Save to save the changes Click on the Done button to exit the configuration mode. The main navigation items on the sidebar (pages and blog posts) can be hidden. This can come in handy, for example, when you don't allow users to add blog posts to the space. To show or hide the main navigation items, perform the following steps: Go to any page in the relevant space. Select Configure sidebar in the space sidebar. Select the - or + icon beside the link to either hide or show the link. Click on the Done button to exit the configuration mode. Space shortcuts are manually added links to the sidebar, linking to important content within the space. A space administrator can manage these links. 
To add a space shortcut, perform the following steps: Go to any page in the relevant space. Select Configure sidebar in the space sidebar. Click on the Add Link button, indicated as 3 in the preceding screenshot. The Insert Link dialog will appear. Search and select the page you want to link. Click on Insert to add the link to the sidebar. Click on the Done button to exit the configuration mode. The Documentation Theme The Documentation Theme is another bundled theme. It supplies a built-in table of content for your space, a configurable header and footer, and a space-restricted search. The Documentation Theme's default look and feel is displayed in the following screenshot: The sidebar of the Documentation Theme will show a tree with all the pages in your space. Clicking on the icon in front of a page title will expand the branch and show its children. The sidebar can be opened and closed using the [ shortcut, or the icon on the left of the search box in the Confluence header. Configuring the theme The Documentation Theme allows configuration of the sidebar contents, page header and footer, and the possibility to restrict the search to only the current space. A Confluence Administrator can configure the theme globally, but a Space Administrator can overwrite this configuration for his or her own space. To configure the Documentation Theme for a space, the Space Administrator should explicitly select the Documentation Theme as the space theme. The theme configuration of the Documentation Theme allows you to change the properties displayed in the following screenshot. How to get to this screen and what the properties represent will be explained next. To configure the Documentation theme, perform the following steps: As a Confluence Administrator: Browse to the Administration Console (Administration | Confluence Admin). Select Themes from the left-hand side menu. Choose the Documentation Theme as the current theme. Click on the Configure theme link. As a Space administrator: Go to any page in the space. Select Browse | Space Admin. Choose Themes from the left-hand side menu. Make sure that the Documentation Theme is the current theme. Click on the Configure theme link. Select or deselect the Page Tree checkbox. This will determine if your space will display the default search box and page tree in the sidebar. Select or deselect the Limit search results to the current space checkbox. If you select the checkbox: The Confluence search in the top-left corner will only search in the current space. The sidebar will not contain a search box. If you deselect the checkbox: The Confluence search in the top-left corner will search across the entire Confluence site. The sidebar will contain a search box, which is limited to searching in the current space. In the three textboxes, you can enter any text or wiki markup you would like; for example, you could add some information to the sidebar or a notification to every page. The following screenshot will display these areas: Navigation: This will be displayed in the space sidebar. Header: This will be displayed above the title on all the pages in the space. Footer: This will be displayed after the comments on all the pages in the space. Look and feel The look and feel of Confluence can be customized on both global level and space level. Any changes made on a global level will be applied as the default settings for all spaces. A Space Administrator can choose to use a different theme than the global look and feel. 
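For example, a space administrator might put something like the following wiki markup in the Navigation box to build a simple sidebar. This is only a sketch: the pagetree macro parameters and the link targets shown here are assumptions for illustration, not theme defaults:

h5. Browse this space
{pagetree:root=@home}
* [Release notes|Release Notes]
* [Contact the team|mailto:docs@example.com]

Whatever is entered here is rendered in the sidebar on every page in the space, so short, stable links work best.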
When a Space Administrator selects a different theme, the default settings and theme are no longer applied to that space. This also means that settings in a space are not updated if the global settings are updated. In this section, we will cover some basic look and feel changes, such as changing the logo and color-scheme of your Confluence instance. It is also possible to change some of the Confluence layouts; this is covered in the Advanced customizing section. Confluence logo The Confluence logo is the logo that is displayed on the navigation bar in Confluence. This can easily be changed to your company logo. To change the global logo, perform the following steps: Browse to the Administration Console (Administration | Confluence Admin). Select Site Logo from the left-hand side menu Click on Choose File to select the file from your computer. Decide whether to show only your company logo, or also the title of your Confluence installation. If you choose to also show the title, you can change this in the text field next to Site Title. Click on Save. As you might notice, Confluence also changed the color scheme of your installation. Confluence will suggest a color scheme based upon your logo. To revert this change, click on Undo, which is directly available after updating your logo. Space logo Every space can choose its own logo, making it easy to identify certain topics or spaces. A Confluence administrator can also set the default space logo, for newly created spaces or spaces without their own specified logo. The logo of a personal space cannot be changed; it will always use the user's avatar as logo. To set the default space logo, perform the following steps: Browse to the Administration Console (Administration | Confluence Admin). Select Default Space Logo from the left-hand side menu. Click on Choose File to select the file from your computer. For the best result, make sure the image is about 48 x 48 pixels. Click on Upload Logo to upload the default space logo. As a Space Administrator, you can replace the default logo for your space. How this is done depends on the theme you are using. To change the space logo with the default theme, perform the following steps: Go to any page in the relevant space. Click on Configure Sidebar from the sidebar. Select the edit icon next to the page title. Click on Choose File next to the logo and select a file from your computer. Confluence will display an image editor to indicate how your logo should be displayed as shown in the next screenshot. Drag and resize the circle in the editor to the right position. Click on Save to save the changes. Click on the Done button to exit the configuration mode. To change the space logo with the Documentation Theme, perform the following steps: Go to any page in the relevant space. Select Browse | Space Admin. Select Change Space Logo from the left-hand side menu. Select Choose File and select the logo from your computer. Click on Upload Logo to save the new logo; Confluence will automatically resize and crop the logo for you. Summary We went through the different features for changing the look and feel of Confluence so that we can add some company branding to our instance or just to a space. Resources for Article : Further resources on this subject: Advanced JIRA 5.2 Features [Article] Getting Started with JIRA 4 [Article] JIRA: Programming Workflows [Article]

nopCommerce – The Public-facing Storefront

Packt
14 Aug 2013
3 min read
(For more resources related to this topic, see here.) General site layout and overview When customers navigate to your store, they will be presented with the homepage. The homepage is where we'll begin to review the site layout and structure. Logo : This is your store logo. As with just about every e-commerce site, this serves as a link back to your homepage. Header links : The toolbar holds some of the most frequently used links, such as Shopping cart, Wishlist, and Account. These links are very customer focused, as this area will also show the customer's logged in status once they are registered with your site. Header menu : The menu holds various links to other important pages, such as New products, Search, and Contact us. It also contains the link to the built-in blog site. Left-side menu : The left-side menu serves as a primary navigation area. It contains the Categories and Manufacturers links as well as Tags and Polls. Center : This area is the main content of the site. It will hold category and product information, as well as the main content of the homepage. Right-side menu : The right-side menu holds links to other ancillary pages in your site, such as Contact us, About us, and News. It also holds the Newsletter signup widget. Footer : The footer holds the copyright information and the Powered by nopCommerce license tag. The naming conventions used for these areas are driven by the Cascading Style Sheet (CSS) definitions. For instance, if you look at the CSS for the Header links area, you will see a definition of header-links. nopCommerce uses layouts to define the overall site structure. A layout is a type of page used in ASP.NET MVC to define a common site template, which is then inherited across all the other pages on your site. In nopCommerce, there are several different layout pages used throughout the site. There are two main layout pages that define the core structure: Root head : This is the base layout page. It contains the header of the HTML that is generated and is responsible for loading all the CSS and JavaScript files needed for the site. Root : This layout is responsible for loading the header, footer, and contains the Master Wrapper, which contains all the other content of the page. These two layouts are common for all pages within nopCommerce, which means every page in the site will display the logo, header links, header menu, and footer. They form the foundation of the site structure. The site pages themselves will utilize one of three other layouts that determine the structure inside the Master Wrapper: Three-column : The three-column layout is what the nopCommerce homepage utilizes. It includes the right side, left side, and center areas. This layout is used primarily on the homepage. Two-column : This is the most common layout that customers will encounter. It includes the left side and center areas. This layout is used on all category and product pages as well as all the ancillary pages. One-column : This layout is used in the shopping cart and checkout pages. It includes the center area only. Changing the layout page used by certain pages requires changing the code. For instance, if we open the product page in Visual Studio, we can see the layout page being used: As you can see, the layout defined for this page is _ColumnsTwo.cshtml, the two-column layout. You can change the layout used by updating this property, for instance, to _ColumnsThree.cshtml, to use the three-column layout.
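To make that concrete, here is a minimal sketch of what the top of such a Razor view looks like; the ~/Views/Shared/ path is an assumption that may differ between nopCommerce versions:

@{
    // Switch this page from the two-column to the three-column layout.
    Layout = "~/Views/Shared/_ColumnsThree.cshtml";
}

Because every page inherits the Root head and Root layouts regardless of this setting, swapping the column layout changes only the arrangement of areas inside the Master Wrapper.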

Creating Courses in Blackboard Learn

Packt
14 Aug 2013
10 min read
(For more resources related to this topic, see here.) Courses in Blackboard Learn The basic structure of any learning management system relies on the basic course, or course shell. A course shell holds all the information and communication that goes on within our course and is the central location for all activities between students and instructors. Let's think about our course shell as a virtual house or apartment. A house or apartment is made up of different rooms where we put things that we use in our everyday life. These rooms such as the living room, kitchen, or bedrooms can be compared to content areas within our course shell. Within each of these content areas, there are items such as telephones, dishwashers, computers, or televisions that we use to interact, communicate, or complete tasks. These items would be called course tools within the course shell. These content areas and tools are available within our course shells and we can use them in the same ways. While as administrators, we won't take a deep dive into all these tools; we should know that they are available and instructors use them within their courses. Blackboard Learn offers many different ways to create courses, but to help simplify our discussion, we will classify those ways in two categories, basic and advanced. This article will discuss the course creation options that we classify as basic. Course names and course IDs When we get ready to create a course in Blackboard Learn, the system requires a few items. It requires a course name and a course ID. The first one should be self-explanatory. If you are teaching a course on "Underwater Basket Weaving" (a hobby I highly recommend), you would simply place this information into the course name. Now the course ID is a bit trickier. Think of it like a barcode that you can find on your favorite cereal. That barcode is unique and tells the checkout scanner the item you have purchased. The course ID has a similar function in Blackboard Learn. It must be unique; so if you plan to have multiple courses on "Underwater Basket Weaving", you will need to figure out a way to express the differences in each course ID. We just talked about how each course ID in Blackboard has to be unique. We as administrators will find that most Blackboard Learn instances we deal with have numerous course shells. Providing multiple courses to the users might become difficult. So we should consider creating a course ID naming convention if one isn't already in place. Our conversation will not tell you which naming convention will be best for your organization, but here are some helpful tips for us to start with: Use a symbol to separate words, acronyms, and numbers from one another. Some admins may use an underscore, period, or dash. However, whitespace, percent, ampersand, less than, greater than, equals, or plus characters are not accepted within course IDs. If you plan to collect reporting data from your instance, make sure to include the term or session and department in the course ID. Collect input from people and teams within your organization who will enroll and support users. Their feedback about a course's naming convention will help it be successful. Many organizations use a student information system, (SIS), which manages the enrollment process.   Default course properties The first item in our Course Settings area allows us to set up several of the default access options within our courses. The Default Course Properties page covers when and who has access to a course by default. 
Available by Default: This option gives us the ability to have a course available to enrolled students when it is created. Most administrators will have this set to No, since the instructor may not want to immediately give access to the course. Allow Guests by Default and Allow Observers by Default: The next options allow us to set guest and observer access to created courses by default. Most administrators normally set these to No because the guest access and observer role aren't used by their organizations. Default Enrollment Options: We can set default enrollment options to either allow the instructor or system administrator to enroll students or allow the student to self enroll. If we choose the former, we can give the student the ability to e-mail the instructor to request access. If we set Self Enrollment, we can set dates when this option is available and even set a default access code for students to use when they can self enroll. Now that we have these two options for default enrollment, most administrators would suggest setting the default course enrollment option to instructors or system administrators, which will allow instructors to set self enrollment within their own course. Default Duration: The Continuous option allows the course to run continuously with no start or end date set. Select Dates sets specific start and end dates for all courses. The last option called Days from the Date of Enrollment sets courses to run for a specific number of days after the student was enrolled within our Blackboard Learn environment. This is helpful if a student self enrolls in a self-paced course with a set number of days to complete it. Pitfalls of setting start and end dates When using the Start and End dates to control course duration, we may find that all users enrolled within the course will lose access. Course themes and icons If we are using the Blackboard 2012 theme, we have the ability to enable course themes within our Blackboard instance. These themes are created by Blackboard and can be applied to an instructor's course by clicking on the theme icon, seen in the following screenshot, in the upper-right corner of the content area while in a course. They have a wide variety of options, but currently administrators cannot create custom course themes. We can also select which icon sets courses will use by default in our Blackboard instance. These icon themes are created by Blackboard and will appear beside different content items and tools within the course. In the following screenshot, we can see some of the icons that make up one of the sets. Unlike the course themes, these icons will be enforced across the entire instance. Course Tools The Course Tools area offers us the ability to set what tools and content items are available within courses by default. We can also control these settings along with organizations and system tools by clicking on the Tools link under the Tools and Utilities module. Let's review what tools are available and how to enable and disable them within our courses. The options we use to set course tools are exactly same as those used in the Tools area we just mentioned. Use the information provided here to set tool availability with a page. Let's take a more detailed look into the default availability setting within this page. We have four options for each tool. Every tool has the same options. 
Default On: A course automatically has this tool available to users, but an instructor or leader can disable the tool within it Default Off: Users in a course will not have access to this tool by default, but the instructor or leader can enable it Always On: Instructors or leaders are unable to turn this tool off in their course or organization Always Off: Users do not see this tool in a course or organization, nor can the instructor or leader turn it on within the course Once we make the changes, we must click on the Submit button. Quick Setup Guide The Quick Setup Guide page was introduced into Blackboard 9.1 Service Pack 8. As seen in the following screenshot, it offers instructors the basic introduction into the course if they have never used Blackboard before. Most of the links are to the content from the On Demand area of the Blackboard website. We as administrators can disable this from appearing when an instructor enters the course. If we leave the guide enabled, we can add custom text to the guide, which can help educate instructors about changes, help, and support available from our organization. Custom images We can continue to customize default look and feel of our course shells with images in the course entry point and at the top of the menu. We might use these images to spotlight that our organization has been honored with an award. Here we find an example of how these images would look. Two images can be located at the bottom of the course entry page, which is the page we see after entering a course. Another image can be located at the top of the course menu. This area also allows us to make these images linkable to a website. Here's an example. Default course size limits We can also create a default course size limit for the course and the course export and archive packages within this area. Course Size Limits allows administrators to control storage space, which may be limited in some instances. When a course size limit is within 10 percent of being reached, the administrator and instructor get an e-mail notification. This notification is triggered by the disk usage task that runs once a day. After getting the notification, the instructor can remove content from the course, or the administrator can increase the course quota for that specific course. Maximum Course disk size: This option sets the amount of disk space a course shell can use for storage. This includes all course and student files within the course shell. Maximum Course Package Size: This sets the maximum amount of content from the Course Files area included in a course copy, export, or archive. Grade Center settings This area allows us to set default controls over the Grade History portion of the Grade Center. Grade history is exactly what it says. It keeps a history of the changes within the Grade Center. Most administrators recommend having grade history enabled by default because of the historical benefits. There may be a discussion within your organization to permit instructors to disable this feature within their course or clear the history altogether. Course menu and structures The course menu offers the main navigation for any course user. Our organization can create a default course menu layout for all new course shells created based on the input from instructional designers and pedagogical experts. As seen in the following screenshot, we simply edit the default menu that appears on this page. As administrators, we should pay close attention when creating a default course menu. 
Any additions or removals to the default menu are automatically changed without clicking on the Submit or Cancel buttons, and are applied to any courses created from that point forward. Blackboard recently introduced course structures. If enabled, these pre-built course menus are available to the instructor within their course's control panel. The course structures fall into a number of different course instruction scenarios. An example of the course structure selection interface is shown in the following screenshot:

Quick start - your first Sinatra application

Packt
14 Aug 2013
15 min read
(For more resources related to this topic, see here.)

Step 1 – creating the application

The first thing to do is set up Sinatra itself, which means creating a Gemfile. Open up a Terminal window and navigate to the directory where you're going to keep your Sinatra applications. Create a directory called address-book using the following command:

mkdir address-book

Move into the new directory:

cd address-book

Create a file called Gemfile:

source 'https://rubygems.org'
gem 'sinatra'

Install the gems via bundler:

bundle install

You will notice that Bundler will not just install the sinatra gem but also its dependencies. The most important dependency is Rack (http://rack.github.com/), which is a common handler layer for web servers. Rack will be receiving requests for web pages, digesting them, and then handing them off to your Sinatra application.

If you set up your Bundler configuration as indicated in the previous section, you will now have the following files:

.bundle: This is a directory containing the local configuration for Bundler
Gemfile: As created previously
Gemfile.lock: This is a list of the actual versions of gems that are installed
vendor/bundle: This directory contains the gems

You'll need to understand the Gemfile.lock file. It helps you know exactly which versions of your application's dependencies (gems) will get installed. When you run bundle install, if Bundler finds a file called Gemfile.lock, it will install exactly those gems and versions that are listed there. This means that when you deploy your application on the Internet, you can be sure of which versions are being used and that they are the same as the ones on your development machine. This fact makes debugging a lot more reliable. Without Gemfile.lock, you might spend hours trying to reproduce behavior that you're seeing on your deployed app, only to discover that it was caused by a glitch in a gem version that you haven't got on your machine.

So now we can actually create the files that make up the first version of our application. Create address-book.rb:

require 'sinatra/base'

class AddressBook < Sinatra::Base
  get '/' do
    'Hello World!'
  end
end

This is the skeleton of the first part of our application. Line 1 loads Sinatra, line 3 creates our application, and line 4 says we handle requests to '/'—the root path. So if our application is running on myapp.example.com, this means that this method will handle requests to http://myapp.example.com/. Line 5 returns the string Hello World!. Remember that a Ruby block or a method without explicit use of the return keyword will return the result of its last line of code.

Create config.ru:

$: << File.dirname(__FILE__)
require 'address-book'

run AddressBook.new

This file gets loaded by rackup, which is part of the Rack gem. Rackup is a tool that runs rack-based applications. It reads the configuration from config.ru and runs our application. Line 1 adds the current directory to the list of paths where Ruby looks for files to load, line 2 loads the file we just created previously, and line 4 runs the application.

Let's see if it works. In a Terminal, run the following command:

bundle exec rackup -p 3000

Here rackup reads config.ru, loads our application, and runs it. We use the bundle exec command to ensure that only our application's gems (the ones in vendor/bundle) get used. Bundler prepares the environment so that the application only loads the gems that were installed via our Gemfile. The -p 3000 option means we want to run a web server on port 3000 while we're developing.
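Incidentally, additional pages follow the same pattern as the get '/' handler. Here is a minimal sketch; the /about route and its text are purely illustrative and not part of the address-book application we are building:

require 'sinatra/base'

class AddressBook < Sinatra::Base
  get '/' do
    'Hello World!'
  end

  # A second route, handling GET requests to /about.
  get '/about' do
    'A simple address book built with Sinatra'
  end
end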
Open up a browser and go to http://0.0.0.0:3000; you should see something that looks like the following screenshot: Illustration 1: The Hello World! output from the application Logging Have a look at the output in the Terminal window where you started the application. I got the following (line numbers are added for reference): 1 [2013-03-03 12:30:02] INFO WEBrick 1.3.12 [2013-03-03 12:30:02] INFO ruby 1.9.3 (2013-01-15) [x86_64-linux]3 [2013-03-03 12:30:02] INFO WEBrick::HTTPServer#start: pid=28551 port=30004 127.0.0.1 - - [03/Mar/2013 12:30:06] "GET / HTTP/1.1" 200 12 0.01425 127.0.0.1 - - [03/Mar/2013 12:30:06] "GET /favicon.ico HTTP/1.1" 404 445 0.0018 Like it or not, you'll be seeing a lot of logs such as this while doing web development, so it's a good idea to get used to noticing the information they contain. Line 1 says that we are running the WEBrick web server. This is a minimal server included with Ruby—it's slow and not very powerful so it shouldn't be used for production applications, but it will do for now for application development. Line 2 indicates that we are running the application on Version 1.9.3 of Ruby. Make sure you don't develop with older versions, especially the 1.8 series, as they're being phased out and are missing features that we will be using in this book. Line 3 tells us that the server started and that it is awaiting requests on port 3000, as we instructed. Line 4 is the request itself: GET /. The number 200 means the request succeeded—it is an HTTP status code that means Success . Line 5 is a second request created by our web browser. It's asking if the site has a favicon, an icon representing the site. We don't have one, so Sinatra responded with 404 (not found). When you want to stop the web server, hit Ctrl + C in the Terminal window where you launched it. Step 2 – putting the application under version control with Git When developing software, it is very important to manage the source code with a version control system such as Git or Mercurial. Version control systems allow you to look at the development of your project; they allow you to work on the project in parallel with others and also to try out code development ideas (branches) without messing up the stable application. Create a Git repository in this directory: git init Now add the files to the repository: git add Gemfile Gemfile.lock address-book.rb config.ru Then commit them: git commit -m "Hello World" I assume you created a GitHub account earlier. Let's push the code up to www.github.com for safe keeping. Go to https://github.com/new. Create a repo called sinatra-address-book. Set up your local repo to send code to your GitHub account: git remote add origin [email protected]:YOUR_ACCOUNT/sinatra-address-book.git Push the code: git push You may need to sort out authentication if this is your first time pushing code. So if you get an error such as the following, you'll need to set up authentication on GitHub: Permission denied (publickey) Go to https://github.com/settings/ssh and add the public key that you generated in the previous section. Now you can refresh your browser, and GitHub will show you your code as follows: Note that the code in my GitHub repository is marked with tags. If you want to follow the changes by looking at the repository, clone my repo from //github.com/joeyates/sinatra-address-book.git into a different directory and then "check out" the correct tag (indicated by a footnote) at each stage. 
To see the code at this stage, type in the following command: git checkout 01_hello_world If you type in the following command, Git will tell you that you have "untracked files", for example, .bundle: git status To get rid of the warning, create a file called .gitignore inside the project and add the following content: /.bundle//vendor/bundle/ Git will no longer complain about those directories. Remember to add .gitignore to the Git repository and commit it. Let's add a README file as the page is requesting, using the following steps: Create the README.md file and insert the following text: sinatra-address-book ==================== An example program of various Sinatra functionality. Add the new file to the repo: git add README.md Commit the changes: git commit -m "Add a README explaining the application" Send the update to GitHub: git push Now that we have a README file, GitHub will stop complaining. What's more is other people may see our application and decide to build on it. The README file will give them some information about what the application does. Step 3 – deploying the application We've used GitHub to host our project, but now we're going to publish it online as a working site. In the introduction, I asked you to create a Heroku account. We're now going to use that to deploy our code. Heroku uses Git to receive code, so we'll be setting up our repository to push code to Heroku as well. Now let's create a Heroku app: heroku createCreating limitless-basin-9090... done, stack is cedarhttp://limitless-basin-9090.herokuapp.com/ | [email protected]:limitless-basin-9090.gitGit remote heroku added My Heroku app is called limitless-basin-9090. This name was randomly generated by Heroku when I created the app. When you generate an app, you will get a different, randomly generated name. My app will be available on the Web at the http://limitless-basin-9090.herokuapp.com/ address. If you deploy your app, it will be available on an address based on the name that Heroku has generated for it. Note that, on the last line, Git has been configured too. To see what has happened, use the following command: git remote show heroku* remote heroku Fetch URL: [email protected]:limitless-basin-9090.git Push URL: [email protected]:limitless-basin-9090.git HEAD branch: (unknown) Now let's deploy the application to the Internet: git push heroku master Now the application is online for all to see: The initial version of the application, running on Heroku Step 4 – page layout with Slim The page looks a bit sad. Let's set up a standard page structure and use a templating language to lay out our pages. A templating language allows us to create the HTML for our web pages in a clearer and more concise way. There are many HTML templating systems available to the Sinatra developer: erb , haml , and slim are three popular choices. We'll be using Slim (http://slim-lang.com/). Let's add the gem: Update our Gemfile: gem 'slim' Install the gem: bundle We will be keeping our page templates as .slim files. Sinatra looks for these in the views directory. Let's create the directory, our new home page, and the standard layout for all the pages in the application. 
Create the views directory: mkdir views Create views/home.slim: p address book – a Sinatra application When run via Sinatra, this will create the following HTML markup: <p>address book – a Sinatra application</p> Create views/layout.slim: doctype html html head title Sinatra Address Book body == yield Note how Slim uses indenting to indicate the structure of the web page. The most important line here is as follows: == yield This is the point in the layout where our home page's HTML markup will get inserted. The yield instruction is where our Sinatra handler gets called. The result it returns (that is, the web page) is inserted here by Slim. Finally, we need to alter address-book.rb. Add the following line at the top of the file: require 'slim' Replace the get '/' handler with the following: get '/' do slim :home end Start the local web server as we did before: bundle exec rackup -p 3000 The following is the new home page: Using the Slim Templating Engine Have a look at the source for the page. Note how the results of home.slim are inserted into layout.slim. Let's get that deployed. Add the new code to Git and then add the two new files: git add views/*.slim Also add the changes made to the other files: git add address-book.rb Gemfile Gemfile.lock Commit the changes with a comment: git commit -m "Generate HTML using Slim" Deploy to Heroku: git push heroku master Check online that everything's as expected. Step 5 – styling To give a slightly nicer look to our pages, we can use Bootstrap (http://twitter.github.io/bootstrap/); it's a CSS framework made by Twitter. Let's modify views/layout.slim. After the line that says title Sinatra Address Book, add the following code: link href="//netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/css/bootstrap-combined.min.css" rel="stylesheet"There are a few things to note about this line. Firstly, we will be using a file hosted on a Content Distribution Network (CDN ). Clearly, we need to check that the file we're including is actually what we think it is. The advantage of a CDN is that we don't need to keep a copy of the file ourselves, but if our users visit other sites using the same CDN, they'll only need to download the file once. Note also the use of // at the beginning of the link address; this is called a "protocol agnostic URL". This way of referencing the document will allow us later on to switch our application to run securely under HTTPS, without having to readjust all our links to the content. Now let's change views/home.slim to the following: div class="container" h1 address book h2 a Sinatra application We're not using Bootstrap to anywhere near its full potential here. Later on we can improve the look of the app using Bootstrap as a starting point. Remember to commit your changes and to deploy to Heroku. Step 6 – development setup As things stand, during local development we have to manually restart our local web server every time we want to see a change. Now we are going to set things up with the following steps so the application reloads after each change: Add the following block to the Gemfile: group :development do gem 'unicorn' gem 'guard' gem 'listen' gem 'rb-inotify', :require => false gem 'rb-fsevent', :require => false gem 'guard-unicorn' endThe group around these gems means they will only be installed and used in development mode and not when we deploy our application to the Web. Unicorn is a web server—it's better than WEBrick —that is used in real production environments. 
WEBrick's slowness can even become noticeable during development, while Unicorn is very fast. rb-inotify and rb-fsevent are the Linux and Mac OS X components that keep a check on your hard disk. If any of your application's files change, guard restarts the whole application, updating the changes. Finally, update your gems: bundle Now add Guardfile: guard :unicorn, :daemonize => true do `git ls-files`.each_line { |s| s.chomp!; watch s }end Add a configuration file for unicorn: mkdir config In config/unicorn.rb, add the following: listen 3000 Run the web server: guard Now if you make any changes, the web server will restart and you will get a notification via a desktop message. To see this, type in the following command: touch address-book.rb You should get a desktop notification saying that guard has restarted the application. Note that to shut guard down, you need to press Ctrl + D . Also, remember to add the new files to Git. Step 7 – testing the application We want our application to be robust. Whenever we make changes and deploy, we want to be sure that it's going to keep working. What's more, if something does not work properly, we want to be able to fix bugs so we know that they won't come back. This is where testing comes in. Tests check that our application works properly and also act as detailed documentation for it; they tell us what the application is intended for. Our tests will actually be called "specs", a term that is supposed to indicate that you write tests as specifications for what your code should do. We will be using a library called RSpec . Let's get it installed. Add the gem to the Gemfile: group :test do gem 'rack-test' gem 'rspec'end Update the gems so RSpec gets installed: bundle Create a directory for our specs: mkdir spec Create the spec/spec_helper.rb file: $: << File.expand_path('../..', __FILE__)require 'address-book'require 'rack/test'def app AddressBook.newendRSpec.configure do |config| config.include Rack::Test::Methodsend Create a directory for the integration specs: mkdir spec/integration Create a spec/integration/home_spec.rb file for testing the home page: require 'spec_helper'describe "Sinatra App" do it "should respond to GET" do get '/' expect(last_response).to be_ok expect(last_response.body).to match(/address book/) endend What we do here is call the application, asking for its home page. We check that the application answers with an HTTP status code of 200 (be_ok). Then we check for some expected content in the resulting page, that is, the address book page. Run the spec: bundle exec rspec Finished in 0.0295 seconds1 example, 0 failures Ok, so our spec is executed without any errors. There you have it. We've created a micro application, written tests for it, and deployed it to the Internet. Summary This article discussed how to perform the core tasks of Sinatra: handling a GET request and rendering a web page. Resources for Article : Further resources on this subject: URL Shorteners – Designing the TinyURL Clone with Ruby [Article] Building tiny Web-applications in Ruby using Sinatra [Article] Setting up environment for Cucumber BDD Rails [Article]  

Quick start - creating your first application

Packt
13 Aug 2013
14 min read
(For more resources related to this topic, see here.) By now you should have Meteor installed and ready to create your first app, but jumping in blindly would be more confusing than not. So let’s take a moment to discuss the anatomy of a Meteor application. We have already talked about how Meteor moves all the workload from the server to the browser, and we have seen firsthand the folder of plugins, which we can incorporate into our apps, so what have we missed? Well MVVM of course. MVVM stands for Model, View, and View-Model. These are the three components that make up a Meteor application. If you’ve ever studied programming academically, then you’ll know there’s a concept called separation of concerns. What this means is that you separate code with different intentions into different components. This allows you to keep things neat, but more importantly—if done right—it allows for better testing and customization down the line. A proper separation is one that allows you to remove a piece of code and replace it with another without disrupting the rest of your app. An example of this could be a simple function. If you print out debug messages to a file throughout your app, it would be a terrible practice to manually write this code out each time. A much better solution would be to “separate” this code out into its own function, and only reference it throughout your app. This way, down the line if you decide you want debug messages to be e-mailed instead of written to a file, you only need to change the one function and your app will continue to work without even knowing about the change. So we know separation is important but I haven’t clarified what MVVM is yet. To get a better idea let’s take a look at what kind of code should go in each component. Model: The Model is the section of your code that has to do with the backend code. This usually refers to your database, but it’s not exclusive to just that. In Meteor, you can generally consider the database to be your application’s model. View: The View is exactly what it sounds like, it’s your application’s view. It’s the HTML that you send to the browser. You want to keep these files as logic-less as possible, this will allow for better separation. It will assure that all your logic code is in one place, and it will help with testing and code re-use. View-Model: Now the View-Model is where all the magic happens. The View-Model has two jobs—one is to interface the model to the view and the second is to handle all the events. Basically, all your logic code will be going here. This is just a brief explanation on the MVVM pattern, but like most things I think an example is in order to better illustrate. Let’s pretend we have a site where people can share pictures, such as a typical social network would. On the Model side, you will have a database which contains all the user’s pictures. Now this is very nice but it’s private info and no user should be able to access it. That’s where the View-Model comes in. The View-Model accesses the main Model, and creates a custom version for the View. So, for instance, it creates a new dataset that only contains pictures from the user’s friends. That is the View-Model’s first job, to create datasets for the View with info from the Model. Next, the View accesses the View-Model and gets the information it needs to display the page; in our example this could be an array of pictures. Now the page is built and both the Model and View are done with their jobs. 
The last step is to handle page events, for example, the user clicks a button. If you remember, the views are logic-less, so when someone clicks a button, the event is sent back to the View-Model to be processed. If you're still a bit fuzzy on the concept, it should become clearer when we create our first application.

Now that we have gone through the concepts, we are ready to build our first application. To get started, open a terminal window and create a new folder for your Meteor applications:

mkdir ~/meteorApps

This creates a new directory in our home folder—which is represented by the tilde (~) symbol—called meteorApps. Next let's enter this folder by typing:

cd ~/meteorApps

The cd (change directory) command will move the terminal to the location specified, which in our case is the meteorApps folder. The last step is to actually create a Meteor application and this is done by typing:

meteor create firstApp

You should be greeted with a message telling you how to run your app, but we are going to hold off on that; for now just enter the directory by typing:

cd firstApp
ls

The cd command, you should already be familiar with what it does, and the ls function just lists the files in the current directory. If you didn't play around with the skel folder from the last section, then you should have three files in your app's folder—an HTML file, a JavaScript file, and a CSS file. The HTML and CSS files are the View in the MVVM pattern, while the JavaScript file is the View-Model. It's a little difficult to begin explaining everything because we have a sort of chicken and egg paradox where we can't explain one without the other. But let's begin with the View as it's the simpler of the two, and then we will move backwards to the View-Model.

The View

If you open the HTML file, you should see a couple of lines, mostly standard HTML, but there are a few commands from Meteor's default templating language—Handlebars. This is not Meteor specific, as Handlebars is a templating language based on the popular mustache library, so you may already be familiar with it, even without knowing Meteor. But just in case, I'll quickly run through the file:

<head>
  <title>firstApp</title>
</head>

This first part is completely standard HTML; it's just a pair of head tags, with the page's title being set inside. Next we have the body tag:

<body>
  {{> hello}}
</body>

The outer body tags are standard HTML, but inside there is a Handlebars function. Handlebars allows you to define template partials, which are basically pieces of HTML that are given a name. That way you are able to add the piece wherever you want, even multiple times on the same page. In this example, Meteor has made a call to Handlebars to insert the template called hello inside the body tags. It's a fairly easy syntax to learn; you just open two curly braces, then you put a greater-than sign followed by the name of the template, finally closing it off with a pair of closing braces. The rest of the file is the definition of the hello template partial:

<template name="hello">
  <h1>Hello World!</h1>
  {{greeting}}
  <input type="button" value="Click" />
</template>

Again it's mostly standard HTML, just an H1 title and a button. The only special part is the greeting line in the middle, which is another Handlebars function to insert data. This is how the MVVM pattern works; I said earlier that you want to keep the view as simple as possible, so if you have to calculate anything, you do it in the View-Model and then load the results to the View.
You do this by leaving a reference; in our code the reference is to greeting , which means you place whatever greeting equals to here. It’s a placeholder for a variable, and if you guessed that the variable greeting will be in the View-Model, then you are 100 percent correct. Another thing to notice is the fact that we do have a button on the page, but you won’t find any event handlers here. That’s because, like I mentioned earlier, the events are handled in the View-Model as well. So it seems like we are done here, and the next logical step is to take a peek at the View-Model. If you remember, the View-Model is the .js file, so close this out and open the firstApp.js file. The JS file There is slightly more code here, but if you’re comfortable with JavaScript, then everything should feel right at home. At first glance you can see that the page is split up into two if statements— Meteor.isClient and Meteor.isServer. This is because the JS file is parsed on both the server and the user’s browser. These statements are used to write code for one and not the other. For now we aren’t going to be dealing with the server, so you don’t have to worry about the bottom section. The top section, on the other hand, has our HTML file’s data. While we were in the View, we saw a call to a template partial named hello and then inside it we referenced a placeholder called greeting . The way to set these placeholders is by referencing the global Template variable, and to set the value by following this pattern: Template.template_name.placeholder_name So in our example it would be: Template.hello.greeting And if you take a look at the first thing inside the isClient variable’s if statement, you will find exactly this. Here, it is set to a function, which returns a simple string. You can set it directly to a string, but then it’s not dynamic. Usually the only reason you are defining a View-Model variable is because it’s something that has to be computed via a function, so that’s why they did it like that. But there are cases where you may just want to reference a simple string, and that’s fine. To recap, so far in the View we have a reference to a piece of data named greeting inside a template partial called hello, which we are setting in the View-Model to the string Welcome to firstApp. The last part of the JS file is the part that handles events on the page; it does this by passing an event-map to a template’s events function. This follows the same notation as the previous, so you type: Template.template_name.events( events_map ); I’ll paste the example’s code here, for further illustration: Template.hello.events({ ‘click input’ : function () { // template data, if any, is available in ‘this’ if (typeof console !== ‘undefined’) console.log(“You pressed the button”); } }); Inside each events object, you place the action and target as the key, and you set a function as the value. The actions are standard JavaScript actions, so you have things such as click, dblclick, keydown, and so on. Targets use standard CSS notation, which is periods for classes, hash symbols for IDs, and just the tag name for HTML tags. Whenever the event happens (for example, the input is clicked) the attached function will be called. To view the full gist of event types, you can take a look at the full list here: http://docs.meteor.com/#template_events It would be a lot shorter if there wasn’t a comment or an if statement to make sure the console is defined. 
But basically the function will just output the words You pressed the button to the console every time you pressed the button. Pretty intuitive! So we went through the files, all that’s left to do is actually test them. To do this, go back to the terminal, and make sure you’re in the firstApps folder. This can be achieved by using ls again to make sure the three files are there, and by using cd ~/meteorApps/firstApp if you are not looking in the right folder. Next, just type meteor and hit Enter, which will cause Meteor to compile everything together and run the built-in web server. If this is done right, you should see a message saying something like: Running on: http: // localhost:3000/ Navigate your browser to the location specified (http : //localhost:3000), and you should see the app that we just created. If your browser has a console, you can open it up and click the button. Doing so will display the message You pressed the button, similar to the one we saw in the JS file. I hope it all makes sense now, but to drive the point home, we will make a few adjustments of our own. In the terminal window, press Ctrl + C to close the Meteor server, then open up the HTML file. A quick revision After the call to the hello template inside the body tags, add a call to another template named quickStart. Here is the new body section along with the completed quickStart template: <body> {{> hello}} {{> quickStart}}</body><template name=”quickStart”> <h3>Click Counter</h3> The Button has been pressed {{numClick}} time(s) <input type=”button” id=”counter” value=”CLICK ME!!!” /></template> Summary I wanted to keep it as similar to the other template as possible, not to throw too much at you all at once. It simply contains a title enclosed in the header tags followed by a string of text with a placeholder named numClick and a button with an id value of counter. There’s nothing radically different over the other template, so you should be fairly comfortable with it. Now save this and open the JS file. What we are adding to the page is a counter that will display the number of times the button was pressed. We do this by telling Meteor that the placeholder relies on a specific piece of data; Meteor will then track this data and every time it gets changed, the page will be automatically updated. The easiest way to set this up is by using Meteor’s Session object. Session is a key-value store object, which allows you to store and retrieve data inside Meteor. You set data using the set method, passing in a name (key) and value; you can then retrieve that stored info by calling the get method, passing in the same key. Besides the Session object bit, everything else is the same. So just add the following part right after the hello template’s events call, and make sure it’s inside the isClient variable’s if statement: Template.quickStart.numClick = function(){ var pcount = Session.get(“pressed_count”); return (pcount) ? pcount : 0; } This function gets the current number of clicks—stored with a key of pressed_count —and returns it, defaulting to zero if the value was never set. Since we are using the pressed_count property inside the placeholder’s function, Meteor will automatically update this part of the HTML whenever pressed_count changes. Last but not least we have to add the event-map; put the following code snippet right after the previous code: Template.quickStart.events({ ‘click #counter’ : function(){ var pcount = Session.get(“pressed_count”); pcount = (pcount) ? 
Here we have a click event for our button with the counter ID, and the attached function just gets the current count and increments it by one. To try it out, save this file, and in the terminal window, while still in the project's directory, type meteor to restart the web server. Try clicking the button a few times, and if all went well the text should update with an incrementing value.
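If your Meteor version provides Session.setDefault (an assumption about your version; this is a hedged refinement, not part of the original example), the counter can be written without the ternary fallbacks:

// Sketch of the same counter using Session.setDefault.
if (Meteor.isClient) {
  Session.setDefault("pressed_count", 0);

  Template.quickStart.numClick = function () {
    return Session.get("pressed_count");
  };

  Template.quickStart.events({
    'click #counter': function () {
      Session.set("pressed_count", Session.get("pressed_count") + 1);
    }
  });
}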

Setting up Node

(For more resources related to this topic, see here.)

System requirements

Node runs on POSIX-like operating systems, the various UNIX derivatives (Solaris, and so on) and workalikes (Linux, Mac OS X, and so on), as well as on Microsoft Windows, thanks to extensive assistance from Microsoft. Indeed, many of the Node built-in functions are direct corollaries to POSIX system calls. It can run on machines both large and small, including tiny ARM devices such as the Raspberry Pi microscale embeddable computer for DIY software/hardware projects.

Node is now available via package management systems, limiting the need to compile and install from source. Installing from source requires having a C compiler (such as GCC) and Python 2.7 (or later). If you plan to use encryption in your networking code, you will also need the OpenSSL cryptographic library. The modern UNIX derivatives almost certainly come with these, and Node's configure script (see later when we download and configure the source) will detect their presence. If you should have to install them, Python is available at http://python.org and OpenSSL is available at http://openssl.org.

Installing Node using package managers

The preferred method for installing Node, now, is to use the versions available in package managers such as apt-get or MacPorts. Package managers simplify your life by helping to maintain the current version of the software on your computer and ensuring that dependent packages are updated as necessary, all by typing a simple command such as apt-get update. Let's go over this first.

Installing on Mac OS X with MacPorts

The MacPorts project (http://www.macports.org/) has for years been packaging a long list of open source software packages for Mac OS X, and they have packaged Node. After you have installed MacPorts using the installer on their website, installing Node is pretty much this simple:

$ sudo port search nodejs
nodejs @0.10.6 (devel, net)
    Evented I/O for V8 JavaScript
nodejs-devel @0.11.2 (devel, net)
    Evented I/O for V8 JavaScript
Found 2 ports.
--
npm @1.2.21 (devel)
    node package manager
$ sudo port install nodejs npm
.. long log of downloading and installing prerequisites and Node

Installing on Mac OS X with Homebrew

Homebrew is another open source software package manager for Mac OS X, which some say is the perfect replacement for MacPorts. It is available through their home page at http://mxcl.github.com/homebrew/. After installing Homebrew using the instructions on their website, using it to install Node is as simple as this:

$ brew search node
leafnode    node
$ brew install node
==> Downloading http://nodejs.org/dist/v0.10.7/node-v0.10.7.tar.gz
######################################################################## 100.0%
==> ./configure --prefix=/usr/local/Cellar/node/0.10.7
==> make install
==> Caveats
Homebrew installed npm.
We recommend prepending the following path to your PATH environment
variable to have npm-installed binaries picked up:
  /usr/local/share/npm/bin
==> Summary
/usr/local/Cellar/node/0.10.7: 870 files, 16M, built in 21.9 minutes

Installing on Linux from package management systems

While it's still premature for Linux distributions or other operating systems to prepackage Node with their OS, that doesn't mean you cannot install it using the package managers. Instructions on the Node wiki currently list packaged versions of Node for Debian, Ubuntu, OpenSUSE, and Arch Linux; see https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager.

For example, on Debian sid (unstable):

# apt-get update
# apt-get install nodejs  # Documentation is great.

And on Ubuntu:

# sudo apt-get install python-software-properties
# sudo add-apt-repository ppa:chris-lea/node.js
# sudo apt-get update
# sudo apt-get install nodejs npm
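Whichever package manager you use, a quick sanity check confirms that the binaries landed on your PATH (a hedged example; the version numbers printed will differ on your machine):

$ node --version
v0.10.7
$ npm --version
1.2.21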
We can expect in due course that the Linux distros and other operating systems will routinely bundle Node into the OS like they do with other languages today.

Installing the Node distribution from nodejs.org

The nodejs.org website offers prebuilt binaries for Windows, Mac OS X, Linux, and Solaris. You simply go to the website, click on the Install button, and run the installer. For systems with package managers, such as the ones we've just discussed, it's preferable to use that installation method, because you'll find it easier to stay up-to-date with the latest version. However, on Windows this method may be preferred. For Mac OS X, the installer is a PKG file giving the typical installation process. For Windows, the installer simply takes you through the typical install wizard process. Once finished with the installer, you have a command-line tool with which to run Node programs. The pre-packaged installers are the simplest way to install Node, for those systems for which they're available.

Installing Node on Windows using Chocolatey Gallery

Chocolatey Gallery is a package management system, built on top of NuGet. Using it requires a Windows machine modern enough to support Powershell and the .NET Framework 4.0. Once you have Chocolatey Gallery (http://chocolatey.org/), installing Node is as simple as this:

C:> cinst nodejs

Installing the StrongLoop Node distribution

StrongLoop (http://strongloop.com) has put together a supported version of Node that is prepackaged with several useful tools. This is a Node distribution in the same sense in which Fedora or Ubuntu are Linux distributions. StrongLoop brings together several useful packages, some of which were written by StrongLoop. StrongLoop tests the packages together, and distributes installable bundles through their website. The packages in the distribution include Express, Passport, Mongoose, Socket.IO, Engine.IO, Async, and Request. We will use all of those modules in this book.

To install, navigate to the company home page and click on the Products link. They offer downloads of precompiled packages for both RPM and Debian Linux systems, as well as Mac OS X and Windows. Simply download the appropriate bundle for your system.

For the RPM bundle, type the following:

$ sudo rpm -i bundle-file-name

For the Debian bundle, type the following:

$ sudo dpkg -i bundle-file-name

The Windows and Mac bundles are the usual sort of installable packages for each system. Simply double-click on the installer bundle, and follow the instructions in the install wizard. Once StrongLoop Node is installed, it provides not only the node and npm commands (we'll go over these in a few pages), but also the slnode command. That command offers a superset of the npm commands, such as boilerplate code for modules, web applications, or command-line applications.

Installing from source on POSIX-like systems

Installing the pre-packaged Node distributions is currently the preferred installation method.
However, installing Node from source is desirable in a few situations:

- It could let you optimize the compiler settings as desired
- It could let you cross-compile, say for an embedded ARM system
- You might need to keep multiple Node builds for testing
- You might be working on Node itself

Now that you have the high-level view, let's get our hands dirty mucking around in some build scripts. The general process follows the usual configure, make, and make install routine that you may already have performed with other open source software packages. If not, don't worry, we'll guide you through the process. The official installation instructions are in the Node wiki at https://github.com/joyent/node/wiki/Installation.

Installing prerequisites

As noted a minute ago, there are three prerequisites: a C compiler, Python, and the OpenSSL libraries. The Node installation process checks for their presence and will fail if the C compiler or Python is not present. The specific method of installing these is dependent on your operating system. These commands will check for their presence:

$ cc --version
i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5666) (dot 3)
Copyright (C) 2007 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

$ python
Python 2.6.6 (r266:84292, Feb 15 2011, 01:35:25)
[GCC 4.2.1 (Apple Inc. build 5664)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

Installing developer tools on Mac OS X

The developer tools (such as GCC) are an optional installation on Mac OS X. There are two ways to get those tools, both of which are free. On the OS X installation DVD is a directory labeled Optional Installs, in which there is a package installer for, among other things, the developer tools, including Xcode. The other method is to download the latest copy of Xcode (for free) from http://developer.apple.com/xcode/. Most other POSIX-like systems, such as Linux, include a C compiler with the base system.

Installing from source for all POSIX-like systems

First, download the source from http://nodejs.org/download. One way to do this is with your browser, and another way is as follows:

$ mkdir src
$ cd src
$ wget http://nodejs.org/dist/v0.10.7/node-v0.10.7.tar.gz
$ tar xvfz node-v0.10.7.tar.gz
$ cd node-v0.10.7

The next step is to configure the source so that it can be built. It is done with the typical sort of configure script, and you can see its long list of options by running the following:

$ ./configure --help

To cause the installation to land in your home directory, run it this way:

$ ./configure --prefix=$HOME/node/0.10.7
..output from configure

If you want to install Node in a system-wide directory, simply leave off the --prefix option, and it will default to installing in /usr/local. After a moment it'll stop, having most likely configured the source tree for installation in your chosen directory. If this doesn't succeed, it will print a message about something that needs to be fixed. Once the configure script is satisfied, compile the software:

$ make
.. a long log of compiler output is printed
$ make install
If you are installing into a system-wide directory, do the last step this way instead:

$ make
$ sudo make install

Once installed, you should make sure to add the installation directory to your PATH variable as follows:

$ echo 'export PATH=$HOME/node/0.10.7/bin:${PATH}' >>~/.bashrc
$ . ~/.bashrc

For csh users, use this syntax to make an exported environment variable:

$ echo 'setenv PATH $HOME/node/0.10.7/bin:${PATH}' >>~/.cshrc
$ source ~/.cshrc

This should result in some directories like this:

$ ls ~/node/0.10.7/
bin include lib share
$ ls ~/node/0.10.7/bin
node node-waf npm

Maintaining multiple Node installs simultaneously

Normally you won't have multiple versions of Node installed, and doing so adds complexity to your system. But if you are hacking on Node itself, or are testing against different Node releases, or any of several similar situations, you may want to have multiple Node installations. The method to do so is a simple variation on what we've already discussed. As you may have noticed in the instructions discussed earlier, the --prefix option was used in a way that directly supports installing several Node versions side-by-side in the same directory:

$ ./configure --prefix=$HOME/node/0.10.7

And:

$ ./configure --prefix=/usr/local/node/0.10.7

This initial step determines the install directory. Clearly, whenever a new version is released (Version 0.12.15, or whichever comes next), you can change the install prefix to have the new version installed side-by-side with the previous versions. Switching between Node versions is simply a matter of changing the PATH variable (on POSIX systems), as follows:

$ export PATH=/usr/local/node/0.10.7/bin:${PATH}

It starts to be a little tedious to maintain this after a while. For each release, you have to set up Node, npm, and any third-party modules you desire in your Node install; also, the command shown to change your PATH is not quite optimal. Inventive programmers have created several version managers to make this easier by automatically setting up not only Node, but npm also, and providing commands to change your PATH the smart way (a bare-bones sketch of the idea follows this list):

- Node version manager: https://github.com/visionmedia/n
- Nodefront, aids in rapid frontend development: http://karthikv.github.io/nodefront/
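As a minimal sketch of what such a tool automates (illustrative only, assuming versions are installed under /usr/local/node as shown above; this is no replacement for a real version manager), a small shell function can rewrite PATH for you:

# Hypothetical helper: switch the active Node version by rewriting PATH.
# Assumes each version lives under /usr/local/node/<version>/bin.
use_node() {
    local version="$1"
    if [ ! -d "/usr/local/node/${version}/bin" ]; then
        echo "Node ${version} is not installed under /usr/local/node" >&2
        return 1
    fi
    export PATH="/usr/local/node/${version}/bin:${PATH}"
    node --version
}

Calling use_node 0.10.7 would then print v0.10.7 and leave that version first on your PATH for the current shell session.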

Developing with Entity Metadata Wrappers

(For more resources related to this topic, see here.)

Introducing entity metadata wrappers

Entity metadata wrappers, or wrappers for brevity, are PHP wrapper classes that simplify code dealing with entities. They abstract structure so that a developer can write generic code when accessing entities and their properties. Wrappers also implement PHP iterator interfaces, making it easy to loop through all properties of an entity or all values of a multiple-value property. The magic of wrappers is in their use of the following three classes:

- EntityStructureWrapper
- EntityListWrapper
- EntityValueWrapper

The first has a subclass, EntityDrupalWrapper, and is the entity structure object that you'll deal with the most. Entity property values are either data, an array of values, or an array of entities. The EntityListWrapper class wraps an array of values or entities. As a result, generic code must inspect the value type before doing anything with a value, in order to prevent exceptions from being thrown.

Creating an entity metadata wrapper object

Let's take a look at two hypothetical entities that expose data from the following two database tables:

- ingredient
- recipe_ingredient

The ingredient table has two fields: iid and name. The recipe_ingredient table has four fields: riid, iid, qty, and qty_unit. The schema would be as follows:

Schema for ingredient and recipe_ingredient tables

To load and wrap an ingredient entity with an iid of 1, we would use the following line of code:

$wrapper = entity_metadata_wrapper('ingredient', 1);

To load and wrap a recipe_ingredient entity with an riid of 1, we would use this line of code:

$wrapper = entity_metadata_wrapper('recipe_ingredient', 1);

Now that we have a wrapper, we can access the standard entity properties.

Standard entity properties

The first argument of the entity_metadata_wrapper function is the entity type, and the second argument is the entity identifier, which is the value of the entity's identifying property. Note that it is not necessary to supply the bundle, as identifiers are properties of the entity type. When an entity is exposed to Drupal, the developer selects one of the database fields to be the entity's identifying property and another field to be the entity's label property. In our previous hypothetical example, a developer would declare iid as the identifying property and name as the label property of the ingredient entity. These two abstract properties, combined with the type property, are essential for making our code apply to multiple data structures that have different identifier fields.

A point of terminology: type is the actual name of the property storing the entity's type, whereas the identifying property and the label property are metadata in the entity declaration. Code uses that metadata to look up the correct property name under which each entity stores its identifier and label. To illustrate this, consider the following snippet of the entity_id() function in the entity module:

$info = entity_get_info($entity_type);
$key = isset($info['entity keys']['name']) ? $info['entity keys']['name'] : $info['entity keys']['id'];
return isset($entity->$key) ? $entity->$key : NULL;

As you can see, the entity information is retrieved on the first line, then the identifying property name is retrieved from that information on the second line. That name is then used to retrieve the identifier from the entity. Note that it's possible to use a non-integer identifier, so remember to take that into account for any generic code.
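To make that metadata concrete, here is a hedged sketch of how our hypothetical ingredient entity might declare these keys in hook_entity_info(); the controller class and other required details are omitted, and only the standard Drupal 7 array keys are shown:

// Hypothetical sketch: declaring the identifying and label properties
// of the ingredient entity. Generic code reads this via entity_get_info().
function ingredient_entity_info() {
  return array(
    'ingredient' => array(
      'label' => t('Ingredient'),
      'base table' => 'ingredient',
      'entity keys' => array(
        'id' => 'iid',     // the identifying property
        'label' => 'name', // the label property
      ),
    ),
  );
}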
The label property can either be a database field name or a hook. The entity-exposing developer can declare a hook that generates a label for their entity when the label is more complicated, such as what we would need for recipe_ingredient. For that, we would need to combine the qty and qty_unit properties with the name property of the referenced ingredient.

Entity introspection

In order to see the properties that an entity has, you can call the getPropertyInfo() method on the entity wrapper. This may save you time when debugging. You can have a look by sending it to the devel module's dpm() function or to var_dump:

dpm($wrapper->getPropertyInfo());
var_dump($wrapper->getPropertyInfo());

Using an entity metadata wrapper

The standard operations for entities are CRUD: create, retrieve, update, and delete. Let's look at each of these operations in some example code. The code is part of the pde module's Drush file: sites/all/modules/pde/pde.drush.inc. Each CRUD operation is implemented in a Drush command, and the relevant code is given in the following subsections. Before each code example, there are two example command lines: the first shows you how to execute the Drush command for the operation; the second is the help command.

Create

Creation of entities is implemented in the drush_pde_entity_create function.

Drush commands

The following examples show the usage of the entity-create (ec) Drush command and how to obtain help documentation for the command:

$ drush ec ingredient '{"name": "Salt, pickling"}'
$ drush help ec

Code snippet

$entity = entity_create($type, $data);
// Can call $entity->save() here, or wrap to play and save
$wrapper = entity_metadata_wrapper($type, $entity);
$wrapper->save();

In the highlighted lines we create an entity, wrap it, and then save it. The first line uses entity_create, to which we pass the entity type and an associative array having property names as keys and their values. The function returns an object that has Entity as its base class. The save() method does all the hard work of storing our entity in the database. No more calls to db_insert are needed! Whether you use the save() method on the wrapper or on the Entity object really depends on what you need to do before and after the save() method call. For example, if you need to plug values into fields before you save the entity, it's handy to use a wrapper.

Retrieve

The retrieving (reading) of entities is implemented in the drush_pde_print_entity() function.

Drush commands

The following examples show the usage of the entity-read (er) Drush command and how to obtain help documentation for the command:

$ drush er ingredient 1
$ drush help er

Code snippet

$header = ' Entity (' . $wrapper->type();
$header .= ') - ID# ' . $wrapper->getIdentifier() . ':';
// equivalents: $wrapper->value()->entityType()
//              $wrapper->value()->identifier()
$rows = array();
foreach ($wrapper as $pkey => $property) {
  // $wrapper->$pkey === $property
  if (!($property instanceof EntityValueWrapper)) {
    $rows[$pkey] = $property->raw() . ' (' . $property->label() . ')';
  }
  else {
    $rows[$pkey] = $property->value();
  }
}

On the first line, we call the type() method of the wrapper, which returns the wrapped entity's type. The wrapped Entity object is returned by the value() method of the wrapper.
Using wrappers gives us the wrapper benefits, and we can use the entity object directly! The second line calls the getIdentifier() method of the wrapper. This is the way in which you retrieve the entity's ID without knowing the identifying property name; we discussed the identifying property of an entity a moment ago.

Thanks to our wrapper object implementing the IteratorAggregate interface, we are able to use a foreach statement to iterate through all of the entity properties. Of course, it is also possible to access a single property by using its key. For example, to access the name property of our hypothetical ingredient entity, we would use $wrapper->name.

The last three highlights are the raw(), label(), and value() method calls. The distinction between these is very important, and is as follows:

- raw(): This returns the property's value straight from the database.
- label(): This returns the value of an entity's label property, for example, name.
- value(): This returns a property's wrapped data: either a value or another wrapper.

Finally, the raw() and value() methods retrieve the property values for us. These methods are interchangeable when simple entities are used, as there's no difference between the storage value and the property value. However, for complex properties such as dates, there is a difference. Therefore, as a rule of thumb, always use the value() method unless you absolutely need to retrieve the storage value. The example code is using the raw() method only so that we can explore it, and all remaining examples in this book will stick to the rule of thumb. I promise!

Storage value: This is the value of a property in the underlying storage medium, for example, the database.

Property value: This is the value of a property at the entity level, after the value is converted from its storage value to something more pleasing, for example, date formatting of a Unix timestamp.

Multi-valued properties need a quick mention here. Reading these is quite straightforward, as they are accessible as an array. You can use array notation to get an element, and use a foreach to loop through them! The following is a hypothetical code snippet to illustrate this:

$output = 'First property: ';
$output .= $wrapper->property[0]->value();
foreach ($wrapper->property as $vwrapper) {
  $output .= $vwrapper->value();
}

Summary

This article delved into development using entity metadata wrappers for safe CRUD operations and entity introspection.

Working sample for controlling the mouse by hand

(For more resources related to this topic, see here.)

Getting ready

Create a project in Visual Studio and prepare it for working with OpenNI and NiTE.

How to do it...

Copy the ReadLastCharOfLine() and HandleStatus() functions to the top of your source code (just below the #include lines). Then add the following lines of code:

class MouseController : public nite::HandTracker::NewFrameListener {
  private:
    float startPosX, startPosY;
    int curX, curY;
    nite::HandId handId;
    RECT desktopRect;
  public:
    MouseController(){
      startPosX = startPosY = -1;
      POINT curPos;
      if (GetCursorPos(&curPos)) {
        curX = curPos.x;
        curY = curPos.y;
      }else{
        curX = curY = 0;
      }
      handId = -1;
      const HWND hDesktop = GetDesktopWindow();
      GetWindowRect(hDesktop, &desktopRect);
    }
    void onNewFrame(nite::HandTracker& hTracker){
      nite::Status status = nite::STATUS_OK;
      nite::HandTrackerFrameRef newFrame;
      status = hTracker.readFrame(&newFrame);
      if (!HandleStatus(status) || !newFrame.isValid())
        return;
      const nite::Array<nite::GestureData>& gestures =
        newFrame.getGestures();
      for (int i = 0; i < gestures.getSize(); ++i){
        if (gestures[i].isComplete()){
          if (gestures[i].getType() == nite::GESTURE_CLICK){
            INPUT Input = {0};
            Input.type = INPUT_MOUSE;
            Input.mi.dwFlags = MOUSEEVENTF_LEFTDOWN |
                               MOUSEEVENTF_LEFTUP;
            SendInput(1, &Input, sizeof(INPUT));
          }else{
            nite::HandId handId;
            status = hTracker.startHandTracking(
              gestures[i].getCurrentPosition(), &handId);
          }
        }
      }
      const nite::Array<nite::HandData>& hands = newFrame.getHands();
      for (int i = hands.getSize() - 1; i >= 0; --i){
        if (hands[i].isTracking()){
          if (hands[i].isNew() || handId != hands[i].getId()){
            status = hTracker.convertHandCoordinatesToDepth(
              hands[i].getPosition().x,
              hands[i].getPosition().y,
              hands[i].getPosition().z,
              &startPosX, &startPosY);
            handId = hands[i].getId();
            if (status != nite::STATUS_OK){
              startPosX = startPosY = -1;
            }
          }else if (startPosX >= 0 && startPosY >= 0){
            float posX, posY;
            status = hTracker.convertHandCoordinatesToDepth(
              hands[i].getPosition().x,
              hands[i].getPosition().y,
              hands[i].getPosition().z,
              &posX, &posY);
            if (status == nite::STATUS_OK){
              if (abs(int(posX - startPosX)) > 10)
                curX += ((posX - startPosX) - 10) / 3;
              if (abs(int(posY - startPosY)) > 10)
                curY += ((posY - startPosY) - 10) / 3;
              curX = min(curX, desktopRect.right);
              curX = max(curX, desktopRect.left);
              curY = min(curY, desktopRect.bottom);
              curY = max(curY, desktopRect.top);
              SetCursorPos(curX, curY);
            }
          }
          break;
        }
      }
    }
};

Then locate the following line:

int _tmain(int argc, _TCHAR* argv[]) {

Add the following inside this function:

nite::Status status = nite::STATUS_OK;
status = nite::NiTE::initialize();
if (!HandleStatus(status)) return 1;

printf("Creating hand tracker ...\r\n");
nite::HandTracker hTracker;
status = hTracker.create();
if (!HandleStatus(status)) return 1;

MouseController* listener = new MouseController();
hTracker.addNewFrameListener(listener);
hTracker.startGestureDetection(nite::GESTURE_HAND_RAISE);
hTracker.startGestureDetection(nite::GESTURE_CLICK);

printf("Reading data from hand tracker ...\r\n");
ReadLastCharOfLine();
nite::NiTE::shutdown();
openni::OpenNI::shutdown();
return 0;

How it works...

Both the ReadLastCharOfLine() and HandleStatus() functions are present here too. These functions are well known to you and don't need any explanation.

Then, in the second part, we declared a class/struct that we are going to use for capturing the new-data-available event from the nite::HandTracker object. But the definition of this class is a little different here.
Other than the onNewFrame() method, we defined a number of variables and a constructor method for this class too. We also changed its name to MouseController, to make more sense of it.

class MouseController : public nite::HandTracker::NewFrameListener {
  private:
    float startPosX, startPosY;
    int curX, curY;
    nite::HandId handId;
    RECT desktopRect;

As you can see, our class is still a child class of nite::HandTracker::NewFrameListener, because we are going to use it to listen to the nite::HandTracker events. Also, we defined six variables in our class. startPosX and startPosY are going to hold the initial position of the active hand, whereas curX and curY are going to hold the position of the mouse when in motion. The handId variable is responsible for holding the ID of the active hand, and desktopRect for holding the size of the desktop, so that we can move our mouse only within this area. These variables are all private; this means they will not be accessible from outside the class.

Then we have the class's constructor method, which initializes some of the preceding variables. Refer to the following code:

  public:
    MouseController(){
      startPosX = startPosY = -1;
      POINT curPos;
      if (GetCursorPos(&curPos)) {
        curX = curPos.x;
        curY = curPos.y;
      }else{
        curX = curY = 0;
      }
      handId = -1;
      const HWND hDesktop = GetDesktopWindow();
      GetWindowRect(hDesktop, &desktopRect);
    }

In the constructor, we set both startPosX and startPosY to -1 and then store the current position of the mouse in the curX and curY variables. Then we set the handId variable to -1 to mark that there is no active hand currently, and retrieve the value of desktopRect using two Windows API methods, GetDesktopWindow() and GetWindowRect().

The most important tasks happen in the onNewFrame() method. This method is the one that will be called when new data becomes available in nite::HandTracker, and it is responsible for processing this data. Since this method runs only when new data is available, the first thing to do in its body is to read this data. So we used the nite::HandTracker::readFrame() method to read the data from this object:

    void onNewFrame(nite::HandTracker& hTracker){
      nite::Status status = nite::STATUS_OK;
      nite::HandTrackerFrameRef newFrame;
      status = hTracker.readFrame(&newFrame);

When working with nite::HandTracker, the first thing to do after reading the data is to handle gestures, if you expect any. We expect the Hand Raise gesture to detect new hands and the Click gesture to perform the mouse click:

      const nite::Array<nite::GestureData>& gestures =
        newFrame.getGestures();
      for (int i = 0; i < gestures.getSize(); ++i){
        if (gestures[i].isComplete()){
          if (gestures[i].getType() == nite::GESTURE_CLICK){
            INPUT Input = {0};
            Input.type = INPUT_MOUSE;
            Input.mi.dwFlags = MOUSEEVENTF_LEFTDOWN |
                               MOUSEEVENTF_LEFTUP;
            SendInput(1, &Input, sizeof(INPUT));
          }else{
            nite::HandId handId;
            status = hTracker.startHandTracking(
              gestures[i].getCurrentPosition(), &handId);
          }
        }
      }

As you can see, we retrieved the list of all the gestures using nite::HandTrackerFrameRef::getGestures() and then looped through them, searching for the ones that are in the completed state. If one is the nite::GESTURE_CLICK gesture, we need to perform a mouse click; we used the SendInput() function from the Windows API to do that here.
But if the recognized gesture wasn't of the type nite::GESTURE_CLICK, it must be a nite::GESTURE_HAND_RAISE gesture, so we need to request tracking of this newly recognized hand using the nite::HandTracker::startHandTracking() method.

The next thing is to take care of the hands being tracked. To do so, we first need to retrieve a list of them using the nite::HandTrackerFrameRef::getHands() method and then loop through them. This could be done with a simple for loop, as we used for the gestures, but since we want to read this list in reverse order, we need a reverse for loop. The reason for reading the list in reverse is that we always want the last recognized hand to control the mouse:

      const nite::Array<nite::HandData>& hands = newFrame.getHands();
      for (int i = hands.getSize() - 1; i >= 0; --i){

Then we need to make sure that the current hand is being tracked, because we don't want an invisible hand to control the mouse. The first tracked hand is the one we want, so we break out of the loop there, of course after the processing part, which is omitted from the following snippet to make it clearer:

        if (hands[i].isTracking()){
          . . .
          break;
        }

Speaking of processing, in place of the three dots in the preceding code we have another condition. This condition is responsible for finding out if this hand is the same one that had control of the mouse in the last frame. If it is a new hand (either a newly recognized hand or a newly active one), we need to save its current position in the startPosX and startPosY variables:

          if (hands[i].isNew() || handId != hands[i].getId()){
            status = hTracker.convertHandCoordinatesToDepth(
              hands[i].getPosition().x,
              hands[i].getPosition().y,
              hands[i].getPosition().z,
              &startPosX, &startPosY);
            handId = hands[i].getId();
            if (status != nite::STATUS_OK){
              startPosX = startPosY = -1;
            }

If it was the same hand, we have another condition: do we already have the startPosX and startPosY values, or not? If we have them, we can calculate the mouse's movement. But first we need to calculate the position of the hand relative to the depth frame:

          }else if (startPosX >= 0 && startPosY >= 0){
            float posX, posY;
            status = hTracker.convertHandCoordinatesToDepth(
              hands[i].getPosition().x,
              hands[i].getPosition().y,
              hands[i].getPosition().z,
              &posX, &posY);

Once the conversion ends, we need to calculate the new position of the mouse depending on how the hand's position changed. But we want to define a safe area in which the cursor stays static when small changes happen, so we calculate the new position of the mouse only if the hand has moved by more than 10 pixels in our depth frame:

            if (status == nite::STATUS_OK){
              if (abs(int(posX - startPosX)) > 10)
                curX += ((posX - startPosX) - 10) / 3;
              if (abs(int(posY - startPosY)) > 10)
                curY += ((posY - startPosY) - 10) / 3;

As you can see in the preceding code, we also divided the changes by 3 because we didn't want the cursor to move too fast. But before setting the position of the mouse, we need to make sure that the new position is within the screen viewport, using the desktopRect variable:

              curX = min(curX, desktopRect.right);
              curX = max(curX, desktopRect.left);
              curY = min(curY, desktopRect.bottom);
              curY = max(curY, desktopRect.top);

After calculating everything, we can set the new position of the mouse using SetCursorPos() from the Windows API:

              SetCursorPos(curX, curY);

Steps three and four are not markedly different.
In this step, we have the initialization process; this includes the initialization of NiTE and the creation of the nite::HandTracker variable:

status = nite::NiTE::initialize();
. . .
nite::HandTracker hTracker;
status = hTracker.create();

Then we add our newly defined structure/class as a listener to nite::HandTracker, so that nite::HandTracker can call it later when a new frame becomes available:

MouseController* listener = new MouseController();
hTracker.addNewFrameListener(listener);

Also, we need an active search for a hand gesture, because we need to locate the position of the hands, and we also have to search for another gesture for the mouse click. So we call the nite::HandTracker::startGestureDetection() method twice, for both the Click (also known as push) and Hand Raise gestures:

hTracker.startGestureDetection(nite::GESTURE_HAND_RAISE);
hTracker.startGestureDetection(nite::GESTURE_CLICK);

At the end, we wait until the user presses the Enter key to end the app. We do nothing more in our main thread except wait; everything happens in another thread:

ReadLastCharOfLine();
nite::NiTE::shutdown();
openni::OpenNI::shutdown();
return 0;

Summary

In this article, we learned how to write a working example using nite::HandTracker, controlled the position of the mouse cursor using the NiTE hand tracker feature, and simulated a click event.

The Media module

(For more resources related to this topic, see here.)

While there are many ways to build image integration into Drupal, they all stem from different requirements, and each option should be carefully reviewed. Browsing the over 300 modules available in the Media category of Drupal's module search for Drupal 7 (http://drupal.org/project/modules) may leave you confused as to where to begin. We'll take a look at the Media module (http://drupal.org/project/media), which was sponsored by companies such as Acquia, Palantir, and Advomatic, and was created to provide a solid infrastructure and common APIs for working with media assets, and images specifically.

To begin, download the 7.x-2.x version of the Media module (currently regarded as unstable, but fairly different from 7.x-1.x, which it will replace soon enough) and unpack it to the sites/all/modules directory as we did before. The Media module also requires the File entity module (http://drupal.org/project/file_entity), which further extends how files are managed within Drupal by providing a fieldable file entity, display modes, and more. Use the 7.x-2.x unstable version for the File entity module too, and download and unpack as always.

To enable these modules, navigate to the top administrative bar and click on Modules; scrolling to the bottom of the page, we see the Media category with a collection of modules. Toggle on all of them (Media field and Media Internet sources) and click on Save configuration.

Adding a media asset field

If you noticed something missing in the rezepi content type fields earlier, you were right: what kind of recipes website would this be without some visual stimulation? Yes, we mean pictures! To add a new field, navigate to Structure | Content Types | rezepi | manage fields (/admin/structure/types/manage/rezepi/fields). Name the new field Picture, choose Image as the FIELD TYPE and Media file selector for the WIDGET select box, and click on Save.

As always, we are about to configure the new field's settings, but a step before that presents the global settings for this new field; these are okay to leave as they are, so we will continue and click on Save field settings. In the general field settings, most defaults are suitable, except that we want to toggle on the Required field setting and make sure the Allowed file extensions for uploaded files setting lists at least some common image types, so set it to PNG, GIF, JPG, JPEG. Click on Save settings to finalize; we've updated the rezepi content type, so let's start using it.

When adding a rezepi, the form for filling in the fields should be similar to the following:

The Picture field we defined to use as an image no longer has a file upload form element, but rather a button to Select media. Once we click on it, we can observe multiple tabbed options:

For now, we are concerned only with the Upload tab, and we submit our picture for this rezepi entry. After browsing your local folder and uploading the file, upon clicking Save we are presented with the new media asset form:

Our picture has been added to the website's media library, and we can notice that it's no longer just a file upload somewhere; rather, it's a media asset with a thumbnail created, and it even offers a way to configure the image's HTML attributes. We'll proceed by clicking on Save, and once more on the add new content form, to finalize this new rezepi submission.
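As a side note for developers, the same field could be created in code rather than through the UI. The following is a hedged sketch using Drupal 7's Field API; the field name (field_picture) and the widget machine name (media_generic) are assumptions that may differ between Media module versions:

// Hypothetical sketch: creating the Picture field programmatically.
$field = array(
  'field_name' => 'field_picture',
  'type' => 'image',
);
field_create_field($field);

$instance = array(
  'field_name' => 'field_picture',
  'entity_type' => 'node',
  'bundle' => 'rezepi',
  'label' => 'Picture',
  'required' => TRUE,
  'settings' => array('file_extensions' => 'png gif jpg jpeg'),
  // The widget name is an assumption; check your Media version.
  'widget' => array('type' => 'media_generic'),
);
field_create_instance($instance);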
The media library

To further explore the media asset tabs that we've seen before, we will edit the recently created rezepi entry and try to replace the previously uploaded picture with another. In the node's edit form, click on the Picture field's Select media button and browse the Library tab, which should resemble the following:

The Library tab is actually just a view (you can easily tell by the down-arrow and gear icons at the right of the screen) that lists all the files on your website. Furthermore, this view is equipped with some filters, such as the filename and media type, and even sorting options. Straightaway, we can notice that the picture for the rezepi we created earlier shows up there, because it has been added as a media asset to the library. We can choose to use it again in further content that we create on the website.

Without the Media module and its media asset management, we had to use the file field, which only allowed us to upload files to our content but never to reuse files that we, or other users, had uploaded previously. Aside from possibly being annoying, this also meant that we had to duplicate files if we needed the same media file for more than one content type.

The numbered images probably belong to some of the themes that we experimented with before, and the last two files are the images we uploaded to our memo content type. Because these files were created before the Media module was installed, they lack some of the metadata entries which the Media module keeps to better organize media assets.

To manage our media library, we can click on Content in the top administrative bar, which shows all content that has been created on your Drupal site. It features filtering and ordering of the columns to easily find content to moderate or investigate, and even provides some bulk-action updates on several content types. More importantly, after enabling the Media module we have a new option to choose from in the top right tabs: along with Content and Comments, we now have Files.

The page lists all file uploads, both prior to the Media module as well as afterwards, and clearly states the relevant metadata such as media type, size, and the user who uploaded each file. We can also choose between List view and Thumbnail view using the top right tab options, which offers a nicer view and management of our content.

The media library management page also features options to add media assets right from this page, using the Add file and Import files links. While we've already seen how adding a single media file works, adding a bunch of files is something new. The Import files option allows you to specify a directory on your web server which contains media files, and import them all to your Drupal website. After clicking on Preview, it will list the full paths to the files that were detected and will ask you to confirm before continuing with the import process. Once that's successfully completed, you can return to the files thumbnail view (/admin/content/file/thumbnails) and edit the imported files, possibly setting some title text or removing some entries.

You might be puzzled as to the point of importing media files directly from the server's web directory; after all, this would require one to have transferred the files there via FTP, SCP, or some other method, which is somewhat unconventional these days.
Your hunch is correct: the media import is a nice-to-have feature, but it's definitely not a replacement for bulk uploads of files from the web interface, which Drupal should support; we will learn about adding this capability later on.

When using the media library to manage these files, you will probably ask yourself, before deleting or replacing an image, where is it actually being used? For that reason, Drupal's internal file handling keeps track of which entity makes use of each file, and the Media module exposes this information via the web interface for us. Any information about a media asset is available in its Edit or View tabs, including where it is being used. Let's navigate through the media library to find the image we created previously for the rezepi entry and then click on Edit in the rightmost OPERATIONS column. On the Edit page, we can click on the USAGE tab at the top right of the page to get this information:

We can tell which entity type is using this file, see the title of the node it's being used for, with a link to it, and finally the usage count.

Using URL aliases

If you are familiar with Drupal's internal URL aliases, then you know that Drupal employs a convention of /node/<NID>[/ACTION], where NID is replaced by the node ID in the database and ACTION may be one of edit, view, or perhaps delete. To see this for yourself, you can click on one of the content items that we've previously created and, when viewing its full node display, observe the URL in your browser's address bar. When working with media assets, we can employ the same URL alias convention for files too, using the alias /file/<FID>[/ACTION]. For example, to see where the first file you've uploaded is being used, navigate in your browser to /file/1/usage.

Remote media assets

If we wanted to replace the picture for this rezepi by specifying a link to an image that we've encountered on a website, maybe even our friend's personal blog, the only way to have done that without the Media module was to download it and upload it using the file field's upload widget. With the Media module, we can specify the link for an image hosted by a remote resource using the Web tab. I've Googled some images, and after finding my choice for a picture, I simply copy and paste the image link into the URL input text as follows:

After clicking on Submit, the image file will be downloaded to our website's files directory, and the Media module will create the required metadata and present the picture's settings form before replacing our previous picture:

There are plenty of modules, such as Media: Flickr (http://drupal.org/project/media_flickr), which extend the Media module by providing integration with remote resources for images; Media: Flickr even supports Flickr photosets and slideshows. Just to list a few other modules:

- Media: Tableau (http://drupal.org/project/media_tableau) for integrating with the Tableau analytics platform
- Media: Slideshare (http://drupal.org/project/media_slideshare) for integrating with presentations on the Slideshare website
- Media: Dailymotion (http://drupal.org/project/media_dailymotion) for integrating with the Dailymotion video sharing website

The only thing left for you is to download them from http://drupal.org/modules and start experimenting!

Summary

In this article, we dived into deep water with creating our very own content type for a food recipe website.
In order to provide a better user experience when dealing with images in Drupal sites, we learned about the prominent Media module and its extensive support for media resources, such as providing a media library, and its key integration with other modules such as Media Gallery.

Working with Blocks

(For more resources related to this topic, see here.)

Creating a custom block type

Creating block types is a great way to add custom functionality to a website. This is the preferred way to add things like calendars, dealer locators, or any other type of content that is visible and repeatable on the frontend of the website.

Getting ready

The code for this recipe is available to download from the book's website for free. We are going to create a fully functioning block type that will display content on our website.

How to do it...

The steps for creating a custom block type are as follows:

First, you will need to create a directory in your website's root /blocks directory. The name of the directory should be underscored and will be used to refer to the block throughout the code. In this case, we will create a new directory called /hello_world. Once you have created the hello_world directory, you will need to create the following files:

- controller.php
- db.xml
- form.php
- add.php
- edit.php
- view.php
- view.css

Now, we will add code to each of the files. First, we need to set up the controller file. The controller file is what powers the block. Since this is a very basic block, our controller will only contain information telling concrete5 some details about our block, such as its name and description. Add the following code to controller.php:

class HelloWorldBlockController extends BlockController {

    protected $btTable = "btHelloWorld";
    protected $btInterfaceWidth = "300";
    protected $btInterfaceHeight = "300";

    public function getBlockTypeName() {
        return t('Hello World');
    }

    public function getBlockTypeDescription() {
        return t('A basic Hello World block type!');
    }
}

Notice that the class name is HelloWorldBlockController. concrete5 conventions dictate that you should name your block controllers with the same name as the block directory in camel case (CamelCase) form, followed by BlockController. The btTable class variable is important, as it tells concrete5 what database table should be used for this block. It is important that this table doesn't already exist in the database, so it's a good idea to give it a name of bt (short for "block type") plus the camel-cased version of the block name.

Now that the controller is set up, we need to set up the db.xml file. This file is based on the ADOXMLS format, which is documented at http://phplens.com/lens/adodb/docs-datadict.htm#xmlschema. This XML file tells concrete5 which database tables and fields should be created for this new block type (and which tables and fields should get updated when your block type gets updated). Add the following XML code to your db.xml file:

<?xml version="1.0"?>
<schema version="0.3">
  <table name="btHelloWorld">
    <field name="bID" type="I">
      <key />
      <unsigned />
    </field>
    <field name="title" type="C" size="255">
      <default value="" />
    </field>
    <field name="content" type="X2">
      <default value="" />
    </field>
  </table>
</schema>

concrete5 blocks typically have both an add.php and an edit.php file, both of which often do the same thing: show the form containing the block's settings.
Since we don't want to repeat code, we will enter our form HTML in a third file, form.php, and include it from both add.php and edit.php. Add the following to form.php:

<?php $form = Loader::helper('form'); ?>
<div>
  <label for="title">Title</label>
  <?php echo $form->text('title', $title); ?>
</div>
<div>
  <label for="content">Content</label>
  <?php echo $form->textarea('content', $content); ?>
</div>

Once that is all set, add this line of code to both add.php and edit.php to have this HTML code appear when users add and edit the block:

<?php include('form.php') ?>

Add the following HTML to your view.php file:

<h1><?php echo $title ?></h1>
<div class="content">
  <?php echo $content ?>
</div>

Finally, for a little visual appeal, add the following code to view.css:

.content {
  background: #eee;
  padding: 20px;
  margin: 20px 0;
  border-radius: 10px;
}

All of the files are now filled with the code to make our Hello World block function, so we need to install the block in concrete5 to be able to add it to our pages. To install the new block, you will need to sign into your concrete5 website and navigate to /dashboard/blocks/types/. If you happen to get a PHP fatal error here, clear your concrete5 cache by visiting /dashboard/system/optimization/clear_cache (it is always a good idea to disable the cache while developing in concrete5). At the top of the Block Types screen, you should see your Hello World block, ready to install. Click on the Install button. Now the block is installed and ready to add to your site!

How it works...

Let's go through the code that we just wrote, step by step. In controller.php, there are a few protected variables at the top of the class. The $btTable variable tells concrete5 which table in the database holds the data for this block type. The $btInterfaceWidth and $btInterfaceHeight variables determine the initial size of the dialog window that appears when users add your block to a page on their site.

We put the block's description and name in special getter functions for one reason: to support translations down the road. It's best practice to wrap any strings that appear in concrete5 in the global t() function.

The db.xml file tells concrete5 what database tables should be created when this block gets installed. This file uses the ADOXMLS format to generate tables and fields. In this file, we are telling concrete5 to create a table called btHelloWorld. That table should contain three fields: an ID field, the title field, and the content field. The names of these fields should be noted, because concrete5 will require them to match up with the names of the fields in the HTML form.

In form.php, we are setting up the settings form that users will fill out to save the block's content. We are using the Form Helper to generate the HTML for the various fields. Notice how we are able to use the $title and $content variables without them being declared yet; concrete5 automatically exposes those variables to the form whenever the block is added or edited. We then include this form in the add.php and edit.php files.

The view.php file is a template file that contains the HTML that end users will see on the website. We are just wrapping the title in an <h1> tag and the content in a <div> with a class of .content. concrete5 will automatically include view.css (and view.js, if it happens to exist) if they are present in your block's directory. Also, if you include an auto.js file, it will automatically be included when the block is in edit mode. We added some basic styling to the .content class, and concrete5 takes care of adding this CSS file to your site's <head> tag.
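As a purely hypothetical extension of this recipe (not part of the original code), the controller could also override the save() method to normalize the submitted data before concrete5 writes it to the btHelloWorld table; parent::save() still performs the actual database work:

// Hypothetical addition to HelloWorldBlockController:
// trim whitespace from the title before the block data is saved.
public function save($args) {
    $args['title'] = trim($args['title']);
    parent::save($args);
}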
Using block controller callback functions

The block controller class contains a couple of special functions that get automatically called at different points throughout the page load process. You can hook into these callbacks to power different functionalities of your block type.

Getting ready

To get started, you will need a block type created and installed. See the previous recipe for a lesson on creating a custom block type. We will be adding some methods to controller.php.

How to do it...

The steps for using block controller callback functions are as follows:

Open your block's controller.php file. Add a new function called on_start():

public function on_start() {
}

Write a die statement that will get fired when the controller is loaded:

die('hello world');

Refresh any page containing the block type. The page should stop rendering before it is complete, showing your debug message. Be sure to remove the die statement afterwards, otherwise your block won't work anymore!

How it works...

concrete5 will call the various callback functions at different points during the page load process. The on_start() function is the first to get called. It is a good place to put things that you want to happen before the block is rendered.

The next function that gets called depends on how you are interacting with the block. If you are just viewing it on a page, the view() function gets called. If you are adding or editing the block, then the add() or edit() function gets called as appropriate. These functions are a good place to send variables to the view, which we will show how to do in the next recipe. The save() and delete() functions also get called automatically at this point, if the block is performing either of those operations.

After that, concrete5 will call the on_before_render() function. This is a good time to add items to the page header and footer, since it runs before concrete5 renders the HTML for the page. We will be doing this later on in the article.

Finally, the on_page_view() function is called. This actually runs once the page is being rendered, so it is the last place where your block controller's code gets executed. This is helpful when adding HTML items to the page.

There's more...

The following functions can be added to your controller class, and they will get called automatically at different points throughout the block's loading process:

- on_start
- on_before_render
- view
- add
- edit
- on_page_view
- save
- delete

For a complete list of the callback functions available, check out the source for the block controller library, located in /concrete/core/libraries/block_controller.php.

Sending variables from the controller to the view

A common task in MVC programming is setting variables from a controller to a view. In concrete5, blocks follow the same principles. Fortunately, setting variables to the view is quite easy.

Getting ready

This recipe will use the block type that was created in the first recipe of this article. Feel free to adapt this code to work in any block controller, though.

How to do it...

In your block's controller, use the set() function of the controller class to send a variable and a value to the view. Note that the view doesn't necessarily have to be the view.php template of your block; you can send variables to add.php and edit.php as well. In this recipe, we will send a variable to view.php.
The steps are as follows:

Open your block's controller.php file. Add a function called view() if it doesn't already exist:

public function view() {
}

Set a variable called name to the view:

$this->set('name', 'John Doe');

Open view.php in your block's directory. Output the value of the name variable:

<div class="content">
  <?php echo $name ?>
</div>

Adding items to the page header and footer from the block controller

An important part of block development is being able to add JavaScript and CSS files to the page in the appropriate places. Consider a block that is using a jQuery plugin to create a slideshow widget: you will need to include the plugin's JavaScript and CSS files in order for it to work. In this recipe, we will add a CSS <link> tag to the page's <head> element, and a JavaScript <script> tag to the bottom of the page (just before the closing </body> tag).

Getting ready

This recipe will continue working with the block that was created in the first recipe of this article. If you need to download a copy of that block, it is included with the code samples from this book's website. This recipe also makes a reference to a CSS file and a JavaScript file. Those files are available for download in the code on this book's website as well.

How to do it...

The steps for adding items to the page header and footer from the block controller are as follows:

Open your block's controller.php file. Create a CSS file in /css called test.css. Set a rule to change the background color of the site to black:

body {
  background: #000 !important;
}

Create a JavaScript file in /js called test.js. Create an alert message in the JavaScript file:

alert('Hello!');

In controller.php, create a new function called on_page_view():

public function on_page_view() {
}

Load the HTML helper:

$html = Loader::helper('html');

Add the CSS file to the page header:

$this->addHeaderItem($html->css('test.css'));

Add the JavaScript file to the page footer:

$this->addFooterItem($html->javascript('test.js'));

Visit a page on your site that contains this block. You should see your JavaScript alert as well as a black background.

How it works...

As mentioned in the Using block controller callback functions recipe, items destined for the page header (the page's <head> tag) and footer (just before the closing </body> tag) need to be registered before the page's HTML is rendered; here we do so in the on_page_view() callback function. The addHeaderItem and addFooterItem functions are used to place strings of text in those positions of the web document. Rather than type out <script> and <link> tags in our PHP, we use the built-in HTML helper to generate those strings. The files should be located in the site's root /css and /js directories. Since it is typically best practice for CSS files to get loaded first and for JavaScript files to get loaded last, we place each of those items in the areas of the page that make the most sense.

Creating custom block templates

All blocks come with a default view template, view.php. concrete5 also supports alternative templates, which users can enable through the concrete5 interface. You can also enable these alternative templates through your custom PHP code.

Getting ready

You will need a block type created and installed already. In this recipe, we are going to add a template to the block type that we created at the beginning of the article.

How to do it...

The steps for creating custom block templates are as follows:

Open your block's directory. Create a new directory in your block's directory called templates/. Create a file called no_title.php in templates/.
Creating custom block templates

All blocks come with a default view template, view.php. concrete5 also supports alternative templates, which users can enable through the concrete5 interface. You can also enable these alternative templates through your custom PHP code.

Getting ready

You will need a block type created and installed already. In this recipe, we are going to add a template to the block type that we created at the beginning of the article.

How to do it...

The steps for creating custom block templates are as follows:

1. Open your block's directory.
2. Create a new directory in your block's directory called templates/.
3. Create a file called no_title.php in templates/.
4. Add the following HTML code to no_title.php:

<div class="content">
    <?php echo $content ?>
</div>

5. Activate the template by visiting a page that contains this block: enter edit mode on the page, click on the block, and click on "Custom Template".
6. Choose "No Title" and save your changes.

There's more...

You can specify alternative templates right from the block controller, so you can automatically render a different template depending on certain settings, conditions, or just about anything you can think of. Simply call the render() function in a callback that runs before the view is rendered:

public function view() {
    $this->render('templates/no_title');
}

This will use the no_title.php file instead of view.php to render the block. Notice that adding the .php file extension is not required. Just like the block's regular view.php file, template directories can include their own view.css and view.js files to have those files automatically included on the page.

See also

The Using block controller callback functions recipe
The Creating a custom block type recipe

Including JavaScript in block forms

When adding or editing blocks, it is often desirable to include more advanced client-side functionality. concrete5 makes it extremely easy to automatically add a JavaScript file to a block's editor form.

Getting ready

We will be working with the block that was created in the first recipe of this article. If you need to catch up, feel free to download the code from this book's website.

How to do it...

The steps for including JavaScript in block forms are as follows:

1. Open your block's directory.
2. Create a new file called auto.js.
3. Add a basic alert function to auto.js:

alert('Hello!');

4. Visit a page that contains your block.
5. Enter edit mode and edit the block. You should see your alert message appear as shown in the following screenshot:

How it works...

concrete5 automatically looks for the auto.js file when a block enters add or edit mode. Developers can use this to their advantage to contain special client-side functionality for the block's edit mode.

Including JavaScript in the block view

In addition to being able to include JavaScript in the block's add and edit forms, developers can automatically include a JavaScript file when the block is viewed on the frontend. In this recipe, we will create a simple JavaScript file that shows an alert whenever the block is viewed.

Getting ready

We will continue working with the block that was created in the first recipe of this article.

How to do it...

The steps for including JavaScript in the block view are as follows:

1. Open your block's directory.
2. Create a new file called view.js.
3. Add an alert to view.js:

alert('This is the view!');

4. Visit the page containing your block. You should see the new alert appear.

How it works...

Much like the auto.js file discussed in the previous recipe, concrete5 automatically includes the view.js file if it exists. This makes it easy to embed jQuery plugins or other client-side logic into a block.
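As a slightly more realistic sketch than an alert, view.js could attach behavior to the block's markup. This example assumes jQuery is available on the page (concrete5 ships with it) and that the block's view renders the <div class="content"> wrapper used throughout this article; the class being toggled is purely illustrative:

// view.js: a hypothetical example of block-specific behavior.
$(function () {
    // Toggle a CSS class whenever the block's content is clicked.
    $('div.content').on('click', function () {
        $(this).toggleClass('highlighted');
    });
});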
Including CSS in the block view

Developers and designers working on custom concrete5 block types can have a CSS file automatically included. In this recipe, we will automatically include a CSS file that changes our background to black.

Getting ready

We are still working with the block that was created earlier in the article. Please make sure that block exists, or adapt this recipe to suit your own concrete5 environment.

How to do it...

The steps for including CSS in the block view are as follows:

1. Open your block's directory.
2. Create a new file called view.css, if it doesn't already exist.
3. Add a rule to change the background color of the site to black:

body {
    background: #000 !important;
}

4. Visit the page containing your block. The background should now be black!

How it works...

Just as it does with JavaScript, concrete5 automatically includes view.css in the page's header if it exists in your block directory. This is a great way to save some time with styles that only apply to your block.

Loading a block type by its handle

Block types are objects in concrete5, just like most things. This means that they have IDs in the database as well as human-readable handles. In this recipe, we will load the instance of the block type that we created in the first recipe of this article.

Getting ready

We will need a place to run some arbitrary code, so we will rely on /config/site_post.php once again. This recipe also assumes that a block type with a handle of hello_world exists in your concrete5 site. Feel free to adjust that handle as needed.

How to do it...

The steps for loading a block type by its handle are as follows:

1. Open /config/site_post.php in your preferred code editor.
2. Define the handle of the block type to load:

$handle = 'hello_world';

3. Load the block type by its handle:

$block = BlockType::getByHandle($handle);

4. Dump the contents of the block type to make sure it loaded correctly:

print_r($block);
exit;

How it works...

When a handle is provided, concrete5 simply queries the database for you. It then returns a BlockType object that contains several methods and properties that are useful in development.

Adding a block to a page

Users can use the intuitive concrete5 interface to add blocks to the various areas of pages on the website. You can also programmatically add blocks to pages using the concrete5 API.

Getting ready

The code in this recipe can be run anywhere that you would like to create a block. To keep things simple, we are going to use the /config/site_post.php file to run some arbitrary code. This example assumes that a page with a path of /about exists on your concrete5 site and that it has a content area called content. Feel free to create that page, or adapt this recipe to suit your own website's configuration. We will be using the block that was created at the beginning of this article.

How to do it...

The steps for adding a block to a page are as follows:

1. Open /config/site_post.php in your code editor.
2. Load the page that you would like to add a block to:

$page = Page::getByPath('/about');

3. Load the block type by its handle:

$block = BlockType::getByHandle('hello_world');

4. Define the data that will be sent to the block:

$data = array(
    'title' => 'An Exciting Title',
    'content' => 'This is the content!'
);

5. Add the block to the page's content area:

$page->addBlock($block, 'content', $data);

How it works...

First, you need to get the target page. In this recipe we get it by its path, but you can use this function on any Page object. Next, we load the block type that we are adding; in this case, it is the one created earlier in the article. The block type handle is the same as the directory name for the block. The $data variable passes in the block's configuration options. If there are no options, you will need to pass in an empty array, as concrete5 does not allow that parameter to be blank. Finally, you need to know the name of the content area; in this case, the content area is called "content".
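Taken together, the full snippet in /config/site_post.php might look like this sketch; the /about path, the content area name, and the hello_world handle are this article's assumptions, so substitute your own values:

// Added to /config/site_post.php: a sketch consolidating the steps above.
$page = Page::getByPath('/about');
$block = BlockType::getByHandle('hello_world');

// Configuration options for the new block instance; pass an empty
// array() if the block type takes no options, as concrete5 does not
// allow this parameter to be blank.
$data = array(
    'title' => 'An Exciting Title',
    'content' => 'This is the content!'
);

$page->addBlock($block, 'content', $data);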
Getting the blocks from an area

concrete5 pages can have several different areas where blocks can be added. Developers can programmatically get an array of all of the block objects in an area. In this recipe, we will load a page and get a list of all of the blocks in its main content area.

Getting ready

We will be using /config/site_post.php to run some arbitrary code here, although you can place this code wherever you find appropriate. This example assumes the presence of a page with a path of /about and a content area called content. Make the necessary adjustments in the code as needed.

How to do it...

The steps for getting the blocks from an area are as follows:

1. Open /config/site_post.php in your code editor.
2. Load the page by its path:

$page = Page::getByPath('/about');

3. Get the array of blocks in the page's content area:

$blocks = $page->getBlocks('content');

4. Loop through the array, printing each block's handle:

foreach ($blocks as $block) {
    echo $block->getBlockTypeHandle().'<br />';
}

5. Exit the process:

exit;

How it works...

concrete5 returns an array of block objects for every block contained within a content area. Developers can then loop through this array to read or manipulate the block objects.
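As a small variation on this recipe, the following sketch tallies how many blocks of each type the area contains; it assumes the same /about page and content area:

// Added to /config/site_post.php: count blocks per type in an area.
$page = Page::getByPath('/about');
$blocks = $page->getBlocks('content');

$counts = array();
foreach ($blocks as $block) {
    $handle = $block->getBlockTypeHandle();
    // Increment the tally for this block type's handle.
    $counts[$handle] = isset($counts[$handle]) ? $counts[$handle] + 1 : 1;
}

print_r($counts);
exit;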
Summary

This article discussed how to create custom block types and integrate blocks into your own website using concrete5.

Resources for Article :

Further resources on this subject:

Everything in a Package with concrete5 [Article]
Creating Your Own Theme [Article]
concrete5: Mastering Auto-Nav for Advanced Navigation [Article]
Quick start – creating your first grid

Packt
31 Jul 2013
6 min read
(For more resources related to this topic, see here.)

So, let's do that right now for this example. Create a copy of the entire jqGrid folder named quickstart; easy as pie. Let's begin by creating the simplest grid possible: open up the index.html file from the newly created quickstart folder, and modify the body section to look like the following code:

<body>
<table id="quickstart_grid"></table>
<script>
var dataArray = [
    {name: 'Bob', phone: '232-532-6268'},
    {name: 'Jeff', phone: '365-267-8325'}
];
$("#quickstart_grid").jqGrid({
    datatype: 'local',
    data: dataArray,
    colModel: [
        {name: 'name', label: 'Name'},
        {name: 'phone', label: 'Phone Number'}
    ]
});
</script>
</body>

The first element in the body tags is a standard table element; this is what will be converted into our grid, and it's literally all the HTML needed to make one. The dataArray statement just defines some data. I didn't want to get into AJAX or opening other files, so I decided to just create a simple JavaScript array with two entries. The next statement is where the magic happens; this single command creates and populates a grid using the information provided. We select the table element with jQuery and call the jqGrid function, passing it all the properties needed to make a grid. The first two options set the data along with its type; in our case the data is the array we made and the type is local, in contrast to some of the other datatypes, which use AJAX to retrieve remote data. After that we set the columns; this is done with the colModel property. Now, there are a lot of options for the colModel property (numerous settings for customizing and manipulating the data in the table), which we will get to later on; a quick taste follows below. But for this simple example, we are just specifying the name and label properties, which tell jqGrid the column's label for the header and the value's key from the data array.
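As that non-exhaustive taste, standard jqGrid colModel settings include width, align, and sorttype; the values below are purely illustrative:

colModel: [
    // width sets the column width in pixels; align positions cell text.
    {name: 'name', label: 'Name', width: 120, align: 'left'},
    // sorttype tells jqGrid how values should be compared when sorting.
    {name: 'phone', label: 'Phone Number', width: 150, sorttype: 'text'}
]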
Now, open index.html in your browser and you should see something like the following screenshot:

Not particularly pretty, but you can see that with just a few short lines we have created a grid and populated it with data that can be sorted. But we can do better. First off, we are only using two of the four standard layers we talked about: the header layer and the body layer. Let's add a caption layer to provide a little context, and let's adjust the size of the grid to fit our data. Modify the call to jqGrid as follows:

$("#quickstart_grid").jqGrid({
    datatype: 'local',
    data: dataArray,
    colModel: [
        {name: 'name', label: 'Name'},
        {name: 'phone', label: 'Phone Number'}
    ],
    height: 'auto',
    caption: 'Users Grid'
});

Refresh your browser, and your grid should now look like the following screenshot:

That's looking much better. Now, you may be wondering: we only set the height property to auto, so how come the width seems to have snapped to the content? This is because the right margin we saw earlier is actually a column reserved for the scroll bar. By default, jqGrid sets your grid's height to 150 pixels; whether you have one row or a thousand, the height remains the same, leaving a gap to hold the scroll bar in the event that there are more rows than would fit in the given space. When we set the height to auto, the grid stretches vertically to contain all the items, making the scroll bar irrelevant, so jqGrid knows not to place it. This is a pretty good quick start example, but to finish things off, let's take a look at the navigation layer, just so we can say we did.

For this next part we are going to need more data; I can't really show pagination with just two entries. Luckily, there is a site, http://www.json-generator.com/, created by Vazha Omanashvili for doing exactly this. The way it works is that you specify the format and number of rows you want, and it generates them with random data. We are going to keep the format we have been using, of name and phone number, so in the box on the left, enter the following code:

['{{repeat(50)}}',
{
    name: '{{firstName}} {{lastName}}',
    phone: '{{phone}}'
}]

Here we're saying we would like 50 rows, each with a name field containing both first name and last name, as well as a phone number. This site is not really in the scope of this book, but if you would like to know about other fields, you can click on the help button at the top. Click on Generate to produce a result object, and then click on the copy to clipboard button. With that done, open your index.html file and replace the array assigned to dataArray with what you copied. Your code should now look like the following:

var dataArray = {
    "id": 1,
    "jsonrpc": "2.0",
    "total": 50,
    "result": [ /* the 50 rows */ ]
};

As you can see, the actual rows are under a property named result, so we need to change the data key in the call to jqGrid from dataArray to dataArray.result. Refreshing the page now, you will see the first 20 rows being displayed (that is the default limit). But how can we get to the rest? Well, jqGrid provides a special navigation layer named "pager", which contains a pagination control. To display it, we need to create an HTML element for it, so right underneath the table element add a div like this:

<table id="quickstart_grid"></table>
<div id="pager"></div>

Then, we just need to add two keys to the jqGrid call for the pager and the row limit:

    caption: 'Users Grid',
    height: 'auto',
    rowNum: 10,
    pager: '#pager'
});

And that's all there is to it; you can adjust the rowNum property to display more or fewer entries at once, and the pages will be calculated for you automatically.
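Pulling all of the pieces together, the finished body of index.html might look like the following sketch; the 50 generated rows are elided for brevity:

<body>
<table id="quickstart_grid"></table>
<div id="pager"></div>
<script>
// The object copied from json-generator.com; the rows themselves
// live under its result property.
var dataArray = {
    "id": 1,
    "jsonrpc": "2.0",
    "total": 50,
    "result": [ /* the 50 generated rows */ ]
};
$("#quickstart_grid").jqGrid({
    datatype: 'local',
    data: dataArray.result,
    colModel: [
        {name: 'name', label: 'Name'},
        {name: 'phone', label: 'Phone Number'}
    ],
    height: 'auto',
    caption: 'Users Grid',
    rowNum: 10,      // rows shown per page
    pager: '#pager'  // element that hosts the pagination control
});
</script>
</body>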
Summary

In this article we learned how to create a simple grid and customize it using jqGrid's various properties and functions, and how to work with the different layers that make up a grid.

Resources for Article :

Further resources on this subject:

jQuery and Ajax Integration in Django [Article]
Implementing AJAX Grid using jQuery data grid plugin jqGrid [Article]
Django JavaScript Integration: jQuery In-place Editing Using Ajax [Article]
Introduction to nginx

Packt
31 Jul 2013
8 min read
(For more resources related to this topic, see here.)

So, what is nginx?

The best way to describe nginx (pronounced engine-x) is as an event-based multi-protocol reverse proxy. This sounds fancy, but it's not just buzzwords; it actually affects how we approach configuring nginx. It also highlights some of the flexibility that nginx offers. While it is often used as a web server and an HTTP reverse proxy, it can also be used as an IMAP reverse proxy or even a raw TCP reverse proxy. Thanks to its plugin-ready code structure, we can utilize a large number of first- and third-party modules to implement a diverse set of features that make nginx an ideal fit for many typical use cases.

A more accurate description would be to say that nginx is a reverse proxy first and a web server second. I say this because it helps us visualize the request flow through the configuration file and rationalize how to achieve the desired configuration of nginx. The core difference this creates is that nginx works with URIs instead of files and directories, and based on that determines how to process the request. This means that when we configure nginx, we tell it what should happen for a certain URI rather than what should happen for a certain file on the disk.

A beneficial part of nginx being a reverse proxy is that it fits into a large number of server setups and can handle many things that other web servers simply aren't designed for. A popular question is: "Why even bother with nginx when Apache httpd is available?" The answer lies in the way the two programs are designed. The majority of Apache setups use prefork mode, where we spawn a certain number of processes and then embed our dynamic language in each process. This setup is synchronous, meaning that each process can handle one request at a time, whether that connection is for a PHP script or an image file. In contrast, nginx uses an asynchronous event-based design where each spawned process can handle thousands of concurrent connections. The downside here is that nginx will, for security and technical reasons, not embed programming languages into its own process; this means that to handle those, we need to reverse proxy to a backend such as Apache or PHP-FPM. Thankfully, as nginx is a reverse proxy first and foremost, this is extremely easy to do and still gives us major benefits, even when keeping Apache in use.

Let's take a look at a use case where Apache is used as an application server, as described earlier, rather than just a web server. We have embedded PHP, Perl, or Python into Apache, which has the primary disadvantage of making each request costly. This is because the Apache process is kept busy until the request has been fully served, even if it's a request for a static file. Our online service has gotten popular and we now find that our server cannot keep up with the increased demand. In this scenario, introducing nginx as a spoon-feeding layer would be ideal. The nginx server sits between our end user and Apache; when a request comes in, nginx reverse proxies it to Apache if it is for a dynamic file, while it handles any static file requests itself. This means that we offload much of the request handling from the expensive Apache processes to the more lightweight nginx processes, and increase the number of end users we can serve before having to spend money on more powerful hardware.
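To make the spoon-feeding layer concrete, a minimal sketch of such a setup might look like the following; the backend address, document root, and list of static file extensions are illustrative assumptions, not a definitive configuration:

# A sketch of the spoon-feeding layer described above; Apache is
# assumed to listen on 127.0.0.1:8080.
server {
    listen      80;
    server_name example.com;

    # Serve common static files directly from nginx.
    location ~* \.(?:css|js|jpg|jpeg|gif|png|ico)$ {
        root /var/www/website;
    }

    # Reverse proxy everything else to Apache.
    location / {
        proxy_pass       http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}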
Another example scenario is where we have an application being used from all over the world. We don't have any static files to offload, so we can't easily take requests away from Apache. In this use case, our PHP process is busy from the time the request comes in until the user has finished downloading the response. Sadly, not everyone in the world has fast internet and, as a result, the sending process could be busy for a relatively significant period of time. Let's assume our visitor is on an old 56k modem with a maximum download speed of 5 KB per second; it will take them five seconds to download a 25 KB gzipped HTML file generated by PHP. That's five seconds during which our process cannot handle any other request. When we introduce nginx into this setup, PHP spends only microseconds generating the response, while nginx spends the five seconds transferring it to the end user. Because nginx is asynchronous, it will happily handle other connections in the meantime, and thus we significantly increase the number of concurrent requests we can handle.

In the previous two examples I used scenarios where nginx sits in front of Apache, but naturally this is not a requirement. nginx is capable of reverse proxying via, for instance, FastCGI, UWSGI, SCGI, HTTP, or even TCP (through a plugin), enabling backends such as PHP-FPM, Gunicorn, Thin, and Passenger.

Quick start – Creating your first virtual host

It's finally time to get nginx up and running. To start out, let's quickly review the configuration file. If you installed via a system package, the default configuration file location is most likely /etc/nginx/nginx.conf. If you installed from source and didn't change the path prefix, nginx installs itself into /usr/local/nginx and places nginx.conf in a /conf subdirectory. Keep this file open as a reference to help visualize many of the things described in this article.

Step 1 – Directives and contexts

To understand what we'll be covering in this section, let me first introduce a bit of terminology that the nginx community at large uses. Two central concepts in the nginx configuration file are directives and contexts. A directive is basically just an identifier for a configuration option. Contexts refer to the different sections of the nginx configuration file. This term is important because the documentation often states which context a directive is allowed within.

A glance at the standard configuration file should reveal that nginx uses a layered configuration format where blocks are denoted by curly brackets {}. These blocks are what are referred to as contexts. The topmost context is called main, and is not denoted as a block but is rather the configuration file itself. The main context has only a few directives we're really interested in, the two major ones being worker_processes and user. These directives control how many worker processes nginx should run and which user/group nginx should run them under.

Within the main context there are two possible subcontexts, the first one being called events. This block handles the directives that deal with the event-polling nature of nginx. Mostly we can ignore every directive in here, as nginx can automatically configure itself to be optimal; however, there's one directive that is interesting, namely worker_connections. This directive controls the number of connections each worker can handle. It's important to note here that nginx is a terminating proxy, so if you HTTP proxy to a backend such as Apache httpd, each end-user request will use up two connections.

The second subcontext is the interesting one, called http. This context deals with everything related to HTTP, and it is what we will be working with almost all of the time. While there are directives that are configured directly in the http context, for now we'll focus on a subcontext within http called server. The server context is the nginx equivalent of a virtual host. This context is used to handle configuration directives based on the host name your sites are under. Within the server context, we have another subcontext called location. The location context is what we use to match the URI. Basically, a request to nginx flows through our contexts, matching first the server block with the host name provided by the client, and secondly the location context with the URI provided by the client.

Depending on the installation method, there might not be any server blocks in the nginx.conf file. Typically, system package managers take advantage of the include directive, which allows an in-place inclusion into the configuration file. This lets us separate out each virtual host and keep the configuration file organized. If there aren't any server blocks, check the bottom of the file for an include directive and check the directory from which it includes; it should have a file that contains a server block.
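As a visual recap of Step 1, here is the context hierarchy sketched as a skeleton; the directive values are illustrative, not recommendations:

# Skeleton of the configuration contexts described above.
user             www-data;  # main context: user/group for the workers
worker_processes 4;         # main context: number of worker processes

events {
    worker_connections 1024;  # connections each worker can handle
}

http {
    # Everything related to HTTP lives in this context.
    server {
        # One virtual host, matched against the client's host name.
        location / {
            # Matched against the URI provided by the client.
        }
    }
}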
Step 2 – Define your first virtual host

Finally, let us define our first server block!

server {
    listen      80;
    server_name example.com;
    root        /var/www/website;
}

That is basically all we need; strictly speaking, we don't even need to define which port to listen on, as port 80 is the default. However, it's generally good practice to keep it in there, should we want to search for all virtual hosts on port 80 later on.
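Building on that block, a slightly fuller virtual host might add a location context; the /images/ path and the 30-day expiry below are illustrative:

# A sketch extending the virtual host above with a location block.
server {
    listen      80;
    server_name example.com;
    root        /var/www/website;

    # Requests whose URI starts with /images/ get a far-future
    # Expires header so browsers can cache them.
    location /images/ {
        expires 30d;
    }
}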
Summary

This article covered the important aspects of nginx and walked through configuring a virtual host in two simple steps, along with a configuration example.

Resources for Article :

Further resources on this subject:

Nginx HTTP Server FAQs [Article]
Nginx Web Services: Configuration and Implementation [Article]
Using Nginx as a Reverse Proxy [Article]