
How-To Tutorials - Web Development


IBM FileNet P8 Content Manager: Exploring Object Store-level Items

Packt
10 Feb 2011
9 min read
As with the Domain, there are two basic paths in FEM to accessing things in an Object Store. The tree view in the left-hand panel can be expanded to show Object Stores and many repository objects within them, as illustrated in the screenshot below. Each individual Object Store has a right-click context menu. Selecting Properties from that menu brings up a multi-tabbed pop-up panel. We'll look first at the General tab of that panel.

Content Access Recording Level

Ordinarily, the Content Engine (CE) server does not keep track of when a particular document's content was last retrieved, because doing so requires an expensive database update that is often uninteresting. The ContentAccessRecordingLevel property on the Object Store can be used to enable the recording of this optional information in a document or annotation's DateContentLastAccessed property. It is off by default.

It is sometimes interesting to know, for example, that document content was accessed within the last week as opposed to three years ago. Once a particular document has had its content read, there is a good chance that there will be a few additional accesses in the same neighborhood of time (not for a technical reason; it's just statistically likely). Rather than record the last access time for each access, an installation can choose, via this property's value, to have access times recorded only with a granularity of hourly or daily. This can greatly reduce the number of database updates while still giving a suitable approximation of the last access time. There is also an option to update the DateContentLastAccessed property on every access.

Auditing

The CE server can record when clients retrieve or update selected objects. Enabling that involves setting up subscriptions to object instances or classes, which is quite similar to the event subsystem in the CE server. Because the necessary auditing configuration can be quite elaborate to set up, auditing can also be enabled or disabled completely at the Object Store level.

Checkout type

The CE server offers two document checkout types: Collaborative and Exclusive. The difference lies in who is allowed to perform the subsequent checkin; an exclusive checkout allows only the same user to do the checkin. Via an API, an application can make the explicit choice for one type or the other, or it can use the Object Store default value. Using the default value is often handy, since a given application may not have any context for deciding one form over the other. Even with a collaborative checkout, the subsequent checkin is still subject to access checks, so you can still have fine-grained control over it. In fact, because you can use fine-grained security to limit who can do a checkin, you might as well make the Object Store default Collaborative unless you have a specific use case that demands Exclusive.

Text Index Date Partitioning

Most of the values on the CBR tab, shown in the next figure, are read-only because they are established when the Content Search Engine (CSE) is first set up. One item that can be changed on a per-Object Store basis is the date-based partitioning of text index collections. Partitioning of the text index collections allows for more efficient searching of large collections because the CE can narrow its search to a specific partition or partitions rather than searching the entire text index. By default, there is no partitioning.
If you check the box to change the partition settings, the Date Property drop-down presents a list of date-valued custom properties. In the screenshot above, you see the custom properties Received On and Sent On, which come from email-related documents. Once you select one of those properties, you're offered a choice of partitioning granularity, ranging from one month up to one year.

Additional text index configuration properties are available if you select the Index Areas node in the FEM tree view, then right-click an index area entry and select Properties. Although we are showing the screenshot here for reference, your environment may not yet have a CSE or any index areas if the needed installation procedures are not complete.

Cache configuration

Just as we saw at the Domain level, the Cache tab allows the configuration of various cache tuning parameters for each Object Store. As we've said before, you don't want to change these values without a good reason. The default values are suitable for most situations.

Metadata

One of the key features of CM is its extensible metadata structure. You don't have to work within a fixed framework of pre-defined document properties. You can add additional properties to the Document class, and you can even make subclasses of Document for specific purposes. For example, you might have a subclass called CustomerProfile, another called DesignDocument, yet another called ProductDescription, and so on. Creating subclasses lets you define just the properties you need to specialize the class for your particular business purpose. There is no need for informal rules about where properties should be ignored because they're not applicable, and there is generally no need for a property that marks a document as a CustomerProfile versus something else. The class itself provides that distinction.

CM comes with a set of pre-defined system classes, and each class has a number of pre-defined system properties (many of which are shared across most system classes). There are pre-defined system classes for Document, Folder, Annotation, CustomObject, and many others. The classes just mentioned are often described as the business object classes because they are used to directly implement common business application concepts. System properties are properties for which the CE server has some specifically coded behavior. Some system properties are used to control server behavior, and others report some kind of system state. We've seen several examples already in the Domain and Object Store objects.

It's common for applications to create their own subclasses and custom properties as part of their installation procedures, but it is equally common to do similar things manually via FEM. FEM contains several wizards to make the process simpler for the administrator, but, behind the scenes, the same pieces are always in play.

Property templates

The foundation for any custom property is a property template. If you select the Property Templates node in the tree view, you will see a long list of existing property templates. Double-clicking any item in the list reveals that property template's properties. A property template is an independently persistable object, so it has its own identity and security. Most system properties do not have explicit property templates; their characteristics come about from a different mechanism internal to the CE server.
Property templates have that name because the characteristics they define act as a pattern for properties added to classes, where the property is embodied in a property definition for a particular class. Some of the property template's properties can be overridden in a property definition, but some cannot. For example, the basic data type and cardinality cannot be changed once a property template is created. On the other hand, things like settability and whether a value is required can be modified in the property definition. When creating a new property with no existing property template, you can either create the property template independently, ahead of time, or you can follow the FEM wizard steps for adding a property to a class; FEM will prompt you with additional panels if you need to create a property template for the property being added.

Choice lists

Most property types allow a few simple validity checks to be enforced by the CE server. For example, an integer-valued property has an optional minimum and maximum value based on its intended use (in addition to the expected absolute constraints imposed by the integer data type). For some use cases, it is desirable to limit the allowed values to a specific list of items. The mechanism for that in the CE server is the choice list, and it's available for string-valued and integer-valued properties.

If you select the Choice Lists node in the FEM tree view, you will see a list of existing top-level choice lists. The example choice lists in the screenshot below all happen to come from AddOns installed in the Object Store. Double-clicking any item in the list reveals that choice list's properties. A choice list is an independently persistable object, so it has its own identity and security.

We've mentioned independent objects a couple of times, and more mentions are coming. For now, it is enough to think of them as objects that can be stored or retrieved in their own right. Most independent objects have their own access security. Contrast independent objects with dependent objects, which only exist within the context of some independent object.

A choice list is a collection of choice objects, although a choice list may be nested hierarchically. That is, at any position in a choice list there can be another choice list rather than a simple choice. A choice object consists of a localizable display name and a choice value (a string or an integer, depending on the type of choice list). Nested choice lists can only be referenced within some top-level choice list.

Classes

Within the FEM tree view are two nodes describing classes: Document Class and Other Classes. Documents are listed separately only for user convenience (since Document subclasses occur most frequently); you can think of these two nodes as one big list. In any case, expanding a node in the tree reveals hierarchically nested subclasses. Selecting a class from the tree reveals any subclasses and any custom properties. The screenshot shows the custom class EntryTemplate, which comes from a Workplace AddOn. You can see that it has two subclasses, RecordsTemplate and WebContentTemplate, and four custom properties.

When we mention a specific class or property name, like EntryTemplate, we try to use the symbolic name, which has a restricted character set and never contains spaces. The FEM screenshots tend to show display names. Display names are localizable and can contain any Unicode character.
Although the subclassing mechanism in CM generally mimics the subclassing concept in modern object-oriented programming languages, it does have some differences. You can add custom properties to an existing class, including many system classes. Although you can change some characteristics of properties on a subclass, there are restrictions on what you can do. For example, a particular string property on a subclass must have a maximum length equal to or less than that property's maximum length on the superclass.


Setting up a Joomla Web Server using Virtualbox, TurnkeyLinux and DynDNS

Packt
03 Feb 2011
8 min read
Virtualization is a powerful tool that can make your PC duties easier, no matter if you're a programmer, a systems administrator, a power user, or even a beginner. Have you ever wanted to test the popular Joomla! Content Management System (CMS), but couldn't spare the time and effort to install it on your PC, along with the Apache web server and the MySQL database server? Are you afraid to install Apache, MySQL, and PHP on your only PC because it could mess things up? Well, you can forget about all the hassle thanks to Oracle VirtualBox, a powerful virtualization software product that lets you create one or more virtual machines, or VMs, inside your physical PC. Each VM is completely isolated from your main PC and all the other VMs, so it's like having several computers in one physical package, but you don't need the extra space to accommodate all the additional LCDs and PC cases. Cool, huh?

In this article, I'm going to show you one of the quickest ways to set up a fully functional web server right from your own home/office. And why would you need to do something like that? Well, if you want to create a website to establish your own presence on the Internet, there are some costs involved. First of all, you need to pay for a web hosting service and a domain name. So, if you want to learn how to create websites, this would be a perfect way to do it, since all the software we'll use is free, and with the DynDNS dynamic DNS service, you don't need to pay for a domain name because you can use one for free. Furthermore, since you're going to host your website on your virtual machine, you can also forget about the web hosting fee. Are these reasons good enough to start experimenting with virtual machines? I'm pretty sure they are!

I decided to use the Joomla! Content Management System (CMS) because it has all you need to establish your Internet presence. The TurnkeyLinux Joomla! virtual appliance includes everything you need to have a website running right out of the box, so you won't have to go through the hassle of installing all the required web server software (Apache, MySQL, PHP, and so on). And in case something goes wrong, you can just wipe out your virtual machine and start again from scratch. How about that?

The first steps in the tutorial will show you how to create a virtual machine (VM) with VirtualBox and how to get a preconfigured ISO image from the TurnkeyLinux website with all the necessary stuff to install the Apache web server, the MySQL database server, and the Joomla! CMS in your VM. Oh, and if you're wondering how to make your web server available on the Internet, don't worry: I'll also show you how to get a free DynDNS account, and how to configure your Cable/DSL router to open port 80 (the HTTP web server port). That way, visitors from the Internet will be able to navigate to your brand-new Joomla! website.

You'll need a PC or Mac system with Windows/Linux/Mac OS X installed, at least 1 GB of RAM, and a Cable/DSL connection to the Internet, so you can configure your Cable/DSL router to let your virtual machine work as a full-fledged web server.

Getting VirtualBox

Download the most recent version of Oracle VirtualBox from the official website: VirtualBox 4.0 for Windows, VirtualBox 4.0 for Mac OS X, or VirtualBox 4.0 for Linux. Once the download is completed, follow the instructions included in the User Manual to install VirtualBox on your specific operating system.
Downloading the Turnkey Joomla appliance

You can download the Joomla appliance from the TurnkeyLinux website. Just click on the following link to start downloading it to your computer: http://www.turnkeylinux.org/download?file=turnkey-joomla-11.0-lucid-x86.iso.

Creating a new virtual machine

Open VirtualBox, click on New to open the New Virtual Machine Wizard, and then click on Next. Type MyJoomlaVM in the Name field, select Linux as the Operating System and Ubuntu as the Version, and click on Next to continue.

The Memory dialog will show up next. Select at least 384 MB in the Base Memory Size slider, depending on the total memory available in your PC (you can press the Left and Right arrow keys to increase/decrease the memory value), and click Next to continue.

Leave the default values in the Virtual Hard Disk window and click Next four times to finish configuring your virtual machine with the default values. Then click Finish twice in the Summary dialogs that show up afterwards, and you'll be taken back to the VirtualBox main screen. Your MyJoomlaVM virtual machine will appear in the virtual machine list.

Now we need to tweak some network settings so your virtual machine can behave as a real PC with its own IP address. Click the Settings button to open the MyJoomlaVM – Settings window, and then select the Network section. Make sure the Adapter 1 tab is selected; then click on the Attached to list box and select Bridged Adapter instead of NAT. Click on the OK button to close the MyJoomlaVM – Settings window and return to the VirtualBox main screen. (If you prefer working from a terminal, see the VBoxManage sketch at the end of this walkthrough.)

Installing the Joomla TurnkeyLinux appliance

To start your virtual machine, double-click on its name in the virtual machines' list, or select it and click on the Start button. The first time you open a virtual machine, the First Run Wizard dialog shows up. This wizard helps you install an operating system on your virtual machine. Click Next to go to the Select Installation Media window, where you can select a media source to install an operating system in your virtual machine. In this case you're going to select the Turnkey Joomla ISO live CD image you downloaded before. Click on the folder icon located at the right side of the Media Source list box. The Choose a virtual CD/DVD disk file dialog will open up. Use this dialog to locate and select the Joomla Turnkey ISO image you previously downloaded; then click on Open to return to the Select Installation Media dialog and click Next to continue. The Summary window will appear next, showing the media you selected. Click on Finish to exit the First Run Wizard and start your virtual machine.

Wait until the TurnkeyLinux boot screen shows up; then make sure the Install to hard disk option is highlighted and hit Enter to proceed (you can also wait until installation begins automatically). Wait until the Debian Installer Live screen appears. Use the keyboard to select the Guided – use the entire disk option and hit Enter to continue. The next screen will ask you if you want to write the changes to disk. Select Yes and hit Enter to continue. The Debian Installer will start installing Ubuntu and the Joomla appliance in your virtual machine. After a while, a screen will appear asking if you want to install the GRUB boot loader to the master boot record. Select Yes and hit Enter to continue. The next screen will tell you that the installation is complete, and will ask if you want to restart your computer (virtual machine). Make sure Yes is selected and hit Enter to continue.
Wait until your virtual machine boots up and asks you to type a new password for the root account. Type a secure password and hit Enter to continue; type the password again and hit Enter to proceed. Now the system will ask for the MySQL server 'root' account's password. Type a password of your choice and hit Enter, then repeat the procedure to confirm the password. Finally, the system will ask you to type a password for the Joomla 'admin' account. Choose a secure password, type it, and hit Enter. Once again, repeat the procedure to confirm the password. The next step is to enter the email address for the Joomla 'admin' account. Type a real email address and hit Enter to proceed.

Next you'll see a Link TKLBAM to the Turnkey Hub screen. In this case we're not going to use the Turnkey Hub (a backup/restore system), so don't type anything and hit Enter to continue. The next screen that shows up is Security Updates. You can leave the default option (Install) and hit Enter to proceed. Be patient while the security updates get installed in your virtual machine; sometimes it can take several minutes.

Once the security updates finish installing, the JOOMLA appliance services screen will pop up, and your virtual machine will be ready to roll. Write down the IP address assigned to your Joomla virtual machine (in the above picture it's 192.168.1.79, but your IP address may vary). Then, open a web browser and type http://youripaddress (remember to replace youripaddress with the IP address you wrote down) to verify your Joomla virtual machine is working.

Finally, you need to unmount the TurnkeyLinux Joomla ISO image from your machine's virtual drive. This is to avoid booting up the ISO image again instead of booting from your hard drive. Go to the Devices menu and select CD/DVD Devices > Remove disk from virtual drive.

That's it for now. Next, let's see how to get a free domain name and configure your Cable/DSL router to accept incoming connections for your Joomla virtual machine.
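If you'd rather script the VM setup than click through the wizard, the following is a minimal sketch using VirtualBox's VBoxManage command-line tool. It mirrors the wizard steps above (same VM name, memory size, bridged networking, and ISO), but the host interface name eth0 and the 8 GB disk size are assumptions you should adjust for your system:

# Create and register the VM with the Ubuntu guest type
VBoxManage createvm --name MyJoomlaVM --ostype Ubuntu --register

# Give it 384 MB of RAM and bridge its first NIC to the host network
# (replace eth0 with the name of your host's network interface)
VBoxManage modifyvm MyJoomlaVM --memory 384 --nic1 bridged --bridgeadapter1 eth0

# Create an 8 GB virtual disk, then attach it and the TurnkeyLinux ISO to an IDE controller
VBoxManage createhd --filename MyJoomlaVM.vdi --size 8192
VBoxManage storagectl MyJoomlaVM --name IDE --add ide
VBoxManage storageattach MyJoomlaVM --storagectl IDE --port 0 --device 0 --type hdd --medium MyJoomlaVM.vdi
VBoxManage storageattach MyJoomlaVM --storagectl IDE --port 1 --device 0 --type dvddrive --medium turnkey-joomla-11.0-lucid-x86.iso

# Boot the VM; the TurnkeyLinux installer takes over from here
VBoxManage startvm MyJoomlaVM

Once the appliance is installed and reports its IP address, you can also confirm the web server responds without opening a browser (substitute the IP you wrote down):

# Fetch only the HTTP response headers from the new VM
curl -I http://192.168.1.79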


Getting Started with WordPress 3

Packt
02 Feb 2011
7 min read
WordPress is available in easily downloadable formats from its website, http://wordpress.org/download/. WordPress is a free, open source application, released under the GNU General Public License (GPL). This means that anyone who produces a modified version of software released under the GPL is required to keep those same freedoms (the freedom for anyone buying or using the software to modify and redistribute it) attached to his or her modified version. This way, WordPress and other software released under the GPL are kept open source.

Where to build your WordPress website

The first decision you have to make is where your blog is going to live. You have two basic options for the location where you will create your site:

- Use WordPress.com
- Install on a server (hosted or your own)

Let's look at some of the advantages and disadvantages of each of these two choices. The advantage of using WordPress.com is that they take care of all of the technical details for you: the software is already installed, they'll upgrade it for you whenever there's an upgrade, and you're not responsible for anything else. Just manage your content! The big disadvantage is that you lose almost all of the theme and plugin control you'd have otherwise. WordPress.com will not let you upload or edit your own theme, though it will let you (for a fee) edit the CSS of any theme you use. WordPress.com will not let you upload or manage plugins at all. Some plugins are installed by default (most notably Akismet, for spam blocking, and a fancy statistics plugin), but you can neither uninstall them nor install others. Additional features are available for a fee as well.

The following is a brief overview of the essential differences between using WordPress.com and installing WordPress on your own server:

- Installation: WordPress.com requires no installation, just sign up; on your own server you install WordPress yourself, either manually or via your host's control panel (if offered).
- Themes: WordPress.com lets you use any theme made available by WordPress.com; on your own server you can use any theme available anywhere, written by anyone (including yourself).
- Plugins: WordPress.com gives you no ability to choose or add plugins; on your own server you can use any plugin available anywhere, written by anyone (including yourself).
- Upgrades: WordPress.com provides automatic upgrades; on your own server you have to upgrade yourself when upgrades are available.
- Widgets: on WordPress.com, widget availability depends on the available themes; on your own server you can widgetize any theme yourself.
- Maintenance: WordPress.com requires no maintenance from you; on your own server you're responsible for the maintenance of your site.
- Advertising: WordPress.com allows no advertising; on your own server you can advertise anything.

Using WordPress.com

WordPress.com (http://wordpress.com) is a free service provided by the WordPress developers, where you can register a blog or non-blog website easily and quickly with no hassle. However, because it is a hosted service, your control over some things is more limited than it would be if you hosted your own WordPress website. As mentioned before, WordPress.com will not let you edit or upload your own themes or plugins.
Aside from this, WordPress.com is a great place to maintain your personal site if you don't need to do anything fancy with a theme. To get started, go to http://wordpress.com. To register your free website, click on the loud orange-and-white Sign up now button. You will be taken to the signup page. In the following screenshot, I've entered my username (what I'll sign in with) and a password (note that the password measurement tool will tell you if your password is strong or weak), as well as my e-mail address. Be sure to check the Legal flotsam box and leave the Gimme a blog! radio button checked; without it, you won't get a website. After providing this information and clicking on the Next button, WordPress will ask for other choices (Blog Domain, Blog Title, Language, and Privacy), and you can also choose whether it's a private blog. Note that you cannot change the blog domain later, so be sure it's right! After providing this information and clicking on Signup, you will be sent to a page where you can enter some basic profile information. This page will also tell you that your account is set up, but your e-mail ID needs to be verified. Be sure to check your inbox for the e-mail with the link, and click on it. Then, you'll be truly done with the installation.

Installing WordPress manually

The WordPress application files can be downloaded for free if you want to do a manual installation. If you've got a website host, this process is extremely easy and requires no previous programming skills or advanced blog user experience. Some web hosts offer automatic installation through the host's online control panel. However, be a little wary of this, because some hosts do automatic installation in a way that makes updating your WordPress difficult or awkward, or that restricts your ability to have free rein with your installation in the future.

Preparing the environment

A good first step is to make sure you have an environment set up that is ready for WordPress. This means two things: verifying that the server meets the minimum requirements, and making sure that your database is ready. For WordPress to work, your web host must provide you with a server that does the following two things:

- Supports PHP, at least Version 4.3
- Provides you with write access to a MySQL database, at least Version 4.1.2

You can find out if your host meets these two requirements by contacting your web host. If your web server meets these two basic requirements, you're ready to move on to the next step. As far as web servers go, Apache is the best; however, WordPress will also run on a server running Microsoft IIS (though using permalinks will be difficult, if possible at all).

Enabling mod_rewrite to use pretty permalinks

If you want to use pretty permalinks, your server must be running Unix, and Apache's mod_rewrite option must be enabled. Apache's mod_rewrite is enabled by default in most web hosting accounts. If you are hosting your own account, you can enable mod_rewrite by modifying the Apache web server configuration file. You can check the URL http://www.tutorio.com/tutorial/enable-mod-rewrite-on-apache to learn how to enable mod_rewrite on your web server. If you are running on shared hosting, ask your system administrator to install it for you; however, it is more likely that you already have it installed on your hosting account.
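If you have shell access to the server, a quick way to check both requirements and enable mod_rewrite is sketched below. This assumes a Debian/Ubuntu-style host where PHP, the MySQL client, and the a2enmod helper are installed; other distributions use different commands:

# Check the installed PHP and MySQL versions against the minimums (PHP 4.3, MySQL 4.1.2)
php -v
mysql --version

# On Debian/Ubuntu-style Apache setups, enable mod_rewrite and reload Apache
sudo a2enmod rewrite
sudo /etc/init.d/apache2 reload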
Downloading WordPress

Once you have checked out your environment, you need to download WordPress from http://wordpress.org/download/. Take a look at the following screenshot, in which the download links are available on the right side. The .zip file is shown as a big blue button because it is the most useful format for most people. If you are using Windows, Mac, or Linux, your computer will be able to unzip the downloaded file automatically. (The .tar.gz file is provided because some Unix users prefer it.)

A further note on location

We're going to cover installing WordPress remotely. However, if you plan to develop themes or plugins, I suggest that you also install WordPress locally on your own computer's server. Testing and deploying themes and plugins directly to the remote server is much more time-consuming than working locally. If you look at the screenshots I will be taking of my own WordPress installation, you'll notice that I'm working locally (for example, http://wpbook:8888/ is a local URL). After you download the WordPress .zip file, extract the files, and you'll get a folder called wordpress. It will look like the following screenshot:
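If you are preparing files on the server itself, you can fetch and unpack the same archive from the shell instead of using the browser; a minimal sketch using the standard latest-release link from wordpress.org:

# Download the latest WordPress release archive and unpack it into a ./wordpress folder
wget http://wordpress.org/latest.zip
unzip latest.zip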


Performing Setup Tasks in the WordPress Admin Panel

Packt
02 Feb 2011
3 min read
The reader can benefit from the previous article on Getting Started with WordPress 3.

After you've successfully installed WordPress, it's time for our first look at the WP Admin. There are some immediate basic changes that I recommend making right away to make sure your installation is set up properly. You can always get to the WP Admin by going to this URL: http://yoursite.com/wp-admin/. Your first time here, you'll be redirected to the login page. In the future, WordPress will check to see if you're already logged in and, if so, you'll skip the login page.

To log in, just enter the username and password you chose during the installation, then click on Log In. Note for the future that on this page there is a link you can use to retrieve your lost password. Whenever you log in, you'll be taken directly to the Dashboard of the WP Admin.

You'll see a lot of information and options here. We will focus on the items that we need to touch upon right after a successful installation. First, let's take a brief look at the top of the WP Admin and the Dashboard. The very top bar, which I'll refer to as the top bar, is mostly a medium grey and contains:

- A link to the front page of your WordPress website
- A rollover drop-down menu with handy links to New Post, Drafts, New Page, Upload, and Comments
- Your username, linked to your profile
- A link to log out

You'll also notice the Screen Options tab, which appears on many screens within the WP Admin. If you click on it, it will slide down a checklist of items on the page to show or hide; it is different on each page. I encourage you to play around with it by checking and unchecking items as you find you need them or don't need them.

On the left, of course, is the main menu. You can click on any word in the main menu to be taken to the main page for that section, or you can click on the rollover arrow to slide down the subpages for that section. For example, if you click on the arrow next to Settings, you'll see the subpages for the Settings section. The top bar and the main menu exist on every page within the WP Admin.

The main section on the right contains information for the current page you're on. In this case, we're on the Dashboard. It contains boxes that have a variety of information about your blog, and about WordPress in general.

Before WordPress 3, the first thing you'd have to do would be to change the password to something easier to remember. However, now that you can choose your password during installation, this is no longer necessary. Let's jump right to general site settings.


Yearly Holiday List Calendar Developed using jQuery, AJAX, XML and CSS3

Packt
01 Feb 2011
5 min read
About Holiday List Calendar

This widget will help you know the list of holidays in various countries. In this example, I have listed holidays pertaining to only two countries, namely India and the US. You can use this widget on your websites or blogs to tell your readers about holidays and, if necessary, their importance.

Adding jQuery to your page

Download the latest version of jQuery from the jQuery site. You can reference a local copy of jQuery, after downloading it, with a <script> tag in the page, or you can directly reference the remote copy from jQuery or the Google Ajax API.

Pre-requisite knowledge

In order to understand the code, you should have some knowledge of AJAX concepts and XML structure, basic knowledge of HTML, advanced knowledge of CSS3, and, most importantly, an advanced level of jQuery coding.

Ingredients used

- jQuery [advanced level]
- CSS3
- HTML
- XML
- Photoshop [used for coming up with the UI design]

HTML code

<div id="calendar-container">
  <div class="nav-container">
    <span>Year<br/><select id="selectYear"></select></span>
    <span class="last">Month<br />
      <a href="#" id="prevBtn" class="button gray"><</a>
      <select id="selectMonth"></select>
      <a href="#" id="nextBtn" class="button gray">></a>
    </span>
  </div>
  <div class="data-container"></div>
</div>

XML structure

<?xml version="1.0" encoding="ISO-8859-1"?>
<calendar>
  <year whichyear="2010" id="1">
    <month name="January" id="1">
      <country name="India">
        <holidayList date="Jan 01st" day="Friday"><![CDATA[Sample Data]]></holidayList>
        <holidayList date="Jan 14th" day="Friday"><![CDATA[Sample Data]]></holidayList>
        <holidayList date="Jan 26th" day="Wednesday"><![CDATA[Sample Data]]></holidayList>
      </country>
      <country name="US">
        <holidayList date="Jan 01st" day="Saturday"><![CDATA[Sample Data]]></holidayList>
        <holidayList date="Jan 17th" day="Monday"><![CDATA[Sample Data]]></holidayList>
      </country>
    </month>
    <month name="January" id="1">
      ---------------------
      ---------------------
      ---------------------
    </month>
  </year>
</calendar>

CSS code

body {
  margin: 0;
  padding: 0;
  font-family: "Lucida Grande", "Lucida Sans", sans-serif;
  font-size: 100%;
  background: #333333;
}
#calendar-container {
  width: 370px;
  padding: 5px;
  border: 1px solid #bcbcbc;
  margin: 0 auto;
  background-color: #cccccc;
  -webkit-border-radius: .5em;
  -moz-border-radius: .5em;
  border-radius: .5em;
  -webkit-box-shadow: 0 1px 4px rgba(0,0,0,.2);
  -moz-box-shadow: 0 1px 4px rgba(0,0,0,.2);
  box-shadow: 0 1px 4px rgba(0,0,0,.2);
}
.nav-container {
  padding: 5px;
}
.nav-container span {
  display: inline-block;
  text-align: left;
  padding-right: 15px;
  border-right: 1px solid #828282;
  margin-right: 12px;
  text-shadow: 1px 1px 1px #ffffff;
  font-weight: bold;
}
.nav-container span.last {
  padding-right: 0px;
  border-right: none;
  margin-right: 0px;
}
.data-container {
  font-family: Arial, Helvetica, sans-serif;
  font-size: 14px;
}
#selectMonth { width: 120px; }
.data-container ul { margin: 0px; padding: 0px; }
.data-container ul li {
  list-style: none;
  padding: 5px;
}
.data-container ul li.list-header {
  border-bottom: 1px solid #bebebe;
  border-right: 1px solid #bebebe;
  background-color: #eae9e9;
  -webkit-border-radius: .2em .2em 0 0;
  -moz-border-radius: .2em .2em 0 0;
  border-radius: .3em .3em 0 0;
  background: -moz-linear-gradient(center top, #eae9e9, #d0d0d0) repeat scroll 0 0 transparent;
  margin-top: 5px;
  text-shadow: 1px 1px 1px #ffffff;
}
.data-container ul li.padding-left-10px {
  background-color: #EEEEEE;
  border-bottom: 1px solid #BEBEBE;
  border-right: 1px solid #BEBEBE;
  font-size: 12px;
}
/* button ---------------------------------------------- */
.button {
  font-size: 25px;
  font-weight: 700;
  display: inline-block;
  zoom: 1; /* zoom and *display = ie7 hack for display:inline-block */
  *display: inline;
  vertical-align: bottom;
  margin: 0 2px;
  outline: none;
  cursor: pointer;
  text-align: center;
  text-decoration: none;
  text-shadow: 1px 1px 1px #555555;
  padding: 0px 10px 3px 10px;
  -webkit-border-radius: .2em;
  -moz-border-radius: .2em;
  border-radius: .2em;
  -webkit-box-shadow: 0 1px 2px rgba(0,0,0,.2);
  -moz-box-shadow: 0 1px 2px rgba(0,0,0,.2);
  box-shadow: 0 1px 2px rgba(0,0,0,.2);
}
.button:hover { text-decoration: none; }
.button:active { position: relative; top: 1px; }
select {
  -webkit-border-radius: .2em;
  -moz-border-radius: .2em;
  border-radius: .2em;
  -webkit-box-shadow: 0 1px 2px rgba(0,0,0,.2);
  -moz-box-shadow: 0 1px 2px rgba(0,0,0,.2);
  box-shadow: 0 1px 2px rgba(0,0,0,.2);
  padding: 5px;
  font-size: 16px;
  border: 1px solid #4b4b4b;
}
/* color styles ---------------------------------------------- */
.gray {
  color: #e9e9e9;
  border: solid 1px #555;
  background: #6e6e6e;
  background: -webkit-gradient(linear, left top, left bottom, from(#888), to(#575757));
  background: -moz-linear-gradient(top, #888, #575757);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#888888', endColorstr='#575757');
}
.gray:hover {
  background: #616161;
  background: -webkit-gradient(linear, left top, left bottom, from(#757575), to(#4b4b4b));
  background: -moz-linear-gradient(top, #757575, #4b4b4b);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#757575', endColorstr='#4b4b4b');
}
.gray:active {
  color: #afafaf;
  background: -webkit-gradient(linear, left top, left bottom, from(#575757), to(#888));
  background: -moz-linear-gradient(top, #575757, #888);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#575757', endColorstr='#888888');
}
.grayDis {
  color: #999999;
  background: -webkit-gradient(linear, left top, left bottom, from(#575757), to(#888));
  background: -moz-linear-gradient(top, #575757, #888);
  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#575757', endColorstr='#888888');
}
h2 {
  color: #ffffff;
  text-align: center;
  margin: 10px 0px;
}
#header {
  text-align: center;
  font-size: 1em;
  font-family: "Helvetica Neue", Helvetica, sans-serif;
  padding: 1px;
  margin: 10px 0px 80px 0px;
  background-color: #575757;
}
.ad {
  width: 728px;
  height: 90px;
  margin: 50px auto 10px;
}
#footer {
  width: 340px;
  margin: 0 auto;
}
#footer p {
  color: #ffffff;
  font-size: .70em;
  margin: 0;
}
#footer a { color: #15ADD1; }


Inkscape FAQs

Packt
31 Jan 2011
4 min read
Have you got questions on Inkscape you want answers for? You've come to the right place. Whether you're new to the web design software or there are a couple of issues puzzling you, we've put together this FAQ to answer some of the most common Inkscape queries.

What is Inkscape?

Inkscape is a free, open source program that creates vector-based graphics for use in web, print, and screen design, as well as interface and logo creation and material cutting. Its capabilities are similar to those of commercial products such as Adobe Illustrator, Macromedia Freehand, and CorelDraw, and it can be used for any number of practical purposes. It is software for web designers who want to add attractive visual elements to their websites.

What license is Inkscape released under?

Inkscape is a free, open source program developed by a group of volunteers under the GNU General Public License (GPL). You not only get a free download, but you can use the program to create items, freely distribute them, modify the program itself, and share that modified program with others.

What platforms does Inkscape run on?

Inkscape is available for download for the Windows, Macintosh, Linux, and Solaris operating systems.

Where can you download Inkscape from?

Go to the official Inkscape website and download the appropriate version of the software for your computer.

How do you run Inkscape on the Mac OS X operating system?

On Mac OS X, Inkscape typically runs under X11, an implementation of the X Window System software that makes it possible to run X11-based applications on Mac OS X. The X11 application has shipped with Mac OS X since version 10.5. When you open Inkscape on a Mac, it will first open X11 and run Inkscape within that program. Some shortcut key options are lost, but all functionality is present using menus and toolbars.

Is the X11 application part of the Mac OS X operating system?

It is if you have Mac OS X version 10.5 or above. If you have a previous version of Mac OS X, you can download the X11 application package 2.4.0 or greater from this website: http://xquartz.macosforge.org/trac/wiki/X112.4.0.

What is the interface of Inkscape like?

The Inkscape interface is based on the GNOME UI standard, which uses visual cues and feedback for its icons. For example:

- Hovering your mouse over any icon displays a pop-up description of the icon.
- If an icon has a dark gray border, it is active and can be used.
- If an icon is grayed out, it is not currently available to use with the current selection.
- All icons that are in execution mode (or busy) are covered by a dark shadow. This signifies that the application is busy and won't respond to any edit request.

There is a Notification Display on the main screen that shows dynamic help messages about key shortcuts and basic information on how to use the Inkscape software in its current state, based on which objects and tools are selected. Within the main screen there are also the main menu; command, snap, and status bars; tool controls; and a palette bar.

What are paths?

Paths have no pre-defined lengths or widths. They are arbitrary in nature and come in three basic types: open paths (have two ends), closed paths (have no ends, like a circle), and compound paths (use a combination of two open and/or closed paths). In Inkscape there are a few ways to make paths: the Pencil (Freehand), Bezier (Pen), and Calligraphy tools, all of which are found in the tool box.
Paths can also be created by converting a regular shape or text object into a path.

What shapes can be created in Inkscape?

Inkscape can also create shapes that are part of the SVG standard:

- Rectangles and squares
- 3D boxes
- Circles, ellipses, and arcs
- Stars
- Polygons
- Spirals

To create any of these shapes, see the following screenshot. Select (click) the shape tool's icon in the tool box, and then draw the shape on the canvas by clicking, holding, and dragging it to the size you want.

What is slicing?

Slicing is a term used to describe breaking an image created in a graphics program apart so that it can be re-assembled in HTML to create a web page. To do this, we'll use the Web Slicer extension: from the main menu select Extensions | Web | Slicer | Create a slicer rectangle.

How to Create an Image Gallery in WordPress 3

Packt
31 Jan 2011
5 min read
Let's get started.

Choosing a post or page

For my food blog, I'm going to create a new page named My Food Photos for my image gallery. You can always do this on an existing page or post. Note where I have left my cursor: I made sure to leave it in a spot on the page where I want my gallery to appear, that is, underneath my introductory text. (After creating this page, I will also navigate to Appearance | Menus to add it as a subpage under About.)

Uploading images

Now click on the Upload/Insert image icon and upload some photos (you can choose multiple photos at once). For each photo you upload, enter the title (and a caption if you'd like). When you're done, click on the Save All Changes button. You'll be taken to the Gallery tab, which will show all of the photos you've uploaded to be attached to this page. If you want to upload more photos at this point, just click on the From Computer tab at the top and upload another photo.

When you've uploaded all the photos you want (you can add more later), you may want to change the order of the photos. Just enter the numbers 1 through 6 (or however many photos you have) in the Order column, and make sure you click Save All Changes. On most computers you can, instead of entering numbers, simply drag and drop images; WordPress will then generate the order numbers for you automatically.

Then, you can review the Gallery Settings. There are a number of ways to use the gallery, but there is a single approach that I've found works for most people. You can experiment on your own with other settings and plugins, of course! I suggest you set the Link thumbnails to option to Image File instead of Attachment Page. You can leave the other settings as they are for now. Once all of your settings are done, click on the Insert gallery button. The overlay box will disappear, and you'll see your post again. The page will have the gallery icon placeholder in the spot where you left the cursor. If you're in the HTML view, you'll see the gallery shortcode in that spot.

Note that because I'm uploading these photos while adding/editing this particular page, all of these photos will be "attached" to this page. That's how I know they'll be in the gallery on this page. Other photos that I've uploaded to other posts or pages will not be included in this gallery.

Learning more

The [gallery] shortcode is quite powerful! For example, you can give it a list of Media ID numbers (any Media item in your Media Library) to include, or you can tell it to exclude certain items that are attached to this post or page; for example, [gallery include="23,39,45"] or [gallery exclude="21"]. You can also control whether the Thumbnail, Medium, or Large version of each image shows. There is more! Take a look at the codex to get all of the parameters: http://codex.wordpress.org/Gallery_Shortcode

Now, publish or save your page.
When you view the page, there's a gallery of your images. If you click on one of the images, you'll be linked to the larger version of the image. Now, this is not ideal for navigating through a gallery of images, so let's add a plugin that will streamline your gallery.

Using a lightbox plugin

A lightbox effect is when the existing page content fades a little and a new item appears on top of the existing page. You've seen this effect already in the WP Admin when you clicked on Add/Insert image. We can easily add the same effect to your galleries by adding a plugin. There are a number of lightbox plugins available, but the one I like these days uses jQuery Colorbox. Find this plugin, either through the WP Admin or in the Plugins Repository (http://wordpress.org/extend/plugins/jquery-colorbox/), and install it.

Once you've activated the plugin, navigate to Settings | jQuery Colorbox. Use the Theme pull-down to choose the theme you want (the preview image will update to give you an idea of what it will look like); I've chosen Theme #4. Then you can choose to either Automate jQuery Colorbox for all images or Automate jQuery Colorbox for images in WordPress galleries. Whether to automate for all images is up to you; I certainly suggest you automate for images in galleries. You can experiment with the other settings on this page (if you routinely upload very large images, you'll want to use the areas that let you set the maximum size of the colorbox and resize images automatically). You'll want to disable the warning (the very last check box on the page). Then, click on Save Changes.

Now, when I go to my image gallery page and click on the first image, the colorbox is activated, and I can click Next and Back to navigate through the images.

Summary

In this article we saw how to add and manage built-in image galleries to display photos and other images.

Further resources on this subject:

- WordPress 3 Complete [Book]
- WordPress 2.9 E-Commerce [Book]
- Getting Started with WordPress 3 [Article]
- How to Write a Widget in WordPress 3 [Article]
- Understanding jQuery and WordPress Together [Article]
- Tips and Tricks for Working with jQuery and WordPress [Article]


Compression Formats in Linux Shell Script

Packt
31 Jan 2011
6 min read
Compressing with gunzip (gzip)

gzip is a commonly used compression format on GNU/Linux platforms. Utilities such as gzip, gunzip, and zcat are available to handle gzip compression file types. gzip can be applied to a file only; it cannot archive directories and multiple files. Hence we use a tar archive and compress it with gzip. When multiple files are given as input, it will produce several individually compressed (.gz) files. Let's see how to operate with gzip.

How to do it...

In order to compress a file with gzip, use the following command:

$ gzip filename
$ ls
filename.gz

It will remove the file and produce a compressed file called filename.gz.

Extract a gzip compressed file as follows:

$ gunzip filename.gz

It will remove filename.gz and produce an uncompressed version of the file.

In order to list the properties of a compressed file, use:

$ gzip -l test.txt.gz
compressed  uncompressed  ratio  uncompressed_name
        35             6 -33.3%  test.txt

The gzip command can read a file from stdin and also write a compressed file to stdout. Read from stdin and write to stdout as follows:

$ cat file | gzip -c > file.gz

The -c option is used to specify output to stdout. We can also specify the compression level for gzip: use the --fast or --best option for low and high compression ratios, respectively.

There's more...

The gzip command is often used with other commands. It also has advanced options to specify the compression ratio. Let's see how to work with these features.

Gzip with tarball

We usually use gzip with tarballs. A tarball can be compressed by using the -z option passed to the tar command while archiving and extracting. You can create gzipped tarballs using the following methods:

Method 1:

$ tar -czvvf archive.tar.gz [FILES]

Or:

$ tar -cavvf archive.tar.gz [FILES]

The -a option specifies that the compression format should automatically be detected from the extension.

Method 2: First, create a tarball:

$ tar -cvvf archive.tar [FILES]

Then compress it after tarballing as follows:

$ gzip archive.tar

If many files (a few hundred) are to be archived in a tarball and compressed, we use Method 2 with a few changes. The issue with giving many files as command arguments to tar is that it can accept only a limited number of files from the command line. In order to solve this issue, we can create a tar file by adding files one by one using a loop with the append option (-r) as follows:

FILE_LIST="file1 file2 file3 file4 file5"
for f in $FILE_LIST; do
  tar -rvf archive.tar $f
done
gzip archive.tar

In order to extract a gzipped tarball, use:

$ tar -xzvvf archive.tar.gz -C extract_directory

where -x is for extraction and -z is the gzip specification. Or:

$ tar -xavvf archive.tar.gz -C extract_directory

In the above command, the -a option is used to detect the compression format automatically.

zcat – reading gzipped files without extracting

zcat is a command that can be used to dump an extracted file from a .gz file to stdout without manually extracting it.
The .gz file remains as before, but the extracted content is dumped to stdout as follows:

$ ls
test.gz
$ zcat test.gz
A test file
# file test contains a line "A test file"
$ ls
test.gz

Compression ratio

We can specify the compression ratio, which is available in the range 1 to 9, where 1 is the lowest but fastest, and 9 is the best but slowest. You can specify any ratio in that range as follows:

$ gzip -9 test.img

This will compress the file to the maximum.

Compressing with bunzip2 (bzip2)

bzip2 is another compression technique very similar to gzip. bzip2 typically produces smaller (more compressed) files than gzip, and it comes with all Linux distributions. Let's see how to use bzip2.

How to do it...

In order to compress with bzip2, use:

$ bzip2 filename
$ ls
filename.bz2

It will remove the file and produce a compressed file called filename.bz2.

Extract a bzipped file as follows:

$ bunzip2 filename.bz2

It will remove filename.bz2 and produce an uncompressed version of filename.

bzip2 can read a file from stdin and also write a compressed file to stdout. In order to read from stdin and write to stdout, use:

$ cat file | bzip2 -c > file.tar.bz2

-c is used to specify output to stdout.

We usually use bzip2 with tarballs. A tarball can be compressed by using the -j option passed to the tar command while archiving and extracting. Creating a bzipped tarball can be done by using the following methods:

Method 1:

$ tar -cjvvf archive.tar.bz2 [FILES]

Or:

$ tar -cavvf archive.tar.bz2 [FILES]

The -a option specifies to automatically detect the compression format from the extension.

Method 2: First create the tarball:

$ tar -cvvf archive.tar [FILES]

Then compress it after tarballing:

$ bzip2 archive.tar

If we need to add hundreds of files to the archive, the above commands may fail. To fix that issue, use a loop to append files to the archive one by one using the -r option.

Extract a bzipped tarball as follows:

$ tar -xjvvf archive.tar.bz2 -C extract_directory

In this command, -x is used for extraction, -j is the bzip2 specification, and -C specifies the directory to which the files are to be extracted. Or, you can use the following command:

$ tar -xavvf archive.tar.bz2 -C extract_directory

-a will automatically detect the compression format.

There's more...

bzip2 has several additional options to carry out different functions. Let's go through a few of them.

Keeping input files without removing them

While using bzip2 or bunzip2, it will remove the input file and produce a compressed output file. We can prevent it from removing input files by using the -k option. For example:

$ bunzip2 test.bz2 -k
$ ls
test test.bz2

Compression ratio

We can specify the compression ratio, which is available in the range of 1 to 9 (where 1 is the least compression but fast, and 9 is the highest possible compression but much slower). For example:

$ bzip2 -9 test.img

This command provides maximum compression.
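To see the size difference between the two formats on your own data, here is a small sketch; the file name sample.log is a placeholder for any reasonably large text file you have at hand:

# Work on copies so the original file is left untouched
cp sample.log sample1.log
cp sample.log sample2.log

# Compress each copy at the maximum ratio
gzip -9 sample1.log     # produces sample1.log.gz
bzip2 -9 sample2.log    # produces sample2.log.bz2

# Compare the resulting sizes; bzip2 is usually smaller on text-heavy input
ls -l sample1.log.gz sample2.log.bz2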


Linux Shell Script: Logging Tasks

Packt
28 Jan 2011
7 min read
Linux Shell Scripting Cookbook

Collecting information about the operating environment, logged in users, the time for which the computer has been powered on, and any boot failures is very helpful. This recipe will go through a few commands used to gather information about a live machine.

Getting ready

This recipe will introduce the commands who, w, users, uptime, last, and lastb.

How to do it...

To obtain information about users currently logged in to the machine, use:

$ who
slynux   pts/0   2010-09-29 05:24 (slynuxs-macbook-pro.local)
slynux   tty7    2010-09-29 07:08 (:0)

Or:

$ w
07:09:05 up  1:45,  2 users,  load average: 0.12, 0.06, 0.02
USER     TTY     FROM    LOGIN@   IDLE  JCPU PCPU WHAT
slynux   pts/0   slynuxs 05:24  0.00s  0.65s 0.11s sshd: slynux
slynux   tty7    :0      07:08  1:45m  3.28s 0.26s gnome-session

These commands provide information about logged in users, the pseudo TTY used by each user, the command currently executing from the pseudo terminal, and the IP address from which each user has logged in. If it is localhost, the hostname is shown instead. The outputs of who and w differ slightly; the w command provides more detail than who.

TTY is the device file associated with a text terminal. When a terminal is newly spawned by the user, a corresponding device file is created in /dev/ (for example, /dev/pts/3). The device path for the current terminal can be found by executing the command tty.

In order to list the users currently logged in to the machine, use:

$ users
slynux slynux slynux hacker

If a user has opened multiple pseudo terminals, that many entries are shown for the same user. In the above output, the user slynux has opened three pseudo terminals. The easiest way to print unique users is to use tr, sort, and uniq to filter, as follows:

$ users | tr ' ' '\n' | sort | uniq
hacker
slynux

We have used tr to replace each space with a newline character ('\n'). Then the combination of sort and uniq produces a unique entry for each user.

In order to see how long the system has been powered on, use:

$ uptime
21:44:33 up  3:17,  8 users,  load average: 0.09, 0.14, 0.09

The time that follows the word up indicates how long the system has been powered on. We can write a simple one-liner to extract only the uptime. The load average in uptime's output is a parameter that indicates the system load.

In order to get information about previous boots and logged in sessions, use:

$ last
slynux tty7         :0              Tue Sep 28 18:27   still logged in
reboot system boot 2.6.32-21-generi Tue Sep 28 18:10 - 21:46 (03:35)
slynux pts/0      :0.0          Tue Sep 28 05:31 - crash (12:39)

The last command provides information about logged in sessions. It is actually a log of system logins, consisting of information such as the tty from which a user logged in, the login time, the status, and so on. The last command uses the log file /var/log/wtmp for its input data. It is also possible to explicitly specify the log file for the last command by using the -f option.
For example:

$ last -f /var/log/wtmp

In order to obtain information about login sessions for a single user, use:

$ last USER

Get information about reboot sessions as follows:

$ last reboot
reboot system boot 2.6.32-21-generi Tue Sep 28 18:10 - 21:48 (03:37)
reboot system boot 2.6.32-21-generi Tue Sep 28 05:14 - 21:48 (16:33)

In order to get information about failed user login sessions, use:

# lastb
test     tty8    :0          Wed Dec 15 03:56 - 03:56  (00:00)
slynux tty8    :0          Wed Dec 15 03:55 - 03:55  (00:00)

You should run lastb as the root user.

Logging access to files and directories

Logging of file and directory access is very helpful for keeping track of changes that are happening to files and folders. This recipe will describe how to log such accesses.

Getting ready

The inotifywait command can be used to gather information about file accesses. It doesn't come by default with every Linux distro; you have to install the inotify-tools package by using a package manager. It also requires the Linux kernel to be compiled with inotify support. Most new GNU/Linux distributions come with inotify enabled in the kernel.

How to do it...

Let's walk through the shell script to monitor directory access:

#!/bin/bash
#Filename: watchdir.sh
#Description: Watch directory access
path=$1
#Provide path of directory or file as argument to script
inotifywait -m -r -e create,move,delete $path -q

A sample output is as follows:

$ ./watchdir.sh .
./ CREATE new
./ MOVED_FROM new
./ MOVED_TO news
./ DELETE news

How it works...

The previous script will log create, move, and delete events for files and folders in the given path. The -m option is given for monitoring the changes continuously, rather than exiting after an event happens. -r enables a recursive watch of the directories. -e specifies the list of events to be watched. -q reduces the verbose messages and prints only the required ones. We can add or remove events from the list; important events available include access, modify, attrib, create, move, delete, open, and close. The output can be redirected to a log file, as shown in the sketch below.
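Since the output can be redirected, the watcher is easy to turn into a simple access logger. The following sketch (the script name watchdir_log.sh and the default logfile name are illustrative) timestamps each event before appending it to a logfile:

#!/bin/bash
#Filename: watchdir_log.sh (illustrative)
#Description: Watch directory access, append timestamped entries to a log
path=$1
logfile=${2:-access.log}

inotifywait -m -r -e create,move,delete "$path" -q |
while read -r line; do
    # Prefix each event with the time it was observed
    echo "$(date '+%Y-%m-%d %H:%M:%S') $line" >> "$logfile"
done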
Logfile management with logrotate

Logfiles are essential components of a Linux system's maintenance. Logfiles help to keep track of events happening on the different services of the system. This helps the sysadmin to debug issues, and it also provides statistics on events happening on the live machine. Management of logfiles is required because, as time passes, a logfile grows bigger and bigger. Therefore, we use a technique called rotation to limit the size of the logfile: when the logfile grows beyond the limit, it is stripped and the older entries are stored in an archive. Hence older logs can be kept for future reference. Let's see how to rotate logs and store them.

Getting ready

logrotate is a command every Linux system admin should know. It helps to restrict the size of a logfile to a given SIZE. A logger appends information to a log file, so the most recent entries appear at the bottom. logrotate scans specific logfiles according to its configuration file. It will keep the last 100 kilobytes (for example, for a specified SIZE = 100k) in the logfile and move the rest of the data (older log data) to a new file, logfile_name.1. When more entries accumulate in that file (logfile_name.1) and it exceeds the SIZE, logrotate updates the logfile with recent entries and creates logfile_name.2 with the older logs. This process can easily be configured with logrotate. logrotate can also compress the older logs as logfile_name.1.gz, logfile_name.2.gz, and so on. The option for whether older log files are to be compressed is available in the logrotate configuration.

How to do it...

logrotate has its configuration directory at /etc/logrotate.d. If you list the contents of this directory, you will find configurations for many other logfiles. We can write a custom configuration for our logfile (say /var/log/program.log) as follows:

$ cat /etc/logrotate.d/program
/var/log/program.log {
missingok
notifempty
size 30k
compress
weekly
rotate 5
create 0600 root root
}

Now the configuration is complete. /var/log/program.log in the configuration specifies the logfile path; old logs will be archived in the same directory path. Let's see what each of these parameters does:

missingok - ignore the logfile if it is missing, and return without raising an error
notifempty - only rotate the log if the source logfile is not empty
size 30k - limit the size to which the logfile can grow before a rotation is made (here, 30 kilobytes)
compress - enable compression of older logs with gzip
weekly - the interval at which rotation is performed
rotate 5 - the number of old copies of rotated logfiles to keep
create 0600 root root - the mode, user, and group of the logfile to be created after rotation

The options specified are optional; we can specify only the required options in the logrotate configuration file. There are numerous options available with logrotate. Please refer to the man pages (http://linux.die.net/man/8/logrotate) for more information on logrotate.
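A configuration like this normally takes effect on the next scheduled logrotate run, usually started by cron. Assuming the configuration file shown above exists, it can also be exercised by hand; the following invocations are a sketch of that:

$ sudo logrotate -d /etc/logrotate.d/program

The -d flag performs a dry run: the configuration is parsed and logrotate reports what it would do without touching the logfile. To force an immediate rotation even though the size and time conditions have not yet been met, use -f:

$ sudo logrotate -f /etc/logrotate.d/program

After a forced run, /var/log/program.log is recreated (per the create directive) and the older entries end up in /var/log/program.log.1.gz.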
Linux Shell Script: Monitoring Activities

Packt
28 Jan 2011
8 min read
Linux Shell Scripting Cookbook
Solve real-world shell scripting problems with over 110 simple but incredibly effective recipes:
Master the art of crafting one-liner command sequences to perform tasks such as text processing, digging data out of files, and a lot more
Practical problem-solving techniques for the latest Linux platform
Packed with easy-to-follow examples that exercise all the features of the Linux shell scripting language
Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Disk usage hacks

Disk space is a limited resource. We frequently perform disk usage calculations on hard disks or other storage media to find out the free space available. When free space becomes scarce, we need to find large files to delete or move in order to create free space. Disk usage manipulations are commonly used in shell scripting contexts. This recipe illustrates various commands used for disk manipulations, and problems where disk usage can be calculated with a variety of options.

Getting ready

df and du are the two significant commands used for calculating disk usage in Linux. The command df stands for disk free and du stands for disk usage. Let's see how we can use them to perform various tasks that involve disk usage calculation.

How to do it...

To find the disk space used by a file (or files), use:

$ du FILENAME1 FILENAME2 ..

For example:

$ du file.txt
4

The result is, by default, shown in 1024-byte block units rather than bytes, which is why even a tiny file reports a size of 4.

In order to obtain the disk usage for all files inside a directory, along with the individual disk usage for each file shown on each line, use:

$ du -a DIRECTORY

-a outputs results for all files in the specified directory or directories recursively.

Running du DIRECTORY will output a similar result, but it will show only the size consumed by subdirectories; it does not show the disk usage for each of the files. For printing the disk usage of individual files, -a is mandatory.

For example:

$ du -a test
4  test/output.txt
4  test/process_log.sh
4  test/pcpu.sh
16  test

An example of using du DIRECTORY is as follows:

$ du test
16  test

There's more...

Let's go through additional usage practices for the du command.

Displaying disk usage in KB, MB, or Blocks

By default, the disk usage command displays the total blocks used by a file. A more human-readable format expresses disk usage in standard units such as KB, MB, or GB. In order to print the disk usage in a display-friendly format, use -h as follows:

du -h FILENAME

For example:

$ du -sh test/pcpu.sh
4.0K  test/pcpu.sh
# Multiple file arguments are accepted

Or:

# du -h DIRECTORY
$ du -h hack/
16K  hack/
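Before moving on to finding large files, the df side of the pair deserves a quick illustration. A plain df -h prints the free space of every mounted filesystem in human-readable units, and the following sketch (the script name and the 90% threshold are arbitrary examples) warns when any filesystem crosses a usage threshold:

#!/bin/bash
#Filename: check_space.sh (illustrative)
#Description: Show free space, warn about nearly full filesystems
df -h

# -P forces one line per filesystem; column 5 is Use%, column 6 is the mount point
df -P | awk 'NR > 1 {
    gsub(/%/, "", $5)                 # strip the % sign from the use column
    if ($5 + 0 > 90)
        print "WARNING: " $6 " is " $5 "% full"
}'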
Finding the 10 largest size files from a given directory

Finding large files is a task we come across regularly; we often need to delete or move them. We can easily find large files using the du and sort commands. The following one-liner achieves this task:

$ du -ak SOURCE_DIR | sort -nrk 1 | head

Here -a makes du report all files and directories, so du traverses SOURCE_DIR and calculates the size of every file. The first column of the output contains the size in kilobytes, since -k is specified, and the second column contains the file or folder name. sort performs a numerical sort on column 1 and reverses it. head keeps only the first 10 lines of the output.

For example:

$ du -ak /home/slynux | sort -nrk 1 | head -n 4
50220 /home/slynux
43296 /home/slynux/.mozilla
43284 /home/slynux/.mozilla/firefox
43276 /home/slynux/.mozilla/firefox/8c22khxc.default

One of the drawbacks of the above one-liner is that it includes directories in the result. When we need to find only the largest files, and not directories, we can improve the one-liner to output only files, as follows:

$ find . -type f -exec du -k {} \; | sort -nrk 1 | head

We used find to pass only regular files to du, rather than allowing du to traverse recursively by itself.

Calculating execution time for a command

While testing an application or comparing different algorithms for a given problem, the execution time taken by a program is very critical. A good algorithm should execute in the minimum amount of time. There are several situations in which we need to monitor the time taken for execution by a program. For example, while learning about sorting algorithms, how do you practically state which algorithm is faster? The answer is to measure the execution time for the same data set. Let's see how to do it.

How to do it...

time is a command that is available on any UNIX-like operating system. You can prefix the command whose execution time you want to measure with time, for example:

$ time COMMAND

The command will execute and its output will be shown. Along with the output, the time command writes the time taken to stderr. An example is as follows:

$ time ls
test.txt
next.txt
real    0m0.008s
user    0m0.001s
sys     0m0.003s

It will show the real, user, and system times for execution. The three different times can be defined as follows:

Real is wall clock time: the time from start to finish of the call. This is all elapsed time, including time slices used by other processes and the time the process spends blocked (for example, while waiting for I/O to complete).

User is the amount of CPU time spent in user-mode code (outside the kernel) within the process. This is only the actual CPU time used in executing the process. Other processes, and the time the process spends blocked, do not count towards this figure.

Sys is the amount of CPU time spent in the kernel within the process. This means the CPU time spent in system calls within the kernel, as opposed to library code, which still runs in user space. Like user time, this is only the CPU time used by the process.

An executable binary of the time command is available at /usr/bin/time, and a shell built-in named time also exists. When we run time, the shell built-in is called by default. The shell built-in time has limited options; hence we should use the absolute path of the executable (/usr/bin/time) for the additional functionality.

We can write the time statistics to a file using the -o filename option as follows:

$ /usr/bin/time -o output.txt COMMAND

The filename should always appear after the -o flag.

In order to append the time statistics to a file without overwriting, use the -a flag along with the -o option as follows:

$ /usr/bin/time -a -o output.txt COMMAND

We can also format the time outputs using format strings with the -f option. A format string consists of parameters corresponding to specific options, prefixed with %.
The format strings for real time, user time, and sys time are as follows:

Real time: %e
User time: %U
sys time: %S

By combining parameter strings, we can create formatted output as follows:

$ /usr/bin/time -f "FORMAT STRING" COMMAND

For example:

$ /usr/bin/time -f "Time: %U" -a -o timing.log uname
Linux

Here %U is the parameter for user time.

When formatted output is produced, the output of the COMMAND being timed is written to standard output, while the formatted timing information is written to standard error (unless -o is used). We can therefore redirect the command's output using the regular redirection operator (>), and redirect the timing output using the (2>) error redirection operator. For example:

$ /usr/bin/time -f "Time: %U" uname > command_output.txt 2> time.log
$ cat time.log
Time: 0.00
$ cat command_output.txt
Linux

Many details regarding a process can be collected using the time command. The important details include the exit status, the number of signals received, the number of context switches made, and so on. Each detail can be displayed using a suitable format parameter. Some of the interesting parameters that can be used are:

%C - name and command-line arguments of the command being timed
%x - exit status of the command
%k - number of signals delivered to the process
%w - number of voluntary context switches
%c - number of involuntary context switches
%Z - system page size in bytes

For example, the page size can be displayed using the %Z parameter as follows:

$ /usr/bin/time -f "Page size: %Z bytes" ls > /dev/null
Page size: 4096 bytes

Here the output of the timed command is not required, and hence the standard output is directed to the /dev/null device in order to prevent it from being written to the terminal.
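Putting these options together, the following benchmarking sketch (the script name bench.sh and the log filename are illustrative) runs a given command several times and appends one formatted line per run to a log for later comparison:

#!/bin/bash
#Filename: bench.sh (illustrative)
#Description: Time a command N times, logging real/user/sys seconds per run
#Usage: ./bench.sh "sort bigfile.txt" 5
cmd=$1
runs=${2:-3}

for i in $(seq 1 "$runs"); do
    # -a -o appends one line per run; the command's own output is discarded
    /usr/bin/time -f "run $i: real=%e user=%U sys=%S" \
        -a -o bench.log bash -c "$cmd" > /dev/null
done

cat bench.log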
Why Ajax is a different type of software

Packt
27 Jan 2011
14 min read
Ajax is not a piece of software in the way we think about JavaScript or CSS being a piece of software. It's actually a lot more like an overlaid function. But what does that mean, exactly?

Why Ajax is a bit like human speech

Human speech is an overlaid function. What is meant by this is reflected in the answer to a question: "What part of the human body has the basic job of speech?" The tongue, for one answer, is used in speech, but it also tastes food and helps us swallow. The lungs and diaphragm, for another answer, perform the essential task of breathing. The brain cannot be overlooked, but it also does a great many other jobs. All of these parts of the body do something more essential than speech and, for that matter, all of these can be found among animals that cannot talk. Speech is something that is overlaid over organs that are there in the first place because of something other than speech.

Something similar to this is true for Ajax, which is not a technology in itself, but something overlaid on top of other technologies. Ajax, some people say, stands for Asynchronous JavaScript and XML, but that was a retroactive expansion. JavaScript was introduced almost a decade before people began seriously talking about Ajax. Not only is it technically possible to use Ajax without JavaScript (one can substitute VBScript at the expense of browser compatibility), but there are quite a few substantial reasons to use JavaScript Object Notation (JSON) in lieu of heavy-on-the-wire eXtensible Markup Language (XML). Performing the overlaid function of Ajax with JSON replacing XML is just as eligible to be considered full-fledged Ajax as a solution incorporating XML.

Ajax helps the client talk to the server

Ajax is a way of using client-side technologies to talk with a server and perform partial page updates. Updates may be to all or part of the page, or simply to data handled behind the scenes. It is an alternative to the older paradigm of having a whole page replaced by a new page loaded when someone clicks on a link or submits a form. Partial page updates, in Ajax, are associated with Web 2.0, while whole page updates are associated with Web 1.0; it is important to note that "Web 2.0" and "Ajax" are not interchangeable. Web 2.0 includes more decentralized control and contributions besides Ajax, and for some objectives it may make perfect sense to develop an e-commerce site that uses Ajax but does not open the door to the same kind of community contributions as Web 2.0.

Some of the key features common in Web 2.0 include:

Partial page updates, with JavaScript communicating with a server and rendering to a page
An emphasis on user-centered design
Enabling community participation to update the website
Enabling information sharing as core to what this communication allows

The concept of "partial page updates" may not sound very big, but part of its significance may be seen in an unintended effect. The original expectation of partial page updates was that it would enable web applications that were more responsive. The expectation was that if submitting a form would only change a small area of a page, using Ajax to load just the change would be faster than reloading the entire page for every minor change. That much was true, but once programmers began exploring, what they used Ajax for was not simply minor page updates, but making client-side applications that took on challenges more like those one would expect a desktop program to handle, and the more interesting Ajax applications usually became slower.
Again, this was not because you could not fetch part of the page and update it faster, but because programmers were trying to do things on the client side that simply were not possible under the older way of doing things, and were pushing the envelope on the concept of a web application and what web applications can do.

Which technologies does Ajax overlay?

Now let us look at some of the technologies where Ajax may be said to be overlaid.

JavaScript

JavaScript deserves pride of place, and while it is possible to use VBScript for Internet Explorer as much more than a proof of concept, for now if you are doing Ajax, it will almost certainly be Ajax running JavaScript as its engine. Your application will have JavaScript working with XMLHttpRequest; JavaScript working with HTML, XHTML, or HTML5; JavaScript working with the DOM; JavaScript working with CSS; JavaScript working with XML or JSON; and perhaps JavaScript working with other things.

While addressing a group of Django developers or Pythonistas, it would seem appropriate to open with, "I share your enthusiasm." On the other hand, while addressing a group of JavaScript programmers, in a few ways it is more appropriate to say, "I feel your pain." JavaScript is a language that has been discovered as a gem, but its warts were enough for it to be largely unappreciated for a long time. "Ajax is the gateway drug to JavaScript," as it has been said; however, JavaScript needs a gateway drug before people get hooked on it. JavaScript is an excellent language and a terrible language rolled into one.

Before discussing some of the strengths of JavaScript (and the language does have some truly deep strengths), I would like to say "I feel your pain" and discuss two quite distinct types of pain in the JavaScript language.

The first source of pain is some of the language decisions in JavaScript. The Wikipedia article says it was designed to resemble Java but be easier for non-programmers, a decision reminiscent of SQL and COBOL. The Java programmer who finds the C-family idiom of for(i = 0; i < 100; ++i) available will be astonished to find that functions clobber each other's assignments to i unless the variables are explicitly declared local to the function with var. There is more pain where that came from. The following two functions will not perform the naively expected mathematical calculation correctly; the assignments to i and result will clobber each other:

function outer()
{
    result = 0;
    for(i = 0; i < 100; ++i)
    {
        result += inner(i);
    }
    return result;
}

function inner(limit)
{
    result = 0;
    for(i = 0; i < limit; ++i)
    {
        result += i;
    }
    return result;
}

The second source of pain is quite different. It is a pain of inconsistent implementation: the pain of "Write once, debug everywhere." Strictly speaking, this is not JavaScript's fault; browsers are inconsistent. And it need not be a pain in the server-side use of JavaScript or other non-browser uses. However, it comes along for the ride for people who wish to use JavaScript to do Ajax. Cross-browser testing is a foundational practice in web development of any stripe; a good web page with semantic markup and good CSS styling that is developed on Firefox will usually look sane on Internet Explorer (or vice versa), even if not quite pixel-perfect. But program directly for the JavaScript implementation on one version of a browser, and you stand rather sharp odds of your application not working at all on another browser.
The most important object by far for Ajax is XMLHttpRequest. Not only may you have to do different things to get an XMLHttpRequest object in different browsers, or sometimes in different (common) versions of the same browser, but even when you have code that will get an XMLHttpRequest object, the objects you get can be incompatible, so that code that works on one will show strange bugs on another. Just because you have done the work of getting an XMLHttpRequest object in all of the major browsers, it doesn't mean you're home free.

Before discussing some of the strengths of the JavaScript language itself, it would be worth pointing out that a good library significantly reduces the second source of pain. Almost any sane library will provide a single, consistent way to get XMLHttpRequest functionality, and consistent behavior for the access it provides. In other words, one of the services provided by a good JavaScript library is much more uniform behavior, so that you are programming for only one model, or as close to one as the library can manage, and not, for instance, pasting conditional boilerplate code to do simple things that are handled differently by different browser versions, which often render surprisingly different interpretations of JavaScript. Many of the things we will see done well as we explore jQuery are also done well in other libraries.

We previously said that JavaScript is an excellent language and a terrible language rolled into one; what is to be said in favor of JavaScript? The list of faults above is hardly all that is wrong with JavaScript, and saying that libraries can dull the pain is not itself a great compliment. But in fact, something much stronger can be said for JavaScript: if you can figure out why Python is a good language, you can figure out why JavaScript is a good language.

I remember, when I was chasing pointer errors in what became 60,000 lines of C, teasing a fellow student for using Perl instead of a real language. It was clear in my mind that there were interpreted scripting languages, such as the bash scripting I used for minor convenience scripts, and then there were real languages, which were compiled to machine code. I was sure that a real language was identified with being compiled, among other things, and that power in a language was the sort of thing C traded in. (I wonder why he didn't ask me if he wasn't a real programmer because he didn't spend half his time chasing pointer errors.) Within the past year or so I've been asked if "Python is a real programming language or is just used for scripting," and something similar to the attitude shift I needed to appreciate Perl and Python is needed to properly appreciate JavaScript.

The name "JavaScript" is unfortunate; like calling Python an "Assembler Kit", it's a way to ask people not to see its real strengths. (Someone looking for tools for working on an assembler would be rather disgusted to buy an "Assembler Kit" and find Python inside. People looking for Java's strengths in JavaScript will almost certainly be disappointed.) JavaScript code may look like Java in an editor, but the resemblance is a façade. Besides Mocha (which had been renamed LiveScript) being renamed to JavaScript just when Netscape was announcing Java support in web browsers, JavaScript has been described as being descended from NewtonScript, Self, Smalltalk, and Lisp, as well as being influenced by Scheme, Perl, Python, C, and Java. What's under the Java façade is pretty interesting.
And, in the sense of the simplifying "façade" design pattern, JavaScript was marketed in a way almost guaranteed not to communicate its strengths to programmers. It was marketed as something that nontechnical people could add snippets of, in order to achieve minor, and usually annoying, effects on their web pages. It may not have been a toy language, but it sure was dressed up like one.

Python may not have functions clobbering each other's variables (at least not unless they are explicitly declared global), but Python and JavaScript are both multiparadigm languages that support object-oriented programming, and their versions of "object-oriented" have a lot in common, particularly as compared to (for instance) Java. In Java, an object's class defines its methods and the type of its fields, and this much is set in stone. In Python, an object's class defines what an object starts off as, but methods and fields can be attached and detached at will. In JavaScript, classes as such do not exist (unless simulated by a library such as Prototype), but an object can inherit from another object, making a prototype and, by implication, a prototype chain, and like Python it is dynamic in that fields can be attached and detached at will.

In Java, the instanceof keyword is important, as are class casts, associated with strong, static typing. Python doesn't have casts, and its isinstance() function is seen by some as a mistake. The concern is that Python, like JavaScript, is a duck-typing language: "If it looks like a duck, and it quacks like a duck, it's a duck!" In a duck-typing language, if you write a program that polls weather data, and there's a ForecastFromScreenscraper object that is several years old and screenscrapes an HTML page, you should be able to write a ForecastFromRSS object that gets the same information much more cleanly from an RSS feed. You should be able to use it as a drop-in replacement as long as you have the interface right. That is different from Java: code written to expect a ForecastFromScreenscraper object would break immediately if you handed it a ForecastFromRSS object.

Now, in fairness to Java, the "best practices" Java way to do it would probably separate out an IForecast interface, which would be implemented by both ForecastFromScreenscraper and later ForecastFromRSS, and Java has ways of allowing drop-in replacements if they have been explicitly foreseen and planned for. However, in duck-typed languages, the reality goes beyond the fact that, if the people in charge designed things carefully and used an interface for a particular role played by an object, you can make a drop-in replacement. In a duck-typed language, you can make a drop-in replacement for things that the original developers never imagined you would want to replace.

JavaScript's reputation is changing. More and more people are recognizing that there's more to the language than design flaws. More and more people are looking past the fact that JavaScript is packaged like Java, like packaging a hammer to give the impression that it is basically like a wrench. More and more people are looking past the silly "toy language" Halloween costume that JavaScript was stuffed into as a kid. One of the ways good programmers grow is by learning new languages, and JavaScript is not just the gateway to mainstream Ajax; it is an interesting language in itself.
With that much stated, we will be making a carefully chosen, selective use of JavaScript, rather than a language lover's exploration of the language overall. Much of our work will be with the jQuery library; if you have only programmed a little "bare JavaScript", discovering jQuery is a bit like discovering Python, in terms of a tool that cuts like a hot knife through butter. It takes learning, but it quickly yields power and interesting results, while leaving room to grow.

What is XMLHttpRequest in relation to Ajax?

The XMLHttpRequest object is the reason why the kind of games that can be implemented with Ajax technologies do not stop at clones of Tetris and other games that do not know or care whether they are attached to a network. They include massive multiplayer online role-playing games where the network is the computer. Without something like XMLHttpRequest, "Ajax chess" would probably mean a game of chess against a chess engine running in your browser's JavaScript engine; with XMLHttpRequest, "Ajax chess" is more likely man-to-man chess against another human player connected via the network. The XMLHttpRequest object is the object that lets Gmail, Google Maps, Bing Maps, Facebook, and many less famous Ajax applications deliver on Sun's promise: the network is the computer.

There are differences and some incompatibilities between different versions of XMLHttpRequest, and efforts are underway to advance "level-2-compliant" XMLHttpRequest implementations, featuring everything that is expected of an XMLHttpRequest object today and providing further functionality in addition, somewhat in the spirit of level 2 or level 3 CSS compliance. We will not be looking at level 2 efforts, but we will look at the baseline of what is expected as standard in most XMLHttpRequest objects.

The basic way that an XMLHttpRequest object is used is that the object is created or reused (the preferred practice usually being to reuse rather than create and discard a large number), a callback event handler is specified, the connection is opened, the data is sent, and then, when the network operation completes, the callback handler retrieves the response from the XMLHttpRequest and takes an appropriate action.

A bare-bones XMLHttpRequest object can be expected to have methods such as open(), send(), setRequestHeader(), abort(), getResponseHeader(), and getAllResponseHeaders(), and properties such as onreadystatechange, readyState, status, statusText, responseText, and responseXML.
Workflow and Automation for Records in Alfresco 3

Packt
25 Jan 2011
13 min read
Alfresco 3 Records Management
Comply with regulations and secure your organization's records with Alfresco Records Management:
Successfully implement your records program using Alfresco Records Management, fully certified for DoD-5015.2 compliance
The first and only book to focus exclusively on Alfresco Records Management
Step-by-step instructions describe how to identify records, organize records, and manage records to comply with regulatory requirements
Learn in detail about the software internals to get a jump-start on performing customizations

Current limitations in Records Management

The 3.3 release of Alfresco Records Management comes with some limitations on how rules and workflow can be used. Records Management requirements, and specifically the requirements for complying with the DoD 5015.2 specification, made it necessary for Alfresco developers, at least in this first release of Records Management, to make design decisions that involved limiting some of the standard Alfresco capabilities within Records Management. The idea that records are things that need to be locked down and made unalterable is at odds with some of the capabilities of rules and workflow. Proper integration of workflow and rules with Records Management requires that a number of scenarios be carefully worked through. Because of that, as of the Alfresco 3.3 release, both workflow and rules are not yet available for use in Records Management.

Another area that is lacking in the 3.3 release is the availability of client-side JavaScript APIs for automating Records Management functions. The implementation of Records Management exposes many webscripts for performing records functions, but that same records functionality hasn't been exposed via a client-side JavaScript API. It is expected that the capabilities of Records Management working alongside rules, workflow, and APIs will likely improve in future releases of Alfresco.

While the topic of this article is workflow and automation, the limitations we've just mentioned don't necessarily mean that there isn't anything left to discuss. Remember, Alfresco Records Management co-exists with the rest of Share, and rules and workflow are alive and well in standard Share. Also remember that, prior to being filed and declared as records, many records start out their lives as standard documents that require collaboration and versioning before they are finalized and turned into records. It is in these scenarios, prior to becoming a record, where the use of workflow often makes the most sense. With that in mind, let's look at the capabilities of rules and workflow within Alfresco Share and how these features can be used side-by-side with Records Management today, and also get a feel for how future releases of Alfresco Records Management might be able to apply these capabilities more directly.

Alfresco rules

The Alfresco rules engine provides an interface for implementing simple business rules for managing the processing and flow of content within an organization. Creating rules is easy to do. Alfresco rules were first available in the Alfresco Explorer client. While rules in the Explorer client were never really hard to apply, the new rules interface in Share makes the creation and use of rules even easier. Rules can be attached to a folder and triggered when content is moved into it, moved out of it, or updated.
The triggering of a rule causes an action to be run, which then operates on the folder or the contents within it. Filters can also be applied to the rules to limit the conditions under which the trigger will fire. The action that a rule runs can be one of many pre-defined actions, or it can be customized to run a user-defined script.

Rules are not available for Share Records Management Folders. You may find that it is possible to bypass this limitation by either navigating to records folders using the Repository browser in Share, or by using the JSF Explorer client to get to the Records Management folder. From those interfaces, it is possible to create rules for record folders, but it's not a good idea. Resist the temptation. It is very easy to accidentally corrupt records data by applying rules directly to records folders.

Defining a rule

While rules can't be applied directly to the folders of Records Management, it is possible to apply rules to folders outside of Records Management, which can then push and file documents into records folders. We'll see that once a document is moved into a records folder, rules can be used to update its metadata and even declare it as a record. To apply rules to a folder outside of the Records Management site, we select the Manage Rules action available for the folder. On the next screen, we click on the Create Rules link, and we then see a page for creating the definition of the rules for the folder.

A rule is defined by three main pieces of information:

The trigger event
Filters to limit the items that are processed
The action that is performed

Triggers for rules

Three different types of events can trigger a rule:

Creating or moving items into the folder
Updating items in the folder
Deleting or moving items from the folder

Filters for rules

By default, when an event occurs, the rule that is triggered applies the rule action to all items involved in the event. Filters can be applied to make rules more restrictive, limiting the items that will be processed by the rule. Filters are a collection of conditional expressions that are built using metadata properties associated with the items. There are actually two conditions. The first condition is a list of criteria for different metadata properties, all of which must be met. Similarly, the second condition is a list of criteria for metadata properties, none of which may hold. For example, in the screenshot shown below, there is a filter defined that applies a rule only if the document name begins with FTK_, if it is a Microsoft Word file, and if it does not contain the word Alfresco in the Description property.

By clicking on the + and - buttons to the right of each entry, new criteria can be added and existing criteria can be removed from each of the two sets. To help specify the filter criteria from the many possible properties available in Alfresco, a property browser lets the user navigate through the properties that can be used when specifying the criteria. The browser shows all available properties associated with aspect and type names.

Actions for rules

Actions are the operations that a rule runs when triggered. There is a fairly extensive library of actions that come standard with Alfresco and that can be used in the definition of a rule. Many of the available actions can be configured with parameters. This means that a lot of capability can be easily customized into a rule without needing to do any programming.
If the standard actions aren't sufficient to perform the desired task, an alternative is to write a custom server-side JavaScript that can be attached as the action to a rule and run when the rule is triggered. The complete Alfresco JavaScript API is available for use when writing the custom action. Actions that are available for assignment as rule actions are shown in the next table:

It is interesting to note that there are many Records Management functions in the list that are available as possible actions, even though rules themselves are not available to be applied directly to records folders.

Actions can accept parameters. For example, the Move and Copy actions allow users to select the target destination parameter for the action. Using a pop-up in the Rules Manager, the user can find a destination location by navigating through the folder structure. In the screenshot below, we see another example, where the Send email action pops up a form to help configure an e-mail notification that will be sent out when the action is run:

Multiple rules per folder

The Rules Manager supports the assignment of multiple rules to a single folder. A drag-and-drop interface allows individual rules to be moved into the desired order of execution. When an event occurs on a folder, each rule attached to the folder is checked sequentially to see if there is a match. When the criteria for a rule match, the rule is run. By creating multiple rules with the same firing criteria, it's possible to arrange rules for sequential processing, allowing fairly complex operations to be performed.

The screenshot below shows the rules user interface that lets the user set the processing order for the rules. Actions like move and copy allow the rules on one folder to pass documents on to other folders, which in turn may also have rules assigned to them that can perform additional processing. In this way, rules on Alfresco folders are effectively a "poor man's workflow". They are powerful enough to handle a large number of business automation requirements, although at a fairly simple level. More complex workflows with many conditionals and loops need to be modeled using workflow tools like jBPM, which we will discuss later. The next figure shows an example of how rules can be sequentially ordered:

Auto-declaration example

Now let's look at an example where we file documents into a transit folder, which are then automatically processed, moved into the Records Management site, and declared as records. To do this, we'll create a transit folder and attach two rules to it for performing the processing. The first rule will run a custom script that applies record aspects to the document, completes mandatory records metadata, and then moves the document into a Folder under Records Management, effectively filing it. The second rule then declares the newly filed document as a record. The rules for this example will be applied against a folder called "Transit Folder", which is located within the document library of a standard Share site.

Creating the auto-filing script

Let's look at the first of the two rules, which uses a custom script for the action. It is this script that does most of the work in the example.
We'll break up the script into two parts and discuss each part individually:

// Find the file name, minus the namespace prefix (assume cm:content)
var fPieces = document.qnamePath.split('/');
fName = fPieces[fPieces.length - 1].substr(3);

// Remember the ScriptNode object for the parent folder being filed to
var parentOrigNode = document.parent;

// Get today's date. We use it later to fill in metadata.
var d = new Date();

// Find the ScriptNode for the destination to where we will file.
// It is hardcoded here. More complex logic could be used to categorize
// the incoming data and file it into different locations.
var destLocation = "Administrative/General Correspondence/2011_01 Correspondence";
var filePlan = companyhome.childByNamePath("Sites/rm/documentlibrary");
var recordFolder = filePlan.childByNamePath(destLocation);

// Add aspects needed to turn this document into a record
document.addAspect("rma:filePlanComponent");
document.addAspect("rma:record");

// Complete mandatory metadata that will be needed to declare as a record
document.properties["rma:originator"] = document.properties["cm:creator"];
document.properties["rma:originatingOrganization"] = "Formtek, Inc";
document.properties["rma:publicationDate"] = d;
document.properties["rma:dateFiled"] = d;

// Build the unique record identifier -- based on the node-dbid value
var idStr = '' + document.properties["sys:node-dbid"];

// Pad the string with zeros to be 10 characters in length
while (idStr.length < 10)
{
    idStr = '0' + idStr;
}
document.properties["rma:identifier"] = d.getFullYear() + '-' + idStr;

document.save();
document.move(recordFolder);

At the top of the script, the filename of the incoming document is extracted from the document.qnamePath string, which contains the complete path. document is the variable passed into the script that refers to the document object created for the new file moved into the folder. The destination location, a folder in Records Management, is hardcoded here; a more sophisticated script could categorize incoming documents and file them into multiple locations based on a variety of criteria. We add the aspects rma:filePlanComponent and rma:record to the document to prepare it for becoming a record, and then complete the metadata properties that are mandatory for being able to declare the document as a record.

We're bypassing some code in Records Management that normally would assign the unique record identifier to the document. Normally, when a document is filed into a folder, the unique record identifier is automatically generated within Alfresco's core Java code. Because of that, we need to reconstruct the string and assign the property in the script. We'll follow Alfresco's convention for building the unique record identifier by appending a 10-digit zero-padded integer to the year. Alfresco already associates a unique object ID with every object, which is used when the record identifier is constructed; this unique ID is called sys:node-dbid. Note that any unique string could be used for the unique record identifier, but we'll go with Alfresco's convention.

Finally, the script saves the changes to the document, and the document is filed into the Records Management folder. At this point, the document is an undeclared record in the Records Management system.

We could stop here with this script, but let's go one step further. Let's place a stub document in the transit folder that will act as a placeholder, alerting users as to where the documents that they filed have been moved.
The second part of the same script handles the creation of the stub file:

// Leave a marker to track the document
var stubFileName = fName + '_stub.txt';

// Create the new document
var props = new Array();
props["cm:title"] = ' Stub';
props["cm:description"] = ' (Stub Reference to record in RM)';
var stubDoc = parentOrigNode.createNode(stubFileName, "cm:content", props);
stubDoc.content = "This file is now under records management control:\n " +
    recordFolder.displayPath + '/' + fName;

// Make a reference to the original document, now a record
stubDoc.addAspect("cm:referencing");
stubDoc.createAssociation(document, "cm:references");
stubDoc.save();

The document name used for the stub file is the same as the incoming filename with the suffix _stub.txt appended to it. The script creates a new node of type cm:content in the transit directory where the user originally uploaded the file. The cm:title and cm:description properties are completed for the new node, and text content is added to the document. The content contains the path to where the original file has been moved. Finally, the cm:referencing aspect is added to the stub document to allow a reference association to be made between the stub document and the original document that is now under Records Management. The stub document with these new properties is then saved.

Installing the script

In order for the script to be available for use in a rule, it must first be installed under the Data Dictionary area in the repository. To add it, we navigate to the folder Data Dictionary / Scripts in the repository browser within Share. The repository can be accessed from the Repository link across the top of the Share page.

To install the script, we simply copy the script file to this folder. We also need to complete the title for the new document, because the title is the string that will be used later to identify it. We will name this script Move to Records Management.
Managing Records in Alfresco 3

Packt
25 Jan 2011
12 min read
Alfresco 3 Records Management
Comply with regulations and secure your organization's records with Alfresco Records Management:
Successfully implement your records program using Alfresco Records Management, fully certified for DoD-5015.2 compliance
The first and only book to focus exclusively on Alfresco Records Management
Step-by-step instructions describe how to identify records, organize records, and manage records to comply with regulatory requirements
Learn in detail about the software internals to get a jump-start on performing customizations

Records Details

Much of the description in this article focuses on record features that are found on the Record Details page. An abbreviated set of metadata and available actions for a record is shown on the row for the record in the File Plan. The Details page for a record is a composite screen that contains a complete listing of all information for the record, including links to all possible actions and operations that can be performed on it. We can get to the Details page for a record by clicking on the link to it from the File Plan page.

The Record Details page provides a summary of all available information known about a record and has links to all possible actions that can be taken on it. This is the central screen from which a record can be managed. The Details screen is divided into three main columns. The first column on the screen provides a preview of the content for the record. The middle column lists the record Metadata, and the right-most column shows a list of Actions that can be taken on the record. There are other areas lower down on the page with additional functionality, including a way for the user to manually trigger events in the steps of the disposition, to get URL links to file content for the record, and to create relationship links to other records in the File Plan.

Alfresco Flash previewer

The web preview component in the left column of the Record Details page defines a region in which the content of the record can be visually previewed. It is a bit of an exaggeration to call the preview component a universal viewer, but it does come close to that. The viewer is capable of viewing a number of different common file formats, and it can be extended to support the viewing of additional formats. Natively, the viewer is capable of viewing both Flash SWF files and image formats like JPEG, PNG, and GIF. Microsoft Office, OpenOffice, and PDF files are also configured out of the box to be previewed with the viewer, by first converting the files to PDF and then to Flash.

The use of an embedded viewer in Share means that client machines don't need a viewing application installed to be able to view the file contents of a record. For example, a client machine running an older version of Microsoft Word may not have the capability to open a record saved in the newer Word DOCX format, but within Share, using the viewer, that client would be able to preview and read the contents of the DOCX file.

The top of the viewer has a header area that displays the icon of a record alongside the name of the record being viewed. Below that, there is a toolbar with controls for viewing the file.

At the left of the toolbar, there are controls to change the zoom level. Small zoom increments, in and out, are controlled by clicking on the "+" and "-" buttons. The zoom setting can also be controlled by the slider, or by specifying a zoom percentage or a display factor like Fit Width from the drop-down menu.
For multi-page documents, there are controls to go to the next or previous pages and to jump to a specific page. The Fullscreen button enlarges the view and displays it using the entire screen. Maximize enlarges the view to display it within the browser window. Image panning and positioning within the viewer can be done by using the scrollbar, or by left-clicking and dragging the image with the mouse. A print option is available from the right-click menu.

Record Metadata

The centre column of the Record Details page displays the metadata for the record. There are a lot of metadata properties stored with each record. To make it easier to locate specific properties, the metadata is grouped, and each group has a label. The first metadata group is Identification and Status. It contains the Name, Title, and Description of the record. It shows the Unique Record Identifier for the record, and the unique identifier for the record Category to which the record belongs. Additional metadata items track whether the record has been Declared, when it was Declared, and who Declared it.

The General group of metadata tracks the Mimetype and the Size of the file content, as well as who Created or last Modified the record. Additional metadata for the record is listed under groups like Record, Security, Vital Record Information, and Disposition. The Record group contains the metadata fields Location, Media Type, and Format, all of which are especially useful for managing non-electronic records.

Record actions

In the right-most column of the Record Details page, there is a list of Actions that are available to perform on the record. The list displayed is dynamic and changes based on the state of the record. For example, options like Declare as Record or Undo Cutoff are only displayed when the record is in a state where that action is possible.

Download action

The Download action does just that. Clicking on this action will cause the file content for the record to be downloaded to the user's desktop.

Edit Metadata

This action displays the Edit form matching the content type of the record. For example, if the record has a content type of cm:content, the Edit form associated with the type cm:content will be displayed to allow the editing of the metadata. Items identified with asterisks are required fields. Certain fields contain data that is not meant to change; these are grayed out and non-selectable.

Copy record

Clicking on the Copy to action will pop up a repository directory browser that allows a copy of the record to be filed to any Folder within the File Plan. The name of the new record will start with the words "Copy of" and end with the name of the record being copied. Only a single copy of a record can be placed in a Folder without first changing the name of the first copy; it isn't possible to have two records in the same Folder with the same name.

Move record

Clicking on the Move to action pops up a dialog to browse to the new Folder to which the record will be moved. The record is removed from the original location and moved to the new location.

File record

Clicking on the File to action pops up a dialog to identify a new Folder in which the record will be filed. A reference to the record is placed in the new Folder. After this operation, the record will effectively be in two locations. Deleting the record from either of the locations causes the record to be removed from both of them.
After filing the record, a clip status icon is displayed at the upper-left, next to the checkbox for selection. The status indicates that the record is filed in multiple Folders of the File Plan.

Delete record

Clicking on the Delete action permanently removes the item from the File Plan. Note that this action differs from Destroy, which removes only the file content from a record as the final step of a disposition schedule.

Audit log

At any point in the lifecycle of a record, an audit log is available that shows a detailed history of all activities for the record. The record audit log can help to answer questions that may come up, such as which users have been involved with the record and when specific lifecycle events for the record have occurred. The audit log also provides information that can confirm whether activities in the records system are both effective and compliant with record policies.

The View Audit Log action creates and pops up a dialog containing a detailed historical report for the record. The report includes very detailed and granular information about every change that has ever been made to the record. Each entry in the audit log includes a timestamp for when the change was made, the user that made the change, and the type of change or event that occurred. If the event involved the change of any metadata, the original values and the changed values for the metadata are noted in the report.

By clicking on the File as Record button on the dialog, the audit report for the record itself can be captured as a record that can then be filed within the File Plan. The report is saved in HTML file format. Clicking on the Export button at the top of the dialog enables the audit report to be downloaded in HTML format.

The audit log discussed here provides very granular information about changes that have occurred to a specific record. Alfresco also provides a tool included with the Records Management Console, also called Audit, which can create a very detailed report showing all activities and actions that have occurred throughout the records system.

Links

Below the Actions component is a panel containing the Share component. This is a standard component that is also used in the Share Document Library. The component lists three URL links in fields that can be easily copied from and pasted to. The URLs allow record content and metadata to be easily shared with others.

The first link in the component is the Download File URL. Referencing this link causes the content for the record to be downloaded as a file. The second link is the Document URL. It is similar to the first link, but if the browser is capable of viewing the file format type, the content will be displayed in the browser; otherwise it is downloaded as a file. The third link is the This Page URL. This is the URL of the Record Details page. Accessing any of these three URLs requires the user to first authenticate before access to any content is allowed.

Events

Below the Flash preview panel on the Details page for the record is an area that displays any Events that are currently available to be manually triggered for this record. Remember that each step of a disposition schedule becomes actionable after either the expiration of a time deadline or the manual triggering of an event. Events are triggered manually by a user clicking on a button to indicate that the event has occurred.
The location of the event trigger buttons depends on how the disposition in the record Category was applied. If the disposition was applied at the Folder level, the manual event trigger buttons are available on the Details page for the Folder. If the disposition was applied at the record level, the buttons are available on the Record Details page. The buttons we see on this page are the ones available when the disposition is applied at the record level.

The event buttons that apply to a particular state are grouped according to whether or not the event has been marked as completed; once an event is completed, it moves to the Completed group. If there are multiple possible events, only a single one of them needs to complete in order to make the action available. Some actions, like cutoff, are executed by the system. Other actions, like destruction, require a user to intervene, but become available from the Share user interface.

References

It is often useful to create references, or relationships, between records. A reference is a link that relates one record to another; clicking on the link retrieves and displays the related record. In the lower right of the Details page, there is a component for tracking references from this record to other records in the File Plan. It is especially useful for tracking, for instance, reference links to superseded or obsolete versions of the current record.

To attach references, click on the Manage button on the References component. Then, from the next screen, select New Reference. A screen containing a standard Alfresco form is displayed. From this screen, it is possible to name the reference, pick another record to reference, and select the type of reference. Available reference types include:

SupersededBy / Supersedes
ObsoletedBy / Obsoletes
Supporting Documentation / Supported Documentation
VersionedBy / Versions
Rendition
Cross-Reference

After creating the reference, the new reference shows up in the list.

How does it work?

We've now looked at the functionality of the Details page for records and for the Series, Category, and Folder containers. In this "How does it work?" section, we'll investigate in greater detail how some of the internals of the Record Details page work.
How to Write a Widget in WordPress 3

Packt
24 Jan 2011
7 min read
WordPress 3 Complete

Create your own complete website or blog from scratch with WordPress:

Learn everything you need for creating your own feature-rich website or blog from scratch
Clear and practical explanations of all aspects of WordPress
In-depth coverage of installation, themes, plugins, and syndication
Explore WordPress as a fully functional content management system
Clear, easy-to-follow, concise; rich with examples and screenshots

Recent posts from a Category Widget

In this section, we will see how to write a widget that displays recent posts from a particular category in the sidebar. The user will be able to choose how many recent posts to show and whether or not to show an RSS feed link. It will look like the following screenshot:

Let's get started!

Naming the widget

Widgets, like plugins, need to have a unique name. Again, I suggest you search the Web for the name you want to use in order to be sure of its uniqueness. Because of the widget class, you don't need to worry so much about uniqueness in your function and variable names, since the widget class unique-ifies them for you. I've given this widget the filename ahs_postfromcat_widget.php.

As for the introduction, this comment code is the same as what you use for a plugin. For this widget, the introductory comment is this:

/*
Plugin Name: April's List Posts Cat Widget
Plugin URI: http://springthistle.com/wordpress/plugin_postfromcat
Description: Allows you to add a widget with some number of most recent posts from a particular category
Author: April Hodge Silver
Version: 1.0
Author URI: http://springthistle.com
*/

Widget structure

When building a widget using the widget class, your widget needs to have the following structure:

class UNIQUE_WIDGET_NAME extends WP_Widget {
    function UNIQUE_WIDGET_NAME() {
        $widget_ops = array();
        $control_ops = array();
        $this->WP_Widget();
    }
    function form($instance) {
        // prints the form on the widgets page
    }
    function update($new_instance, $old_instance) {
        // used when the user saves their widget options
    }
    function widget($args, $instance) {
        // used when the sidebar calls in the widget
    }
}
// initiate the widget
// register the widget

Of course, we need an actual unique widget name. I'm going to use Posts_From_Category. Now, let's flesh out this code one section at a time.

Widget initiation function

Let's start with the widget initiation function. Blank, it looks like this:

function Posts_From_Category() {
    $widget_ops = array();
    $control_ops = array();
    $this->WP_Widget();
}

In this function, which has the same name as the class itself and is therefore the constructor, we initialize various things that the WP_Widget class is expecting. The first two variables, to which you can give any name you want, are just a handy way to set the two array variables expected by the third line of code. Let's take a look at these three lines of code:

The $widget_ops variable is where you can set the class name, which is given to the widget div itself, and the description, which is shown in the WP Admin on the widgets page.
The $control_ops variable is where you can set options for the control box in the WP Admin on the widget page, like the width and height of the widget and the ID prefix used for the names and IDs of the items inside.
When you call the parent class' constructor, WP_Widget(), you tell it the widget's unique ID and display title, and pass along the two arrays you created.
For this widget, my code now looks like this:

function Posts_From_Category() {
    $widget_ops = array(
        'classname' => 'postsfromcat',
        'description' => 'Allows you to display a list of recent posts within a particular category.');
    $control_ops = array(
        'width' => 250,
        'height' => 250,
        'id_base' => 'postsfromcat-widget');
    $this->WP_Widget('postsfromcat-widget', 'Posts from a Category', $widget_ops, $control_ops);
}

Widget form function

This function has to be named form(); you may not rename it if you want the widget class to know what its purpose is. You also need to have an argument there, which I'm calling $instance, that the class also expects. This is where current widget settings are stored. This function needs to have all of the functionality to create the form that users will see when adding the widget to a sidebar. Let's look at some abbreviated code and then explore what it's doing:

<?php function form($instance) {
    $defaults = array('numberposts' => '5', 'catid' => '1', 'title' => '', 'rss' => '');
    $instance = wp_parse_args((array) $instance, $defaults);
?>
<p>
    <label for="<?php echo $this->get_field_id('title'); ?>">Title:</label>
    <input type="text" name="<?php echo $this->get_field_name('title') ?>" id="<?php echo $this->get_field_id('title') ?>" value="<?php echo $instance['title'] ?>" size="20">
</p>
<p>
    <label for="<?php echo $this->get_field_id('catid'); ?>">Category ID:</label>
    <?php wp_dropdown_categories('hide_empty=0&hierarchical=1&id=' . $this->get_field_id('catid') . '&name=' . $this->get_field_name('catid') . '&selected=' . $instance['catid']); ?>
</p>
<p>
    <label for="<?php echo $this->get_field_id('numberposts'); ?>">Number of posts:</label>
    <select id="<?php echo $this->get_field_id('numberposts'); ?>" name="<?php echo $this->get_field_name('numberposts'); ?>">
        <?php for ($i = 1; $i <= 20; $i++) {
            echo '<option value="' . $i . '"';
            if ($i == $instance['numberposts']) echo ' selected="selected"';
            echo '>' . $i . '</option>';
        } ?>
    </select>
</p>
<p>
    <input type="checkbox" id="<?php echo $this->get_field_id('rss'); ?>" name="<?php echo $this->get_field_name('rss'); ?>" <?php if ($instance['rss']) echo 'checked="checked"' ?> />
    <label for="<?php echo $this->get_field_id('rss'); ?>">Show RSS feed link?</label>
</p>
<?php } ?>

First, I set some defaults, which in this case is just for the number of posts, which I think it would be nice to set to 5. You can set other defaults in this array as well. Then you use a WordPress function named wp_parse_args(), which creates an $instance array that your form will use. What's in it depends on what defaults you've set and what settings the user has already saved.

Then you create the form fields. Note that for each form field, I make use of the built-in functions that create unique names and IDs and fill in existing values:

$this->get_field_id() creates a unique ID based on the widget instance (remember, you can create more than one instance of this widget).
$this->get_field_name() creates a unique name based on the widget instance.
The $instance array is where you will find the current values for the widget, whether they are defaults or user-saved data.

All the other code in there is just regular PHP and HTML. Note that if you give the user the ability to set a title and you name that field "title", WordPress will show it on the widget form when the widget is minimized. The widget form this will create will look like this:
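The excerpt above stops short of the update() and widget() methods declared in the widget structure, and of the initiate/register step its closing comments point to. As a hedged sketch only (not the book's verbatim code: the markup, the sanitization choices, and the ahs_register_postsfromcat callback name are our assumptions), those remaining pieces might look like this:

<?php
// Hedged sketch of the remaining pieces; not the book's verbatim code.
// The markup and sanitization choices below are illustrative assumptions.

// Inside the Posts_From_Category class:
function update($new_instance, $old_instance) {
    // Merge sanitized new values over the previously saved settings.
    $instance = $old_instance;
    $instance['title'] = strip_tags($new_instance['title']);
    $instance['catid'] = (int) $new_instance['catid'];
    $instance['numberposts'] = (int) $new_instance['numberposts'];
    $instance['rss'] = !empty($new_instance['rss']) ? 1 : 0;
    return $instance;
}

function widget($args, $instance) {
    extract($args); // provides $before_widget, $before_title, etc.
    echo $before_widget;
    if (!empty($instance['title'])) {
        echo $before_title . $instance['title'] . $after_title;
    }
    // Fetch the most recent posts from the chosen category.
    $posts = get_posts(array(
        'numberposts' => $instance['numberposts'],
        'category'    => $instance['catid'],
    ));
    echo '<ul>';
    foreach ($posts as $p) {
        echo '<li><a href="' . get_permalink($p->ID) . '">'
            . get_the_title($p->ID) . '</a></li>';
    }
    echo '</ul>';
    if ($instance['rss']) {
        // Optional link to the category's RSS feed.
        echo '<p><a href="' . get_category_feed_link($instance['catid'])
            . '">RSS feed for this category</a></p>';
    }
    echo $after_widget;
}

// After the class definition: initiate and register the widget.
// The callback name ahs_register_postsfromcat is a hypothetical choice.
add_action('widgets_init', 'ahs_register_postsfromcat');
function ahs_register_postsfromcat() {
    register_widget('Posts_From_Category');
}
?>

With something like this in place, activating the plugin makes the widget available on the Widgets page, where it can be dragged into a sidebar like any built-in widget.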
Integrating Facebook with Magento

Packt
21 Jan 2011
4 min read
Magento 1.4 Themes Design

Customize the appearance of your Magento 1.4 e-commerce store with Magento's powerful theming engine:

Install and configure Magento 1.4 and learn the fundamental principles behind Magento themes
Customize the appearance of your Magento 1.4 e-commerce store by changing Magento templates, skin files, and layout files
Change the basics of your Magento theme, from the logo of your store to the color scheme of your theme
Integrate popular social media aspects such as Twitter and Facebook into your Magento store

Facebook (http://www.facebook.com) is a social networking website that allows people to add each other as 'friends' and to send messages and share content.

As with Twitter, there are two options for integrating Facebook with your Magento store:

Adding a 'Like' button to your store's product pages to allow your customers to show their appreciation for individual products on your store.
Integrating a widget of the latest news from your store's Facebook profile.

Adding a 'Like' button to your Magento store's product pages

The Facebook 'Like' button allows Facebook users to show that they approve of a particular web page, and you can put this to use on your Magento store.

Getting the 'Like' button markup

To get the markup required for your store's 'Like' button, go to the Facebook Developers website at http://developers.facebook.com/docs/reference/plugins/like. Fill in the form below the description text with relevant values, leaving the URL to like field as URLTOLIKE for now, and setting the Width to 200. Click on the Get Code button at the bottom of the form and then copy the code that is presented in the iframe field. The generated markup should look like the following:

<iframe src="http://www.facebook.com/plugins/like.php?href=URLTOLIKE&amp;layout=standard&amp;show_faces=true&amp;width=200&amp;action=like&amp;colorscheme=light&amp;height=80" scrolling="no" frameborder="0" style="border:none; overflow:hidden; width:200px; height:80px;" allowTransparency="true"></iframe>

You now need to replace the URLTOLIKE in the previous markup with the URL of the current page in your Magento store. The PHP required to output the current URL in Magento looks like the following:

<?php echo $this->helper('core/url')->getCurrentUrl(); ?>

The new Like button markup for your Magento store should now look like the following:

<iframe src="http://www.facebook.com/plugins/like.php?href=<?php echo $this->helper('core/url')->getCurrentUrl(); ?>&amp;layout=standard&amp;show_faces=true&amp;width=200&amp;action=like&amp;colorscheme=light&amp;height=80" scrolling="no" frameborder="0" style="border:none; overflow:hidden; width:200px; height:80px;" allowTransparency="true"></iframe>

Open your theme's view.phtml file in the /app/design/frontend/default/m2/template/catalog/product directory and locate the lines that read:

<div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?>
</div></div>

Insert the code generated by Facebook here, so that it now reads the following:

<div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?>
</div>
<iframe src="http://www.facebook.com/plugins/like.php?href=<?php echo $this->helper('core/url')->getCurrentUrl(); ?>&amp;layout=standard&amp;show_faces=true&amp;width=200&amp;action=like&amp;colorscheme=light&amp;height=80" scrolling="no" frameborder="0" style="border:none; overflow:hidden; width:200px; height:80px;" allowTransparency="true"></iframe>
</div>

Save and upload this file back to your Magento installation and then visit a product page within your store to see the button appear below the brief description of the product. That's it: your product pages can now be liked on Facebook!
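One refinement worth considering (a suggestion of ours, not from the book): since the current page URL is embedded inside the iframe's query string, it is safer to URL-encode it so that any '&' or '?' characters in the product URL do not break the href parameter. A minimal sketch, using only the helper call shown above plus PHP's built-in urlencode():

<?php
// Hedged sketch: URL-encode the current page URL before embedding it in the
// Like button's query string, so query characters in the product URL don't
// break the iframe src. This is a suggested refinement, not the book's code.
$likeUrl = urlencode($this->helper('core/url')->getCurrentUrl());
?>
<iframe src="http://www.facebook.com/plugins/like.php?href=<?php echo $likeUrl; ?>&amp;layout=standard&amp;show_faces=true&amp;width=200&amp;action=like&amp;colorscheme=light&amp;height=80" scrolling="no" frameborder="0" style="border:none; overflow:hidden; width:200px; height:80px;" allowTransparency="true"></iframe>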