
How-To Tutorials - CMS & E-Commerce

830 Articles

IBM FileNet P8 Content Manager: Administrative Tools and Tasks

Packt
10 Feb 2011
11 min read
Getting Started with IBM FileNet P8 Content Manager Install, customize, and administer the powerful FileNet Enterprise Content Management platform Quickly get up to speed on all significant features and the major components of IBM FileNet P8 Content Manager Provides technical details that are valuable both for beginners and experienced Content Management professionals alike, without repeating product reference documentation Gives a big picture description of Enterprise Content Management and related IT areas to set the context for Content Manager Written by an IBM employee, Bill Carpenter, who has extensive experience in Content Manager product development, this book gives practical tips and notes with a step-by-step approach to design real Enterprise Content Management solutions to solve your business needs        The following will be covered in the next article. A discussion of an Object Store and what's in it An example of creating a custom class and adding custom properties to it FEM must run on a Microsoft Windows machine. Even if you are using virtual machine images or other isolated servers for your CM environment, you might wish to install FEM on a normal Windows desktop machine for your own convenience. Domain and GCD Here's a simple question: what is a P8 Domain? It's easy to give a simple answer—it's the top-level container of all P8 things in a given installation. That needs a little clarification, though, because it seems a little circular; things are in a Domain because a Domain knows about them. In a straightforward technical sense, things are in the same Domain if they share the same Global Configuration Database (GCD) . The GCD is, literally, a database. If we were installing additional CE servers, they would share that GCD if we wanted them to be part of the same Domain. When you first open FEM and look at the tree view in the left-hand panel, most of the things you are looking at are things at the Domain level. We'll be referring to the FEM tree view often, and we're talking about the left-hand part of the user interface, as seen in the following screenshot: FEM remembers the state of the tree view from session to session. When you start FEM the next time, it will try to open the nodes you had open when you exited. That will often mean something of a delay as it reads extensive data for each open Object Store node. You might find it a useful habit to close up all of the nodes before you exit FEM. Most things within a Domain know about and can connect directly to each other, and nothing in a given Domain knows about any other Domain. The GCD, and thus the Domain, contains: Simple properties of the Domain object itself Domain-level objects Configuration objects for more complex aspects of the Domain environment Pointers to other components, both as part of the CE environment and external to it It's a little bit subjective as to which things are objects and which are pointers to other components. It's also a little bit subjective as to what a configuration object is for something and what a set of properties is of that something. Let's not dwell on those philosophical subtleties. Let's instead look at a more specific list: Properties: These simple properties control the behavior of or describe characteristics of the Domain itself. Name and ID: Like most P8 objects, a Domain has both a Name and an ID. It's counterintuitive, but you will rarely need to know these, and you might even sometimes forget the name of your own Domain. 
The reason is that you will always be connecting to some particular CE server, and that CE server is a member of exactly one Domain. Therefore, all of the APIs related to a Domain object are able to use a defaulting mechanism that means "the current Domain".

Database schemas: There are properties containing the database schemas for an Object Store for each type of database supported by P8. CM uses this schema, which is an actual script of SQL statements, by default when first fleshing out a new Object Store to create tables and columns. Interestingly, you can customize the schema when you perform the Object Store creation task (either via FEM or via the API), but you should not do so on a whim.

Permissions: The Domain object itself is subject to access controls, and so it has a Permissions property. The actual set of access rights available is specific to Domain operations, but it is conceptually similar to access control on other objects.

Domain-level objects: A few types of objects are contained directly within the Domain itself. We'll talk about configuration objects in a minute, but there are a couple of non-configuration objects in the Domain.

AddOns: An AddOn is a bundle of metadata representing the needs of a discrete piece of functionality that is not built into the CE server. Some are provided with the product, and others are provided by third parties. An AddOn must first be created, and it is then available in the GCD for possible installation in one or more Object Stores.

Marking Sets: Marking Sets are a Mandatory Access Control mechanism (see Security Features and Planning). Individual markings can be applied to objects in an Object Store, but the overall definition resides directly under the Domain so that they may be applied uniformly across all Object Stores.

Configuration objects:

Directories: All CM authentication and authorization ultimately comes down to data obtained from an LDAP directory. Some of those lookups are done by the application server, and some are done directly by the CE server. The directory configuration objects tell the CE server how to communicate with that directory or directories.

Subsystem configurations: There are several logical subsystems within the CE that are controlled by their own flavors of subsystem configuration objects. Examples include trace logging configuration and CSE configuration. These are typically configured at the Domain level and inherited by lower-level topology nodes. A description of topology nodes is coming up in the next section of this article.

Pointers to components:

Content Cache Areas: The Domain contains configuration information for content caches, which are handy for distributed deployments.

Rendition Engines: The Domain contains configuration and connectivity information for separately installed Rendition Engines (sometimes called publishing engines).

Fixed Content Devices: The Domain contains configuration and connectivity information for external devices and federation sources for content.

PE Connection Points and Isolated Regions: The Domain contains configuration and connectivity information for the Process Engine.

Object Stores: The heart of the CE ecosystem is the collection of Object Stores.

Text Search Engine: The Domain contains configuration and connectivity information for a separately-installed Content Search Engine.
In addition to the items directly available in the tree view shown above, most of the remainder of the items contained directly within the Domain are available one way or another in the pop-up panel you get when you right-click on the Domain node in FEM and select Properties. The pop-up panel General tab contains FEM version information. The formatting may look a little strange because the CM release number, including any fix packs, and build number are mapped into the Microsoft scheme for putting version info into DLL properties. In the previous figures, 4.51.0.100 represents CM 4.5.1.0, build 100. That's reinforced by the internal designation of the build number, dap451.100, in parentheses. Luckily, you don't really have to understand this scheme. You may occasionally be asked to report the numbers to IBM support, but a faithful copying is all that is required. Topology levels There is an explicit hierarchical topology for a Domain. It shows up most frequently when configuring subsystems. For example, CE server trace logging can be configured at any of the topology levels, with the most specific configuration settings being used. What we mean by that should be clearer once we've explained how the topology levels are used. You can see these topology levels in the expanded tree view in the left-hand side of FEM in the following screenshot: At the highest level of the hierarchy is the Domain, discussed in the previous section. It corresponds to all of the components in the CE part of the CM installation. Within a domain are one or more sites. The best way to think of a site is as a portion of a Domain located in a particular geographic area. That matters because networked communications differ in character between geographically separate areas when compared to communications within an area. The difference in character is primarily due to two factors—latency and bandwidth. Latency is a characterization of the amount of time it takes a packet to travel from one end of a connection to another. It takes longer for a network packet to travel a long distance, both because of the laws of physics and because there will usually be more network switching and routing components in the path. Bandwidth is a characterization of how much information can be carried over a connection in some fixed period of time. Bandwidth is almost always more constrained over long distances due to budgetary or capacity limits. Managing network traffic traveling between geographic areas is an important planning factor for distributed deployments. A site contains one or more virtual servers. A virtual server is a collection of CE servers that act functionally as if they were a single server (from the point of view of the applications). Most often, this situation comes about through the use of clustering or farming for high availability or load balancing reasons. A site might contain multiple virtual servers for any reason that makes sense to the enterprise. Perhaps, for example, the virtual servers are used to segment different application mixes or user populations. A virtual server contains one or more servers. A server is a single, addressable CE server instance running in a J2EE application server. These are sometimes referred to as physical servers, but in the 21st century that is often not literally true. In terms of running software, the only things that tangibly exist are individual CE servers. There is no independently-running piece of software that is the Domain or GCD. 
There is no separate piece of software that is an Object Store (except in the sense that it's a database mediated by the RDBMS software). All CE activity happens in a CE server. There may be other servers running software in CM—Process Engine, Content Search Engine, Rendition Engine, and Application Engine. The previous paragraph is just trying to clarify that there is no piece of running software representing the topology levels other than the server. You don't have to worry about runtime requests being handed off to another level up or down the topological hierarchy. Not every installation will have the need to distinguish all of those topology levels. In our all-in-one installation, the Domain contains a single site. That site was created automatically during installation and is conventionally called Initial Site, though we could change that if we wanted to. The site contains a single virtual server, and that virtual server contains a single server. This is typical for a development or demo installation, but you should be able to see how it could be expanded with the defined topology levels to any size deployment, even to a deployment that is global in scope. You could use these different topology levels for a scheme other than the one just described; the only downside would be that nobody else would understand your deployment terms. Using topology levels We mentioned previously that many subsystems can be configured at any of the levels. Although it's most common to do domain-wide configuration, you might, for example, want to enable trace logging on a single CE server for some troubleshooting purpose. When interpreting subsystem configuration data, the CE server first looks for configuration data for the local CE server (that is, itself). If any is found, it is used. Otherwise, the CE server looks for configuration data for the containing virtual server, then the containing site, and then the Domain. Where present, the most specific configuration data is used. A set of configuration data, if used, is used as the complete configuration. That is, the configuration objects at different topology levels are not blended to create an "effective configuration". CE has a feature called request forwarding. Because the conversation between the CE server and the database holding an Object Store is chattier than the conversation between CE clients and the CE server, there can be a performance benefit to having requests handled by a CE server that is closer, in networking terms, to that database. When a CE server forwards a request internally to another CE server, it uses a URL configured on a virtual server. The site object holds the configuration options for whether CE servers can forward requests and whether they can accept forwarded requests. Sites are the containers for content cache areas, text index areas, Rendition Engine connections, storage areas, and Object Stores. That is, each of those things is associated with a specific site.


IBM FileNet P8 Content Manager: Exploring Object Store-level Items

Packt
10 Feb 2011
9 min read
As with the Domain, there are two basic paths in FEM to accessing things in an Object Store. The tree-view in the left-hand panel can be expanded to show Object Stores and many repository objects within them, as illustrated in the screenshot below. Each individual Object Store has a right-click context menu. Selecting Properties from that menu will bring up a multi-tabbed pop-up panel. We'll look first at the General tab of that panel. Content Access Recording Level Ordinarily, the Content Engine (CE) server does not keep track of when a particular document's content was last retrieved because it requires an expensive database update that is often uninteresting. The ContentAccessRecordingLevel property on the Object Store can be used to enable the recording of this optional information in a document or annotation's DateContentLastAccessed property. It is off by default. It is sometimes interesting to know, for example, that document content was accessed within the last week as opposed to three years ago. Once a particular document has had its content read, there is a good chance that there will be a few additional accesses in the same neighborhood of time (not for a technical reason; rather, it's just statistically likely). Rather than record the last access time for each access, an installation can choose, via this property's value, to have access times recorded only with a granularity of hourly or daily. This can greatly reduce the number of database updates while still giving a suitable approximation of the last access time. There is also an option to update the DateContentLastAccessed property on every access. Auditing The CE server can record when clients retrieve or update selected objects. Enabling that involves setting up subscriptions to object instances or classes. This is quite similar to the event subsystem in the CE server. Because it can be quite elaborate to set up the necessary auditing configuration, it can also be enabled or disabled completely at the Object Store level. Checkout type The CE server offers two document checkout types, Collaborative and Exclusive. The difference lies in who is allowed to perform the subsequent checkin. An exclusive checkout will only allow the same user to do the checkin. Via an API, an application can make the explicit choice for one type or the other, or it can use the Object Store default value. Using the default value is often handy since a given application may not have any context for deciding one form over another. Even with a collaborative checkout, the subsequent checkin is still subject to access checks, so you can still have fine-grained control over that. In fact, because you can use fine-grained security to limit who can do a checkin, you might as well make the Object Store default be Collaborative unless you have some specific use case that demands Exclusive. Text Index Date Partitioning Most of the values on the CBR tab, as shown in the figure next, are read-only because they are established when the Content Search Engine (CSE) is first set up. One item that can be changed on a per-Object Store basis is the date-based partitioning of text index collections. Partitioning of the text index collections allows for more efficient searching of large collections because the CE can narrow its search to a specific partition or partitions rather than searching the entirety of the text index. By default, there is no partitioning. 
If you check the box to change the partition settings, the Date Property drop-down presents a list of date-valued custom properties. In the screenshot above, you see the custom properties Received On and Sent On, which are from email-related documents. Once you select one of those properties, you're offered a choice of partitioning granularity, ranging from one month up to one year. Additional text index configuration properties are available if you select the Index Areas node in the FEM tree-view, then right-click an index area entry and select Properties. Although we are showing the screenshot here for reference, your environment may not yet have a CSE or any index areas if the needed installation procedures are not complete. Cache configuration Just as we saw at the Domain level, the Cache tab allows the configuration of various cache tuning parameters for each Object Store. As we've said before, you don't want to change these values without a good reason. The default values are suitable for most situations. Metadata One of the key features of CM is that it has an extensible metadata structure. You don't have to work within a fixed framework of pre-defined document properties. You can add additional properties to the Document class, and you can even make subclasses of Document for specific purposes. For example, you might have a subclass called CustomerProfile, another subclass called DesignDocument, yet another subclass called ProductDescription, and so on. Creating subclasses lets you define just the properties you need to specialize the class to your particular business purpose. There is no need to have informal rules about where properties should be ignored because they're not applicable. There is also generally no need to have a property that marks a document as a CustomerProfile versus something else. The class provides that distinction. CM comes with a set of pre-defined system classes, and each class has a number of pre-defined system properties (many of which are shared across most system classes). There are pre-defined system classes for Document, Folder, Annotation, CustomObject, and many others. The classes just mentioned are often described as the business object classes because they are used to directly implement common business application concepts. System properties are properties for which the CE server has some specifically-coded behavior. Some system properties are used to control server behavior, and others provide reports of some kind of system state. We've seen several examples already in the Domain and Object Store objects. It's common for applications to create their own subclasses and custom properties as part of their installation procedures, but it is equally common to do similar things manually via FEM. FEM contains several wizards to make the process simpler for the administrator, but, behind the scenes, various pieces are always in play. Property templates The foundation for any custom property is a property template. If you select the Property Templates node in the tree view, you will see a long list of existing property templates. Double-clicking on any item in the list will reveal that property template's properties. A property template is an independently persistable object, so it has its own identity and security. Most system properties do not have explicit property templates. Their characteristics come about from a different mechanism internal to the CE server. 
Property templates have that name because the characteristics they define act as a pattern for properties added to classes, where the property is embodied in a property definition for a particular class. Some of the property template properties can be overridden in a property definition, but some cannot. For example, the basic data type and cardinality cannot be changed once a property template is created. On the other hand, things like settability and a value being required can be modified in the property definition. When creating a new property with no existing property template, you can either create the property template independently, ahead of time, or you can follow the FEM wizard steps for adding a property to a class. FEM will prompt you with additional panels if you need to create a property template for the property being added.

Choice lists

Most property types allow for a few simple validity checks to be enforced by the CE server. For example, an integer-valued property has an optional minimum and maximum value based on its intended use (in addition to the expected absolute constraints imposed by the integer data type). For some use cases, it is desirable to limit the allowed values to a specific list of items. The mechanism for that in the CE server is the choice list, and it's available for string-valued and integer-valued properties. If you select the Choice Lists node in the FEM tree view, you will see a list of existing top-level choice lists. The example choice lists in the screenshot below all happen to come from AddOns installed in the Object Store. Double-clicking on any item in the list will reveal that choice list's properties. A choice list is an independently persistable object, so it has its own identity and security.

We've mentioned independent objects a couple of times, and more mentions are coming. For now, it is enough to think of them as objects that can be stored or retrieved in their own right. Most independent objects have their own access security. Contrast independent objects with dependent objects that only exist within the context of some independent object.

A choice list is a collection of choice objects, although a choice list may be nested hierarchically. That is, at any position in a choice list there can be another choice list rather than a simple choice. A choice object consists of a localizable display name and a choice value (a string or an integer, depending on the type of choice list). Nested choice lists can only be referenced within some top-level choice list.

Classes

Within the FEM tree view are two nodes describing classes: Document Class and Other Classes. Documents are listed separately only for user convenience (since Document subclasses occur most frequently). You can think of these two nodes as one big list. In any case, expanding the node in the tree reveals hierarchically nested subclasses. Selecting a class from the tree reveals any subclasses and any custom properties. The screenshot shows the custom class EntryTemplate, which comes from a Workplace AddOn. You can see that it has two subclasses, RecordsTemplate and WebContentTemplate, and four custom properties. When we mention a specific class or property name, like EntryTemplate, we try to use the symbolic name, which has a restricted character set and never contains spaces. The FEM screenshots tend to show display names. Display names are localizable and can contain any Unicode character.
Although the subclassing mechanism in CM generally mimics the subclassing concept in modern object-oriented programming languages, it does have some differences. You can add custom properties to an existing class, including many system classes. Although you can change some characteristics of properties on a subclass, there are restrictions on what you can do. For example, a particular string property on a subclass must have a maximum length equal to or less than that property's maximum length on the superclass.


Using JavaScript and jQuery in Drupal Themes

Packt
10 Feb 2011
6 min read
Drupal 6 Theming Cookbook

Over 100 clear step-by-step recipes to create powerful, great-looking Drupal themes:

Take control of the look and feel of your Drupal website
Tips and tricks to get the most out of Drupal's theming system
Learn how to customize existing themes and create unique themes from scratch
Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Introduction

JavaScript libraries take out the majority of the hassle involved in writing code which will be executed in a variety of browsers, each with its own vagaries. Drupal, by default, uses jQuery, a lightweight, robust, and well-supported package which, since its introduction, has become one of the most popular libraries in use today. While it is possible to wax eloquent about its features and ease of use, its most appealing factor is that it is a whole lot of fun!

jQuery's efficiency and flexibility lies in its use of CSS selectors to target page elements and its use of chaining to link and perform commands in sequence. As an example, let us consider the following block of HTML which holds the items of a typical navigation menu.

```
<div class="menu">
  <ul class="menu-list">
    <li>Item 1</li>
    <li>Item 2</li>
    <li>Item 3</li>
    <li>Item 4</li>
    <li>Item 5</li>
    <li>Item 6</li>
  </ul>
</div>
```

Now, let us consider the situation where we want to add the class active to the first menu item in this list and, while we are at it, let us also color this element red. Using arcane JavaScript, we would have accomplished this with something like the following:

```
var elements = document.getElementsByTagName("ul");
for (var i = 0; i < elements.length; i++) {
  if (elements[i].className === "menu-list") {
    elements[i].childNodes[0].style.color = '#F00';
    if (!elements[i].childNodes[0].className) {
      elements[i].childNodes[0].className = 'active';
    } else {
      elements[i].childNodes[0].className =
        elements[i].childNodes[0].className + ' active';
    }
  }
}
```

Now, we would accomplish the same task using jQuery as follows:

```
$("ul.menu-list li:first-child").css('color', '#F00').addClass('active');
```

The statement we have just seen can be effectively read as: retrieve all UL tags classed menu-list and having LI tags as children, take the first of these LI tags, style it with some CSS which sets its color to #F00 (red), and then add a class named active to this element.

For better legibility, we can format the previous jQuery with each chained command on a separate line.

```
$("ul.menu-list li:first-child")
  .css('color', '#F00')
  .addClass('active');
```

We are just scratching the surface here. More information and documentation on jQuery's features are available at http://jquery.com and http://www.visualjquery.com. A host of plugins which, like Drupal's modules, extend and provide additional functionality, are available at http://plugins.jquery.com.

Another aspect of JavaScript programming that has improved in leaps and bounds is debugging. With its rising ubiquity, developers have introduced powerful debugging tools that are integrated into browsers and provide features such as interactive debugging, flow control, logging and monitoring, and so on, which have traditionally only been available to developers of other high-level languages. Of the many candidates out there, the most popular and feature-rich is Firebug. It can be downloaded and installed from https://addons.mozilla.org/en-US/firefox/addon/1843.

Including JavaScript files from a theme

This recipe will list the steps required to include a JavaScript file from the .info file of the theme. We will be using the file to ensure that it is being included by outputting the standard Hello World! string upon page load.

Getting ready

While the procedure is the same for all themes, we will be using the Zen-based myzen theme in this recipe.

How to do it...

The following steps are to be performed inside the myzen theme folder at sites/all/themes/myzen.

1. Browse into the js subfolder where JavaScript files are conventionally stored.
2. Create a file named hello.js and open it in an editor.
3. Add the following code:

```
alert("Hello World!!");
```

4. Save the file and exit the editor.
5. Browse back up to the myzen folder and open myzen.info in an editor.
6. Include our new script using the following syntax:

```
scripts[] = js/hello.js
```

7. Save the file and exit the editor.
8. Rebuild the theme registry and, if JavaScript optimization is enabled for the site, clear the cache as well.
9. View any page on the site to see our script taking effect.

How it works...

Once the theme registry is rebuilt and the cache cleared, Drupal adds hello.js to its list of JavaScript files to be loaded and embeds it in the HTML page. The JavaScript is executed before any of the content is displayed on the page, and the resulting page with the alert dialog box should look something like the following screenshot:

There's more...

While we have successfully added our JavaScript in this recipe, Drupal and jQuery provide efficient solutions to work around this issue of the JavaScript being executed as soon as the page is loaded.

Executing JavaScript only after the page is rendered

A solution to the problem of the alert statement being executed before the page is ready is to wrap our JavaScript inside jQuery's ready() function. Using it ensures that the code within is executed only once the page has been rendered and is ready to be acted upon.

```
if (Drupal.jsEnabled) {
  $(document).ready(function () {
    alert("Hello World!!");
  });
}
```

Furthermore, we have wrapped the ready() function within a check for Drupal.jsEnabled, which acts as a global killswitch. If this variable is set to false, then JavaScript is turned off for the entire site, and vice versa. It is set to true by default, provided that the user's browser meets Drupal's requirements.

Drupal's JavaScript behaviors

While jQuery's ready() function works well, Drupal recommends the use of behaviors to manage our use of JavaScript. Our Hello World example would now look like this:

```
Drupal.behaviors.myzenAlert = function (context) {
  alert("Hello World!!");
};
```

All registered behaviors are called automatically by Drupal once the page is ready. Drupal.behaviors also allows us to forego the call to the ready() function as well as the check for jsEnabled, as these are done implicitly. As with most things Drupal, it is always a good idea to namespace our behaviors based on the module or theme name to avoid conflicts. In this case, the behavior name has been prefixed with myzen as it is part of the myzen theme.
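If you prefer to script your theme changes, the recipe above can also be run from a shell. The sketch below is not part of the original recipe: it assumes the myzen theme lives at sites/all/themes/myzen and that Drush is optionally available for the cache clear; adjust the path and the final step to your own setup.

```bash
#!/bin/sh
# Shell sketch of the recipe above. Assumes the myzen theme is at
# sites/all/themes/myzen; adjust the path for your own site.
cd sites/all/themes/myzen

# Create the JavaScript file in the conventional js subfolder.
mkdir -p js
cat > js/hello.js <<'EOF'
alert("Hello World!!");
EOF

# Register the script in the theme's .info file.
echo "scripts[] = js/hello.js" >> myzen.info

# Rebuild the theme registry / clear caches. If Drush is installed
# (an assumption, not required by the recipe), the following works;
# otherwise clear the cache from Drupal's Performance settings page.
drush cc all
```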


Setting up a Joomla Web Server using Virtualbox, TurnkeyLinux and DynDNS

Packt
03 Feb 2011
8 min read
VirtualBox 3.1: Beginner's Guide Virtualization is a powerful tool that can make your PC duties easier, no matter if you're a programmer, a systems administrator, a power user, or even a beginner. Have you ever wanted to test the popular Joomla! Content Management System (CMS), but couldn't spare the time and effort to install it in your PC, along with the Apache web server and the MySQL database server? Are you afraid to install Apache, MySQL and PHP in your only PC because it could mess things up? Well, you can forget about all the hassle thanks to Oracle VirtualBox, a powerful virtualization software product that lets you create one or more virtual machines, or VMs, inside your physical PC. Each VM is completely isolated from your main PC and all the other VMs, so it's like having several computers in one physical package, but you don't need the extra space to accommodate all the additional LCDs and PC cases. Cool, huh? In this article, I'm going to show you one of the quickest ways to set up a fully-functional web server right from your own home/office. And why would you need to do something like that? Well, if you want to create a website to establish your own presence on the Internet, there are some costs involved. First of all, you need to pay for a web hosting service and a domain name. So, if you want to learn how to create websites, this would be a perfect way to do it, since all the software we´ll use is free, and with the DynDNS dynamic DNS service, you don't need to pay for a domain name because you can also use one for free. Furthermore, since you're going to host your website on your virtual machine, you can also forget about the web hosting fee. Are these reasons good enough to start experimenting with virtual machines? I'm pretty sure they are! I decided to use the Joomla! Content Management System (CMS) because it has all you need to establish your Internet presence. The TurnkeyLinux Joomla! virtual appliance includes everything you need to have a website running right out of the box, so you won't have to go through the hassle of installing all the required web server software (Apache, MySQL, PHP, etc.). And in case something goes wrong, you can just wipe out your virtual machine and start again from scratch. How about that? The first steps in the tutorial will tell you how to create a virtual machine (VM) with VirtualBox, how to get a preconfigured ISO image from the TurnkeyLinux website with all the necessary stuff to install the Apache web server, the MySQL database server and the Joomla! CMS in your VM. Oh, and if you're wondering how to make your web server available on the Internet, don't worry: I'll also show you how to get a free DynDNS account, and how to configure your Cable/DSL router to open port 80 (the HTTP web server port). That way, visitors from the Internet will be able to navigate in your brand-new Joomla! website. You'll need a PC or Mac system with Windows/Linux/Mac OS X installed, at least 1 GB of RAM and a Cable/DSL connection to the Internet, so you can configure your Cable/DSL router to let your virtual machine work as a full-fledged web server. Getting Virtualbox Download the most recent version of Oracle VirtualBox from the official website: VirtualBox 4.0 for Windows VirtualBox 4.0 for Mac OS X VirtualBox 4.0 for Linux Once the download is completed, follow the instructions included in the User Manual to install VirtualBox in your specific operating system. 
Downloading the Turnkey Joomla Appliance

You can download the Joomla appliance from the TurnkeyLinux website. Just click on the following link to start downloading it to your computer: http://www.turnkeylinux.org/download?file=turnkey-joomla-11.0-lucid-x86.iso.

Creating a new virtual machine

Open VirtualBox, click on New to open the New Virtual Machine Wizard and then click on Next. Type MyJoomlaVM in the Name field, select Linux as the Operating System and Ubuntu as the Version, and click on Next to continue.

The Memory dialog will show up next. Select at least 384 MB in the Base Memory Size slider (you can press the Left and Right arrow keys to increase/decrease the memory value, depending on the total memory available in your PC) and click Next to continue.

Leave the default values in the Virtual Hard Disk window and click Next four times to finish configuring your virtual machine with the default values. Then click Finish twice in the Summary dialogs that will show up afterwards, and you'll be taken back to the VirtualBox main screen. Your MyJoomlaVM virtual machine will appear in the virtual machine list, as shown below.

Now we need to tweak some network settings so your virtual machine can behave as a real PC with its own IP address. Click the Settings button to open the MyJoomlaVM – Settings window, and then select the Network section. Make sure the Adapter 1 tab is selected; then click on the Attached to list box and select Bridged Adapter instead of NAT. Click on the OK button to close the MyJoomlaVM – Settings window and return to the VirtualBox main screen.

Installing the Joomla TurnkeyLinux appliance

To start your virtual machine, double-click on its name in the virtual machines' list or select it and click on the Start button. The first time you open a virtual machine, the First Run Wizard dialog shows up. This wizard helps you to install an operating system in your virtual machine. Click Next to go to the Select Installation Media window, where you can select a media source to install an operating system in your virtual machine. In this case you're going to select the Turnkey Joomla ISO live CD image you downloaded before. Click on the folder icon located at the right-hand side of the Media Source list box.

The Choose a virtual CD/DVD disk file dialog will open up. Use this dialog to locate and select the Joomla Turnkey ISO image you previously downloaded; then click on Open to return to the Select Installation Media dialog and click Next to continue. The Summary window will appear next, showing the media you selected. Click on Finish to exit the First Run Wizard and start your virtual machine.

Wait until the TurnkeyLinux boot screen shows up; then make sure the Install to hard disk option is highlighted and hit Enter to proceed (you can also wait until installation begins automatically). Wait until the Debian Installer Live screen appears. Use the keyboard to select the Guided – use the entire disk option and hit Enter to continue. The next screen will ask you if you want to write the changes to disk. Select Yes and hit Enter to continue. The Debian Installer will start installing Ubuntu and the Joomla appliance in your virtual machine.

After a while, a screen will appear asking if you want to install the GRUB boot loader to the master boot record. Select Yes and hit Enter to continue. The next screen will tell you that the installation is complete, and will ask if you want to restart your computer (virtual machine). Make sure Yes is selected and hit Enter to continue.
Wait until your virtual machine boots up and asks you to type a new password for the root account. Type a secure password and hit Enter to continue. Type the password again and hit Enter to proceed. Now the system will ask for the MySQL server 'root' account’s password. Type a password of your choice and hit Enter. Repeat the procedure to confirm the password. Finally, the system will ask you to type a password for the Joomla 'admin' account. Choose a secure password, type it and hit Enter. Once again, repeat the procedure to confirm the password. The next step is to write the email address for the Joomla 'admin' account. Type a real email address and hit Enter to proceed. Next you’ll see a Link TKLBAM to the Turnkey Hub screen. In this case we’re not going to use the Turnkey Hub (a backup/restore system), so don’t type anything and hit Enter to continue. The next screen that will show up is Security Updates. You can leave the default option (Install) and hit Enter to proceed. (Be patient while the security updates get installed in your virtual machine; sometimes it can take several minutes.) Once the security updates finish installing in your virtual machine, the JOOMLA appliance services screen will pop up, and your virtual machine will be ready to roll: Write down the IP address assigned to your Joomla virtual machine (in the above picture it’s 192.168.1.79, but your IP address may vary). Then, open a web browser and type http://youripaddress (remember to replace youripaddress with the IP address you wrote down) to verify your Joomla virtual machine is working. The next screen should appear in your browser: Finally, you need to unmount the TurnkeyLinux Joomla ISO image from your machine’s virtual drive. This is to avoid booting up the ISO image again instead of booting up from your hard drive. Go to the Devices menu and select CD/DVD Devices > Remove disk from virtual drive: That’s it for now. Now let’s see how to get a free domain name and configure your Cable/DSL router to accept incoming connections for your Joomla virtual machine.
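As a side note, everything we just did through the VirtualBox GUI can also be scripted with the VBoxManage command-line tool that ships with VirtualBox. The following is only a rough sketch using VirtualBox 4.x syntax; the bridged adapter name (eth0), the disk size, and the location of the ISO are assumptions you will need to adjust for your own machine.

```bash
# Rough VBoxManage equivalent of the GUI steps above (VirtualBox 4.x syntax).
# The bridged adapter name (eth0) and disk size are assumptions -- adjust them.
VBoxManage createvm --name MyJoomlaVM --ostype Ubuntu --register
VBoxManage modifyvm MyJoomlaVM --memory 384 --nic1 bridged --bridgeadapter1 eth0

# Create a virtual disk and an IDE controller, then attach the disk and the ISO.
VBoxManage createhd --filename MyJoomlaVM.vdi --size 8192
VBoxManage storagectl MyJoomlaVM --name "IDE Controller" --add ide
VBoxManage storageattach MyJoomlaVM --storagectl "IDE Controller" \
    --port 0 --device 0 --type hdd --medium MyJoomlaVM.vdi
VBoxManage storageattach MyJoomlaVM --storagectl "IDE Controller" \
    --port 1 --device 0 --type dvddrive \
    --medium turnkey-joomla-11.0-lucid-x86.iso

# Boot the VM; the TurnkeyLinux installer takes over from here.
VBoxManage startvm MyJoomlaVM
```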

Getting Started with WordPress 3

Packt
02 Feb 2011
7 min read
WordPress 2.7 Complete

Create your own complete blog or web site from scratch with WordPress:

Everything you need to set up your own feature-rich WordPress blog or web site
Clear and practical explanations of all aspects of WordPress
In-depth coverage of installation, themes, syndication, and podcasting
Explore WordPress as a fully functioning content management system
Concise, clear, and easy to follow; rich with examples

WordPress is available in easily downloadable formats from its website, http://wordpress.org/download/. WordPress is a free, open source application, and is released under the GNU General Public License (GPL). This means that anyone who produces a modified version of software released under the GPL is required to keep those same freedoms (people buying or using the software may also modify and redistribute it) attached to his or her modified version. This way, WordPress and other software released under the GPL are kept open source.

Where to build your WordPress website

The first decision you have to make is where your blog is going to live. You have two basic options for the location where you will create your site. You can:

Use WordPress.com
Install on a server (hosted or your own)

Let's look at some of the advantages and disadvantages of each of these two choices. The advantage of using WordPress.com is that they take care of all of the technical details for you. The software is already installed; they'll upgrade it for you whenever there's an upgrade; and you're not responsible for anything else. Just manage your content! The big disadvantage is that you lose almost all of the theme and plugin control you'd have otherwise. WordPress.com will not let you upload or edit your own theme, though it will let you (for a fee) edit the CSS of any theme you use. WordPress.com will not let you upload or manage plugins at all. Some plugins are installed by default (most notably Akismet, for spam blocking, and a fancy statistics plugin), but you can neither uninstall them nor install others. Additional features are available for a fee as well.

The following table is a brief overview of the essential differences between using WordPress.com versus installing WordPress on your own server:

|              | WordPress.com                                    | Your own server                                                                            |
|--------------|--------------------------------------------------|--------------------------------------------------------------------------------------------|
| Installation | You don't have to install anything, just sign up | Install WordPress yourself, either manually or via your host's control panel (if offered)   |
| Themes       | Use any theme made available by WordPress.com    | Use any theme available anywhere, written by anyone (including yourself)                    |
| Plugins      | No ability to choose or add plugins              | Use any plugin available anywhere, written by anyone (including yourself)                   |
| Upgrades     | WordPress.com provides automatic upgrades        | You have to upgrade it yourself when upgrades are available                                 |
| Widgets      | Widget availability depends on available themes  | You can widgetize any theme yourself                                                        |
| Maintenance  | You don't have to do any maintenance             | You're responsible for the maintenance of your site                                         |
| Advertising  | No advertising allowed                           | Advertise anything                                                                          |

Using WordPress.com

WordPress.com (http://wordpress.com) is a free service provided by the WordPress developers, where you can register a blog or non-blog website easily and quickly with no hassle. However, because it is a hosted service, your control over some things will be more limited than it would be if you hosted your own WordPress website. As mentioned before, WordPress.com will not let you edit or upload your own themes or plugins.
Aside from this, WordPress.com is a great place to maintain your personal site if you don't need to do anything fancy with a theme. To get started, go to http://wordpress.com, which will look something like the following: To register your free website, click on the loud orange-and-white Sign up now button. You will be taken to the signup page. In the following screenshot, I've entered my username (what I'll sign in with) and a password (note that the password measurement tool will tell you if your password is strong or weak), as well as my e-mail address. Be sure to check the Legal flotsam box and leave the Gimme a blog! radio button checked. Without it, you won't get a website. After providing this information and clicking on the Next button, WordPress will ask for other choices (Blog Domain, Blog Title, Language, and Privacy), as shown in following screenshot. You can also check if it's a private blog or not. Note that you cannot change the blog domain later! So be sure it's right. After providing this information and clicking on Signup, you will be sent to a page where you can enter some basic profile information. This page will also tell you that your account is set up, but your e-mail ID needs to be verified. Be sure to check your inbox for the e-mail with the link, and click on it. Then, you'll be truly done with the installation. Installing WordPress manually The WordPress application files can be downloaded for free if you want to do a manual installation. If you've got a website host, this process is extremely easy and requires no previous programming skills or advanced blog user experience. Some web hosts offer automatic installation through the host's online control panel. However, be a little wary of this because some hosts offer automatic installation, but they do it in a way that makes updating your WordPress difficult or awkward, or restricts your ability to have free rein with your installation in the future. Preparing the environment A good first step is to make sure you have an environment setup that is ready for WordPress. This means two things: making sure that you verify that the server meets the minimum requirements, and making sure that your database is ready. For WordPress to work, your web host must provide you with a server that does the following two things: Support PHP, which must be at least Version 4.3. Provide you with write access to a MySQL database. MySQL has to be at least Version 4.1.2. You can find out if your host meets these two requirements by contacting your web host. If your web server meets these two basic requirements, you're ready to move on to the next step. As far as web servers go, Apache is the best. However, WordPress will also run on a server running the Microsoft IIS server (though using permalinks will be difficult, if possible at all). Enabling mod_rewrite to use pretty permalinks If you want to use permalinks, your server must be running Unix, and Apache's mod_rewrite option must be enabled. Apache's mod_rewrite is enabled by default in most web hosting accounts. If you are hosting your own account, you can enable mod_rewrite by modifying the Apache web server configuration file. You can check the URL http://www.tutorio.com/tutorial/enable-mod-rewrite-on-apache to learn how to enable mod_rewrite on your web server. If you are running on shared hosting, then ask your system administrator to install it for you. However, it is more likely that you already have it installed on your hosting account. 
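If you have shell access to your host, a few quick commands can confirm the requirements above before you go any further. The exact command names vary by distribution (for example, apache2ctl instead of apachectl on Debian/Ubuntu), so treat this as a sketch rather than a universal recipe.

```bash
# Quick environment sanity checks before installing WordPress.
# (Command names vary by host/distro; these are common defaults.)
php -v               # needs PHP 4.3 or newer
mysql --version      # needs MySQL 4.1.2 or newer

# Confirm Apache has mod_rewrite loaded (needed for pretty permalinks).
# On Debian/Ubuntu the binary is apache2ctl instead of apachectl.
apachectl -M | grep rewrite
```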
Downloading WordPress Once you have checked out your environment, you need to download WordPress from http://wordpress.org/download/. Take a look at the following screenshot in which the download links are available on the right side: The .zip file is shown as a big blue button because that'll be the most useful format for the most people. If you are using Windows, Mac, or Linux operating systems, your computer will be able to unzip that downloaded file automatically. (The .tar.gz file is provided because some Unix users prefer it.) A further note on location We're going to cover installing WordPress remotely. However, if you plan to develop themes or plugins, I suggest that you also install WordPress locally on your own computer's server. Testing and deploying themes and plugins directly to the remote server will be much more time-consuming than working locally. If you look at the screenshots I will be taking of my own WordPress installation, you'll notice that I'm working locally (for example, http://wpbook:8888/ is a local URL). After you download the WordPress .zip file, extract the files, and you'll get a folder called wordpress. It will look like the following screenshot:  
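If you would rather fetch WordPress directly on the server over SSH, something like the following does the same job as the manual download, and also prepares an empty database for the installer. The database name, user, and password here are placeholders of my own choosing; substitute values that suit your site.

```bash
# Download and unpack the latest WordPress release.
wget http://wordpress.org/latest.zip
unzip latest.zip          # produces a folder called "wordpress"

# Create an empty database and a user for WordPress to use.
# The database name, user, and password are placeholders -- choose your own.
mysql -u root -p -e "CREATE DATABASE wordpress;
  GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost'
  IDENTIFIED BY 'choose-a-password';
  FLUSH PRIVILEGES;"
```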

Compression Formats in Linux Shell Script

Packt
31 Jan 2011
6 min read
Linux Shell Scripting Cookbook

Solve real-world shell scripting problems with over 110 simple but incredibly effective recipes:

Master the art of crafting one-liner command sequences to perform tasks such as text processing, digging data from files, and lots more
Practical problem-solving techniques adherent to the latest Linux platform
Packed with easy-to-follow examples to exercise all the features of the Linux shell scripting language
Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Compressing with gunzip (gzip)

gzip is a commonly used compression format on GNU/Linux platforms. Utilities such as gzip, gunzip, and zcat are available to handle gzip compression file types. gzip can be applied to a single file only; it cannot archive directories and multiple files. Hence we use a tar archive and compress it with gzip. When multiple files are given as input, it will produce several individually compressed (.gz) files. Let's see how to operate with gzip.

How to do it...

In order to compress a file with gzip, use the following command:

```
$ gzip filename
$ ls
filename.gz
```

It will remove the file and produce a compressed file called filename.gz.

Extract a gzip compressed file as follows:

```
$ gunzip filename.gz
```

It will remove filename.gz and produce an uncompressed version of filename.

In order to list out the properties of a compressed file, use:

```
$ gzip -l test.txt.gz
compressed  uncompressed  ratio  uncompressed_name
        35             6 -33.3%  test.txt
```

The gzip command can read a file from stdin and also write a compressed file to stdout. Read from stdin and write to stdout as follows:

```
$ cat file | gzip -c > file.gz
```

The -c option is used to specify output to stdout. We can specify the compression level for gzip. Use the --fast or the --best option to provide low and high compression ratios, respectively.

There's more...

The gzip command is often used with other commands. It also has advanced options to specify the compression ratio. Let's see how to work with these features.

Gzip with tarball

We usually use gzip with tarballs. A tarball can be compressed by using the -z option passed to the tar command while archiving and extracting. You can create gzipped tarballs using the following methods:

Method 1:

```
$ tar -czvvf archive.tar.gz [FILES]
```

Or:

```
$ tar -cavvf archive.tar.gz [FILES]
```

The -a option specifies that the compression format should automatically be detected from the extension.

Method 2: First, create a tarball:

```
$ tar -cvvf archive.tar [FILES]
```

Compress it after tarballing as follows:

```
$ gzip archive.tar
```

If many files (a few hundred) are to be archived in a tarball and need to be compressed, we use Method 2 with a few changes. The issue with giving many files as command arguments to tar is that it can accept only a limited number of files from the command line. In order to solve this issue, we can create a tar file by adding files one by one using a loop with the append option (-r) as follows:

```
FILE_LIST="file1 file2 file3 file4 file5"
for f in $FILE_LIST; do
  tar -rvf archive.tar $f
done
gzip archive.tar
```

In order to extract a gzipped tarball, use:

```
$ tar -xzvvf archive.tar.gz -C extract_directory
```

where:

-x is used for extraction
-z is for gzip specification

Or:

```
$ tar -xavvf archive.tar.gz -C extract_directory
```

In the above command, the -a option is used to detect the compression format automatically.

zcat – reading gzipped files without extracting

zcat is a command that can be used to dump an extracted file from a .gz file to stdout without manually extracting it. The .gz file remains as before, but it will dump the extracted file to stdout as follows:

```
$ ls
test.gz
$ zcat test.gz
A test file
# file test contains a line "A test file"
$ ls
test.gz
```

Compression ratio

We can specify the compression ratio, which is available in the range 1 to 9, where:

1 is the lowest, but fastest
9 is the best, but slowest

You can also specify the ratios in between as follows:

```
$ gzip -9 test.img
```

This will compress the file to the maximum.

Compressing with bunzip2 (bzip2)

bzip2 is another compression technique which is very similar to gzip. bzip2 typically produces smaller (more compressed) files than gzip. It comes with all Linux distributions. Let's see how to use bzip2.

How to do it...

In order to compress with bzip2, use:

```
$ bzip2 filename
$ ls
filename.bz2
```

It will remove the file and produce a compressed file called filename.bz2.

Extract a bzipped file as follows:

```
$ bunzip2 filename.bz2
```

It will remove filename.bz2 and produce an uncompressed version of filename.

bzip2 can read a file from stdin and also write a compressed file to stdout. In order to read from stdin and write to stdout, use:

```
$ cat file | bzip2 -c > file.tar.bz2
```

-c is used to specify output to stdout.

We usually use bzip2 with tarballs. A tarball can be compressed by using the -j option passed to the tar command while archiving and extracting. Creating a bzipped tarball can be done by using the following methods:

Method 1:

```
$ tar -cjvvf archive.tar.bz2 [FILES]
```

Or:

```
$ tar -cavvf archive.tar.bz2 [FILES]
```

The -a option specifies to automatically detect the compression format from the extension.

Method 2: First create the tarball:

```
$ tar -cvvf archive.tar [FILES]
```

Compress it after tarballing:

```
$ bzip2 archive.tar
```

If we need to add hundreds of files to the archive, the above commands may fail. To fix that issue, use a loop to append files to the archive one by one using the -r option.

Extract a bzipped tarball as follows:

```
$ tar -xjvvf archive.tar.bz2 -C extract_directory
```

In this command:

-x is used for extraction
-j is for bzip2 specification
-C is for specifying the directory to which the files are to be extracted

Or, you can use the following command:

```
$ tar -xavvf archive.tar.bz2 -C extract_directory
```

-a will automatically detect the compression format.

There's more...

bunzip2 has several additional options to carry out different functions. Let's go through a few of them.

Keeping input files without removing them

While using bzip2 or bunzip2, it will remove the input file and produce a compressed output file. We can prevent it from removing input files by using the -k option. For example:

```
$ bunzip2 test.bz2 -k
$ ls
test  test.bz2
```

Compression ratio

We can specify the compression ratio, which is available in the range of 1 to 9 (where 1 is the least compression, but fast, and 9 is the highest possible compression, but much slower). For example:

```
$ bzip2 -9 test.img
```

This command provides maximum compression.
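To see the difference between the two formats on your own data, a tiny script like the one below compresses the same file with both tools (without deleting the input) and prints the resulting sizes. The input file name is a placeholder, and the du -b option assumes GNU coreutils.

```bash
#!/bin/bash
# Compare gzip and bzip2 on the same input without destroying it.
# Replace logs.tar with a file of your own.
FILE=logs.tar

gzip  -9 -c "$FILE" > "$FILE.gz"     # -c writes to stdout, keeping the input
bzip2 -9 -c "$FILE" > "$FILE.bz2"

# du -b prints sizes in bytes (GNU coreutils); ls -l works everywhere.
du -b "$FILE" "$FILE.gz" "$FILE.bz2"
```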

Inkscape FAQs

Packt
31 Jan 2011
4 min read
Have you got questions on Inkscape you want answers for? You've come to the right place. Whether you're new to the web design software or there are a couple of issues puzzling you, we've put together this FAQ to answer some of the most common Inkscape queries.

What is Inkscape?
Inkscape is an open source, free program that creates vector-based graphics that can be used in web, print, and screen design, as well as in interface and logo creation and material cutting. Its capabilities are similar to those of commercial products such as Adobe Illustrator, Macromedia Freehand, and CorelDraw, and it can be used for any number of practical purposes. It is well suited to web designers who want to add attractive visual elements to their websites.

What license is Inkscape released under?
Inkscape is a free, open source program developed by a group of volunteers under the GNU General Public License (GPL). You not only get a free download, but you can also use the program to create items and freely distribute them, modify the program itself, and share that modified program with others.

What platforms does Inkscape run on?
Inkscape is available for download for the Windows, Macintosh, Linux, and Solaris operating systems.

Where can you download Inkscape from?
Go to the official Inkscape website (http://www.inkscape.org) and download the appropriate version of the software for your computer.

How do you run Inkscape on the Mac OS X operating system?
On Mac OS X, Inkscape typically runs under X11, an implementation of the X Window System software that makes it possible to run X11-based applications in Mac OS X. The X11 application has shipped with Mac OS X since version 10.5. When you open Inkscape on a Mac, it will first open X11 and run Inkscape within that program. Some shortcut key options are lost, but all functionality remains available using the menus and toolbars.

Is the X11 application a part of the Mac OS X operating system?
Yes, if you have Mac OS X version 10.5 or above. If you have an earlier version of the Mac OS X operating system, you can download the X11 application package 2.4.0 or greater from this website: http://xquartz.macosforge.org/trac/wiki/X112.4.0.

What is the interface of Inkscape like?
The Inkscape interface is based on the GNOME UI standard, which uses visual cues and feedback for its icons. For example:
Hovering your mouse over any icon displays a pop-up description of the icon.
If an icon has a dark gray border, it is active and can be used.
If an icon is grayed out, it is not currently available to use with the current selection.
All icons that are in execution mode (or busy) are covered by a dark shadow. This signifies that the application is busy and won't respond to any edit request.
There is a Notification Display on the main screen that displays dynamic help messages for key shortcuts and basic information on how to use the Inkscape software in its current state or based on what objects and tools are selected. Within the main screen there are the main menu; the command, snap, and status bars; the tool controls; and a palette bar.

What are Paths?
Paths have no pre-defined lengths or widths. They are arbitrary in nature and come in three basic types: open paths (which have two ends), closed paths (which have no ends, like a circle), and compound paths (which combine two or more open and/or closed paths). In Inkscape there are a few ways we can make paths: with the Pencil (Freehand), Bezier (Pen), and Calligraphy tools, all of which are found in the tool box.
They can also be created by converting a regular shape or text object into paths.

What shapes can be created in Inkscape?
Inkscape can also create shapes that are part of the SVG standard. These are:
Rectangles and squares
3D boxes
Circles, ellipses, and arcs
Stars
Polygons
Spirals
To create any of these shapes, see the following screenshot. Select (click) the shape tool icon in the tool box and then draw the shape on the canvas by clicking, holding, and then dragging it to the size you want.

What is slicing?
Slicing is a term used to describe breaking an image created in a graphics program into pieces so that it can be re-assembled in HTML to create a web page. To do this, we'll use the Web Slicer extension: from the main menu, select Extensions | Web | Slicer | Create a slicer rectangle.

Linux Shell Script: Logging Tasks

Packt
28 Jan 2011
7 min read
Linux Shell Scripting Cookbook
Collecting information about the operating environment, logged-in users, the time for which the computer has been powered on, and any boot failures is very helpful. This recipe will go through a few commands used to gather information about a live machine.

Getting ready
This recipe will introduce the commands who, w, users, uptime, last, and lastb.

How to do it...
To obtain information about users currently logged in to the machine, use:
$ who
slynux   pts/0   2010-09-29 05:24 (slynuxs-macbook-pro.local)
slynux   tty7    2010-09-29 07:08 (:0)
Or:
$ w
 07:09:05 up  1:45,  2 users,  load average: 0.12, 0.06, 0.02
USER     TTY     FROM    LOGIN@   IDLE  JCPU PCPU WHAT
slynux   pts/0   slynuxs 05:24  0.00s  0.65s 0.11s sshd: slynux
slynux   tty7    :0      07:08  1:45m  3.28s 0.26s gnome-session
It will provide information about the logged-in users, the pseudo TTY used by each user, the command that is currently executing from the pseudo terminal, and the IP address from which the user has logged in. If it is localhost, it will show the hostname. who and w format their output with slight differences; the w command provides more detail than who.
TTY is the device file associated with a text terminal. When a terminal is newly spawned by the user, a corresponding device is created in /dev/ (for example, /dev/pts/3). The device path for the current terminal can be found by typing and executing the command tty.
In order to list the users currently logged in to the machine, use:
$ users
slynux slynux slynux hacker
If a user has opened multiple pseudo terminals, it will show that many entries for the same user. In the above output, the user slynux has opened three pseudo terminals. The easiest way to print unique users is to use sort and uniq to filter the output as follows:
$ users | tr ' ' '\n' | sort | uniq
hacker
slynux
We have used tr to replace the space character with '\n'. Then the combination of sort and uniq produces a unique entry for each user.
In order to see how long the system has been powered on, use:
$ uptime
 21:44:33 up  3:17,  8 users,  load average: 0.09, 0.14, 0.09
The time that follows the word up indicates the time for which the system has been powered on. We can write a simple one-liner to extract the uptime only. The load average in uptime's output is a parameter that indicates the system load.
In order to get information about previous boot and user login sessions, use:
$ last
slynux   tty7         :0               Tue Sep 28 18:27   still logged in
reboot   system boot  2.6.32-21-generi Tue Sep 28 18:10 - 21:46  (03:35)
slynux   pts/0        :0.0             Tue Sep 28 05:31 - crash  (12:39)
The last command will provide information about logged-in sessions. It is actually a log of system logins that consists of information such as the tty from which a user has logged in, the login time, status, and so on. The last command uses the log file /var/log/wtmp for its input log data. It is also possible to explicitly specify the log file for the last command using the -f option.
For example:
$ last -f /var/log/wtmp
In order to obtain info about login sessions for a single user, use:
$ last USER
Get information about reboot sessions as follows:
$ last reboot
reboot   system boot  2.6.32-21-generi Tue Sep 28 18:10 - 21:48  (03:37)
reboot   system boot  2.6.32-21-generi Tue Sep 28 05:14 - 21:48  (16:33)
In order to get information about failed user login sessions, use:
# lastb
test     tty8    :0          Wed Dec 15 03:56 - 03:56  (00:00)
slynux   tty8    :0          Wed Dec 15 03:55 - 03:55  (00:00)
You should run lastb as the root user.

Logging access to files and directories
Logging of file and directory access is very helpful for keeping track of changes that are happening to files and folders. This recipe will describe how to log user accesses.

Getting ready
The inotifywait command can be used to gather information about file accesses. It doesn't come by default with every Linux distro; you have to install the inotify-tools package by using a package manager. It also requires the Linux kernel to be compiled with inotify support. Most of the new GNU/Linux distributions come with inotify enabled in the kernel.

How to do it...
Let's walk through the shell script that monitors directory access:
#!/bin/bash
#Filename: watchdir.sh
#Description: Watch directory access
path=$1 #Provide path of directory or file as argument to script
inotifywait -m -r -e create,move,delete $path -q
A sample output is as follows:
$ ./watchdir.sh .
./ CREATE new
./ MOVED_FROM new
./ MOVED_TO news
./ DELETE news

How it works...
The previous script will log create, move, and delete events for files and folders under the given path. The -m option is given for monitoring the changes continuously, rather than exiting after an event happens. -r enables a recursive watch of the directories. -e specifies the list of events to be watched. -q reduces the verbose messages and prints only the required ones. This output can be redirected to a log file; a small wrapper for doing so is sketched below. We can also add events to or remove events from the list; important events that can be watched include access, modify, create, move, and delete.
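Since the recipe notes that the output can be redirected to a log file, here is one way to do that while adding a timestamp to each event. This is a minimal sketch based on the watchdir.sh script above, not a recipe from the book; the script name accesslog.sh and the default log file name are hypothetical.
#!/bin/bash
#Filename: accesslog.sh (hypothetical)
#Description: Append timestamped directory-access events to a log file
path=$1                    # directory (or file) to watch
logfile=${2:-access.log}   # where to store the events

inotifywait -m -r -e create,move,delete -q "$path" |
while read event; do
    # Prefix each reported event with a date/time stamp before logging it
    echo "$(date '+%Y-%m-%d %H:%M:%S') $event" >> "$logfile"
done
It can be run in the background, for example: ./accesslog.sh /home/slynux watch.log &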
Logfile management with logrotate
Logfiles are essential components of a Linux system's maintenance. Logfiles help to keep track of events happening on the different services on the system. This helps the sysadmin to debug issues and also provides statistics on events happening on the live machine. Management of logfiles is required because, as time passes, a logfile gets bigger and bigger. Therefore, we use a technique called rotation to limit the size of the logfile: when the logfile grows beyond the limit, it is stripped and the older entries are stored in an archive. Hence older logs can be stored and kept for future reference. Let's see how to rotate logs and store them.

Getting ready
logrotate is a command every Linux system admin should know. It helps to restrict the size of a logfile to the given SIZE. In a logfile, the logger appends information to the log file; hence the most recent information appears at the bottom of the log file. logrotate will scan specific logfiles according to the configuration file. It will keep the last 100 kilobytes (for example, with SIZE = 100k specified) in the logfile and move the rest of the data (the older log data) to a new file, logfile_name.1. When more entries accumulate and the logfile exceeds the SIZE again, it updates the logfile with the recent entries and creates logfile_name.2 with the older logs. This process can easily be configured with logrotate. logrotate can also compress the older logs as logfile_name.1.gz, logfile_name.2.gz, and so on. The option for whether older log files are to be compressed or not is available in the logrotate configuration.

How to do it...
logrotate has its configuration directory at /etc/logrotate.d. If you list the contents of this directory, you will find many other logfile configurations. We can write our own custom configuration for our logfile (say /var/log/program.log) as follows:
$ cat /etc/logrotate.d/program
/var/log/program.log {
missingok
notifempty
size 30k
compress
weekly
rotate 5
create 0600 root root
}
Now the configuration is complete. /var/log/program.log in the configuration specifies the logfile path; old logs will be archived in the same directory path. Let's see what each of these parameters does:
missingok ignores the logfile if it is missing and continues without raising an error.
notifempty rotates the log only if the source logfile is not empty.
size 30k limits the size the logfile may reach before rotation takes place (1M can be used for 1 MB, and so on).
compress enables gzip compression for the older rotated logs.
weekly specifies the rotation interval (daily, weekly, or yearly can be used).
rotate 5 is the number of older copies of the logfile to keep; here, up to five archives are retained.
create 0600 root root recreates the logfile after rotation with the specified mode, owner, and group.
These options are all optional; we can specify only the required options in the logrotate configuration file. There are numerous options available with logrotate. Please refer to the man pages (http://linux.die.net/man/8/logrotate) for more information on logrotate.
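Before relying on a new configuration, it can be exercised by hand. The following is a minimal sketch, not part of the recipe: it assumes the /etc/logrotate.d/program configuration shown above is in place and that the commands are run as root. logrotate's -d option performs a debug dry run that only reports what would happen, and -f forces a rotation even if the size or time condition has not yet been reached.
#!/bin/bash
# Exercise the logrotate configuration written in this recipe.
conf=/etc/logrotate.d/program
logfile=/var/log/program.log

# Grow the logfile past the 30k threshold with some filler entries
yes "test log entry" | head -n 3000 >> "$logfile"

# Dry run: report what logrotate would do without touching any files
logrotate -d "$conf"

# Force an actual rotation, then list the rotated (and compressed) logs
logrotate -f "$conf"
ls "$logfile"*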

Linux Shell Script: Monitoring Activities

Packt
28 Jan 2011
8 min read
Linux Shell Scripting Cookbook
Solve real-world shell scripting problems with over 110 simple but incredibly effective recipes
Master the art of crafting one-liner command sequences to perform tasks such as text processing, digging data from files, and a lot more
Practical problem-solving techniques adherent to the latest Linux platform
Packed with easy-to-follow examples to exercise all the features of the Linux shell scripting language
Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible.

Disk usage hacks
Disk space is a limited resource. We frequently perform disk usage calculations on hard disks or other storage media to find out the free space available on the disk. When free space becomes scarce, we need to find large files that can be deleted or moved in order to create free space. Disk usage manipulations are commonly used in shell scripting contexts. This recipe will illustrate the various commands used for disk manipulations and the problems where disk usage can be calculated with a variety of options.

Getting ready
df and du are the two significant commands used for calculating disk usage in Linux. The command df stands for disk free and du stands for disk usage. Let's see how we can use them to perform various tasks that involve disk usage calculation.

How to do it...
To find the disk space used by a file (or files), use:
$ du FILENAME1 FILENAME2 . .
For example:
$ du file.txt
4
The result is, by default, shown in 1024-byte (1 KB) blocks rather than bytes.
In order to obtain the disk usage for all files inside a directory, along with the individual disk usage for each file shown on each line, use:
$ du -a DIRECTORY
-a outputs results for all files in the specified directory or directories recursively.
Running du DIRECTORY will output a similar result, but it will show only the size consumed by subdirectories. However, it does not show the disk usage for each of the files. For printing the disk usage by files, -a is mandatory.
For example:
$  du -a test
4  test/output.txt
4  test/process_log.sh
4  test/pcpu.sh
16  test
An example of using du DIRECTORY is as follows:
$ du test
16  test

There's more...
Let's go through additional usage practices for the du command.

Displaying disk usage in KB, MB, or Blocks
By default, the disk usage command displays the space used by a file in disk blocks. A more human-readable format is when disk usage is expressed in standard units such as KB, MB, or GB. In order to print the disk usage in a display-friendly format, use -h as follows:
du -h FILENAME
For example:
$ du -sh test/pcpu.sh
4.0K  test/pcpu.sh
# Multiple file arguments are accepted
Or:
# du -h DIRECTORY
$ du -h hack/
16K  hack/

Finding the 10 largest size files from a given directory
Finding large files is a regular task we come across. We regularly need to delete those huge files or move them. We can easily find large files using the du and sort commands. The following one-line script can achieve this task:
$ du -ak SOURCE_DIR | sort -nrk 1 | head
Here -a specifies all directories and files. Hence du traverses the SOURCE_DIR and calculates the sizes of all files. The first column of the output contains the size in kilobytes, since -k is specified, and the second column contains the file or folder name. sort is used to perform a numerical sort on column 1 and reverse its order. head is used to parse the first 10 lines from the output.
For example:
$ du -ak /home/slynux | sort -nrk 1 | head -n 4
50220 /home/slynux
43296 /home/slynux/.mozilla
43284 /home/slynux/.mozilla/firefox
43276 /home/slynux/.mozilla/firefox/8c22khxc.default
One of the drawbacks of the above one-liner is that it includes directories in the result. However, when we need to find only the largest files and not directories, we can improve the one-liner to output only the large files as follows:
$ find . -type f -exec du -k {} \; | sort -nrk 1 | head
We used find to pass only files to du, rather than allowing du to traverse recursively by itself.
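The find-based one-liner can be packaged as a small script that takes the directory and the number of entries as arguments. This is a minimal sketch rather than a recipe from the book; the script name top_files.sh and its defaults are hypothetical.
#!/bin/bash
#Filename: top_files.sh (hypothetical)
#Description: List the N largest files under a directory, largest first
dir=${1:-.}       # directory to scan (defaults to the current directory)
count=${2:-10}    # number of entries to show (defaults to 10)

# du -k prints sizes in kilobytes; sort -nrk 1 orders them largest first
find "$dir" -type f -exec du -k {} \; | sort -nrk 1 | head -n "$count"
For example, ./top_files.sh /home/slynux 5 would list the five largest files under /home/slynux.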
Calculating execution time for a command
While testing an application or comparing different algorithms for a given problem, the execution time taken by a program is very critical. A good algorithm should execute in the minimum amount of time. There are several situations in which we need to monitor the time taken for execution by a program. For example, while learning about sorting algorithms, how do you practically state which algorithm is faster? The answer is to calculate the execution time for the same data set. Let's see how to do it.

How to do it...
time is a command that is available on any UNIX-like operating system. To measure a command's execution time, prefix it with time, for example:
$ time COMMAND
The command will execute and its output will be shown. Along with the output, the time command appends the time taken to stderr. An example is as follows:
$ time ls
test.txt next.txt
real    0m0.008s
user    0m0.001s
sys     0m0.003s
It will show the real, user, and system times for execution. The three different times can be defined as follows:
Real is wall clock time, that is, the time from start to finish of the call. This is all elapsed time, including time slices used by other processes and the time that the process spends when blocked (for example, while waiting for I/O to complete).
User is the amount of CPU time spent in user-mode code (outside the kernel) within the process. This is only the actual CPU time used in executing the process. Other processes, and the time that the process spends when blocked, do not count towards this figure.
Sys is the amount of CPU time spent in the kernel within the process. This is the CPU time spent in system calls within the kernel, as opposed to library code, which still runs in user space. Like user time, this is only the CPU time used by the process.
An executable binary of the time command is available at /usr/bin/time, and a shell built-in named time also exists. When we run time, it calls the shell built-in by default. The shell built-in time has limited options; hence, we should use the absolute path of the executable (/usr/bin/time) for the additional functionality.
We can write these time statistics to a file using the -o filename option as follows:
$ /usr/bin/time -o output.txt COMMAND
The filename should always appear after the -o flag. In order to append the time statistics to a file without overwriting, use the -a flag along with the -o option as follows:
$ /usr/bin/time -a -o output.txt COMMAND
We can also format the time outputs using format strings with the -f option. A format string consists of parameters corresponding to specific options prefixed with %.
The format strings for real time, user time, and sys time are as follows:
Real time: %e
User time: %U
sys time: %S
By combining parameter strings, we can create formatted output as follows:
$ /usr/bin/time -f "FORMAT STRING" COMMAND
For example:
$ /usr/bin/time -f "Time: %U" -a -o timing.log uname
Linux
Here %U is the parameter for user time.
When formatted output is produced, the formatted timing information is written to standard error, while the output of the COMMAND being timed is written to standard output. We can redirect the command's output using a redirection operator (>) and redirect the timing information using the (2>) error redirection operator. For example:
$ /usr/bin/time -f "Time: %U" uname > command_output.txt 2> time.log
$ cat time.log
Time: 0.00
$ cat command_output.txt
Linux
Many details regarding a process can be collected using the time command. The important details include the exit status, the number of signals received, the number of context switches made, and so on. Each of these can be displayed by using a suitable format string parameter; for instance, %x gives the exit status, %k the number of signals delivered, %w and %c the voluntary and involuntary context switches, and %Z the system page size.
For example, the page size can be displayed using the %Z parameter as follows:
$ /usr/bin/time -f "Page size: %Z bytes" ls > /dev/null
Page size: 4096 bytes
Here the output of the timed command is not required, and hence the standard output is directed to the /dev/null device in order to prevent it from writing to the terminal.
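To make the comparison of algorithms mentioned at the start of this recipe concrete, the formatted output can be appended to a log, one line per run. This is a minimal sketch, not from the book; the two commands being compared and the input file name are placeholders.
#!/bin/bash
# Compare the execution times of two commands on the same data set.
# %e = real time, %U = user time, %S = sys time (see the format strings above).
log=timing.log
input=dataset.txt        # placeholder input file

for cmd in "sort $input" "sort -u $input"; do
    /usr/bin/time -f "$cmd -> real: %e user: %U sys: %S" \
        -a -o "$log" sh -c "$cmd > /dev/null"
done

cat "$log"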

Managing Records in Alfresco 3

Packt
25 Jan 2011
12 min read
Alfresco 3 Records Management Comply with regulations and secure your organization’s records with Alfresco Records Management. Successfully implement your records program using Alfresco Records Management, fully certified for DoD-5015.2 compliance The first and only book to focus exclusively on Alfresco Records Management Step-by-step instructions describe how to identify records, organize records, and manage records to comply with regulatory requirements Learn in detail about the software internals to get a jump-start on performing customizations Records Details Much of the description in this article focuses on record features that are found on the Records Details page. An abbreviated set of metadata and available actions for the record is shown on the row for the record in the File Plan. The Details page for a record is a composite screen that contains a complete listing of all information for a record, including the links to all possible actions and operations that can be performed on a record. We can get to the Details page for a record by clicking on the link to it from the File Plan page: The Record Details page provides a summary of all available information known about a record and has links to all possible actions that can be taken on it. This is the central screen from which a record can be managed. The Details screen is divided into three main columns. The first column on the screen provides a preview of the content for the record. The middle column lists the record Metadata, and the right-most column shows a list of Actions that can be taken on the record. There are other areas lower down on the page with additional functionality that include a way for the user to manually trigger events in the steps of the disposition, to get URL links to fi le content for the record, and to create relationship links to other records in the File Plan: Alfresco Flash previewer The web preview component in the left column of the Record Details page defines a region in which the content of the record can be visually previewed. It is a bit of an exaggeration to call the preview component a Universal Viewer, but it does come close to that. The viewer is capable of viewing a number of different common file formats and it can be extended to support the viewing of additional file formats. Natively, the viewer is capable of viewing both Flash SWF files and image formats like JPEG, PNG, or GIF. Microsoft Office, OpenOffice, and PDF files are also configured out-of-the-box to be previewed with the viewer by first converting the files to PDF and then to Flash. The use of an embedded viewer in Share means that client machines don't have to have a viewing application installed to be able to view the file contents of a record. For example, a client machine running an older version of Microsoft Word may not have the capability to open a record saved in the newer Word DOCX format, but within Share, using the viewer, that client would be able to preview and read the contents of the DOCX file. The top of the viewer has a header area that displays the icon of a record alongside the name of the record being viewed. Below that, there is a toolbar with controls for the viewing of the file: At the left of the toolbar, there are controls to change the zoom level. Small increments for zoom in and zoom out are controlled by clicking on the "+" and "-" buttons. The zoom setting can also be controlled by the slider or by specifying a zoom percentage or display factor like Fit Width from the drop-down menu. 
For multi-page documents, there are controls to go to the next or previous pages and to jump to a specific page. The Fullscreen button enlarges the view and displays it using the entire screen. Maximize enlarges the view to display it within the browser window. Image panning and positioning within the viewer can be done by using the scrollbar or by left-clicking and dragging the image with the mouse. A print option is available from an item on the right-mouse click menu. Record Metadata The centre column of the Record Details displays the metadata for the record. There are a lot of metadata properties that are stored with each record. To make it easier to locate specific properties, there is a grouping of the metadata, and each group has a label. The first metadata group is Identification and Status. It contains the Name, Title, and Description of the record. It shows the Unique Record Identifier for the record, and the unique identifier for the record Category to which the record belongs. Additional Metadata items track whether the record has been Declared, when it was Declared, and who Declared it: The General group for metadata tracks the Mimetype and the Size of the file content, as well as who Created or last made any modifications to the record. Additional metadata for the record is listed under groups like Record, Security, Vital Record Information, and Disposition. The Record group contains the metadata fields Location, Media Type, and Format, all of which are especially useful for managing non-electronic records. Record actions In the right-most column of the Record Details page, there is a list of Actions that are available to perform on the record. The list displayed is dynamic and changes based on the state of the record. For example, options like Declare as Record or Undo Cutoff are only displayed when the record is in a state where that action is possible: Download action The Download action does just that. Clicking on this action will cause the file content for the record to be downloaded to the user's desktop. Edit Metadata This action displays the Edit form matching the content type for the record. For example, if the record has a content type of cm:content, the Edit form associated with the type cm:content will be displayed to allow the editing of the metadata. Items identified with asterisks are required fields. Certain fields contain data that is not meant to change and are grayed out and non-selectable: Copy record Clicking on the Copy to action will pop up a repository directory browser that allows a copy of the record to be filed to any Folder within the File Plan. The name of the new record will start with the words "Copy of" and end with the name of the record being copied. Only a single copy of a record can be placed in a Folder without first changing the name of the first copy. It isn't possible to have two records in the same Folder with the same name. Move record Clicking on the Move to action pops up a dialog to browse to a new Folder for where the record will be moved. The record is removed from the original location and moved to the new location. File record Clicking on the File to action pops up a dialog to identify a new Folder for where the record will be filed. A reference to the record is placed in the new Folder. After this operation, the record will basically be in two locations. Deleting the record from either of the locations causes the record to be removed from both of the locations. 
After filing the record, a clip status icon is displayed on the upper-left next to the checkbox for selection. The status indicates that one record is filed in multiple Folders of the File Plan: Delete record Clicking on the Delete action permanently removes the item from the File Plan. Note that this action differs from Destroy that removes only the file content from a record as part of the final step of a disposition schedule. Audit log At any point in the lifecycle of a record, an audit log is available that shows a detailed history of all activities for the record. The record audit log can help to answer questions that may come up such as which users have been involved with the record and when specific lifecycle events for the record have occurred. The audit log also provides information that can confirm whether activities in the records system are both effective and compliant with record policies. The View Audit Log action creates and pops up a dialog containing a detailed historical report for the record. The report includes very detailed and granular information about every change that has ever been made to the record. Each entry in the audit log includes a timestamp for when the change was made, the user that made the change, and the type of change or event that occurred. If the event involved the change of any metadata, the original values and the changed values for the metadata are noted in the report. By clicking on the File as Record button on the dialog, the audit report for the record itself can be captured as a record that can then be filed within the File Plan. The report is saved in HTML file format. Clicking on the Export button at the top of the dialog enables the audit report to be downloaded in HTML format: The Audit log, discussed here, provides very granular information about any changes that have occurred to a specific record. Alfresco also provides a tool included with the Records Management Console, also called Audit, which can create a very detailed report showing all activities and actions that have occurred throughout the records system. Links Below the Actions component is a panel containing the Share component. This is a standard component that is also used in the Share Document Library. The component lists three URL links in fields that can be easily copied from and pasted to. The URLs allow record content and metadata to be easily shared with others. The first link in the component is the Download File URL. Referencing this link causes the content for the record to be downloaded as a file. The second link is the Document URL. It is similar to the first link, but if the browser is capable of viewing the file format type, the content will be displayed in the browser; otherwise it is downloaded as a file. The third link is the This Page URL. This is the URL to the record details page. Trying to access any of these three URLs will require the user to first authenticate himself/herself before access to any content will be allowed. Events Below the Flash preview panel on the Details page for the record is an area that displays any Events that are currently available to be manually triggered for this record. Remember that each step of a disposition schedule is actionable after either the expiration of a time deadline or by the manual triggering of an event. Events are triggered manually by a user needing to click on a button to indicate that an event has occurred. 
The location of the event trigger buttons differs depending on how the disposition in the record Category was applied. If the disposition was applied at the Folder level, the manual event trigger buttons will be available on the Details page for the Folder. If the disposition was applied at the record level, the event trigger buttons are available on the Record Details page. The buttons that we see on this page are the ones available from the disposition being applied at the record level. The event buttons that apply to a particular state will be grouped together based on whether or not the event has been marked as completed. After clicking on completion, the event is moved to the Completed group. If there are multiple possible events, it takes only a single one of them to complete in order to make the action available. Some actions, like cutoff, will be executed by the system. Other actions, like destruction, require a user to intervene, but will become available from the Share user interface: References Often it is useful to create references or relationships between records. A reference is a link that relates one record to another. Clicking on the link will retrieve and view the related record. In the lower right of the Details page, there is a component for tracking references from this record and to other records in the File Plan. It is especially useful for tracking, for instance, reference links to superseded or obsolete versions of the current record. To attach references, click on the Manage button on the References component: Then, from the next screen, select New Reference: A screen containing a standard Alfresco form will then be displayed. From this screen, it is possible to name the reference, pick another record to reference, and to mark the type of reference. Available reference types include: SupersededBy / Supersedes ObsoletedBy / Obsoletes Supporting Documentation / Supported Documentation VersionedBy / Versions Rendition Cross-Reference After creating the reference, you will then see the new reference show up in the list: How does it work? We've now looked at the functionality of the details page for records and the Series, Category, and Folder containers. In this "How does it work?" section, we'll investigate in greater detail how some of the internals for the record Details page work.

Workflow and Automation for Records in Alfresco 3

Packt
25 Jan 2011
13 min read
  Alfresco 3 Records Management Comply with regulations and secure your organization’s records with Alfresco Records Management. Successfully implement your records program using Alfresco Records Management, fully certified for DoD-5015.2 compliance The first and only book to focus exclusively on Alfresco Records Management Step-by-step instructions describe how to identify records, organize records, and manage records to comply with regulatory requirements Learn in detail about the software internals to get a jump-start on performing customizations Appendix         Read more about this book       (For more resources on this subject, see here.) Current limitations in Records Management The 3.3 release of Alfresco Records Management comes with some limitations about how rules and workflow can be used. Records Management requirements, and specifically the requirements for complying with the DoD 5015.2 specification, made it necessary for Alfresco developers, at least for this first release of Records Management, to make design decisions that involved limiting some of the standard Alfresco capabilities within Records Management. The idea that records are things that need to be locked down and made unalterable is at odds with some of the capabilities of rules and workflow. Proper integration of workflow and rules with Records Management requires that a number of scenarios be carefully worked through. Because of that, as of the Alfresco 3.3 release, both workflow and rules are not yet available for use in Records Management. Another area that is lacking in the 3.3 release is the availability of client-side JavaScript APIs for automating Records Management functions. The implementation of Records Management exposes many webscripts for performing records functions, but that same records functionality hasn't been exposed via a client-side JavaScript API. It is expected that capabilities of Records Management working alongside with rules, workflow, and APIs will likely improve in future releases of Alfresco. While the topic of this article is workflow and automation, the limitations we've just mentioned don't necessarily mean that there isn't anything left to discuss in this article. Remember, Alfresco Records Management co-exists with the rest of Share, and rules and workflow are alive and well in standard Share. Also remember that prior to being filed and declared as a record, many records start out their lives as standard documents that require collaboration and versioning before they are finalized and turned into records. It is in these scenarios, prior to becoming a record, where the use of workflow often makes the most sense. With that in mind, let's look at the capabilities of rules and workflow within Alfresco Share and how these features can be used side-by-side with Records Management today, and also get a feel of how future releases of Alfresco Records Management might be able to more directly apply these capabilities. Alfresco rules The Alfresco rules engine provides an interface for implementing simple business rules for managing the processing and flow of content within an organization. Creating rules is easy to do. Alfresco rules were first available in the Alfresco Explorer client. While rules in the Explorer client were never really hard to apply, the new rules interface in Share makes the creation and use of rules even easier. Rules can be attached to a folder and triggered when content is moved into it, moved out of it, or updated. 
The triggering of a rule causes an action to be run which then operates on the folder or the contents within it. Filters can also be applied to the rules to limit the conditions under which the trigger will fire. A trigger can be set up to run from one of the many pre-defined actions or it can be customized to run a user-defined script. Rules are not available for Share Records Management Folders. You may find that it is possible to bypass this limitation by either navigating to records folders using the Repository browser in Share or by using the JSF Explorer client to get to the Records Management folder. From those interfaces, it is possible to create rules for record folders, but it's not a good idea. Resist the temptation. It is very easy to accidently corrupt records data by applying rules directly to records folders. Defining a rule While rules can't be applied directly to the folders of Records Management, it is possible to apply rules on folders that are outside of Records Management which can then push and file documents into records folders. We'll see that once a document is moved into a records folder, rules can be used to update its metadata and even declare it as a record. To apply rules to a folder outside of the Records Management site, we select the Manage Rules action available for a folder: On the next screen, we click on the Create Rules link, and we then see a page for creating the definition of the rules for the folder: A rule is defined by three main pieces of information: The trigger event Filters to limit the items that are processed The action that is performed Triggers for rules Three different types of events can trigger a rule: Creating or moving items into the folder Updating items in the folder Deleting or moving items from the folder Filters for rules By default, when an event occurs, the rule that is triggered applies the rule action to all items involved in the event. Filters can be applied that will make rules more restrictive, limiting the items that will be processed by the rule. Filters are a collection of conditional expressions that are built using metadata properties associated with the items. There are actually two conditions. The first condition is a list of criteria for different metadata properties, all of which must be met. Similarly, the second condition is a list of criteria for metadata properties, none of which must hold. For example, in the screenshot shown below, there is a filter defined that applies a rule only if the document name begins with FTK_, if it is a Microsoft Word file, and if it does not contain the word Alfresco in the Description property: By clicking on the + and – buttons to the right of each entry, new criteria can be added and existing criteria can be removed from each of the two sets. To help specify the filter criteria from the many possible properties available in Alfresco, a property browser lets the user navigate through properties that can be used when specifying the criteria. The browser shows all available properties associated with aspect and type names: Actions for rules Actions are the operations that a rule runs when triggered. There is a fairly extensive library of actions that come standard with Alfresco and that can be used in the definition of a rule. Many of the actions available can be configured with parameters. This means that a lot of capability can be easily customized into a rule without needing to do any programming. 
If the standard actions aren't sufficient to perform the desired task, an alternative is to write a custom server-side JavaScript that can be attached as the action to a rule that will be run when triggered. The complete Alfresco JavaScript API is available to be used when writing the custom action. Actions that are available for assignment as rule actions are shown in the next table: It is interesting to note that there are many Records Management functions in the list that are available as possible actions, even though rules themselves are not available to be applied directly to records folders. Actions can accept parameters. For example, the Move and Copy actions allow users to select the target destination parameter for the action. Using a pop-up in the Rules Manager, the user can find a destination location by navigating through the folder structure. In the screenshot below, we see another example where the Send email action pops up a form to help configure an e-mail notification that will be sent out when the action is run: Multiple rules per folder The Rules Manager supports the assignment of multiple rules to a single folder. A drag-and-drop interface allows individual rules to be moved into the desired order of execution. When an event occurs on a folder, each rule attached to the folder is checked sequentially to see if there is a match. When the criterion for a rule matches, the rule is run. By creating multiple rules with the same firing criteria, it's possible to arrange rules for sequential processing, allowing fairly complex operations to be performed. The screenshot below shows the rules user interface that lets the user order the processing order for the rules. Actions like move and copy allow the rules on one folder to pass documents on to other folders, which in turn may also have rules assigned to them that can perform additional processing. In this way, rules on Alfresco folders are effectively a "poor man's workflow". They are powerful enough to be able to handle a large number of business automation requirements, although at a fairly simple level. More complex workflows with many conditionals and loops need to be modeled using workflow tools like that of jBPM, which we will discuss later. The next figure shows an example for how rules can be sequentially ordered: Auto-declaration example Now let's look at an example where we file documents into a transit folder which are then automatically processed, moved into the Records Management site, and then declared as records. To do this, we'll create a transit folder and attach two rules to it for performing the processing. The first rule will run a custom script that applies record aspects to the document, completes mandatory records metadata, and then moves the document into a Folder under Records Management, effectively filing it. The second rule then declares the newly filed document as a record. The rules for this example will be applied against a folder called "Transit Folder", which is located within the document library of a standard Share site. Creating the auto-filing script Let's look at the first of the two rules that uses a custom script for the action. It is this script that does most of the work in the example. 
We'll break up the script into two parts and discuss each part individually:
// Find the file name, minus the namespace prefix (assume cm:content)
var fPieces = document.qnamePath.split('/');
fName = fPieces[fPieces.length-1].substr(3);
// Remember the ScriptNode object for the parent folder being filed to
var parentOrigNode = document.parent;
// Get today's date. We use it later to fill in metadata.
var d = new Date();
// Find the ScriptNode for the destination to where we will file -- hardcoded here.
// More complex logic could be used here to categorize the incoming data to file
// into different locations.
var destLocation = "Administrative/General Correspondence/2011_01 Correspondence";
var filePlan = companyhome.childByNamePath("Sites/rm/documentlibrary");
var recordFolder = filePlan.childByNamePath(destLocation);
// Add aspects needed to turn this document into a record
document.addAspect("rma:filePlanComponent");
document.addAspect("rma:record");
// Complete mandatory metadata that will be needed to declare as a record
document.properties["rma:originator"] = document.properties["cm:creator"];
document.properties["rma:originatingOrganization"] = "Formtek, Inc";
document.properties["rma:publicationDate"] = d;
document.properties["rma:dateFiled"] = d;
// Build the unique record identifier -- based on the node-dbid value
var idStr = '' + document.properties["sys:node-dbid"];
// Pad the string with zeros to be 10 characters in length
while (idStr.length < 10)
{
    idStr = '0' + idStr;
}
document.properties["rma:identifier"] = d.getFullYear() + '-' + idStr;
document.save();
document.move(recordFolder);
At the top of the script, the filename that enters the folder is extracted from the document.qnamePath string that contains the complete filename. document is the variable passed into the script that refers to the document object created with information about the new file that is moved into the folder. The destination location to the folder in Records Management is hardcoded here. A more sophisticated script could file incoming documents, based on a variety of criteria, into multiple folders. We add the aspects rma:filePlanComponent and rma:record to the document to prepare it for becoming a record and then complete the metadata properties that are mandatory for being able to declare the document as a record. We're bypassing some code in Records Management that normally would assign the unique record identifier to the document. Normally when filed into a folder, the unique record identifier is automatically generated within the Alfresco core Java code. Because of that, we will need to reconstruct the string and assign the property in the script. We'll follow Alfresco's convention for building the unique record identifier by appending a 10-digit zero-padded integer to the year. Alfresco already has a unique object ID with every object that is used when the record identifier is constructed. The unique ID is called the sys:node-dbid. Note that any unique string could be used for the unique record identifier, but we'll go with Alfresco's convention. Finally, the script saves the changes to the document and the document is filed into the Records Management folder. At this point, the document is now an undeclared document in the Records Management system. We could stop here with this script, but let's go one step further. Let's place a stub document in this same folder that will act as a placeholder to alert users as to where the documents that they filed have been moved.
The second part of the same script handles the creation of a stub file:
// Leave a marker to track the document
var stubFileName = fName + '_stub.txt';
// Create the new document
var props = new Array();
props["cm:title"] = ' Stub';
props["cm:description"] = ' (Stub Reference to record in RM)';
var stubDoc = parentOrigNode.createNode(stubFileName, "cm:content", props);
stubDoc.content = "This file is now under records management control:\n " +
    recordFolder.displayPath + '/' + fName;
// Make a reference to the original document, now a record
stubDoc.addAspect("cm:referencing");
stubDoc.createAssociation(document, "cm:references");
stubDoc.save();
The document name we will use for the stub file is the same as the incoming filename with the suffix _stub.txt appended to it. The script then creates a new node of type cm:content in the transit directory where the user originally uploaded the file. The cm:title and cm:description properties are completed for the new node, and text content is added to the document. The content contains the path to where the original file has been moved. Finally, the cm:referencing aspect is added to the document to allow a reference association to be made between the stub document and the original document that is now under Records Management. The stub document with these new properties is then saved.

Installing the script
In order for the script to be available for use in a rule, it must first be installed under the Data Dictionary area in the repository. To add it, we navigate to the folder Data Dictionary / Scripts in the repository browser within Share. The repository can be accessed from the Repository link across the top of the Share page:
To install the script, we simply copy the script file to this folder. We also need to complete the title for the new document because the title is the string that will be used later to identify it. We will name this script Move to Records Management.

Integrating Facebook with Magento

Packt
21 Jan 2011
4 min read
Magento 1.4 Themes Design
Customize the appearance of your Magento 1.4 e-commerce store with Magento's powerful theming engine
Install and configure Magento 1.4 and learn the fundamental principles behind Magento themes
Customize the appearance of your Magento 1.4 e-commerce store with Magento's powerful theming engine by changing Magento templates, skin files and layout files
Change the basics of your Magento theme from the logo of your store to the color scheme of your theme
Integrate popular social media aspects such as Twitter and Facebook into your Magento store

Facebook (http://www.facebook.com) is a social networking website that allows people to add each other as 'friends' and to send messages and share content. As with Twitter, there are two options you have for integrating Facebook with your Magento store:
Adding a 'Like' button to your store's product pages to allow your customers to show their appreciation for individual products on your store.
Integrating a widget of the latest news from your store's Facebook profile.

Adding a 'Like' button to your Magento store's product pages
The Facebook 'Like' button allows Facebook users to show that they approve of a particular web page, and you can put this to use on your Magento store.

Getting the 'Like' button markup
To get the markup required for your store's 'Like' button, go to the Facebook Developers website at: http://developers.facebook.com/docs/reference/plugins/like. Fill in the form below the description text with relevant values, leaving the URL to like field as URLTOLIKE for now, and setting the Width to 200:
Click on the Get Code button at the bottom of the form and then copy the code that is presented in the iframe field:
The generated markup should look like the following:
<iframe src="http://www.facebook.com/plugins/like.php?href=URLTOLIKE&amp;layout=standard&amp;show_faces=true&amp;width=200&amp;action=like&amp;colorscheme=light&amp;height=80" scrolling="no" frameborder="0" style="border:none; overflow:hidden; width:200px; height:80px;" allowTransparency="true">
</iframe>
You now need to replace the URLTOLIKE in the previous markup with the URL of the current page in your Magento store. The PHP required to do this in Magento looks like the following:
<?php echo $this->helper('core/url')->getCurrentUrl(); ?>
The new Like button markup for your Magento store should now look like the following:
<iframe src="http://www.facebook.com/plugins/like.php?href=<?php echo $this->helper('core/url')->getCurrentUrl(); ?>&amp;layout=standard&amp;show_faces=true&amp;width=200&amp;action=like&amp;colorscheme=light&amp;height=80" scrolling="no" frameborder="0" style="border:none; overflow:hidden; width:200px; height:80px;" allowTransparency="true">
</iframe>
Open your theme's view.phtml file in the /app/design/frontend/default/m2/template/catalog/product directory and locate the lines that read:
<div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?>
</div></div>
Insert the code generated by Facebook here, so that it now reads the following:
<div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?>
</div>
<iframe src="http://www.facebook.com/plugins/like.php?href=<?php echo $this->helper('core/url')->getCurrentUrl();?>&amp;layout=standard&amp;show_faces=true&amp;width=200&amp;action=like&amp;colorscheme=light&amp;height=80" scrolling="no" frameborder="0" style="border:none; overflow:hidden; width:200px; height:80px;" allowTransparency="true">
</iframe>
</div>
Save and upload this file back to your Magento installation and then visit a product page within your store to see the button appear below the brief description of the product:
That's it, your product pages can now be liked on Facebook!
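If you want a quick, scriptable way to confirm that the Like button markup is actually being rendered on the storefront, you can fetch a product page and search the HTML for the Facebook plugin URL. This is only a convenience sketch and not part of the original tutorial; the product URL below is a placeholder for a real page on your store.
#!/bin/bash
# Check that a product page contains the Facebook Like button iframe.
product_url="http://www.example.com/your-product.html"   # placeholder URL

if curl -s "$product_url" | grep -q "facebook.com/plugins/like.php"; then
    echo "Like button markup found on $product_url"
else
    echo "Like button markup NOT found on $product_url"
fi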

Getting Started with the Alfresco Records Management Module

Packt
18 Jan 2011
7 min read
  Alfresco 3 Records Management Comply with regulations and secure your organization’s records with Alfresco Records Management. Successfully implement your records program using Alfresco Records Management, fully certified for DoD-5015.2 compliance The first and only book to focus exclusively on Alfresco Records Management Step-by-step instructions describe how to identify records, organize records, and manage records to comply with regulatory requirements Learn in detail about the software internals to get a jump-start on performing customizations Appendix     The Alfresco stack Alfresco software was designed for enterprise, and as such, supports a variety of different stack elements. Supported Alfresco stack elements include some of the most widely used operating systems, relational databases, and application servers. The core infrastructure of Alfresco is built on Java. This core provides the flexibility for the server to run on a variety of operating systems, like Microsoft Windows, Linux, Mac OS, and Sun Solaris. The use of Hibernate allows Alfresco to map objects and data from Java into almost any relational database. The databases that the Enterprise version of Alfresco software is certified to work with include Oracle, Microsoft SQL Server, MySQL, PostgresSQL, and DB2. Alfresco also runs on a variety of Application Servers that include Tomcat, JBoss, WebLogic, and WebSphere. Other relational databases and application servers may work as well, although they have not been explicitly tested and are also not supported. Details of which Alfresco stack elements are supported can be found on the Alfresco website: http://www.alfresco.com/services/subscription/supported-platforms/3-x/. Depending on the target deployment environment, different elements of the Alfresco stack may be favored over others. The exact configuration details for setting up the various stack element options is not discussed in this book. You can find ample discussion and details on the Alfresco wiki on how to configure, set up, and change the different stack elements. The version-specific installation and setup guides provided by Alfresco also contain very detailed information. The example description and screenshots given in this article are based on the Windows operating system. The details may differ for other operating systems, but you will find that the basic steps are very similar. Additional information on the internals of Alfresco software can be found on the Alfresco wiki at http://wiki.alfresco.com/wiki/Main_Page. Alfresco software As a first step to getting Alfresco Records Management up and running, we need to first acquire the software. Whether you plan to use either the Enterprise or the Community version of Alfresco, you should note that the Records Management module was not available until late 2009. The Records Management module was first certified with the 3.2 release of Alfresco Share. The first Enterprise version of Alfresco that supported Records Management was version 3.2R, which was released in February 2010. Make sure the software versions are compatible It is important to note that there was an early version of Records Management that was built for the Alfresco JSF-based Explorer client. That version was not certified for DoD 5015.2 compliance and is no longer supported by Alfresco. In fact, the Alfresco Explorer version of Records Management is not compatible with the Share version of Records Management, and trying to use the two implementations together can result in corrupt data. 
It is also important to make sure that the version of the Records Management module you use matches the version of the base Alfresco Share software. For example, trying to use the Enterprise version of Records Management on a Community install of Alfresco will lead to problems, even if the version numbers are the same. Likewise, the 3.3 Enterprise version of Records Management is not fully compatible with the 3.2R Enterprise version of Alfresco.

Downloading the Alfresco software

The easiest way to get Alfresco Records Management up and running is to do a fresh install of the latest available Alfresco software.

Alfresco Community

The Community version of Alfresco is a great place to get started, especially if you are just evaluating whether Alfresco meets your needs; with no license fees to worry about, there is really nothing to lose in going this route.

Since Alfresco Community software is constantly in an "in development" state and is not as rigorously tested, it tends to be less stable than the Enterprise version. In terms of the Records Management module for the 3.2+ releases, however, the Community implementation is feature-complete: the same Records Management features found in the Enterprise version are also found in the Community version. The caveat with the Community version is that support is only available from the Alfresco community, should you run across a problem. The Enterprise release includes support from the Alfresco support team and may have bug fixes or patches not yet available for the Community release. There are also repository features beyond Records Management, especially in the area of scalability, that are available only with the Enterprise release.

Building from source code

It is possible to get the most recent version of the Alfresco Community software by taking a snapshot copy of the source code from the publicly accessible Alfresco Subversion repository and building it yourself. Unless you are waiting anxiously for a new feature or bug fix and need a build that includes that code immediately, building from source is probably not the route to go for most people.

Building from source can be time consuming and error prone. The resulting software can be buggy or unstable because of code that was checked in prematurely, or because of changes that were still being merged into the Community release at the time you took your snapshot of the code base. If you do decide to build Alfresco from source, details on how to set up a development environment can be found on the Alfresco wiki: http://wiki.alfresco.com/wiki/Alfresco_SVN_Development_Environment.

Download a Community version snapshot build

Builds of snapshots of the Alfresco Community source code are taken periodically and made available for download. Using a pre-built Community version saves you the hassle of building from scratch. While not thoroughly tested, the snapshot Community builds are usually stable enough to show most of the functionality available for the release, although not everything may work completely.
Links to the most recent Alfresco Community builds can be found on the Alfresco wiki: http://wiki.alfresco.com/wiki/Download_Community_Edition.

Alfresco Enterprise

The alternative to the open source Community software is the Enterprise version of Alfresco. For most organizations, the fully certified Enterprise version is the recommended choice: it has been thoroughly tested and is fully supported. Alfresco customers and partners have access to the most recent Enterprise software from the Alfresco Network site: http://network.alfresco.com/. Trial copies of Alfresco Enterprise software can be downloaded from the Alfresco site: http://www.alfresco.com/try/. Time-limited access to on-demand instances of Alfresco software is also available and is a great way to get a good understanding of how the software works.
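Whichever edition you install, the stack elements described earlier (database, content store, application server) are wired together through Alfresco's configuration. The snippet below is a minimal sketch of an alfresco-global.properties file for a hypothetical installation backed by MySQL; the property names follow the usual Alfresco 3.x conventions, but the path, host, database name, and credentials shown are placeholders of our own, and exact keys and defaults can vary between releases, so check the installation guide for your specific version before relying on them.

# alfresco-global.properties (sketch only; all values below are placeholders)

# Directory where Alfresco keeps binary content and index files on disk
dir.root=C:/Alfresco/alf_data

# Database connection; this example assumes a MySQL database named "alfresco"
db.name=alfresco
db.username=alfresco
db.password=alfresco
db.driver=org.gjt.mm.mysql.Driver
db.url=jdbc:mysql://localhost:3306/alfresco

On a typical Tomcat-based install this file usually lives under tomcat/shared/classes; after editing it, restart the application server so the new settings are picked up.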
Roles and Responsibilities for Records Management Implementation in Alfresco 3

Packt
17 Jan 2011
The steering committee

To succeed, our Records Management program needs continued commitment from all levels of the organization. A good way to cultivate that commitment is to establish a steering committee for the records program. At a high level, the steering committee directs the program, sets priorities for it, and assists in making decisions. The committee provides the leadership to ensure that the program is adequately funded and staffed, properly prioritized against business objectives, and successfully implemented. Committee members should know the organization well and be in a position to be both able and willing to make decisions.

Once the program is implemented, the steering committee should not be dissolved; it still plays an important function. It will continue to meet and oversee the Records Management program to make sure that it is properly maintained and updated. The Records Management system is not something that can simply be turned on and forgotten. The steering committee should meet regularly, track the progress of the implementation, keep abreast of changes in regulatory controls, and be proactive in addressing the needs of the Records Management program.

Key stakeholders

The Records Management steering committee should include executives and senior management from core business units such as Compliance, Legal, Finance, IT, Risk Management, Human Resources, and any other groups that will be affected by Records Management. Each of these members represents the needs and responsibilities of their respective groups, provides input on policies and procedures, and works with the others to develop a priority-sequenced implementation plan that all can agree upon. Creating a committee that is heavily weighted with company executives visibly demonstrates that the company is strongly committed to the program; it also ensures that the right people are on board when it is time to make decisions, which keeps the program on track.

The steering committee should also include representatives from Records Management, IT, and the user community. Alternatively, representatives from these groups can be appointed and, if not members of the steering committee, should report directly to it on a regular basis. The key roles are described below.

The Program Contact

The Program Contact is the chair of the steering committee. This role is typically held by someone in senior management, often from the technology side of the business, such as the Director of IT. The Program Contact signs off with the final approval on technology deliverables and budget items.

The Program Sponsor

A key member of the records steering committee is the Program Sponsor, or Project Champion.
This role is typically held by a senior executive who can represent the records initiative within the organization's executive team. The Sponsor establishes the priority of the records program relative to other organizational initiatives and persuades the executive team and others in the company of the importance of the Records Management initiative.

Corporate Records Manager

Another key role on the steering committee is the Corporate Records Manager. This role acts as the senior champion for the records program and is responsible for defining the procedures and policies around Records Management. The person in this role promotes the rollout and use of the records program, working with each of the participating departments or groups and cultivating local champions for Records Management within each of them. The Corporate Records Manager must communicate effectively with business units to explain the program to all staff members, and must collect user feedback from the various business units so that those ideas can be incorporated into the planning process. The Corporate Records Manager also tries to minimize any adverse user impact or disruption.

Project Manager

The Project Manager typically sits on the steering committee or reports directly to it. The Project Manager plans and tracks the implementation work on the program and ensures that program milestones are met. The person in this role manages both the details of the system setup and implementation and the staff time spent working on program tasks.

Business Analyst

The Business Analyst analyzes business processes and records and, from these, creates a design and plan for the records program implementation. The Business Analyst works closely with the Corporate Records Manager to develop records procedures and provides support for the system during rollout.

Systems Administrator

The Systems Administrator leads the technical team supporting the records application. The Systems Administrator specifies and puts into place the hardware required for the records program: the storage space, memory, and CPU capacity. The person in this role monitors system performance and backs up the system regularly. The Systems Administrator also leads the team that applies software upgrades and performs system troubleshooting.

The Network Administrator

The Network Administrator ensures that the network infrastructure is in place for the records program, with the appropriate bandwidth for the server and the client workstations that will access the application. The Network Administrator works closely with the Systems Administrator.

The Technical Analyst

The Technical Analyst is responsible for analyzing the configuration of the records program and works closely with the Business Analyst and the Corporate Records Manager. The person in this role specifies the classification and structure used for the records program File Plan, the classes of documents stored as records in the records application, and the associated metadata for those documents.

The Records Assistant

The Records Assistant assists in the configuration of the records application. Tasks that the Records Assistant performs include data entry and creating the folder structure hierarchy of the File Plan within the records application, based on the specification created by the Technical Analyst.
The Records Developer

The Records Developer is a software engineer assigned to support the implementation of the records program, based on the requirements derived by the Business Analyst. The Records Developer may need to edit and update configuration files, often using technologies like XML, and may also need to customize the application's user interface.

The Trainer

The Trainer works with end users to ensure that they understand the system and their responsibilities in interacting with it. The Trainer typically creates training materials and provides training seminars to users.

The Technical Support Specialist

The Technical Support Specialist provides support to users on the functioning of the Records Management application. This person is typically an advanced user who is trained to provide guidance on interacting with the application. Beyond the Records Management application itself, the support specialist should also be well versed in records processes and procedures and able to answer user questions about concepts such as the retention and disposition of documents.

The Technical Support Specialist will very often be faced with requests or questions that are really enhancement requests. The support specialist therefore needs a good understanding of the scope of the records implementation and must be able to distinguish an enhancement request from a defect or bug report. Enhancements should be collected and routed back through the Project Manager and, depending on the nature of the request, possibly to the Corporate Records Manager or the steering committee. Similarly, application defects or bugs should be reported back to the Project Manager, who prioritizes them and, as appropriate, assigns them to the Technical Developers or reports them to the Systems Integrator or to Alfresco.

The Users

The Users are the staff members who will use the Records Management application as part of their job. Users are often the key to the success or failure of a records program, yet they are one aspect of the project that is often overlooked. Obviously, it is important that the records application be well designed and meet the objectives and requirements set out for it; but if users complain and cannot accept it, the program will be doomed to failure.

Users will often be asked to change processes that they have become very comfortable with. Frequent and early communication with users is a must in order to gain their acceptance and participation. Prior to and during the implementation of the records system, users should receive status updates and explanations from the Corporate Records Manager and from the Records Manager lead in their business unit. Frequent communication ensures that users' opinions and ideas are heard, and it also helps them learn to use the records system most effectively. Once the application is ready, or better yet, well before it goes live, users should attend training sessions on proper records-handling behavior, get hands-on training with the application, and be instructed in how best to communicate with the Technical Support Specialist should they have questions or encounter problems.
Alfresco, Consultants, and Systems Integrators

Alfresco is the software vendor for Alfresco Records Management, but Alfresco typically does not work directly with customers. We could go it alone, but more likely we will choose to work with one of Alfresco's Systems Integration partners or consultants in planning for and setting up our system. Depending on the size of our organization and the skill set available within it, the Systems Integrator can take on as much or as little of the burden of getting us up and running with our Records Management program. Almost any of the technical team roles discussed in this section, such as the Analyst and Developer roles, and even the role of the Project Manager, can be performed by a Systems Integrator. A list of certified Alfresco integrators can be found on the Alfresco site: http://www.alfresco.com/partners/search.jsp?t=si

A Systems Integrator can bring to our project a breadth of experience that saves time and helps ensure that the project goes smoothly. Alfresco Systems Integration partners know their stuff: they are required to be certified in Alfresco technology, they have worked with Alfresco extensively, they are familiar with best practices, and they have picked up numerous implementation tips and tricks from similar projects with other clients.

Introduction to Successful Records Management Implementation in Alfresco 3

Packt
14 Jan 2011
A preliminary investigation will also give us good information about the types of records we have and roughly how many records we are talking about. We will also dig deeper into the area of Authority Documents and determine exactly what our obligations are as an organization in complying with them. The data we collect in the preliminary investigation will provide the basis for a Business Case that we can present to the executives in the organization, outlining the benefits and advantages of implementing a records system. We will also need to put in place, and communicate organization-wide, a formal policy that explains concisely the goals of the records program and what it means to the organization.

The information covered in this article is important and easily overlooked when starting a Records Management program. We will discuss:

The Preliminary Investigation
Authority Documents
The Steering Committee and Roles in the Records Management Program
Making the Business Case for Records Management
Project Management

Best practices and standards

In this article, we will focus on Records Management best practices. Best practices are the processes, methods, and activities that, when applied correctly, achieve the most repeatable, effective, and efficient results. While an important function of standards is to ensure consistency and interoperability, standards also often provide a good source of information about how to achieve best practice. Much of our discussion here draws heavily on the methodology described in the DIRKS and ISO-15489 standards, which describe Records Management best practices. Before getting into a description of best practices, though, let's look at how these two particular standards came into being and how they relate to other Records Management standards, such as the DoD-5015.2 standard.

Origins of Records Management

Somewhat surprisingly, standards have existed in Records Management for only about the past fifteen years. That is not to say that, prior to today's standards, there was no body of knowledge or written guidelines that served as best practices for managing records.

Diplomatics

The concept of managing records can be traced back a long way. In the Middle Ages in Europe, important written documents from court transactions were recognized as records, and even then there were issues around establishing the authenticity of records to guard against forgery. From those early concerns around authenticity, the science of document analysis called diplomatics came into being in the late 1600s and became particularly important in Europe with the rise of government bureaucracies in the 1800s. While diplomatics started out as something closer to forensic handwriting analysis than Records Management, it gradually established principles that are still important to Records Management today, such as reliability and authenticity.
Diplomatics even emphasized the importance of aligning the rules for managing records with business processes, and it treated all records the same, regardless of the media on which they are stored.

Records Management in the United States

Records Management came into being slowly in the United States; it is really a twentieth-century development. It was not until 1930 that 90 percent of all births and deaths in the United States were recorded. The United States National Archives was established in 1934 to manage only the federal government's historical records, but it quickly became involved in the management of all federal current records. In 1941, a records administration program was created for federal agencies to transfer their historical records to the National Archives. In 1943, the Records Disposal Act authorized the first use of record disposition schedules. In 1946, all agencies in the executive branch of government were ordered, as part of Executive Order 9784, to implement Records Management programs. It was not until 1949, with the publication of a pamphlet called Public Records Administration written by an archivist at the National Archives, that the idea of Records Management began to be seen as an activity separate and distinct from the long-term archival of records for preservation.

Prior to the 1950s, most businesses in the United States did not have a formalized program for Records Management. That slowly began to change as the federal government provided itself as an example of how records should be managed. The 1950 Federal Records Act formalized Records Management in the United States; the Act included ideas about the creation, maintenance, and disposition of records. Perhaps somewhat like the dramatic growth in electronic documents that we are seeing today, the 1950s saw a huge increase in the number of paper records that needed to be managed. The growth in the volume of records, together with the requirements and responsibilities imposed by the Federal Records Act, led to the creation of regional records centers in the United States, and those centers slowly became models for records managers outside of government.

In 1955, the second Hoover Commission was tasked with developing recommendations for paperwork management and published a document entitled Guide to Record Retention Requirements. While not officially sanctioned as a standard, this document in many ways served the same purpose. The guide was popular, has been republished frequently since then, and has served as an often-used reference by both government and non-government organizations; as late as 1994, a revised version was printed by the Office of the Federal Register. In that same year, 1955, ARMA International, the international organization for records managers, was founded. ARMA continues to provide a forum for records and information managers, both inside and outside government, to share information about best practices in Records Management.

From the 1950s onward, companies and non-government organizations became more involved with Records Management policies, and the US federal government continued to drive much of the evolution of Records Management within the United States.
In 1976, the Federal Records Act was amended; the added sections emphasized paperwork reduction and the importance of documenting the recordkeeping process, and the amendments also described the concept of the record lifecycle. In 1985, the National Archives was renamed NARA, the National Archives and Records Administration, finally acknowledging in its name the role the agency plays in managing records as well as in the long-term archival and preservation of documents.

However, it was not until the 1990s that standards around Records Management began to take shape. In 1993, a government task force in the United States that included NARA, the US Army, and the US Air Force began to devise processes for managing records that would cover both paper and electronic documents. The recommendations of that task force ultimately led to the DoD-5015.2 standard, first released in 1997.

Australia's AS-4390 and DIRKS

In parallel with what was happening in the United States, standards for Records Management were also advancing in Australia.

AS-4390

Standards Australia issued AS-4390 in 1996, a document that defined the scope of Records Management with recommendations for implementation in both the public and private sectors in Australia. This was the first such standard issued by any nation, but much of the language in the standard was specific to Australia, making it usable really only within that country. AS-4390 approached the management of records as a "continuum model" and addressed the "whole extent of the records' existence".

DIRKS

In 2000, the National Archives of Australia published DIRKS (Design and Implementation of Recordkeeping Systems), a methodology for implementing AS-4390. The Australian National Archives developed, tested, and successfully implemented the approach, summarizing the methodology for managing records into an eight-step process. The eight steps of the DIRKS methodology are:

Organization assessment:
    Preliminary investigation
    Analysis of business activity
    Identification of records requirements
Assess areas for improvement:
    Assessment of the existing system
    Strategies for recordkeeping
Design, implement, and review the changes:
    Design the recordkeeping system
    Implement the recordkeeping system
    Post-implementation review

An international Records Management standard

These two standards, AS-4390 and DIRKS, have had a tremendous influence not only within Australia but also internationally. In 2001, ISO-15489 was published as an international standard of best practices for Records Management. Part one of the standard was based on AS-4390, and part two was based on the guidelines laid out in DIRKS; the same eight-step methodology of DIRKS is used in the part two guidelines of ISO-15489.

The DIRKS manual can be freely downloaded from the National Archives of Australia: http://www.naa.gov.au/recordsmanagement/publications/dirks-manual.aspx

The ISO-15489 document can be purchased from ISO: http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=31908 and http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=35845

ISO-15489 has been a success in terms of international acceptance. 148 countries are members of ISO, and many of the participating countries have embraced the use of ISO-15489. Countries where ISO-15489 is actively applied include Australia, China, the UK, France, Germany, the Netherlands, and Jamaica.
Both ARMA International and AIIM now also promote the importance of the ISO-15489 standard. Much of the appeal of ISO-15489 lies in the fact that it is fairly generic: because it describes the recordkeeping process at a very high level, it avoids contentious details that may be specific to any particular Records Management implementation. Consider, for example, the eight steps of the DIRKS process listed above, and replace the words "record" and "recordkeeping" with the name of some other type of enterprise software or project, such as "ERP". The steps and associated recommendations from DIRKS are equally applicable. In fact, there are clear parallels between the steps of the DIRKS methodology and methodologies used for Project Management; later in this article, we will look at similarities between Records Management and Project Management methodologies like PMBOK and Agile.

Does ISO-15489 overlap with standards like DoD-5015.2 and MoReq?

ISO-15489 differs considerably in approach from other Records Management standards, such as the DoD-5015.2 standard and the MoReq standard developed in Europe. While ISO-15489 outlines basic principles of Records Management and describes best practices, these latter two standards are very prescriptive in detailing the specifics of how to implement a Records Management system; they are essentially functional requirements documents for computer systems.

MoReq (Model Requirements for the Management of Electronic Records) was initiated by the DLM Forum and funded by the European Commission. MoReq was first published in 2001 as MoReq1 and was then extensively updated and republished as MoReq2 in 2008. In 2010, an effort was undertaken to update the specification under the new name MoReq2010. The MoReq2 standard has been translated into 12 languages and is referenced frequently when building Records Management systems in Europe today.

Other international standards for Records Management

A number of other standards exist internationally. In Australia, for example, the Public Record Office Victoria has published a standard known as the Victorian Electronic Records Strategy (VERS) to address the problem of ensuring that electronic records can be preserved for long periods of time and still remain accessible and readable.

The preliminary investigation

Before we start getting our hands dirty with the sticky details of designing and implementing our records system, let's first get a big-picture idea of how Records Management currently fits into our organization, and then define our vision for its future in the organization. To do that, we will make a preliminary investigation of the records that our organization deals with.

In the preliminary investigation, we will survey the records in our organization to find out how they are currently being handled. The results of the survey will provide important input into building the Business Case for a new Records Management system. With the results of the preliminary investigation, we will be able to create an information map, or diagram, of where records currently are within our organization and which groups of the organization those records are relevant to.
With that information, we will be able to create a very high-level charter for the records program, provide data to be used when building the Business Case for Records Management, and have enough information to calculate a rough estimate of the cost and effort needed for the program scope.

Before executing the preliminary investigation, a detailed plan of attack should be made. While the primary goal of the investigation is to gather information, a secondary goal should be to do it in a way that minimizes disruption to staff members. To perform the investigation, we will need assistance from the various business units in the organization. Before starting, a heads-up should be sent to the managers of the business units involved so that they understand the nature of the investigation, when it will be carried out, and roughly how much time they and their units will need to make available to assist. It is also useful to hold a briefing meeting with staff members from the business units where we expect to find most of the records.

The records survey

Central to the preliminary investigation is the records survey, which is taken across the organization. A records survey attempts to identify the locations and record types of both the electronic and non-electronic records used in the organization.

Physical surveys versus questionnaires

The records survey is usually carried out either physically or remotely via questionnaires. In a physical survey, members of the Records Management team visit each business unit and, working together with staff members from that unit, make a detailed inventory. During the survey, all physical storage locations, such as cabinets, closets, desks, and boxes, are inspected. Staff members are asked where they store their files, which business applications they use, and which network drives they have access to.

The alternative to the physical survey is to send questionnaires to each of the business units and ask them to complete the forms on their own. Inspections similar to those of the physical survey are made, but the business unit is not supported by a Records Management team member.

Which of the two approaches we use will depend on the organization; a hybrid approach combining physical surveys and questionnaires can also work. Physical, in-person surveys tend to produce more accurate and complete inventories, but they are typically more expensive and time consuming to perform. Questionnaires, while cheaper, rely on each business unit to complete the information on its own, which means that the reporting and investigation styles used by different units might not be uniform. There is also the risk that some business units may not be sufficiently motivated to complete the questionnaires in a timely manner.

Preparing for the survey: Review existing documentation

Before we begin the survey, we should check whether there is any existing background documentation that describes how records are currently handled within the organization. Documentation has a habit of getting out of date quickly. It can also be deceiving, because sometimes it is written but never implemented, or implemented in ways that deviate dramatically from the original written description.
So if we are lucky enough to find any documentation, we will also need to validate how accurate that information really is. These are some examples of documents that may already exist and that can provide clues about how some organizational records are being handled today:

The organization's disaster recovery plan
Previous records surveys or studies
The organization's Records Management policy statement
Internal and external audit reports that involve consideration of records
Organizational reports such as risk assessments and cost-benefit analyses

Other types of documents may also exist that can be good indicators of where records, particularly paper records, might be stored. These include:

Blueprints, maps, and building plans that show the location of furniture and equipment
Contracts with storage companies or organizations that provide records or backup services
Equipment and supply inventories that may indicate computer hardware
Lists of databases, enterprise application software, and shared drives

It may take some footwork and digging to find out exactly where and how records in the organization are currently being stored. Physical records could be stored in numerous places throughout office and storage areas. Electronic records might be saved on shared drives, local desktops, or other document repositories.

The main actions of the records survey can be summarized by the LEAD acronym:

Locate the places where records are being stored
Examine the records and their contents
Ask questions about the records to understand their significance
Document the information about the records