How-To Tutorials - CMS & E-Commerce

830 Articles

FAQ on Celtx

Packt
14 Mar 2011
3 min read
Celtx: Open Source Screenwriting Beginner's Guide
Write and market Hollywood-perfect movie scripts the free way!

Q: What is Celtx?

A: Celtx developers describe this software package as "the world's first all-in-one media pre-production system" (http://celtx.com/overview.html). We are told that Celtx:

- Can be used for the complete production process
- Lets you write scripts, storyboard scenes, and sketch setups
- Develops characters, and breaks down and tags elements
- Schedules productions and generates useful reports

Celtx is powerful software, yet simple to use. It can be used to write the various types of scripts already mentioned, and includes everything independent filmmakers and media creators of all types need: writing, planning, scheduling, and generating reports during the various stages of all sorts of productions. The following screenshot is an example of a Celtx report screen:

Q: What does the acronym Celtx stand for?

A: The name Celtx is an acronym for Crew, Equipment, Location, Talent, and XML.

Q: How far-reaching is the impact of Celtx?

A: The Celtx website says that more than 500,000 media creators in 160 countries use Celtx in 33 different languages. Independent filmmakers, studio professionals, and students in over 1,800 universities and film schools have adopted Celtx for teaching and class work submission. Celtx is supported by the Celtx community of volunteer developers and a Canadian company, Greyfirst Corp. in St. John's, Newfoundland. A major reason Celtx can be an open source program is that it is built on non-proprietary standards, such as HTML and XML (basic web markup languages), and uses other open source software (specifically Mozilla's engine, the same one used in the Firefox browser) for basic operations.

Q: What sets Celtx apart from other free screenwriting software that is available?

A: An important source of Celtx's power is that it is a client-server application. Only part of Celtx is in the download installed on your computer; the rest is out there in the cloud (the latest buzz term for servers on the Internet). Cloud computing (using remote servers to do part of the work) allows Celtx to offer much more sophisticated features, especially in formatting and collaboration, than is normally found in a relatively small free piece of software. It's rather awesome, actually. Celtx, by the way, has you covered for PC, Mac, all kinds of Linux, and even eeePC netbooks.

Q: Does Celtx qualify as a web application?

A: Celtx really is a web application. We have the advantage of big computers on the web doing work for us, instead of having to depend on the much more limited resources of our local machine. This also means that improvements in script formats are yours even if you haven't updated your local software, as final formatting is done out on the web for you.

Q: Can we write movies with Celtx?

A: With Celtx we can outline and write an entertainment-industry-standard feature movie script, short film, or animation, all properly formatted and ready to market.

Q: Can we do other audio-visual projects with Celtx?

A: Celtx's integral Audio-Visual editor is perfect for documentaries, commercials, public service spots, video tutorials, slide shows, light shows, or just about any other combination of visual and other content (not just sound).

Q: Is Celtx equipped for audio plays and podcasts?

A: Celtx's Audio Play editor makes writing radio or other audio plays a breeze. It's also perfect for radio commercials or spots, and absolutely more than perfect for podcasts. Podcasts are easy to write, require minimal knowledge to produce, and are a snap to put on the Internet.


Creating and Consuming Web Services in CakePHP 1.3

Packt
10 Mar 2011
7 min read
CakePHP 1.3 Application Development Cookbook
Over 70 great recipes for developing, maintaining, and deploying web applications

Creating an RSS feed

RSS feeds are a form of web service, as they provide a service, over the web, using a known format to expose data. Due to their simplicity, they are a great way to introduce us to the world of web services, particularly as CakePHP offers a built-in method to create them. In this recipe, we will produce a feed for our site that can be used by other applications.

Getting ready

To go through this recipe we need a sample table to work with. Create a table named posts, using the following SQL statement:

CREATE TABLE `posts`(
  `id` INT NOT NULL AUTO_INCREMENT,
  `title` VARCHAR(255) NOT NULL,
  `body` TEXT NOT NULL,
  `created` DATETIME NOT NULL,
  `modified` DATETIME NOT NULL,
  PRIMARY KEY(`id`)
);

Add some sample data, using the following SQL statements:

INSERT INTO `posts`(`title`, `body`, `created`, `modified`) VALUES
('Understanding Containable', 'Post body', NOW(), NOW()),
('Creating your first test case', 'Post body', NOW(), NOW()),
('Using bake to start an application', 'Post body', NOW(), NOW()),
('Creating your first helper', 'Post body', NOW(), NOW()),
('Adding indexes', 'Post body', NOW(), NOW());

We proceed now to create the required controller. Create the class PostsController in a file named posts_controller.php and place it in your app/controllers folder, with the following contents:

<?php
class PostsController extends AppController {
    public function index() {
        $posts = $this->Post->find('all');
        $this->set(compact('posts'));
    }
}
?>

Create a folder named posts in your app/views folder, and then create the index view in a file named index.ctp and place it in your app/views/posts folder, with the following contents:

<h1>Posts</h1>
<?php if (!empty($posts)) { ?>
<ul>
    <?php foreach($posts as $post) { ?>
    <li><?php echo $this->Html->link(
        $post['Post']['title'],
        array('action'=>'view', $post['Post']['id'])
    ); ?></li>
    <?php } ?>
</ul>
<?php } ?>

How to do it...

Edit your app/config/routes.php file and add the following statement at the end:

Router::parseExtensions('rss');

Edit your app/controllers/posts_controller.php file and add the following property to the PostsController class:

public $components = array('RequestHandler');

While still editing PostsController, make the following changes to the index() method:

public function index() {
    $options = array();
    if ($this->RequestHandler->isRss()) {
        $options = array_merge($options, array(
            'order' => array('Post.created' => 'desc'),
            'limit' => 5
        ));
    }
    $posts = $this->Post->find('all', $options);
    $this->set(compact('posts'));
}

Create a folder named rss in your app/views/posts folder, and inside the rss folder create a file named index.ctp, with the following contents:

<?php
$this->set('channel', array(
    'title' => 'Recent posts',
    'link' => $this->Rss->url('/', true),
    'description' => 'Latest posts in my site'
));
$items = array();
foreach($posts as $post) {
    $items[] = array(
        'title' => $post['Post']['title'],
        'link' => array('action'=>'view', $post['Post']['id']),
        'description' => array('cdata'=>true, 'value'=>$post['Post']['body']),
        'pubDate' => $post['Post']['created']
    );
}
echo $this->Rss->items($items);
?>

Edit your app/views/posts/index.ctp file and add the following at the end of the view:

<?php echo $this->Html->link('Feed', array('action'=>'index', 'ext'=>'rss')); ?>

If you now browse to http://localhost/posts, you should see a listing of posts with a link entitled Feed.
Clicking on this link should produce a valid RSS feed, as shown in the following screenshot. If you view the source of the generated response, you can see that the source for the first item within the RSS document is:

<item>
    <title>Understanding Containable</title>
    <link>http://rss.cookbook7.kramer/posts/view/1</link>
    <description><![CDATA[Post body]]></description>
    <pubDate>Fri, 20 Aug 2010 18:55:47 -0300</pubDate>
    <guid>http://rss.cookbook7.kramer/posts/view/1</guid>
</item>

How it works...

We started by telling CakePHP that our application accepts the rss extension with a call to Router::parseExtensions(), a method that accepts any number of extensions. Using extensions, we can create different versions of the same view. For example, if we wanted to accept both rss and xml as extensions, we would do:

Router::parseExtensions('rss', 'xml');

In our recipe, we added rss to the list of valid extensions. That way, if an action is accessed using that extension, for example by using the URL http://localhost/posts.rss, then CakePHP will identify rss as a valid extension, and will execute the PostsController::index() action as it normally would, but using the app/views/posts/rss/index.ctp file to render the view. The process also uses the file app/views/layouts/rss/default.ctp as its layout, or CakePHP's default RSS layout if that file is not present.

We then modified how PostsController::index() builds the list of posts, using the RequestHandler component to see if the current request uses the rss extension. If so, we use that knowledge to change the number and order of posts.

In the app/views/posts/rss/index.ctp view, we start by setting some view variables. Because a controller view is always rendered before the layout, we can add or change view variables from the view file, and have them available in the layout. CakePHP's default RSS layout uses a $channel view variable to describe the RSS feed. Using that variable, we set our feed's title, link, and description.

We proceed to output the actual items. There are different ways to do so: the first is making a call to the RssHelper::item() method for each item, and the other requires only a call to RssHelper::items(), passing it an array of items. We chose the latter method due to its simplicity.

While we build the array of items to be included in the feed, we only specify title, link, description, and pubDate. Looking at the generated XML source for the item, we can infer that the RssHelper used our value for the link element as the value for the guid (globally unique identifier) element.

Note that the description field is specified slightly differently from the other fields in our item array. This is because our description may contain HTML code, so we want to make sure that the generated document is still a valid XML document. By using the array notation for the description field (a notation that uses the value index to specify the actual value of the field) and by setting cdata to true, we are telling the RssHelper (actually the XmlHelper, from which RssHelper descends) that the field should be wrapped in a section that is not parsed as part of the XML document, delimited by a <![CDATA[ prefix and a ]]> suffix.

The final task in this recipe is adding a link to our feed, which is shown in the index.ctp view file. While creating this link, we set the special ext URL setting to rss. This sets the extension for the generated link, which ends up being http://localhost/posts.rss.
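For completeness, here is a minimal sketch of the per-item alternative mentioned above (assuming CakePHP 1.3's RssHelper::item(), which takes an attributes array followed by an elements array); it could replace the final echo in the rss view:

<?php
// A sketch of the per-item alternative: emit each entry individually
// instead of passing the whole array to RssHelper::items().
foreach ($items as $item) {
    echo $this->Rss->item(array(), $item);
}
?>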


Documentaries and Other Audio-Visual Projects with Celtx

Packt
10 Mar 2011
6 min read
What is an audio-visual production?

The term audio-visual production basically covers anything in the known universe that combines varying components of movement, sound, and light. Movies are nothing more than big, expensive (really expensive) audio-visual shows. Television programs; the fireworks, performed music, and laser lights of a major rock concert; a business presentation; Uncle Spud showing slides of his vacation in Idaho: all are audio-visual productions. A complex audio-visual production, such as the big rock concert, combines many types of content and is called a multimedia show, which combines sounds and music, projections of video and photos (often several at once), lights, spoken words, text on screens, and more.

Audio-visual shows, those of an educational nature as well as those for entertainment value, might be produced with equipment such as the following:

- Dioramas
- Magic lanterns
- Planetariums
- Film projectors
- Slide projectors
- Opaque projectors
- Overhead projectors
- Tape recorders
- Television
- Video camcorders
- Video projectors
- Interactive whiteboards
- Digital video clips

Productions such as TV commercials, instructional videos, the moving displays you see in airports, even the new digital billboards along our highways are all audio-visual productions (even the ones without sound). My favorite type of production, documentaries (I've done literally hundreds of them), are audio-visual shows. A documentary is a nonfiction movie and includes newsreels, travel, politics, docudramas, nature and animal films, music videos, and much more.

In short, as we can see from the preceding discussion, you can throw just about everything into a production, including your kitchen sink. Turn the faucet on and off while blasting inspiring music and hitting it with colored spotlights, and plumbers will flock to buy tickets to the show!

Now, while just about every conceivable project falls into the audio-visual category, Celtx (as shown in the next screenshot) offers us specific categories that narrow the field down a little. The following screenshot from Celtx's splash page shows those categories. Film handles movies and television shows, Theatre (love that Canadian spelling, eh?) is for stage plays, Audio Play is designed for radio programs and podcasts, Storyboard is for visual planning, and Comic Book is for writing anything from comic strips to epic graphic novels. Text (not shown in the following screenshot) is the other project type that comes with Celtx, and is great for loglines, synopses, treatments, outlines, and anything else calling for a text editor rather than a script formatter. Just about everything else can be written in an Audio-Visual project container!

Let's think about that for a moment. This means that Audio-Visual is by far the most powerful project type provided by Celtx. In its script element drop-down box there are only five script elements (Scene Heading, Shot, Character, Dialog, and Parenthetical), whereas Film has eight! Yet, thanks to Celtx magic, these five elements, as I will show you in this article, are a lot more flexible than in Film and the other projects. It's pretty amazing. So, time to start an audio-visual project of our own.

Starting an AV project in Celtx

What better example to use than a short documentary on... wait for it... Celtx. I actually plan on producing this film, both to promote Celtx (which certainly deserves letting people know about it) and to show that this article is great for learning all this marvelous power of Celtx. The title: "Celtx Loves Indies." Indies is slang for independent producers. An independent producer is a company, or quite often an individual, who makes films outside Hollywood or Bollywood or any other studio system. Big studios have scores or even hundreds of people to do all the tasks needed to produce a film. Indies often have very few people, sometimes just one or two doing all the crewing and production work. Low budget (not spending too much money on making films) is our watchword. Celtx is perfect for indies; it is, as I point out in the documentary, like having a studio in a box! So, my example project for this chapter is how I set up "Celtx Loves Indies" in Celtx.

Time for action - beginning our new AV project

We start our project, as we did our spec script in the last chapter, by making a directory on our computer. Having a separate directory for each project makes it a lot easier to organize and to find stuff when we need it. Therefore, I first create a new empty directory on my hard drive named Celtx Loves Indies, as shown in the following screenshot.

Now, fire up Celtx. In a moment, we'll left-click on Audio-Visual to open a project container that has an Audio-Visual script in it. First, however, since I have not mentioned it to date, look at the items outside the Project Templates and Recent Projects boxes in the lower part of the splash page, as shown in the following screenshot. When Celtx is connected to the Internet, we get some information from Celtx's servers each time the program starts up. This information includes links to news, help features, ads for Celtx add-ons, and announcements. The big news here is that Celtx has added an app (application) to synchronize projects with iPhones and iPads. However, check these messages out each time you open Celtx.

Next, we open an Audio-Visual project in Celtx. This gives us a chance to check out those five script elements we met earlier by left-clicking on the downward arrow next to Scene Heading. In the next section, we'll examine and use each of them.

Time for action - setting up the container

Continuing with our initial setup of the container for this project, rename the A/V Script in the Project Library. I renamed mine, naturally, Celtx Loves Indies. Also, remember we can have hundreds of files, directories, subdirectories, and so on in the Project Library, holding our research and more. This is why a Celtx project is really a container. Just right-click on A/V Script, choose Rename..., and type in the new title, as shown in the following screenshot.

Left-click on File at the top left of the Celtx screen, then on Save Project As... (or use the Ctrl+Shift+S keyboard shortcut) to save the project into your new directory, all properly titled and ready for action, as shown in the following screenshot.


Getting Started with Inkscape

Packt
09 Mar 2011
9 min read
Inkscape 0.48 Essentials for Web Designers
Use the fascinating Inkscape graphics editor to create attractive layout designs, images, and icons for your website

Vector graphics

Vector graphics are made up of paths. Each path is basically a line with a start and end point, curves, angles, and points that are calculated with a mathematical equation. These paths are not limited to being straight; they can be of any shape and size, and can encompass any number of curves. When you combine them, they create drawings and diagrams, and can even help create certain fonts. These characteristics make vector graphics very different from JPEGs, GIFs, or BMP images, all of which are considered rasterized or bitmap images made up of tiny squares called pixels or bits. If you magnify these images, you will see they are made up of a grid (bitmaps), and if you keep magnifying them, they will become blurry and grainy as each pixel square grows larger with the zoom level.

Computer monitors also use pixels in a grid. However, they use millions of them, so that when you look at a display your eyes see a picture. In high-resolution monitors, the pixels are smaller and closer together to give a crisper image.

How does this all relate to vector-based graphics? Vector-based graphics aren't made up of squares. Since they are based on paths, you can make them larger (by scaling) and the image quality stays the same, lines and edges stay clean, and the same image can be used on items as small as letterheads or business cards, or blown up for billboards or high-definition animation sequences. This flexibility, often accompanied by smaller file sizes, makes vector graphics ideal, especially in the world of the Internet, varying computer displays, and hosting services for web spaces. This leads us nicely to Inkscape, a tool that can be invaluable for web design.

What is Inkscape and how can it be used?

Inkscape is a free, open source program developed by a group of volunteers under the GNU General Public License (GPL). You not only get a free download, but you can use the program to create items, freely distribute them, modify the program itself, and share that modified program with others.

Inkscape uses Scalable Vector Graphics (SVG), a vector-based drawing language that uses some basic principles:

- A drawing can (and should) be scalable to any size without losing detail
- A drawing can use an unlimited number of smaller drawings, used (and reused) in any number of ways, and still be part of a larger whole

SVG and World Wide Web Consortium (W3C) web standards are built into Inkscape, which gives it a number of features, including a rich XML (eXtensible Markup Language) format with complete descriptions and animations. Inkscape drawings can be reused in other SVG-compliant drawing programs and can adapt to different presentation methods. SVG has support across most web browsers (Firefox, Chrome, Opera, Safari, Internet Explorer).

When you draw your objects (rectangles, circles, and so on), arbitrary paths, and text in Inkscape, you also give them attributes such as color, gradient, or patterned fills. Inkscape automatically creates the web code (XML) for each of these objects and tags your images with this code.
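To make that idea concrete, here is a minimal hand-written SVG fragment of the sort Inkscape generates and edits (the element names and attributes are standard SVG; the specific shapes and values are just an illustrative sketch):

<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <!-- a rectangle stored as attributes: position, size, and fill -->
  <rect x="10" y="10" width="60" height="30" fill="red"/>
  <!-- a path: move to (10,80), then a quadratic curve to (90,80) -->
  <path d="M 10 80 Q 50 50 90 80" stroke="blue" fill="none"/>
</svg>

Scaling this drawing only changes how the coordinates are mapped to the screen, not the data itself, which is why edges stay clean at any size.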
If need be, the graphics can then be transformed, cloned, and grouped in the code itself. Hyperlinks can even be added for use in web browsers, along with multi-lingual scripting (which isn't available in most commercial vector-based programs) and more, all within Inkscape or in a native programming language. This makes your vector graphics more versatile in the web space than a standard JPG or GIF graphic.

There are still some limitations in the Inkscape program, even though it aims to be fully SVG compliant. For example, as of version 0.48 it still does not support animation or SVG fonts, though there are plans to add these capabilities in future versions.

Installing Inkscape

Inkscape is available for download for Windows, Macintosh, Linux, or Solaris operating systems. On the Mac OS X operating system, it typically runs under X11, an implementation of the X Window System software that makes it possible to run X11-based applications in Mac OS X. The X11 application has shipped with Mac OS X since version 10.5. When you open Inkscape on a Mac, it will first open X11 and run Inkscape within that program. Some shortcut key options are lost, but all functionality is present using menus and toolbars.

Let's briefly go over how to download and install Inkscape:

- Go to the official Inkscape website at http://www.inkscape.org/ and download the appropriate version of the software for your computer.
- For the Mac OS X Leopard software, you will also need to download an additional application: the X11 application package 2.4.0 or greater, from http://xquartz.macosforge.org/trac/wiki/X112.4.0. Once downloaded, double-click the X11-2.4.0.DMG package first. It will open another folder with the X11 application installer. Double-click that icon to be prompted through an installation wizard.
- Double-click the downloaded Inkscape installation package to start the installation. For the Mac OS, a DMG file is downloaded; double-click on it and then drag and drop the Inkscape package to the Applications folder. For any Windows device, an .EXE file is downloaded; double-click that file to start and complete the installation. For Linux-based computers, there are a number of distributions available; be sure to download and install the correct installation package for your system.
- Now find the Inkscape icon in the Application or Programs folders to open the program. Double-click the Inkscape icon and the program will automatically open to the main screen.

The basics of the software

When you open Inkscape for the first time, you'll see that the main screen and a new blank document are opened, ready to go. If you are using a Macintosh computer, Inkscape opens within the X11 application and may take slightly longer to load.

The Inkscape interface is based on the GNOME UI standard, which uses visual cues and feedback for its icons. For example:

- Hovering your mouse over any icon displays a pop-up description of the icon.
- If an icon has a dark gray border, it is active and can be used.
- If an icon is grayed out, it is not currently available to use with the current selection.
- Icons that are in execution mode (or busy) are covered by a dark shadow. This signifies that the application is busy and won't respond to any edit request.

There is also a Notification Display on the main screen that shows dynamic help messages, key shortcuts, and basic information on how to use the Inkscape software in its current state, or based on what objects and tools are selected.

Main screen basics

Within the main screen there are the main menu; the command, snap, and status bars; the tool controls; and the palette bar.

Main menu

You will use the main menu bar the most when working on your projects. This is the central location to find every tool and menu item in the program, even those found in the visual toolbars below it on the screen. When you select a main menu item, the Inkscape dialog displays the icon, a text description, and the shortcut key combination for the feature. This can be helpful while first learning the program, as it provides you with easier and often faster ways to reach your most commonly used functions.

Toolbars

Let's take a general tour of the toolbars seen on this main screen, paying close attention to the tools we'll use most frequently. If you don't like the location of any of the toolbars, you can make them floating windows on your screen. This lets you move them from their pre-defined locations to a location of your liking. To move any of the toolbars from their docking point, click and drag them out of the window. When you click the upper-left button to close a floating toolbar window, it will be relocated back into the screen.

Command bar

This toolbar contains the most common and frequently used commands in Inkscape. As seen in the previous screenshot, you can create a new document, open an existing one, save, print, cut, paste, zoom, add text, and much more. Hover your mouse over each icon for details on its function. By default, when you open Inkscape, this toolbar is on the right side of the main screen.

Snap bar

Also found vertically on the right side of the main screen, this toolbar is designed to help with the Snap to features of Inkscape. It lets you easily align items (snap to guides), force objects to align to paths (snap to paths), or snap to bounding boxes and edges.

Tool controls

This toolbar's options change depending on which tool you have selected in the toolbox (described in the next section). When you are creating objects, it provides all the detailed options: size, position, angles, and attributes specific to the tool you are currently using. By default, it looks like the following screenshot. You have options to select/deselect objects within a layer, rotate or mirror objects, adjust object locations on the canvas, scale objects, and much more. Use it to define object properties when they are selected on the canvas.

Toolbox bar

You'll use the toolbox frequently. It contains all of the main tools for creating, selecting, and modifying objects, and for drawing. To select a tool, click its icon. If you double-click a tool, you can see that tool's preferences (and change them). If you are new to Inkscape, there are a couple of hints about creating and editing text: the Text tool (A icon) in the toolbox is the only way of creating new text on the canvas, while the T icon shown in the command bar is used only for editing text that already exists on the canvas.


CakePHP 1.3: Model Bindings

Packt
08 Mar 2011
13 min read
CakePHP 1.3 Application Development Cookbook
Over 70 great recipes for developing, maintaining, and deploying web applications

Introduction

This article deals with one of the most important aspects of a CakePHP application: the relationship between models, also known as model bindings or associations. Being an integral part of any application's logic, it is of crucial importance that we master all aspects of how model bindings can be manipulated to get the data we need, when we need it. In order to do so, we will go through a series of recipes that show how to change the way bindings are fetched, which bindings and what information from a binding are returned, how to create new bindings, and how to build hierarchical data structures.

Adding Containable to all models

The Containable behavior is part of the CakePHP core, and is probably one of the most important behaviors we have to help us deal with model bindings. Almost all CakePHP applications will benefit from its functionality, so in this recipe we see how to enable it for all models.

How to do it...

Create a file named app_model.php and place it in your app/ folder, with the following contents. If you already have one, make sure that you either add the actsAs property shown below, or that your actsAs property includes Containable.

<?php
class AppModel extends Model {
    public $actsAs = array('Containable');
}
?>

How it works...

The Containable behavior is nothing more and nothing less than a wrapper around the bindModel() and unbindModel() methods defined in CakePHP's Model class. It is there to help us manage associations without having to go through the lengthy process of redefining all the associations when calling one of these methods, thus making our code much more readable and maintainable.

This is a very important point, because a common mistake CakePHP users make is to think that Containable is involved in the query-making process, that is, the stage where CakePHP creates the actual SQL queries to fetch data. Containable saves us some unneeded queries, and optimizes the information that is fetched for each related model, but it will not serve as a way to change how queries are built in CakePHP.

Limiting the bindings returned in a find

This recipe shows how to use Containable to specify which related models are returned as a result of a find operation. It also shows how to limit which fields are obtained for each association.

Getting ready

To go through this recipe we need some sample tables to work with.

Create a table named families, using the following SQL statement:

CREATE TABLE `families`(
  `id` INT UNSIGNED AUTO_INCREMENT NOT NULL,
  `name` VARCHAR(255) NOT NULL,
  PRIMARY KEY(`id`)
);

Create a table named people, using the following SQL statement:

CREATE TABLE `people`(
  `id` INT UNSIGNED AUTO_INCREMENT NOT NULL,
  `family_id` INT UNSIGNED NOT NULL,
  `name` VARCHAR(255) NOT NULL,
  `email` VARCHAR(255) NOT NULL,
  PRIMARY KEY(`id`),
  KEY `family_id`(`family_id`),
  CONSTRAINT `people__families` FOREIGN KEY(`family_id`) REFERENCES `families`(`id`)
);

Create a table named profiles, using the following SQL statement:

CREATE TABLE `profiles`(
  `id` INT UNSIGNED AUTO_INCREMENT NOT NULL,
  `person_id` INT UNSIGNED NOT NULL,
  `website` VARCHAR(255) default NULL,
  `birthdate` DATE default NULL,
  PRIMARY KEY(`id`),
  KEY `person_id`(`person_id`),
  CONSTRAINT `profiles__people` FOREIGN KEY(`person_id`) REFERENCES `people`(`id`)
);

Create a table named posts, using the following SQL statement:

CREATE TABLE `posts`(
  `id` INT UNSIGNED AUTO_INCREMENT NOT NULL,
  `person_id` INT UNSIGNED NOT NULL,
  `title` VARCHAR(255) NOT NULL,
  `body` TEXT NOT NULL,
  `created` DATETIME NOT NULL,
  `modified` DATETIME NOT NULL,
  PRIMARY KEY(`id`),
  KEY `person_id`(`person_id`),
  CONSTRAINT `posts__people` FOREIGN KEY(`person_id`) REFERENCES `people`(`id`)
);

Even if you do not want to add foreign key constraints to your tables, make sure you use KEYs for each field that references a record in another table. By doing so, you will significantly improve the speed of your SQL queries when the referenced tables are joined.

Add some sample data, using the following SQL statements:

INSERT INTO `families`(`id`, `name`) VALUES (1, 'The Does');
INSERT INTO `people`(`id`, `family_id`, `name`, `email`) VALUES
(1, 1, 'John Doe', '[email protected]'),
(2, 1, 'Jane Doe', '[email protected]');
INSERT INTO `profiles`(`person_id`, `website`, `birthdate`) VALUES
(1, 'http://john.example.com', '1978-07-13'),
(2, NULL, '1981-09-18');
INSERT INTO `posts`(`person_id`, `title`, `body`, `created`, `modified`) VALUES
(1, 'John\'s Post 1', 'Body for John\'s Post 1', NOW(), NOW()),
(1, 'John\'s Post 2', 'Body for John\'s Post 2', NOW(), NOW());

We need Containable added to all our models. We proceed now to create the main model. Create a file named person.php and place it in your app/models folder with the following contents:

<?php
class Person extends AppModel {
    public $belongsTo = array('Family');
    public $hasOne = array('Profile');
    public $hasMany = array('Post');
}
?>

Create the model Family in a file named family.php and place it in your app/models folder with the following contents:

<?php
class Family extends AppModel {
    public $hasMany = array('Person');
}
?>

How to do it...

When Containable is available for our models, we can add a setting to the find operation called contain. In that setting we specify, in an array-based hierarchy, the associated data we want returned. A special value contain can receive is false, or an empty array, which tells Containable not to return any associated data. For example, to get the first Person record without associated data, we simply do:

$person = $this->Person->find('first', array(
    'contain' => false
));

Another way to tell CakePHP not to obtain related data is through the use of the recursive find setting. Setting recursive to -1 will have exactly the same effect as setting contain to false.
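To make that equivalence concrete, here is a minimal sketch (controller context assumed, as in the recipe's other examples); both finds below return just the Person record, with no associated Family, Profile, or Post data:

// Both calls return only the Person data, with no associated models.
$withContain = $this->Person->find('first', array('contain' => false));
$withRecursive = $this->Person->find('first', array('recursive' => -1));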
If we want to obtain the first Person record together with the Family they belong to, we do:

$person = $this->Person->find('first', array(
    'contain' => array('Family')
));

Using our sample data, the above query will result in the following array structure:

array(
    'Person' => array(
        'id' => '1',
        'family_id' => '1',
        'name' => 'John Doe',
        'email' => '[email protected]'
    ),
    'Family' => array(
        'id' => '1',
        'name' => 'The Does'
    )
)

Let's say that now we also want to obtain all Post records for the person, and all members of the family that Person belongs to. We would then have to do:

$person = $this->Person->find('first', array(
    'contain' => array(
        'Family.Person',
        'Post'
    )
));

The above results in the following array structure (the created and modified fields have been removed for readability):

array(
    'Person' => array(
        'id' => '1',
        'family_id' => '1',
        'name' => 'John Doe',
        'email' => '[email protected]'
    ),
    'Family' => array(
        'id' => '1',
        'name' => 'The Does',
        'Person' => array(
            array(
                'id' => '1',
                'family_id' => '1',
                'name' => 'John Doe',
                'email' => '[email protected]'
            ),
            array(
                'id' => '2',
                'family_id' => '1',
                'name' => 'Jane Doe',
                'email' => '[email protected]'
            )
        )
    ),
    'Post' => array(
        array(
            'id' => '1',
            'person_id' => '1',
            'title' => 'John\'s Post 1',
            'body' => 'Body for John\'s Post 1'
        ),
        array(
            'id' => '2',
            'person_id' => '1',
            'title' => 'John\'s Post 2',
            'body' => 'Body for John\'s Post 2'
        )
    )
)

We can also use Containable to specify which fields from a related model we want fetched. Using the preceding sample, let's limit the Post fields so we only return the title, and the Person records for the person's Family so we only return the name field. We do so by adding the name of the field to the associated model hierarchy:

$person = $this->Person->find('first', array(
    'contain' => array(
        'Family.Person.name',
        'Post.title'
    )
));

The returned data structure will then look like this:

array(
    'Person' => array(
        'id' => '1',
        'family_id' => '1',
        'name' => 'John Doe',
        'email' => '[email protected]'
    ),
    'Family' => array(
        'id' => '1',
        'name' => 'The Does',
        'Person' => array(
            array(
                'name' => 'John Doe',
                'family_id' => '1',
                'id' => '1'
            ),
            array(
                'name' => 'Jane Doe',
                'family_id' => '1',
                'id' => '2'
            )
        )
    ),
    'Post' => array(
        array(
            'title' => 'John\'s Post 1',
            'id' => '1',
            'person_id' => '1'
        ),
        array(
            'title' => 'John\'s Post 2',
            'id' => '2',
            'person_id' => '1'
        )
    )
)

You may notice that even when we indicated specific fields for the Family => Person binding and for the Post binding, some extra fields are returned. Those fields (such as family_id), known as foreign key fields, are needed by CakePHP to fetch the associated data, so Containable is smart enough to include them in the query.

Let us say that we also want a person's e-mail. As more than one field is now needed, we use the array notation, with the fields setting specifying the list of fields:

$person = $this->Person->find('first', array(
    'contain' => array(
        'Family' => array(
            'Person' => array(
                'fields' => array('email', 'name')
            )
        ),
        'Post.title'
    )
));

How it works...

We use the contain find setting to specify what type of containment we want for the find operation. That containment is given as an array, where the array hierarchy mimics that of the model relationships. As the hierarchy can get deep enough to make array notation complex to deal with, the dot notation used throughout this recipe serves as a useful and more readable alternative. If we want to refer to the model Family that the model Person belongs to, the proper contain syntax is Person => Family (we can also use Person.Family, which is more concise).

We also use the fields setting to specify which fields we want fetched for a binding, by specifying an array of field names as part of the binding's Containable setting.

Containable looks for the contain find setting right before we issue a find operation on a model. If it finds one, it alters the model bindings by issuing unbindModel() calls on the appropriate models to unbind the relationships that are not specified in the contain setting. It then sets the recursive find setting to the minimum value required to fetch the associated data.

Let us use a practical example to further understand this wrapping process. Using our Person model (which has a belongsTo relationship to Family, a hasOne relationship to Profile, and a hasMany relationship to Post), the following Containable-based query:

$person = $this->Person->find('first', array(
    'contain' => array('Family.Person')
));

or the same query using array notation:

$person = $this->Person->find('first', array(
    'contain' => array('Family' => 'Person')
));

is equivalent to the following set of instructions, which do not use Containable, but the built-in unbindModel() method available in CakePHP's Model class:

$this->Person->unbindModel(array(
    'hasOne' => array('Profile'),
    'hasMany' => array('Post')
));
$person = $this->Person->find('first', array(
    'recursive' => 2
));

Not using Containable is not only much more complicated, but can also pose a problem if we decide to alter some of our relationships. In the preceding example, if we decide to remove the Profile binding, or change its relationship type, we would have to modify the unbindModel() call. If we are using Containable, the same code applies, without us having to worry about such changes.

Format of the contain find parameter

We have seen how to use the contain find parameter to limit which bindings are returned after a find operation. Even though its format seems self-explanatory, let us go through another example to get a deeper understanding of Containable's array notation. Assume that we have the models and relationships shown in the following diagram.

Transforming that diagram into something the Containable behavior understands is as simple as writing it out as an array structure. For example, if we are issuing a find operation on the User model and we want to refer to the Profile relationship, a simple array('Profile') expression suffices, as the Profile model is directly related to the User model. If we want to refer to the Comment relationship of the Article records the User owns, where the Comment belongs to an Article that itself belongs to our User model, then we add another dimension to the structure, which is now represented as array('Article' => 'Comment').

We can already deduce what the next example looks like. Assume we want to obtain each Comment together with the Profile of the User who commented on each Article. The structure would then look like: array('Article' => array('Comment' => array('User' => 'Profile'))). Sometimes we want to improve readability, and fortunately the Containable behavior allows the above expression to be rewritten as array('Article.Comment.User.Profile'), which is known as dot notation. However, if you want to pass other parameters to a binding, then this syntax has to be changed back to the full array-based expression.

Reset of binding changes

When you issue a find operation that uses the Containable behavior to change some of its bindings, CakePHP will reset all binding changes to their original states once the find is completed. This is what is normally wanted in most cases, but there are some scenarios where you want to keep your changes until you manually reset them, such as when you need to issue more than one find operation and have all those finds use the modified bindings.

To force our binding changes to be kept, we use the reset option in the contain find parameter, setting it to false. When we are ready to reset them, we issue a call to the resetBindings() method that the Containable behavior adds to our model. The following sample code shows this procedure:

$person = $this->Person->find('first', array(
    'contain' => array(
        'reset' => false,
        'Family'
    )
));
// ...
$this->Person->resetBindings();

Another way to achieve the same result is by calling the contain() method (setting its first argument to the contained bindings, and its second argument to false to indicate that we wish to keep these containments), available to all models that use Containable, then issuing the find (without needing to use the contain setting), and finally resetting the bindings:

$this->Person->contain(array('Family'), false);
$person = $this->Person->find('first');
// ...
$this->Person->resetBindings();
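The introduction also mentioned creating new bindings. Although this article's recipes focus on limiting bindings, here is a minimal sketch of creating one on the fly with CakePHP's built-in Model::bindModel() method (the Comment model below is hypothetical, for illustration only):

// Hypothetical example: attach a hasMany Comment binding to Person
// on the fly; by default the change lasts only until the next find.
$this->Person->bindModel(array(
    'hasMany' => array('Comment')
));
$person = $this->Person->find('first');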


BlackBerry Enterprise Server 5: Activating Devices and Users

Packt
03 Mar 2011
11 min read
BlackBerry Enterprise Server 5 Implementation Guide
Simplify the implementation of BlackBerry Enterprise Server in your corporate environment:

- Install, configure, and manage a BlackBerry Enterprise Server
- Use Microsoft Internet Explorer along with ActiveX plugins to control and administer the BES with the help of the BlackBerry Administration Service
- Troubleshoot, monitor, and offer high availability of the BES in your organization
- Updated to the latest version: BlackBerry Enterprise Server 5

BlackBerry Enterprise users must already exist on the Microsoft Exchange Server. As with administrative users, to make tasks and management of device users easier, we can create groups, add users to the groups, and then assign policies to the whole group rather than to individual users. Users can be part of multiple groups, and we will see how policies are affected and applied when users are in more than one group.

Creating users on the BES 5.0

We will go through the following steps to create users on the BES 5.0:

- Within the BlackBerry Administration Service, navigate to the BlackBerry solution management section.
- Expand User and select Create a user.
- We can now search for the user we want to add, either by typing the user's display name or e-mail address. Enter the search criteria and select Search.
- We then have the ability to add the user to any group we have already created; in our case we only have an administrative group.

We have three options for how the user will be created, with regards to how the device for the user will be activated:

- With activation password: This allows us to set an activation password, along with the expiry time of the activation password, for the user.
- With generated activation password: The system will autogenerate an activation password, based on the settings we have made on our BlackBerry server (shown further on in this article).
- Without activation password: This creates just a user, with no pre-configured method for assigning a device.

For this example, we will select Create a user without activation password. Once we have covered the theory and explored the settings within this article regarding activating devices, we will return to the other two options.

We can create a user even if the search results do not display the user. Generally this occurs when the Exchange Server has not yet synched the user account to the BlackBerry Configuration Database, typically when new users are added. This method is shown in the Lab.

Groups can be created to help manage users within our network and simplify tasks. Next we are going to look at creating a group that will house users, all belonging to our Sales Team.

Creating a user-based group

To create a user-based group, go through the following steps:

- Expand Group, select Create a group, enter Sales Team in the Name field, and click on Save.
- Select View group list and click on Sales Team.
- Select Add users to group membership.
- Select the user we have just created by placing a tick in the checkbox next to the user's name, and click on Add to group membership.
- We can click on View group membership to confirm the addition of our user to the group.

We will be adding more users to this group later in the Lab, when we import the users via a text file.

Preparing to distribute a BlackBerry device

Before we can distribute a BlackBerry device to a user using the various methods, we need to address a few more settings that affect how the device will initially be populated.
By default, when a device is activated for a user, the BlackBerry Enterprise Server prepopulates/synchronizes the BlackBerry device with the headers of 200 e-mail messages from the previous five days. We can alter these settings so that headers and the full body of each e-mail message are synched to the device, up to a maximum of 750 messages over the past 14 days:

- In the BlackBerry Administration Service, under Servers and components, expand BlackBerry Domain | Component view | Email and select the BES instance.
- On the right-hand pane select the Messaging tab.
- Scroll down and select Edit instance.
- To ensure that both headers and the full e-mail message are populated to the BlackBerry device, in the Message prepopulation settings, change the Send headers only drop-down to False.
- Change Prepopulation by message age to a maximum of 14 days, by entering 14.
- We can change the number of e-mails that are prepopulated on the device by changing Prepopulation by message count, again to a maximum of 750. By setting the preceding two values to zero, we can ensure that no previous e-mails are populated on the device.

Within the same tab we can set our Messaging options, which we will examine next. We have the ability to set:

- A Prepended disclaimer (goes before the body of the message)
- An Appended disclaimer (goes after the user's signature)

We can enter the text of our disclaimer in the space provided, then choose what happens if there is a conflict. The majority of these settings can also be set at the user level; settings made on the server override any settings made by the user, which is why it is best practice to set these at the server level. We will see this later in the Lab. If a user setting exists, we need to tell the server how to deal with the potential conflict. The default setting is to use the user's disclaimer first, then the one set on the server. Bear in mind that the default setting will show both the user's disclaimer and the server disclaimer on the e-mail message.

Wireless message reconciliation should be set to True: the BlackBerry Enterprise Server synchronizes e-mail message status changes between the BlackBerry device and Outlook on the user's computer. The BES reconciles e-mail messages that are moved from one folder to another, deleted messages, and changes to the read/unread status of messages. By default the BES performs a reconcile every 30 minutes; the reconcile is in effect checking that, for a particular user, Outlook and the BlackBerry have the same information in their databases. If this is set to False, the above-mentioned changes will only take effect when the device is plugged in to Desktop Manager or Web Desktop Access.

We have the option of setting the maximum size for a single attachment or multiple attachments in KB. We can also specify the maximum download size for a single attachment.

Rich content turned on set to True allows e-mail messages that contain HTML and rich content to be delivered to BlackBerry devices; having it set to False means all messages are delivered in plain text, which saves a lot of resources on the server(s) housing the BES components. The same principle applies to downloading inline images.

Remote search turned on set to True allows users to search the Microsoft Exchange server for e-mails from their BlackBerry devices.
In BES 5, we have a new feature that allows the user on his device, prior to sending out a meeting request, to check whether a potential participant is available at that time. (Microsoft Exchange 2007 users need to make some changes to support this feature; see the BlackBerry website for further details on the hotfixes required.) Free busy lookup turned on is set to True if you want this service. If system resources are being utilized heavily, this feature can be turned off by selecting False.

Hard deletes reconciliation allows users to delete e-mail messages permanently in Microsoft Outlook (by holding the Shift + Del keys). You can also configure the BES to remove permanently deleted messages from the user's BlackBerry device. You must have wireless reconciliation turned on for this to work.

Now that we have prepared our messaging environment, we are ready to activate our first user.

Activating users

When it comes to activating users, we have five options to choose from:

- BlackBerry Administration Service: We can connect the device to a computer and log on to the BAS to assign and activate a device for a user.
- Over the Wireless Network (OTA): We can activate a BlackBerry to join our BES without needing it to be physically connected to our organization.
- Over the LAN: A user who has BlackBerry Desktop Manager running on his or her computer in the corporate LAN can activate the device by plugging the device into the machine and running BlackBerry Desktop Manager.
- BlackBerry Web Desktop Manager: This is a new feature of BES 5 that allows users to connect the device to a computer and log in to the BlackBerry Web Desktop Manager to activate the device, with no other software required.
- Over your corporate organization's Wi-Fi network: You can activate Wi-Fi-enabled BlackBerry devices over your corporate Wi-Fi network.

Before we look at each of the options available to us, let's examine what enterprise activation is and how it works, along with its settings. This will also help us choose the best option for activating devices for users, and avoid errors during enterprise activation.

Understanding enterprise activation

To allow a user's device to join the BlackBerry Enterprise Server, we need to activate the device for the user: when we create a user, we assign the user an activation password. The user enters his or her corporate e-mail address and the activation password into the Enterprise Activation screen on the device, which can be reached by going to Options | Advanced Options | Enterprise Activation. Once the user types in the information and selects Activate, the BlackBerry device generates an ETP.dat message. If you have any virus scanning or e-mail sweeping systems running in your organization, it is important to ensure that this filename and extension is added to the safe list. Please note that this ETP.dat message is only generated when we activate a device over the air; if we use other methods, where the device is plugged in via a cable to activate it, no ETP.dat file is generated.

The ETP.dat message is then sent to the user's mailbox on the Exchange Server over the wireless network. To ensure that the activation occurs smoothly, make sure the device has good battery life and that the wireless coverage reading on the device is less than 100 dB. This can be checked by pressing the key combination Alt + NMLL on the device.
The BlackBerry Enterprise Server then confirms that the activation password is correct, generates a new permanent encryption key, and sends it to the BlackBerry device. The BlackBerry Policy service then receives a request to send out an IT policy. Service books control the wireless synchronization data. Data is now transferred between the BlackBerry device and the user's mailbox using a slow synch process. The information that is sent to the BlackBerry device is stored in databases on the device, and each application database is shown with a percentage-completed figure next to it during the slow synch. Once the activation is complete, a message pops up on the device stating 'Activation complete'. The device is now fully in synch with the user's mailbox and is ready to send and receive data.

Now that we have a general grasp of the device activation process, we are going to look at the five options mentioned previously in more detail.

Activating a device using the BlackBerry Administration Service

This method provides a higher level of control over the device, but is more labor-intensive for the administrator, as it requires no user interaction:

- Connect the device to a computer that can access the BlackBerry Administration Service, and log in to the service using an account that has permissions to assign devices.
- Under the Devices section, expand Attached devices.
- Click on Manage current device and then select Assign current device.
- This will prompt you to search for the account of the user we want to assign the device to.
- Once we have found the user, we can click on User, select Associate user, and finally click on Assign current device.

BlackBerry Enterprise Server 5: MDS Applications

Packt
25 Feb 2011
6 min read
BlackBerry Enterprise Server 5 Implementation Guide

MDS (Mobile Data Service) runtime applications are custom applications that are developed for your organizational needs. MDS runtime applications are created using BlackBerry MDS Studio, or Microsoft Visual Studio with a BlackBerry plugin. In general, these are form-based applications that users can use on their device to access databases or web services hosted inside your organization's firewall, in the corporate LAN.

For the purpose of this article you can download a sample MDS application from the BlackBerry website under the development section; the current link is http://us.blackberry.com/developers/javaappdev/devtools.jsp. This application is an Expenses Tracker, which an employee can populate in real time from his device as business expenses occur during a trip. Once the trip is complete, the application e-mails your finance department and attaches an Excel spreadsheet outlining the employee's business trip expenses.

Understanding and setting up our MDS environment

The MDS has two component services:

- MDS Connection Service: This service provides access to content on the Internet and intranet, and access to the organization's application servers.
- MDS Integration Service: This service facilitates installation and management of applications, and allows access to the server systems in your corporate LAN via database connections or web services.

Firstly, we need to set up our MDS environment. This includes the following:

- Ensure that the BlackBerry MDS Integration Service is installed and running on our BlackBerry Enterprise Server. This service should have been selected during the initial installation of the BES; if it was not selected, we can run the setup and install the MDS services. If the MDS service is already installed, you will see the services running on the Windows server.
- Send the BlackBerry MDS Runtime platform to devices in our BlackBerry domain. This can be achieved by using Software Configuration policies, as shown next.
- Publish the BlackBerry MDS application. This will be done using the MDS console that is installed during the installation of the MDS services.
- Configure our IT policy and any application control policies for the MDS application. Using IT policies and application policies we can lock down our MDS application.
- Install the MDS application on the devices. Using the MDS console and the application repository for MDS applications, we can deploy and install MDS applications on the devices.

Each of the preceding steps will now be looked at in greater detail.

Running MDS services

During the installation of our BlackBerry Enterprise Server we can choose to install the MDS components. We need to ensure that the MDS services are running in our environment. This can be checked by going to Services on the server that hosts the BlackBerry Enterprise Server and ensuring that the BlackBerry MDS Connection Service and BlackBerry MDS Integration Service are started, as shown in the following screenshot.

Installing the MDS runtime platform

For MDS runtime applications to work, we need to ensure that the MDS runtime platform is installed on the devices in our corporate network.
The version of the MDS runtime platform that you need to install on to the devices will depend on the following:

- Model of the device
- BlackBerry software version on the device

So, depending on the different devices and the different BlackBerry device software running on them, you might need to create several MDS runtime software configuration packages to cover the different models and device software within your corporate environment. We can use a software configuration to deploy the MDS runtime platform that is needed on the devices. For the purpose of this article, we are going to assume all our devices are the same make and have the same device software: BlackBerry 8900s.

Creating a software configuration to deploy the MDS runtime platform to devices

1. Download the appropriate MDS runtime platform for your device from the BlackBerry website; the current link is: https://www.blackberry.com/Downloads/entry.do?code=F9BE311E65D81A9AD8150A60844BB94C. For our example, we are going to download the MDS runtime package for a BlackBerry 8900 device, which is entitled BlackBerry MDS runtime v4.6.1.21.
2. Extract the contents to a shared folder on the BES server.
3. Log in to the BlackBerry Administration Service.
4. Under BlackBerry solution management, expand Software, then Applications, and click on Add or update applications.
5. Browse to the ZIP files for the MDS runtime application, and once selected click Next.
6. Select to publish the application.
7. To ensure the correct packages were created, browse to the BSC share (code download, ch:5) and ensure the following files are present:

We now need to create our software configuration (since the preceding steps have only added the MDS runtime application to the application repository).

1. Select Create a software configuration.
2. Enter the name Runtime, and leave the other settings as default.
3. Click on Manage software configurations and select Runtime.
4. Select the Applications tab and click on Edit software configuration, as shown in the following screenshot:
5. Click on Add applications to software configuration.
6. Click on Search or fill in the search criteria to display the Runtime packages.
7. Select the Runtime applications. In some cases two applications may have been created; select both: one is the default launcher and one is the runtime platform, and this is dependent on the device. In our example, we need both the MDS Runtime and the MDS Default Launcher, so we need to place a tick in both to show additional configuration steps, as shown in the following screenshot:
8. Select Wireless as the Deployment method, Standard Required for the Application control policy, and Required for the Disposition setting.
9. Once added, click on Save all.

We now need to assign this software configuration to the devices in our BES environment. For the purpose of this article, we are going to assign it to the Sales Group. Please bear in mind that, as mentioned before, if you have different devices, or the same devices with different device software operating on them, then you will need to download the right MDS runtime platform for each scenario and configure the appropriate number of software configurations.

1. Click on Manage groups.
2. Select the Sales Team.
3. Click on Edit group.
4. Select the Software configuration tab.
5. In the Available software configurations list, click on Runtime and select Add, as shown in the following screenshot:
6. Click on Save all.

Now that our devices are ready to run MDS applications we need to add our MDS application to the MDS application repository.
The MDS application repository is installed by default during the initial installation of the BES, as long as we chose to install all default components of MDS. The MDS application console is a web-based administration tool, like the BlackBerry Administration Service, that is used to control, install, manage, and update MDS applications. Please note that you use the BlackBerry Administration Service to control Java-based applications, whereas you use the MDS console to administer MDS applications.
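As a quick alternative to opening the Services snap-in, the MDS services mentioned earlier can also be checked from a Windows command prompt on the BES server. This is a minimal sketch; it assumes the default service display names contain "MDS", which may vary between BES versions:

net start | find "MDS"

If both the BlackBerry MDS Connection Service and the BlackBerry MDS Integration Service appear in the output, they are running.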

Packt
25 Feb 2011
4 min read

Tips and Tricks for using Alfresco 3 Business Solutions

Alfresco 3 Business Solutions

Practical implementation techniques and guidance for delivering business solutions with Alfresco

- Deep practical insights into the vast possibilities that exist with the Alfresco platform for designing business solutions.
- Each and every type of business solution is implemented through the eyes of a fictitious financial organization, giving you the right amount of practical exposure you need.
- Packed with numerous case studies which will enable you to learn in various real world scenarios.
- Learn to use Alfresco's rich API arsenal with ease.
- Extend Alfresco's functionality and integrate it with external systems.

Read more about this book

(For more resources on Alfresco, see here.)

Node references are important

Tip: Node references are used to identify a specific node in one of the stores in the repository. You construct a node reference by combining a Store Reference such as workspace://SpacesStore with an identifier. The identifier is a Universally Unique Identifier (UUID) and it is generated automatically when a node is created. A UUID looks something like this: 986570b5-4a1b-11dd-823c-f5095e006c11 and it represents a 128-bit value. A complete Node Reference looks like workspace://SpacesStore/986570b5-4a1b-11dd-823c-f5095e006c11. The node reference is one of the most important concepts when developing custom behavior for Alfresco, as it is required by a lot of the application interface methods (a short script example appears at the end of this article).

Avoid CRUD operations directly against the database

Tip: One should not do any CRUD operations directly against the database, bypassing the foundation services, when building a custom solution on top of Alfresco. This will cause the code to break in the future if the database design is ever changed. Alfresco is required to keep older APIs available for backward compatibility (if they ever change), so it is better to always use the published service APIs. Query the database directly only when:

- The customization built with the available APIs is not providing acceptable performance and you need to come up with a solution that works satisfactorily
- Reporting is necessary
- Information is needed during development for debugging purposes
- Bootstrap tweaking is needed, such as when you want to run a patch again

Executing patches in a specific order

Tip: If we have several patches to execute and they should run in a specific order, we can control that with the targetSchema value. The fixesToSchema value is set to Alfresco's current schema version (that is, via the version.schema variable), which means that this patch will always be run no matter what version of Alfresco is being used.

It is a good idea to export complex folder structures into ACP packages

Tip: When we set up more complex folder structures with rules, permission settings, template documents, and so on, it is a good idea to export them into Alfresco Content Packages (ACP) and store them in the version control system. The same is true for any Space Templates that we create. These packages are also useful to include in releases.

Deploying the Share JAR extension

Tip: When working with Spring Surf extensions for Alfresco Share it is not necessary to stop and start the Alfresco server between each deployment. We can set up Apache Tomcat to watch the JAR file we are working with and tell it to reload the JAR every time it changes.
Update the tomcat/conf/context.xml configuration file to include the following line:

<WatchedResource>WEB-INF/lib/3340_03_Share_Code.jar</WatchedResource>

Now every time we update this Share extension JAR, Tomcat will reload it for us, which shortens the development cycle quite a bit. The Tomcat console should print something like this when this happens:

INFO: Reloading context [/share]

To deploy a new version of the JAR just run the deploy-share-jar ant target:

C:\3340_03_Code\bestmoney\alf_extensions\trunk>ant -q deploy-share-jar
[echo] Packaging extension JAR file for share.war
[echo] Copies extension JAR file to share.war WEB-INF lib
BUILD SUCCESSFUL
Total time: 0 seconds

Debugging AMP extensions

Tip: To debug AMP extensions, start the Alfresco server so that it listens for remote debugging connections; or more correctly, start the JVM so that it listens for remote debugging connection attempts. This can be done by adding the following line to the operating system as an environment variable:

CATALINA_OPTS=-Dcom.sun.management.jmxremote -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005

This means that any Alfresco installation that we have installed locally on our development machine will be available for debugging as soon as we start it. Change the address as you see fit according to your development environment. With this setting we can now debug both into Alfresco's source code and our own source code at the same time.
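To illustrate the node reference tip above, here is a minimal JavaScript sketch (the UUID is the made-up example from the tip; run it, for instance, as a repository script) that resolves a node reference and logs some details:

// Resolve a node by its complete node reference
var nodeRef = "workspace://SpacesStore/986570b5-4a1b-11dd-823c-f5095e006c11";
var node = search.findNode(nodeRef);
if (node != null) {
    logger.log("Found node: " + node.name + " of type " + node.typeShort);
} else {
    logger.log("No node found for reference: " + nodeRef);
}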

Packt
24 Feb 2011
11 min read

Tips and Tricks on IBM FileNet P8 Content Manager

Getting Started with IBM FileNet P8 Content Manager

Install, customize, and administer the powerful FileNet Enterprise Content Management platform

- Quickly get up to speed on all significant features and the major components of IBM FileNet P8 Content Manager
- Provides technical details that are valuable both for beginners and experienced Content Management professionals alike, without repeating product reference documentation
- Gives a big picture description of Enterprise Content Management and related IT areas to set the context for Content Manager
- Written by an IBM employee, Bill Carpenter, who has extensive experience in Content Manager product development, this book gives practical tips and notes with a step-by-step approach to design real Enterprise Content Management solutions to solve your business needs

Installation care

If you are using a virtual server image with snapshot capability, it's a good idea to use snapshots. In fact, we recommend taking a snapshot after each of the major installation steps. If something goes wrong in a later step, you can recover back to the snapshot point to save yourself the trouble of starting over.

WAS Bootstrap Hostname

In a development environment, the domain name might not resolve in your DNS. In that case, enter the IP address for that server instead.

Populating Tivoli Directory Server (TDS)

We could use the TDS Realms interface to construct our users and groups. If you use TDS in your enterprise, that's a good way to go. It offers several user interface niceties for directory administration, and it also offers partial referential integrity for the entries.

Directory concepts and notation

Directory concepts and notation can seem pretty odd. Most people don't encounter them every day. There is a lot of material available on the web to explain both the concepts and the notation. Here is one example that is clearly written and oriented toward directory novices: http://www.skills-1st.co.uk/papers/ldapschema-design-feb-2005/index.html.

Close up all of the nodes before you exit FEM

FEM remembers the state of the tree view from session to session. When you start FEM the next time, it will try to open the nodes you had open when you exited. That will often mean something of a delay as it reads extensive data for each open Object Store node. You might find it a useful habit to close up all of the nodes before you exit FEM.

Using topology levels

A set of configuration data, if used, is used as the complete configuration. That is, the configuration objects at different topology levels are not blended to create an "effective configuration".

Trace logging

Although similar technologies are used to provide trace logging in the CE server and the client APIs, the configuration mechanisms are completely separate. The panels in FEM control only tracing within the CE server and do not apply to any client tracing. If you find that performance still drags or that the trace log file continues to grow even after you have disabled trace logging in the Domain configuration, it could be that trace logging is still configured at a more specific level. That's very easy to overlook, especially in more complex deployments or where CM administration duties are shared.

Collaborative checkout

Even with a collaborative checkout, the subsequent checkin is still subject to access checks, so you can still have fine-grained control over that.
In fact, because you can use fine-grained security to limit who can do a checkin, you might as well make the Object Store default be Collaborative unless you have some specific use case that demands Exclusive.

Cancel the creation of the class

Although the final step in the Create a Class Wizard will still let you cancel the creation of the class, any property templates and choice lists you created along the way will already have been created in the Object Store. If you wish to completely undo your work, you will have to delete them manually.

FEM query interface

A historical quirk of the FEM query interface is that the SELECT list must begin with the This property. That is not a general requirement of CE SQL (an example query appears at the end of this article).

Running the CSE installer

If you are running the CSE installer, and eventually the CSE itself, on the same machine as the CE, you might be tempted to use localhost as the CSE server host. From the CE point of view, that would be technically correct. However, exploiting little tricks like that is a bad habit to get into. It certainly won't work in any environment where you install the CSE separately from the CE or have multiple CSE servers installed. We suggest you use a proper host name. Be sure to get the server name correct, since the installer and the Verity software will sprinkle it liberally throughout several configuration files. If it is not correct by default, which is one of the hazards of using dynamic IP addresses, correct it now.

CBR Locale field

uni stands for Unicode and is generally the best choice for mixed-language support. If you think you don't need mixed-language support, there's a pretty good chance you are mistaken, even if all of your users have the same locale settings in their environments. In any case, if you are tempted to use a different CBR locale, you should first read the K2 locale customization guide, since it's a reasonably complicated topic.

Process Service does not start

If the Process Service does not start, check to make sure that the Windows service named Process Engine Services Manager is started. If not, start it manually and make sure it is marked for automatic startup.

Configure two WAS profiles

When trying to duplicate configuration aspects of one WAS profile into another WAS profile, we could theoretically have the WAS consoles open simultaneously in separate browser windows, which would facilitate side-by-side comparisons. In practice, this is likely to confuse the browser cookies holding the session information and drive you slightly crazy. If you have two different browsers installed, for example Firefox and Internet Explorer, you can open one WAS console in each.

Disk space used by XT

Disk space used by XT may exceed your expectations. We recommend having at least 2 gigabytes of disk space available when doing an XT installation. A lot of that can be recovered after XT is deployed into the application server.

Object deletion

Deleted objects are really, permanently deleted by the CE server. There is no undo or recycle bin or similar mechanism unless an application implements one.

Notional locking & Cooperative locking

Don't confuse the notional locking that comes via checkout with the unrelated feature of cooperative locking. Cooperative locking is an explicit mechanism for applications to mark a Document, Folder, or Custom Object as being locked. As the name implies, this only matters for applications which check for and honor cooperative locks.
The CE will not prevent any update operation, other than locking operations themselves, just because there is a cooperative lock on the object.

Synchronous or asynchronous subscription

As a terminology convenience, events or event handlers are sometimes referred to as being synchronous or asynchronous. This is not technically correct because the designation is always made on the subscription object. An event can have either kind of subscription, and an event handler can be invoked both ways.

Synchronous subscription event handlers

The CE does not always throw an exception if the event handler for a synchronous subscription updates the triggering object. This has allowed many developers to ignore the rule that such updates are not allowed, assuming it is merely a best practice. Nonetheless, it has always been the rule that synchronous subscription event handlers are not allowed to do that. Even if it works in a particular instance, it may fail at random times that escape detection in testing. Don't fall into this trap!

AddOn in the P8 Domain

If you don't happen to be a perfect person, you might have to iterate a few times during the creation and testing of your AddOn until you get things exactly the way you want them. For the sake of mere mechanical efficiency, we usually do this kind of work using a virtual machine image that includes a snapshot capability. We make a snapshot just before creating the AddOn in the P8 Domain. Then we do the testing. If we need to iterate, it's pretty fast to roll back to the snapshot point.

"Anonymous access" complaints from the CE

When an application server sees a Subject that it doesn't trust, since there is no trust relationship with the sending application server, it will often simply discard the Subject or strip vital information out of it. Hence, complaints from the CE that you are trying to do "anonymous access" often mean that there is something wrong with your trust relationship setup.

Unknown ACE

An "unknown" Access Control Entry (ACE) in an Access Control List (ACL) comes about because ACEs sometimes get orphaned. The user or group mentioned in the ACE gets deleted from the directory, but the ACE still exists in the CE repository. These ACEs will never match any calling user and so will never figure into any access control calculation. Application developers have to be aware of this kind of ACE when programmatically displaying or modifying the ACL. The unknown ACEs should be silently filtered out and not displayed to end users. (FEM displays unknown ACEs, but it is an administrator tool.) If updates are made to the ACL, the unknown ACEs definitely must be filtered out. Otherwise, the CE will throw an exception because it cannot resolve the user or group in the directory.

Virtualization

Several years ago, CM product documentation said that virtual machine technology was supported, but that you might have to reproduce any problems directly on physical hardware if you needed support. That's no longer the case, and virtualization is supported as a first-class citizen. For your own purposes, you will probably want to evaluate whether there are any significant performance costs to the virtualization technology you have chosen. The safest way to evaluate that is under similar configuration and load as that of your intended production environment.

File Storage Area

Folders used internally within a File Storage Area for content have no relationship to the folders used for filing objects within an Object Store.
On reflection, this should be obvious, since you can store content for unfiled documents. Whereas the folders in an Object Store are an organizing technique for objects, the folders in a File Storage Area are used to avoid overwhelming the native filesystem with too many files in a single directory (which can impact performance).

Sticky sessions

All API interactions with the CE are stateless. In other words, except for load balancing, it doesn't matter which CE server is used for any particular API request. Requests are treated independently, and the CE does not maintain any session state on behalf of the application. On the other hand, some CM web applications do need to be configured for sticky sessions. A sticky session means that incoming requests (usually from a web browser) must return to the same copy of the application for subsequent requests.

Disaster Recovery (DR)

There is technology available for near real-time replication for DR. It can be tempting to think of your DR site as your data backup, or at least as eliminating the need for traditional backups. It seems too good to be true, since all of your updates are almost instantaneously copied to another datacenter. The trap is that the replication can't tell desirable updates from mistakes. If you have to recover some of your data because of an operational mistake (for example, if you drop the tables in an Object Store database), the DR copy will reflect the same mistake. You should still do traditional backups even if you have a replicated DR site.

Further resources on this subject:
- IBM FileNet P8 Content Manager: Administrative Tools and Tasks [Article]
- IBM FileNet P8 Content Manager: Exploring Object Store-level Items [Article]
- IBM FileNet P8 Content Manager: End User Tools and Tasks [Article]
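To illustrate the FEM query quirk mentioned earlier, here is a sketch of a query as FEM expects it. Document and DocumentTitle are standard P8 class and property names, but the WHERE clause is only an invented example:

-- FEM requires the SELECT list to begin with the This property
SELECT This, DocumentTitle FROM Document WHERE DocumentTitle LIKE 'Invoice%'

Outside FEM, ordinary CE SQL does not impose the leading This requirement.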

Packt
17 Feb 2011
7 min read

How to Overcome the Pitfalls of Magento

Magento 1.4 Development Cookbook

Extend your Magento store to the optimum level by developing modules and widgets

- Develop Modules and Extensions for Magento 1.4 using PHP with ease
- Socialize your store by writing custom modules and widgets to drive in more customers
- Achieve a tremendous performance boost by applying powerful techniques such as YSlow, PageSpeed, and Siege
- Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Read more about this book

(For more resources on Magento, see here.)

The reader can benefit from the previous article on Magento 1.4: Performance Optimization.

Using APC/Memcached as the cache backend

Magento has a cache system that is based on files by default. We can boost the overall performance by changing the cache handler to a better engine like APC or Memcached. This recipe will help us to set up APC or Memcached as the cache backend.

Getting ready

Installation of APC: Alternative PHP Cache (APC) is a PECL extension. For any Debian-based distro, it can be installed with an easy command from the terminal:

sudo apt-get install php5-apc

Or:

sudo pecl install APC

You can also install it from source. The package download location for APC is: http://pecl.php.net/package/APC. Check whether it exists or not in phpinfo(). If you cannot see an APC block there, then you might not have added APC in the php.ini file.

Installation of Memcached: Memcached is also available in most OS package repositories. You can install it from the command line:

sudo apt-get install php5-memcached

Memcached can be installed from source as well. Check whether it exists or not in phpinfo(). If you cannot see a Memcached block there, then you might not have added Memcached in the php.ini file. You can also check it via the telnet client. Issue the following command in the terminal:

telnet localhost 11211

We can issue the get command now:

get greeting

Nothing happened? We have to set it first:

set greeting 1 0 11
Hello World
STORED
get greeting
Hello World
END
quit

How to do it...

Okay, we are all set to go for APC or Memcached. Let's do it now for APC.

1. Open local.xml in your favorite PHP editor.
2. Add the cache block as follows:

<?xml version="1.0"?>
<config>
  <global>
    <install>
      <date><![CDATA[Sat, 26 Jun 2010 11:55:18 +0000]]></date>
    </install>
    <cache>
      <backend>apc</backend>
      <prefix>alphanumeric</prefix>
    </cache>
    <crypt>
      <key><![CDATA[870f60e1ba58fd34dbf730bfa8c9c152]]></key>
    </crypt>
    <disable_local_modules>false</disable_local_modules>
    <resources>
      <db>
        <table_prefix><![CDATA[]]></table_prefix>
      </db>
      <default_setup>
        <connection>
          <host><![CDATA[localhost]]></host>
          <username><![CDATA[root]]></username>
          <password><![CDATA[f]]></password>
          <dbname><![CDATA[magento]]></dbname>
          <active>1</active>
        </connection>
      </default_setup>
    </resources>
    <session_save><![CDATA[files]]></session_save>
  </global>
  <admin>
    <routers>
      <adminhtml>
        <args>
          <frontName><![CDATA[backend]]></frontName>
        </args>
      </adminhtml>
    </routers>
  </admin>
</config>

3. Delete all files from the var/cache/ directory.
4. Reload your Magento and benchmark it now to see the boost in performance. Run the benchmark several times to get an accurate result:

ab -c 5 -n 100 http://magento.local.com/

You can use either APC or Memcached. Let's test it with Memcached now.
1. Delete the cache block we set for APC previously and add the cache block as follows:

<?xml version="1.0"?>
<config>
  <global>
    <install>
      <date><![CDATA[Sat, 26 Jun 2010 11:55:18 +0000]]></date>
    </install>
    <crypt>
      <key><![CDATA[870f60e1ba58fd34dbf730bfa8c9c152]]></key>
    </crypt>
    <disable_local_modules>false</disable_local_modules>
    <resources>
      <db>
        <table_prefix><![CDATA[]]></table_prefix>
      </db>
      <default_setup>
        <connection>
          <host><![CDATA[localhost]]></host>
          <username><![CDATA[root]]></username>
          <password><![CDATA[f]]></password>
          <dbname><![CDATA[magento]]></dbname>
          <active>1</active>
        </connection>
      </default_setup>
    </resources>
    <session_save><![CDATA[files]]></session_save>
    <cache>
      <backend>memcached</backend> <!-- apc / memcached / xcache / empty=file -->
      <slow_backend>file</slow_backend> <!-- database / file (default) - used for 2 levels cache setup, necessary for all shared memory storages -->
      <memcached> <!-- memcached cache backend related config -->
        <servers> <!-- any number of server nodes can be included -->
          <server>
            <host><![CDATA[127.0.0.1]]></host>
            <port><![CDATA[11211]]></port>
            <persistent><![CDATA[1]]></persistent>
            <weight><![CDATA[2]]></weight>
            <timeout><![CDATA[10]]></timeout>
            <retry_interval><![CDATA[10]]></retry_interval>
            <status><![CDATA[1]]></status>
          </server>
        </servers>
        <compression><![CDATA[0]]></compression>
        <cache_dir><![CDATA[]]></cache_dir>
        <hashed_directory_level><![CDATA[]]></hashed_directory_level>
        <hashed_directory_umask><![CDATA[]]></hashed_directory_umask>
        <file_name_prefix><![CDATA[]]></file_name_prefix>
      </memcached>
    </cache>
  </global>
  <admin>
    <routers>
      <adminhtml>
        <args>
          <frontName><![CDATA[backend]]></frontName>
        </args>
      </adminhtml>
    </routers>
  </admin>
</config>

2. Save the local.xml file, clear all cache files from var/cache/, and reload your Magento frontend to check the performance.
3. Mount var/cache as TMPFS:

mount tmpfs /path/to/your/magento/var/cache -t tmpfs -o size=64m

How it works...

Alternative PHP Cache (APC) is a free, open source opcode cache framework that optimizes PHP intermediate code and caches data and compiled code from the PHP bytecode compiler in shared memory, which is similar to Memcached. APC is quickly becoming the de facto standard PHP caching mechanism, as it will be included built-in to the core of PHP, starting with PHP 6. The biggest problem with APC is that you can only access the local APC cache.

Memcached's magic lies in its two-stage hash approach. It behaves as though it were a giant hash table, looking up key = value pairs. Give it a key, and set or get some arbitrary data. When doing a memcached lookup, first the client hashes the key against the whole list of servers. Once it has chosen a server, the client then sends its request, and the server does an internal hash key lookup for the actual item data. Memcached affords us endless possibilities (query caching, content caching, session storage) and great flexibility. It's an excellent option for increasing performance and scalability on any website without requiring a lot of additional resources.

Changing var/cache to TMPFS is a very good trick to increase disk I/O. I personally found APC's and Memcached's performance pretty similar. Both are good to go. If you want to split your cache across multiple servers, go for Memcached. Good luck!

The cache sections in the preceding listings hold the APC and Memcached settings, respectively.
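Finally, as a quick sanity check that PHP can actually see the chosen backend, the loaded extensions can be listed from the shell. A minimal sketch; note that the CLI may read a different php.ini than the web server, so phpinfo() in the browser remains the authoritative check:

php -m | grep -i -E 'apc|memcache'
php -r 'var_dump(extension_loaded("apc"), extension_loaded("memcached"));'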
Packt
17 Feb 2011
12 min read

Magento 1.4: Performance Optimization

Magento 1.4 Development Cookbook

Extend your Magento store to the optimum level by developing modules and widgets

- Develop Modules and Extensions for Magento 1.4 using PHP with ease
- Socialize your store by writing custom modules and widgets to drive in more customers
- Achieve a tremendous performance boost by applying powerful techniques such as YSlow, PageSpeed, and Siege
- Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Read more about this book

(For more resources on Magento, see here.)

The reader can benefit from the previous article on How to Overcome the Pitfalls of Magento.

"Users really respond to speed," says Marissa Mayer, Google vice president of search products and user experience. We will explain why this quote is true throughout this article. Her key insight for the crowd at the Web 2.0 Summit was that "slow and steady doesn't win the race". Today people want fast and furious. Not convinced? Okay, let's have a look at some arguments:

- Losing 500ms cost Google 20 percent of its traffic (this might be why there are only ten results per page).
- An added latency of 100ms costs Amazon 1 percent of sales.
- Reducing page weight by 25 percent won Google 25 percent more users in the medium term.
- Losing 400ms caused a 5-9 percent traffic drop at Yahoo!, an editorial site.

This is the era of milliseconds and terabytes, so we have to pay a big price if we can't keep up. This article will describe how to ensure the optimum performance of your Magento store.

Measuring/benchmarking your Magento with Siege, ab, Magento profiler, YSlow, Page Speed, GTmetrix, and WebPagetest

The very first task of any website's performance optimization is to know its pitfalls. In other words, know why it is taking too much time. Who are the culprits? Fortunately, we have some amicable friends who will guide us through. Let's list them:

- ab (ApacheBench): This is bundled with every Apache as a benchmarking utility.
- Siege: This is an open source stress/regression test and benchmark utility by Joe Dog.
- Magento profiler: This is a built-in Magento profiler.
- YSlow: This is a tool from Yahoo! that we have been using for years. It is in fact a Firebug add-on.
- Page Speed: This is yet another Firebug add-on, from Google, to analyze page performance against some common rules.
- GTmetrix: This is a cool online web application that gives you both YSlow and Page Speed results in the same place. Opera fans who don't like Firefox or Chrome can get YSlow and Page Speed results here.
- WebPagetest: This is another online benchmarking tool, like GTmetrix. It also collects and shows screenshots with the reports.

Okay, we are introduced to our new friends. In this recipe, we will work with them and find the pitfalls of our Magento store.

Getting ready

Before starting the work, we have to make sure that every required tool is in place. Let's check.

ab: This Apache benchmarking tool is bundled with every Apache installation. If you are on a Linux-based distribution, you can give it a go by issuing the following command in the terminal:

ab -h

Siege: We will use this tool on the same box as our server, so make sure you have it installed. You can check by typing this command (note that the option is a capital V):

siege -V

If it's installed, you should see the installed version information of Siege.
If it's not, you can install it with the following command in any Debian-based distro:

sudo apt-get install siege

You can also install it from source. To do so, grab the latest source from here: ftp://ftp.joedog.org/pub/siege/siege-latest.tar.gz, then issue the following steps sequentially:

# go to the location where you downloaded siege
tar xvzf siege-latest.tar.gz
# go to the siege folder. It should read something like siege-2.70
./configure
make
make install

If you are on a Windows-based box, you will find ab as: apache/bin/ab.exe

Magento Profiler: This is a built-in tool with Magento.

YSlow: This Firebug add-on for Firefox can be installed via the Internet from here: http://developer.yahoo.com/yslow/. The Firebug add-on is a dependency for YSlow.

Page Speed: This is also a Firebug add-on that can be downloaded and installed from: http://code.google.com/speed/page-speed/download.html.

For using GTmetrix and WebPagetest, we will need an active Internet connection. Make sure you have these.

How to do it...

Using the simple tool ab: If you are on a Windows environment, go to the apache/bin/ folder, and if you are on Unix, fire up your terminal and issue the following command:

ab -c 10 -n 50 -g mage.tsv http://magento.local.com/

In the previous command, the params denote the following:

- -c: This is the concurrency number of multiple requests to perform at a time. The default is one request at a time.
- -n: This is the number of requests to perform for the benchmarking session. The default is to just perform a single request, which usually leads to non-representative benchmarking results.
- -g (gnuplot-file): This writes all measured values out as a gnuplot or TSV (tab separated values) file. This file can easily be imported into packages like Gnuplot, IDL, Mathematica, Igor, or even Excel. The labels are on the first line of the file.

The preceding command generates a benchmarking report in the terminal and a file named mage.tsv in the current location, as we specified in the command. If we open the mage.tsv file in a spreadsheet editor such as Open Office or MS Excel, it should read as follows:

You can tweak the ab params and view a full listing of params by typing ab -h in the terminal.

Using Siege: Let's lay Siege to it! Siege is an HTTP regression testing and benchmarking utility. It was designed to let web developers measure the performance of their code under duress, to see how it will stand up to load on the Internet. Siege supports basic authentication, cookies, and the HTTP and HTTPS protocols. It allows the user to hit a web server with a configurable number of concurrent simulated users. These users place the web server 'under Siege'.

Let's create a text file with the URLs that will be tested under Siege. We could pass a single URL on the command line as well, but we will use an external text file to feed more URLs through a single command. Create a new text file in the terminal's current location. Let's assume that we are in the /Desktop/mage_benchmark/ directory.
Create a file named mage_urls.txt here and put the following URLs in it:

http://magento.local.com/
http://magento.local.com/skin/frontend/default/default/favicon.ico
http://magento.local.com/js/index.php?c=auto&f=,prototype/prototype.js,prototype/validation.js,scriptaculous/builder.js,scriptaculous/effects.js,scriptaculous/dragdrop.js,scriptaculous/controls.js,scriptaculous/slider.js,varien/js.js,varien/form.js,varien/menu.js,mage/translate.js,mage/cookies.js
http://magento.local.com/skin/frontend/default/default/css/print.css
http://magento.local.com/skin/frontend/default/default/css/styles-ie.css
http://magento.local.com/skin/frontend/default/default/css/styles.css
http://magento.local.com/skin/frontend/default/default/images/np_cart_thumb.gif
http://magento.local.com/skin/frontend/default/default/images/np_product_main.gif
http://magento.local.com/skin/frontend/default/default/images/np_thumb.gif
http://magento.local.com/skin/frontend/default/default/images/slider_btn_zoom_in.gif
http://magento.local.com/skin/frontend/default/default/images/slider_btn_zoom_out.gif
http://magento.local.com/skin/frontend/default/default/images/spacer.gif
http://magento.local.com/skin/frontend/default/default/images/media/404_callout1.jpg
http://magento.local.com/electronics/cameras.html
http://magento.local.com/skin/frontend/default/default/images/media/furniture_callout_spot.jpg
http://magento.local.com/skin/adminhtml/default/default/boxes.css
http://magento.local.com/skin/adminhtml/default/default/ie7.css
http://magento.local.com/skin/adminhtml/default/default/reset.css
http://magento.local.com/skin/adminhtml/default/default/menu.css
http://magento.local.com/skin/adminhtml/default/default/print.css
http://magento.local.com/nine-west-women-s-lucero-pump.html

These URLs will vary with yours. Modify the list as you see fit; you can add more URLs if you want. Make sure that you are in the /Desktop/mage_benchmark/ directory in your terminal, then issue the following command:

siege -c 50 -i -t 1M -d 3 -f mage_urls.txt

This will take a fair amount of time. Be patient. After completion it should return a result something like the following:

Lifting the server siege.. done.
Transactions: 603 hits
Availability: 96.33 %
Elapsed time: 59.06 secs
Data transferred: 10.59 MB
Response time: 1.24 secs
Transaction rate: 10.21 trans/sec
Throughput: 0.18 MB/sec
Concurrency: 12.69
Successful transactions: 603
Failed transactions: 23
Longest transaction: 29.46
Shortest transaction: 0.00

Repeat steps 1 and 3 to produce reports with some variations and save them wherever you want. The option details can be found by typing the following command in the terminal:

siege -h

Magento profiler: Magento has a built-in profiler. You can enable it from the backend's System | Configuration | Advanced | Developer | Debug section. Now open the index.php file from your Magento root directory and uncomment line numbers 65 and 71. The lines read as follows:

line 65: #Varien_Profiler::enable();       // delete #
line 71: #ini_set('display_errors', 1);    // delete #

Save this file and reload your Magento frontend in the browser. You should see the profiler data at the bottom of the page, similar to the following screenshot:

YSlow: We have already installed the YSlow Firebug add-on. Open the Firefox browser and activate it by pressing the F12 button or clicking the Firebug icon in the bottom-right corner of Firefox. Click on the YSlow link in Firebug. Select the Ruleset (in my case I chose YSlow (V2)) and click on the Run Test button.
After a few seconds you will see a report page with the grade details. Here is mine:

You can click on the links and see what each one says.

Page Speed: Fire up your Firefox browser. Activate the Firebug panel by pressing F12. Click on the Page Speed link, then click on the Performance button and see the Page Speed Score and details. The output should be something like the following screenshot:

Using GTmetrix: This is an online tool to benchmark a page with a combination of YSlow and Page Speed. Visit http://gtmetrix.com/ and DIY (Do It Yourself).

Using WebPagetest: This is a similar tool to GTmetrix, which can be accessed from here: http://www.webpagetest.org/.

How it works...

ab is a tool for benchmarking your Apache Hypertext Transfer Protocol (HTTP) server. It is designed to give you an impression of how your current Apache installation performs. In particular, it shows you how many requests per second your Apache installation is capable of serving.

The analysis that Siege leaves you with can tell you a lot about the sustainability of your code and server under duress. Obviously, availability is the most critical factor. Anything less than 100 percent means there's a user who may not be able to access your site. So, in the above case, there's some issue to be looked at, given that availability was only 96.33 percent on a sustained 50-concurrent, one-minute user Siege. Concurrency is measured as the time of each transaction (defined as the number of server hits including any possible authentication challenges) divided by the elapsed time. It tells us the average number of simultaneous connections. High concurrency can be a leading indicator that the server is struggling. The longer it takes the server to complete a transaction while it's still opening sockets to deal with new traffic, the higher the concurrent traffic and the worse the server performance will be.

Yahoo!'s exceptional performance team has identified 34 rules that affect web page performance. YSlow's web page analysis is based on the 22 of these 34 rules that are testable. We used one of their predefined rulesets; you can modify it and create your own as well. When analyzing a web page, YSlow deducts points for each infraction of a rule and then applies a grade to each rule. An overall grade and score for the web page is computed by summing up the values of the score for each rule weighted by the rule's importance. Note that the rules are weighted for an average page. For various reasons, there may be some rules that are less important for your particular page. In YSlow 2.0, you can create your own custom rulesets in addition to the following three predefined rulesets:

- YSlow(V2): This ruleset contains the 22 rules
- Classic(V1): This ruleset contains the first 13 rules
- Small Site or Blog: This ruleset contains 14 rules that are applicable to small websites or blogs

Page Speed generates its results based on the state of the page at the time you run the tool. To ensure the most accurate results, you should wait until the page finishes loading before running Page Speed. Otherwise, Page Speed may not be able to fully analyze resources that haven't finished downloading.

Windows users can use Fiddler as an alternative to Siege. You can download it from http://www.fiddler2.com/fiddler2/; it is developed by Microsoft.
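One more ab tip: because single runs are noisy, it helps to repeat a benchmark a few times and compare the means. A minimal shell sketch, reusing the URL and counts from the recipe above:

# run the same benchmark three times and pull out the mean request time
for i in 1 2 3; do
  ab -c 10 -n 50 http://magento.local.com/ | grep "Time per request"
done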

Packt
15 Feb 2011
5 min read

Alfresco 3 Business Solutions: Planning and Implementing Document Migration

Alfresco 3 Business Solutions

Practical implementation techniques and guidance for delivering business solutions with Alfresco

- Deep practical insights into the vast possibilities that exist with the Alfresco platform for designing business solutions.
- Each and every type of business solution is implemented through the eyes of a fictitious financial organization, giving you the right amount of practical exposure you need.
- Packed with numerous case studies which will enable you to learn in various real world scenarios.
- Learn to use Alfresco's rich API arsenal with ease.
- Extend Alfresco's functionality and integrate it with external systems.

Planning document migration

Now we have a strategy for how to do the document migration and we have several import methods to choose from, but we have not yet thought about planning the document migration. The end users will need time to select and organize the files they want to migrate, and we might need some time to write temporary import scripts. So we need to plan this well ahead of production day. The end users will have to go through all their documents and decide which ones they want to keep and which ones they will no longer need. Sometimes the decision to keep a document is not up to the end user but might instead be controlled by regulations, so this requires extra research.

The following screenshot shows the Best Money schedule for document migration:

It is not only electronic files that might need to be imported; sometimes there are paper-based files that need to be scanned and imported. This needs to be planned into the schedule too.

Implementing document migration

So we have a document migration strategy and we have a plan. Now let's see a couple of examples of how we can implement document migration in practice.

Using the Alfresco bulk filesystem import tool

A tool such as the Alfresco bulk filesystem import tool is probably what most people will use, and it is also the preferred import tool in the Best Money project. So let's start looking at how this tool is used. It is delivered as an AMP and is installed by dropping the AMP into the ALFRESCO_HOME/amps directory and restarting Alfresco. However, we prefer to install it manually with the Module Management Tool (MMT), as we have other AMPs, such as the Best Money AMP, that have been installed with the MMT tool.

1. Copy the alfresco-bulk-filesystem-import-0.8.amp (or newest version) file into the ALFRESCO_HOME/bin directory.
2. Stop Alfresco and then install the AMP as follows:

C:\Alfresco3.3\bin>java -jar alfresco-mmt.jar install alfresco-bulk-filesystem-import-0.8.amp C:\Alfresco3.3\tomcat\webapps\alfresco.war -verbose

Running the Alfresco bulk import tool

3. Remove the ALFRESCO_HOME/tomcat/webapps/alfresco directory, so the files contained in the new AMP are recognized when the updated WAR file is exploded on restart of Alfresco.

The tool provides a UI form in Alfresco Explorer that makes it very simple to do the import. It can be accessed via the http://localhost:8080/alfresco/service/bulk/import/filesystem URL, which will display the following form (you will be prompted to log in first, so make sure to log in with a user that has access to the spaces where you want to upload the content):

Here, the Import directory field is mandatory and specifies the absolute path to the filesystem directory from which to load the documents and folders. It should be specified in an OS-specific format, such as C:\docmigration\meetings or /docmigration/meetings.
Note that this directory must be locally accessible to the server where the Alfresco instance is running. It must either be a local filesystem or a locally mounted remote filesystem.

The Target space field is also mandatory and specifies the target space/folder to load the documents and folders into. It is specified as a path starting with /Company Home. The separator character is Unix-style (that is, "/"), regardless of the platform Alfresco is running on. This field includes an AJAX auto-suggest feature, so you may type any part of the target space name, and an AJAX search will be performed to find and display matching items.

The Update existing files checkbox specifies whether to update files that already exist in the repository (checked) or skip them (unchecked).

The import is started by clicking on the Initiate Bulk Import button. Once an import has been initiated, a status Web Script will report on the progress of the background import process. This Web Script automatically refreshes every 10 seconds until the import process completes.

For the Best Money project, we have set up a staging area for the document migration where users can add documents to be imported into Alfresco. Let's import the Meetings folder, which looks as follows in the staging area:

One Committee meeting has been added, and that is what we will import with the tool. Fill out the Bulk Import form as follows:

Click the Initiate Bulk Import button to start the import. The form should show the progress of the import, and when finished we should see something like this:

In this case, the import took 9.5 seconds, and 31 documents (totaling 28 MB) were imported and five folders created. If we look at the document nodes, we will see that they all have the bmc:document type applied and the bmc:documentData aspect applied. This is accomplished by a type rule which is added to the Meetings folder. All documents also have the cm:versionable aspect applied via the "Apply Versioning" rule, which is added to the Meetings folder.
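For scripted or repeated migrations, the same import can in principle be initiated without the Explorer form by POSTing to the tool's Web Script URL. The following is only a rough sketch: the parameter names (sourceDirectory, targetPath, replaceExisting) are assumptions and should be verified against the form fields of the bulk importer version you have installed:

# NOTE: parameter names below are assumptions; check the tool's form HTML first
curl -u admin:admin \
  -d "sourceDirectory=/docmigration/meetings" \
  -d "targetPath=/Company Home/Meetings" \
  -d "replaceExisting=true" \
  http://localhost:8080/alfresco/service/bulk/import/filesystem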

Packt
15 Feb 2011
13 min read

Alfresco 3 Business Solutions: Document Migration Strategies

Alfresco 3 Business Solutions

Practical implementation techniques and guidance for delivering business solutions with Alfresco

- Deep practical insights into the vast possibilities that exist with the Alfresco platform for designing business solutions.
- Each and every type of business solution is implemented through the eyes of a fictitious financial organization, giving you the right amount of practical exposure you need.
- Packed with numerous case studies which will enable you to learn in various real-world scenarios.
- Learn to use Alfresco's rich API arsenal with ease.
- Extend Alfresco's functionality and integrate it with external systems.

The Best Money CMS project is now in full swing; we have the folder structure with business rules designed and implemented and the domain content model created. It is now time to start importing any existing documents into the Alfresco repository. Most companies that implement an ECM system, and Best Money is no exception, will have a substantial amount of files that they want to import, classify, and make searchable in the new CMS system. The planning and preparation for the document migration actually has to start a lot earlier, as there are a lot of things that need to be prepared:

- Who is going to manage sorting out the files that should be migrated?
- What is the strategy and process for the migration?
- What sort of classification should be done during the import?
- What filesystem metadata needs to be preserved during the import?
- Do we need to write any temporary scripts or rules just for the import?

Document migration strategies

The first thing we need to do is to figure out how the document migration is actually going to be done. There are several ways of making this happen. We will discuss a couple of different ways, such as via the CIFS interface and via tools. There are also some general strategies that apply to any migration method.

General migration strategies

There are some common things that need to be done no matter which import method is used, such as setting up a document migration staging area.

Document staging area

The end users need to be able to copy or move the documents that they want to migrate to a kind of staging area that mirrors the new folder structure that we have set up in Alfresco. The best way to set up the staging area is to copy it from Alfresco via CIFS (a mount sketch follows at the end of this section). When this is done the end users can start copying files to the staging area. However, it is a good idea to train the users in the new folder structure before they start copying documents to it. We should talk to them about folder structure changes, what rules and naming conventions have been set up, the idea behind it, and why it should be followed. If we do not train the end users in the new folder structure, they will not honor it and the old structure will get mixed up with the new structure via document migration, and this is not something that we want. We planned and implemented the new structure for today's requirements and future requirements, and we do not want it broken before we even start using the system.

The end users will typically work with the staging area over some time. It is good if they get a couple of weeks for this. It will take them time to think about what documents they want to migrate and whether any re-organization is needed. Some documents might also need to be renamed.
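Copying the structure out of Alfresco via CIFS is an ordinary network mount. A minimal sketch for a Linux client; the server name, share name, and credentials are placeholders that depend entirely on your CIFS configuration:

# mount the Alfresco CIFS share (host and share names are assumptions)
sudo mount -t cifs //alfrescoserver/Alfresco /mnt/alfresco -o username=admin

The staging folder structure can then be copied out of /mnt/alfresco with ordinary file manager or cp commands.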
Preserving Modified Date on imported documents

We know that Best Money wants all the modified dates on their files to be preserved during the import, as they have a review process that depends on them. This means that we have to use an import method that can preserve the Modified Date of the network drive files when they are merged into the Alfresco repository. The CIFS interface cannot be used for this, as it sets the Modified Date to the current date. There are a couple of methods that can be used to import content into the repository and preserve the Modified Date:

- Create an ACP file via an external tool and then import it
- Custom code the import with the Foundation API and turn off the Audit Aspect before the import
- Use an import tool that also has the possibility to turn off the Audit Aspect

At the time of writing (when I am using Alfresco 3.3.3 Enterprise and Alfresco Community 3.4a) there is no easy way to import files and preserve the Modified Date. When a file is added via Alfresco Explorer, Alfresco Share, FTP, CIFS, the Foundation API, the REST API, and so on, the Created Date and Modified Date are set to "now", so we lose all the Modified Date data that was set on the files on the network drive.

The Created Date, Creator, Modified Date, Modifier, and Access Date are all so-called Audit properties that are automatically managed by Alfresco if a node has the cm:auditable aspect applied. If we try to set these properties during an import via one of the APIs, it will not succeed. Most people want to import files via CIFS or via an external import tool. Alfresco is working towards supporting preserving dates when using both these methods for import. Currently, there is a solution to add files via the Foundation API and preserve the dates, which can be used by custom tools. The Alfresco product itself also needs this functionality in, for example, the Transfer Service Receiver, so the dates can be preserved when it receives files.

The new solution that enables the use of the Foundation API to set Auditable properties manually has been implemented in version 3.3.2 Enterprise and 3.4a Community. To be able to set audit properties, do the following:

Inject the policy behavior filter in the class that should do the property update:

<property name="behaviourFilter" ref="policyBehaviourFilter"/>

Then in the class, turn off the audit aspect before the update. It has to be done inside a new transaction, as in the following example:

RetryingTransactionCallback<Object> txnWork = new RetryingTransactionCallback<Object>() {
    public Object execute() throws Exception {
        behaviourFilter.disableBehaviour(ContentModel.ASPECT_AUDITABLE);

Then in the same transaction update the Created or Modified Date:

        nodeService.setProperty(nodeRef, ContentModel.PROP_MODIFIED, someDate);
        . . .
    }
};

With JDK 6, the Modified Date is the only file data that we can access, so no other file metadata is available via the CIFS interface. If we use JDK 7, there is a new NIO 2 interface that gives access to more metadata. So, if we are implementing an import tool that creates an ACP file, we could use JDK 7 and preserve the Created Date, Modified Date, and potentially other metadata as well.

Post-migration processing scripts

When the document migration has been completed, we might want to do further processing of the documents, such as setting extra metadata. This is specifically needed when documents are imported into Alfresco via the CIFS interface, which does not allow any custom metadata to be set during the import.
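A minimal sketch of how such CIFS-imported documents can be picked up for post-processing, using the same kind of Lucene query as the full scripts shown later in this article (the Meetings path is the Best Money example; adjust it to the folder you imported into):

// Collect imported meeting documents for metadata post-processing
var store = "workspace://SpacesStore";
var query = "+PATH:\"/app:company_home/cm:Meetings//*\" +TYPE:\"cm:content\"";
var importedDocs = search.luceneSearch(store, query);
logger.log("Documents queued for post-processing: " + importedDocs.length);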
There might also be situations, such as in the case of Best Money, where a lot of the imported documents have older filenames (that is, following an older naming convention) with important metadata that should be extracted and applied to the new document nodes. For post-migration processing, JavaScript is a convenient tool to use. We can easily define Lucene queries for the nodes we want to process, as the rules have applied domain document types such as Meeting to the imported documents, and we can use regular expressions to match and extract the metadata we want to apply to the nodes.

Search restrictions when running post-migration scripts

What we have to think about when running these post-migration scripts is that the repository now contains a lot of content, so each query we run might very well return many more than 1,000 rows. And 1,000 rows is the default maximum that a search will return. To change this to allow for 5,000 rows to be returned, we have to make some changes to the permission check configuration (Alfresco checks the permissions for each node that is being accessed, so the user running the query does not get back content that he or she should not have access to). Open the alfresco-global.properties file located in the alfresco/tomcat/shared/classes directory and add the following properties:

# The maximum time spent pruning results (was 10000)
system.acl.maxPermissionCheckTimeMillis=100000
# The maximum number of results to perform permission checks against (was 1000)
system.acl.maxPermissionChecks=5000

Unwanted Modified Date updates when running scripts

So we have turned off the audit feature during document migration, or made some custom code changes to Alfresco, to get the documents' Modified Dates preserved during import. Then we have turned on auditing again so the system behaves in the way the users expect. The last thing we want now is for all those preserved modified dates to be set to the current date when we update metadata. And this is what will happen if we do not run the post-migration scripts with the audit feature turned off. This is important to think about, unless you want to start all over again with the document migration.

Versioning problems when running post-migration scripts

Another thing that can cause problems is when we have versioning turned on for documents that we are updating in the post-migration scripts. We may see the following error:

org.alfresco.service.cmr.version.VersionServiceException: 07120018 The current implementation of the version service does not support the creation of branches.

By default, new versions will be created even when we just update properties/metadata. This can cause errors such as the preceding one, and we might not even be able to check in and check out the document. To prevent this error from popping up, and to turn off versioning during property updates once and for all, we can set the following property at the same time as we set the other domain metadata in the scripts:

legacyContentFile.properties["cm:autoVersionOnUpdateProps"] = false;

Setting this property to false effectively turns off versioning during any property/metadata update for the document. Another thing that can be a problem is if folders have been set up as versionable by mistake. The most likely reason for this is that we probably forgot to set up the Versioning Rule to only apply to cm:content (and not to "All Items").
Another thing that can be a problem is if folders have been set up as versionable by mistake. The most likely reason for this is that we probably forgot to set up the versioning rule to only apply to cm:content (and not to "All Items").

Folders in the workspace://SpacesStore store do not support versioning. The WCM system comes with an AVM store that supports advanced folder versioning and change sets. Note that the WCM system can also store its data in the Workspace store.

So we need to update the versioning rule to apply only to content, and remove the versionable aspect from all folders that have it applied, before we can update any content in these folders. Here is a script that removes the cm:versionable aspect from any folder having it applied:

var store = "workspace://SpacesStore";
var query = "PATH:\"/app:company_home//*\" AND TYPE:\"cm:folder\" AND ASPECT:\"cm:versionable\"";
var versionableFolders = search.luceneSearch(store, query);

for each (versionableFolder in versionableFolders) {
   versionableFolder.removeAspect("cm:versionable");
   logger.log("Removed versionable aspect from folder: " + versionableFolder.name);
}
logger.log("Removed versionable aspect from " + versionableFolders.length + " folders");

Post-migration script to extract legacy meeting metadata

Best Money has a lot of documents that they are migrating to the Alfresco repository. Many of the documents have filenames following a certain naming convention, and this is the case for the imported meeting documents. The naming convention for the old imported documents is not exactly the same as the new meeting naming convention, so we have to write the regular expression a little bit differently. An example of a filename with the new naming convention is 10En-FM.02_3_annex1.doc, and the same filename with the old naming convention is 10Eng-FM.02_3_annex1.doc. The difference is that the old naming convention does not specify a two-character code for the language, but instead a code from a list that looks like this: Arabic, Chinese, Eng|eng, F|Fr, G|Ger, Indonesian, Jpn, Port, Rus|Russian, Sp, Sw, Tagalog, Turkish.
What we are interested in extracting is the language code and the department code, and the following script will do that with a regular expression:

// Regular expression definition
var re = new RegExp("^\\d{2}(Arabic|Chinese|Eng|eng|F|Fr|G|Ger|" +
   "Indonesian|Ital|Jpn|Port|Rus|Russian|Sp|Sw|Tagalog|Turkish)-" +
   "(A|HR|FM|FS|FU|IT|M|L).*");

var store = "workspace://SpacesStore";
var query = "+PATH:\"/app:company_home/cm:Meetings//*\" +TYPE:\"cm:content\"";
var legacyContentFiles = search.luceneSearch(store, query);

for each (legacyContentFile in legacyContentFiles) {
   if (re.test(legacyContentFile.name) == true) {
      var language = getLanguageCode(RegExp.$1);
      var department = RegExp.$2;
      logger.log("Extracted and updated metadata (language=" + language +
         ")(department=" + department + ") for file: " + legacyContentFile.name);
      if (legacyContentFile.hasAspect("bmc:document_data")) {
         // Set some metadata extracted from the filename
         legacyContentFile.properties["bmc:language"] = language;
         legacyContentFile.properties["bmc:department"] = department;
         // Make sure versioning is not enabled for property updates
         legacyContentFile.properties["cm:autoVersionOnUpdateProps"] = false;
         legacyContentFile.save();
      } else {
         logger.log("Aspect bmc:document_data is not set for document: " +
            legacyContentFile.name);
      }
   } else {
      logger.log("Did NOT extract metadata from file: " + legacyContentFile.name);
   }
}

/**
 * Convert from a legacy language code to the new two-character language code.
 *
 * @param parsedLanguage legacy language code
 */
function getLanguageCode(parsedLanguage) {
   if (parsedLanguage == "Arabic") {
      return "Ar";
   } else if (parsedLanguage == "Chinese") {
      return "Ch";
   } else if (parsedLanguage == "Eng" || parsedLanguage == "eng") {
      return "En";
   } else if (parsedLanguage == "F" || parsedLanguage == "Fr") {
      return "Fr";
   } else if (parsedLanguage == "G" || parsedLanguage == "Ger") {
      return "Ge";
   } else if (parsedLanguage == "Indonesian") {
      return "In";
   } else if (parsedLanguage == "Ital") {
      return "";
   } else if (parsedLanguage == "Jpn") {
      return "Jp";
   } else if (parsedLanguage == "Port") {
      return "Po";
   } else if (parsedLanguage == "Rus" || parsedLanguage == "Russian") {
      return "Ru";
   } else if (parsedLanguage == "Sp") {
      return "Sp";
   } else if (parsedLanguage == "Sw") {
      return "Sw";
   } else if (parsedLanguage == "Tagalog") {
      return "Ta";
   } else if (parsedLanguage == "Turkish") {
      return "Tu";
   } else {
      logger.log("Invalid parsed language code: " + parsedLanguage);
      return "";
   }
}

This script can be run from any folder; it searches for all documents under the /Company Home/Meetings folder and its subfolders. All the documents returned by the search are looped through and matched against the regular expression. The regular expression defines two groups: one for the language code and one for the department. So, after a document has been matched against the regular expression, it is possible to back-reference the values that were matched in the groups by using RegExp.$1 and RegExp.$2. When the language code and department code properties are set, we also set the cm:autoVersionOnUpdateProps property, so that we do not get any problems with versioning during the update.

IBM FileNet P8 Content Manager: End User Tools and Tasks

Packt
15 Feb 2011
10 min read
Getting Started with IBM FileNet P8 Content Manager

Install, customize, and administer the powerful FileNet Enterprise Content Management platform:

- Quickly get up to speed on all significant features and the major components of IBM FileNet P8 Content Manager
- Provides technical details that are valuable both for beginners and experienced Content Management professionals alike, without repeating product reference documentation
- Gives a big picture description of Enterprise Content Management and related IT areas to set the context for Content Manager
- Written by an IBM employee, Bill Carpenter, who has extensive experience in Content Manager product development, this book gives practical tips and notes with a step-by-step approach to design real Enterprise Content Management solutions to solve your business needs

Parts of some of these topics will cover things that are features of the XT application rather than general features of CM and the P8 platform. We'll point those out so there is no confusion.

What is Workplace XT?

IBM provides complete, comprehensive APIs for writing applications to work with the CM product and the P8 platform. They also provide several pre-built, ready-to-use environments for working with CM. These range from connectors and other integrations, to IBM and third-party applications, to standalone applications provided with CM. Business needs will dictate which of these will be used. It is common for a given enterprise to use a mix of custom coding, product integrations, and standalone CM applications. Even in cases where the standalone CM applications are not widely deployed throughout the enterprise, they can still be used for ad hoc exploration or troubleshooting by administrators or power users.

XT is a complete, standalone application included with CM. It's a good application for human-centered document management, where users in various roles actively participate in the creation and management of individual items. XT exposes most CM features, including the marriage of content management and process management (workflow). XT is a thin client web application built with modern user interface technologies, so it has something of a Web 2.0 look and feel.

To run XT, open its start page with your web browser. The URL consists of the server name where XT is installed, the appropriate port number, and the default context of WorkplaceXT. In our installation, that's http://wjc-rhel.example.net:9080/WorkplaceXT. We don't show it here, but for cases where XT is in wider use than our all-in-one development system, it's common to configure things so that it shows up on port 80, the default HTTP port. This can be done by reconfiguring the application server to use that port directly or by interposing a web server (for example, IBM HTTP Server, IHS) as a relay between the browser clients and the application server. It's also common to configure things such that at least the login page is protected by TLS/SSL. Details for both of these configuration items are covered in depth in the product documentation (they vary by application server type).

For some of the examples in this article, we'll log on as the high-privileged user poweruser, and, for others, we'll log on as the low-privileged user unpriv. You can create them now or substitute any pair of non-administrator accounts from your own directory.

Browsing folders and documents

Let's have a look at XT's opening screen. Log onto XT as user poweruser.
With the folder icon selected from the top-left group of four icons, as in the figure below, XT shows a tree view that allows browsing through folders for content. Of course, we don't actually have any content in the Object Store yet, so all we see when we expand the Object Store One node are pseudo-folders (that is, things XT puts into the tree but which are not really folders in the Object Store). Let's add some content right now. For now, we'll concentrate on the user view of things.

Adding folders

In the icon bar are two icons with small, green "+" signs on them (you can see them in the screenshot above). The left icon, which looks like a piece of paper, is for adding documents to the currently expanded folder. The icon to the right of that, which looks like an office supply folder, is for adding a subfolder to the currently expanded folder.

Select Object Store One in the tree view, and click the icon for adding a folder. The first panel of a pop-up wizard appears, as shown above, prompting you for a folder name. We have chosen the name literature to continue the example that we started in Administrative Tools and Tasks. Click the Add button, and the folder will be created and will appear in the tree view. Follow the same procedure to add a subfolder called shakespeare. That is, create a folder whose path is /literature/shakespeare.

You can modify the security of most objects by right-clicking and selecting More Information | Security. A pop-up panel shows the object's Access Control List (ACL). For now, we just want to allow other users to add items to the shakespeare folder (we'll need that for the illustration of entry templates when we get to that section below). Open that folder's security panel, click the link for #AUTHENTICATEDUSERS, and check the File In Folder box in the Allow column (highlighted in the following screenshot).
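If you would rather script this setup than click through XT, the following is a minimal sketch of the same folder creation using the FileNet Content Engine Java API. The endpoint URI and the password are assumptions for this example; only the object store name and the user come from our installation:

import javax.security.auth.Subject;
import com.filenet.api.constants.RefreshMode;
import com.filenet.api.core.Connection;
import com.filenet.api.core.Domain;
import com.filenet.api.core.Factory;
import com.filenet.api.core.Folder;
import com.filenet.api.core.ObjectStore;
import com.filenet.api.util.UserContext;

public class CreateShakespeareFolders {
   public static void main(String[] args) {
      // The web service endpoint URI and password are assumptions for this sketch
      Connection conn = Factory.Connection.getConnection(
            "http://wjc-rhel.example.net:9080/wsi/FNCEWS40MTOM");
      Subject subject = UserContext.createSubject(conn, "poweruser", "password", null);
      UserContext.get().pushSubject(subject);
      try {
         Domain domain = Factory.Domain.fetchInstance(conn, null, null);
         ObjectStore os = Factory.ObjectStore.fetchInstance(domain, "Object Store One", null);
         // Create /literature under the root folder
         Folder literature = Factory.Folder.createInstance(os, null);
         literature.set_FolderName("literature");
         literature.set_Parent(Factory.Folder.fetchInstance(os, "/", null));
         literature.save(RefreshMode.REFRESH);
         // Create /literature/shakespeare
         Folder shakespeare = Factory.Folder.createInstance(os, null);
         shakespeare.set_FolderName("shakespeare");
         shakespeare.set_Parent(literature);
         shakespeare.save(RefreshMode.REFRESH);
      } finally {
         UserContext.get().popSubject();
      }
   }
}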
Adding documents

Now let's add some actual documents to our repository. We'll add a few of Shakespeare's famous works as sample documents. There are many sources for electronic copies of Shakespeare's works readily available on the Internet. One of our favorites for exercises like this is at the Massachusetts Institute of Technology: http://shakespeare.mit.edu. It's handy because it's really just the text, without a lot of notes, criticisms, and so on. The first thing you see is a list of all the works categorized by type of work, and you're only a click or two away from the full HTML text of each work. It doesn't hurt that they explicitly state that they have placed the HTML versions in the public domain. We'll use the full versions in a single HTML page for our sample documents.

In some convenient place on your desktop machine, download a few of the full text files. We chose As You Like It (asyoulikeit_full.html), Henry V (henryv_full.html), Othello (othello_full.html), and Venus and Adonis (VenusAndAdonis.html).

Select the /literature/shakespeare folder in the tree view, and click the icon for adding a document. The document add wizard pops up, as shown next. Browse to the location of the first document file, asyoulikeit_full.html, and click the Next button. Don't click Add Now, or you won't get the correct document class for our example. Initially, the class Document is indicated. Click on Class and select Work of Literature. The list of properties automatically adjusts to reflect the custom properties defined for our custom class. Supply the values indicated (note in particular that you have to adjust the Document Title property because it defaults to the file name). XT uses the usual convention of marking required properties with an asterisk. Click Add. Repeat the above steps for the other three documents. You'll now have a short list in the shakespeare folder.

XT also provides a "landing zone" for the drag-and-drop of documents. It's located in the upper right-hand corner of the browser window, as shown next. This can save you the trouble of browsing for documents in your filesystem. Even though it can accept multiple documents in a single drag-and-drop, it prompts only for a single set of property values, which are applied to all of the documents.
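The equivalent add-document operation through the Content Engine Java API might look like the sketch below. It assumes the connection setup from the previous sketch, that the Work of Literature class has the symbolic name WorkOfLiterature, and that the folder passed in is the /literature/shakespeare folder object; those identifiers are assumptions for illustration, not values confirmed by the product:

import java.io.FileInputStream;
import com.filenet.api.collection.ContentElementList;
import com.filenet.api.constants.AutoClassify;
import com.filenet.api.constants.AutoUniqueName;
import com.filenet.api.constants.CheckinType;
import com.filenet.api.constants.DefineSecurityParentage;
import com.filenet.api.constants.RefreshMode;
import com.filenet.api.core.ContentTransfer;
import com.filenet.api.core.Document;
import com.filenet.api.core.Factory;
import com.filenet.api.core.Folder;
import com.filenet.api.core.ObjectStore;
import com.filenet.api.core.ReferentialContainmentRelationship;

public class AddWorkOfLiterature {
   public static void addSample(ObjectStore os, Folder shakespeare) throws Exception {
      // Create an instance of the custom class rather than plain Document
      Document doc = Factory.Document.createInstance(os, "WorkOfLiterature");
      doc.getProperties().putValue("DocumentTitle", "As You Like It");
      // Attach the downloaded HTML file as the document's content element
      ContentTransfer ct = Factory.ContentTransfer.createInstance();
      ct.setCaptureSource(new FileInputStream("asyoulikeit_full.html"));
      ct.set_RetrievalName("asyoulikeit_full.html");
      ct.set_ContentType("text/html");
      ContentElementList elements = Factory.ContentElement.createList();
      elements.add(ct);
      doc.set_ContentElements(elements);
      doc.checkin(AutoClassify.DO_NOT_AUTO_CLASSIFY, CheckinType.MAJOR_VERSION);
      doc.save(RefreshMode.NO_REFRESH);
      // File the new document into /literature/shakespeare, as XT does on add
      ReferentialContainmentRelationship rel = shakespeare.file(doc,
            AutoUniqueName.AUTO_UNIQUE, "As You Like It",
            DefineSecurityParentage.DO_NOT_DEFINE_SECURITY_PARENTAGE);
      rel.save(RefreshMode.NO_REFRESH);
   }
}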
Viewing documents

Clicking on a document link in XT will lead to the download of the content and the launching of a suitable application. For most documents, the web browser is used to find and launch an application based on the document content type, although XT does have some configurability in its site preferences for customizing that behavior. The behavior you can normally expect is the same as if you clicked a link for a document on any typical website.

For graphical image content (JPEG, PNG, and similar formats), XT launches the Image Viewer applet. The Image Viewer applet is especially handy for dealing with Tagged Image File Format (TIFF) graphics, because most browsers do not handle TIFF natively. It is common for fax and scanning applications to generate TIFF images of pages. However, even for common graphics formats that can be rendered by the browser, the Image Viewer applet has more functionality. The most interesting extra features are for adding textual or graphical annotations to the image. Rather than directly manipulating the original image, the annotations are created in an overlay layer and saved as Annotation objects in the repository. For example, in the image below, displayed in the Image Viewer applet, the stamp tool has been used to mark it as a DRAFT. That annotation can easily be repositioned or even removed without affecting the original image.

The included Image Viewer applet is licensed only for use within the FileNet components where it's already integrated. It is an OEM version of ViewONE from Daeja Image Systems. The ViewONE Pro application, which has additional functionality, is available for license directly from Daeja and can be integrated into FileNet applications as a supported configuration. However, in such cases, support for the viewer itself comes directly from Daeja.

Entry templates

Although each step of document and folder creation is individually straightforward, taken together they can become bewildering to non-technical users, especially if coupled with naming, security, and other conventions. Even when the process is completely understood, there are several details that are purely clerical in nature but that might still suffer from mistyping and so on. From these motivations comes an XT feature called Entry Templates. Someone, usually an administrator, creates an entry template as an aid for other users who are creating folders or documents. A great many details can be specified in advance, but the user can still be given choices at appropriate points.

To create an entry template, navigate to Tools | Advanced Tools | Entry Templates | Add. A wizard is launched from which you can define a Document Entry Template or a Folder Entry Template. We won't go through all of the steps here, since the user interface is easy to understand. Both types of entry templates are Document subclasses, and XT files the entry templates you create into folders. When you double-click on an entry template, XT presents a user interface that adheres to the entry template's design. For example, in the following screenshot, which uses an entry template called Shakespearean Document, the document class and target folder are already selected and cannot be changed by the user. Likewise, the author last and full names are pre-populated. Other properties, which genuinely need user input, can be edited as usual.


Alfresco 3 Business Solutions: Types of E-mail Integration

Packt
11 Feb 2011
9 min read
Alfresco 3 Business Solutions

It is becoming more and more common that an ECM solution should include the possibility of storing e-mails in the repository, so that they can be managed and searched in the same way as all other content. When we talk about managing e-mails in the content management system, it is important to know exactly what we mean by that. Today, most companies and organizations want to use Alfresco for e-mail archiving, which is not something that is easily supported out of the box.

E-mail integration solutions

There are a number of different ways that an e-mail system can be integrated with the Alfresco CMS. We will look at three of these and present the advantages and disadvantages of each one. The three different e-mail integration solutions are:

- E-mail client talking directly to Alfresco via the IMAP protocol
- E-mail client talking to Alfresco via a custom-built plugin and Web Scripts
- E-mail server talking to Alfresco via a custom module and Web Scripts

E-mail client talking directly to Alfresco via the IMAP protocol

This is the solution that is available out of the box with Alfresco. From version 3.2 onwards, Alfresco supports the IMAP protocol, which is one way an e-mail client can talk to e-mail servers (the other way is POP). So, with this solution, Alfresco can behave like an IMAP e-mail server. The following image illustrates how this solution works:

The e-mail clients typically receive an e-mail in their Inbox, and the user can then drag-and-drop that e-mail into an Alfresco folder via the IMAP channel. Any attachment can be extracted and handled separately from the e-mail in the Alfresco repository. This is a manual process that requires the end user to manage which e-mails he or she wants stored in Alfresco. Nothing happens automatically, and no e-mails are stored in Alfresco unless a user manually drags and drops them there.

To achieve automatic archiving of e-mails, a user could set up an e-mail rule in their e-mail client that automatically files some or all e-mails into an Alfresco folder. However, we would still have to manually set up this rule on all users' e-mail clients, so we could not say that this would be an archiving solution that is transparent to the user, as it does not automatically force all e-mails to be saved for auditing purposes. Furthermore, the e-mail client has to be running in order for the e-mail rule to execute. This solution is best thought of as an e-mail management solution, where users collaborate and share information in e-mails.

The advantages of this solution are:

- No client installation: In most e-mail clients, we can set up an extra IMAP account connecting to Alfresco without the need to install any extra software on the client workstation. This includes Outlook, Lotus Notes, Mozilla Thunderbird, and GroupWise.
- Users don't have to change working style: This is a big thing; users do not want to start learning a completely new way of managing e-mails, they just want to work in the same way they always have. The Alfresco account just shows up as another e-mail inbox in the e-mail client. Users can drag-and-drop e-mails between mailboxes just as they normally do. They do not have to learn any extra functionality.
- Supported out of the box: No need to install any extra Alfresco modules; just configure some properties and the solution is ready to go (see the configuration sketch after this section).

The disadvantages of this solution are:

- No document search: Users cannot search for documents in Alfresco and then attach them to an e-mail they want to send.
- Cannot set custom metadata: Because this solution does not use any custom plugin on the e-mail client side, there is no possibility of setting custom metadata for an e-mail, such as a customer ID, before it is stored in Alfresco. However, you can often solve this problem by creating business rules on the server side that apply custom metadata based on which folder an e-mail is dropped into.
- No archiving solution: This is an e-mail collaboration and e-mail sharing solution; it does not force e-mails to be stored in the repository for compliance and regulatory reasons.

Because this solution doesn't require any client installation, or updates to the Alfresco server, it will probably be the most popular e-mail management solution. It can also easily be extended with folder rules to create sophisticated e-mail filing solutions.
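As a reference for the "configure some properties" step above, a minimal alfresco-global.properties setup for the embedded IMAP server might look like the following. The property names are those introduced with IMAP support in Alfresco 3.2; the host and port values are example assumptions, not values from a particular installation:

# Enable the embedded IMAP server
imap.server.enabled=true
# Host and port that the e-mail clients' extra IMAP account connects to
imap.server.host=alfresco.example.com
imap.server.port=143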
E-mail client talking to Alfresco through a custom-built plugin and Web Scripts

There are one or two products out there that have taken a different approach to integrating e-mail clients with Alfresco. One of these products is the Anovio Email Management solution for Outlook 2007 (http://www.anovio.de/aem). This product enables you to also work with documents from the e-mail client, and to search for documents via the e-mail client. To do this, they had to implement a plugin for the e-mail client (in practice, almost exclusively Outlook) and use Web Scripts to talk to Alfresco. The IMAP channel approach is not used, as it can only handle e-mails. The following picture gives us an overview of this solution:

This solution is also an e-mail management solution, as it is up to the end user to actually save the e-mail into the repository. There is no automatic archiving going on.

The advantages of this solution are:

- Document search: You can do a full text search for documents in the repository via the e-mail client. A document can then be attached to an e-mail that is about to be sent.
- Users don't have to change working style: Users can drag-and-drop e-mails into the Alfresco repository in the way they are used to. They do not have to use, and learn, the extra document management functionality in the Outlook plugin if they do not want to.
- Stores attachments directly: Attachments can be stored directly into the repository without storing the e-mail.

The disadvantages of this solution are:

- Client installation: The plugin has to be installed on every user's PC, which becomes a maintenance burden with a larger user base.
- Does not work for all e-mail clients: It works only with certain e-mail clients, such as Outlook 2007 in the case of the Anovio product.
- Users have to learn new functionality: If users want to handle documents from the e-mail client, then they have to learn new functionality. Also, there are usually new menus and features users have to learn, even for the standard e-mail management functionality.
- No archiving solution: This is an e-mail collaboration and sharing solution; it does not force e-mails to be stored in the repository for compliance and regulatory reasons.
- Not supported out of the box: It is not part of the Alfresco package, so it will need to be purchased separately.

This kind of solution can be very good for users who frequently need to attach documents from the Alfresco repository to e-mails that they are sending. However, if there is a larger user base, the maintenance burden could be quite substantial, as you would need to install the plugin on every user's PC.

E-mail server talking to Alfresco through a custom module and Web Scripts

This is the classical e-mail archiving solution, where the e-mail system integration is done on the e-mail server side. This solution is totally transparent to the end users and usually complies with security regulations. What this means is that all e-mails are archived automatically, without the user having to do anything, which guarantees that every incoming and outgoing e-mail has been filed and can be audited later. Unfortunately, at the time of writing, there are no such solutions available for Alfresco. But for reference purposes, this is how such a solution would typically look:

This solution would require us to build an extension module for the e-mail server that captures all inbound and outbound e-mails and stores them in Alfresco without the users having to do anything. So all e-mails are captured and stored for archiving and auditing purposes. Users can then, for example, access the e-mails through the standard IMAP channel, if they are stored as standard MIME messages according to RFC 822 (http://tools.ietf.org/html/rfc822).

The advantages of this solution are:

- Supports archiving and auditing: This is the only solution that would be compliant with security regulations, as users are not involved and cannot decide whether an e-mail should be stored or not.
- Users don't have to change working style: Users can use their standard e-mail client to view archived e-mails.

The disadvantages of this solution are:

- Requires server installation: We need to have access to the e-mail server and be able to install the integration module. This might be challenging in many situations; you might not be allowed to install anything on the e-mail server, or the e-mail server might be hosted externally, so we would not have access to it.
- Attachments are not extracted: The attachments would probably not be extracted and sorted into their own subfolder. This is because the purpose of an e-mail archiving solution is assumed to be storing the complete original e-mail for auditing reasons, not e-mail management use.
- Not a collaboration and sharing solution: E-mails are stored in an archiving structure and not in a project or case structure. Users would find it more difficult to collaborate around e-mail content.
- Duplicate e-mails exist: There would be a lot of duplicate e-mails, because security regulations such as Sarbanes-Oxley require all e-mails to be stored for auditing purposes, even duplicates.
- Not supported out of the box: It is not part of the Alfresco package, so it would need to be purchased separately, if it were available.

This solution is mentioned here so that we can easily tell the difference between an e-mail management solution and an e-mail archiving solution when we discuss this with potential clients.
There has been a lot of misunderstanding about which e-mail integration solutions are currently available for Alfresco; they are sometimes referred to as e-mail archiving solutions, which, as explained above, they are not.