How-To Tutorials - CMS & E-Commerce

Graphic Design - Working with Clip Art and Making Your Own

Packt
09 Nov 2012
9 min read
(For more resources related to this topic, see here.)

In this article, we will cover making symbols from the Character Palette into clip art, and where to find clip art for iWork.

Clip art is the collective name for predrawn images, pictures, and symbols that can be quickly added to documents. In standalone products, a separate clip art folder is often added to the package. iWork doesn't have one — this has been the subject of numerous complaints on the Internet forums. However, even though there is no clip art folder as such in iWork, there are hundreds of clip-art images that come as part of our Mac computers. Unlike MS Office or Open Office, which are separate universes on your machine, iWork (even though we buy it separately) is an integral part of the Mac. It complements and works with applications that are already there, such as iLife (iPhoto), Mail, Preview, Address Book, Dictionaries, and Spotlight.

Getting ready

So, where is the clip art for iWork?

First, elements of the Pages templates can be used as clip art — just copy and paste them. Look at the wrought iron fence post from the Collector Newsletter template, where it is used as a column divider. Select and copy-paste it into your project, set the image placement to Floating, and move it in between the columns or text boxes. The Collector Newsletter template also has a paper clip, a price tag, and several images of slightly rumpled and yellowed sheets of paper that can be used as backgrounds. Images with little grey houses and house keys from the Real Estate Newsletter template are good to use with any project related to property. The index card image from the Back Page of the Green Grocery Newsletter template can be used for designing a cooking recipe, and the background image of a yellowing piece of paper from the Musical Concert poster would make a good background for an article on history.

Clip art in many templates is editable and easy to resize or modify. Some of the images are locked or grouped; under the Arrange menu, select the Unlock and Ungroup options to use those images as separate graphic elements. Many of the clip-art images are easy to recreate with iWork tools. Bear in mind, however, that some of the images have low resolution and should only be used at small dimensions.

You will find various clip-art images in the following locations:

- Image Bullets: A dozen or so attractive clip-art images live under the Bullets drop-down menu. Open the Text Inspector, click on the List tab, and choose Image Bullets from the Bullets & Numbering drop-down menu. There, you will find checkboxes and other images. Silver and gold pearls look very attractive, but any of your own original images can also be made into bullets: in Bullets, choose Custom Image | Choose and import your own image. Note that images with shadows may distort the surrounding text; use them with care or avoid applying shadows.
- Macintosh HD | Library | Desktop Pictures: Double-click on the hard disk icon on your desktop and go to Library | Desktop Pictures. There are several dozen images, including the dew drop and the ladybug. These are large files, good enough to use as background images. They are not, strictly speaking, clip art, but are worth keeping in mind.
- Home | Pictures | iChat Icons (or HD | Library | Application Support | Apple | iChat Icons): The Home folder icon (a little house) is available in the side panel of any folder on your Mac. This is where documents associated with your account are stored on your computer. It has a Pictures folder with a dozen very small images sitting in the folder called iChat Icons. National flags are stored here as button-like images. The apple image can be found in the Fruit folder. The gem icons, such as the ruby heart, from this folder look attractive as bullets.
- HD | Library | User Pictures: You can find animals, flowers, nature, sports, and other clip-art images in this folder. These are small TIFF files that can be used as icons when a personal account is set up on a Mac, but of course they can also be used as clip art. The Sports folder has a selection of balls, but not a cricket ball, even though cricket may have the biggest following in the world (Britain, South Africa, Australia, India, Pakistan, Bangladesh, Sri Lanka, and many Caribbean countries). A free image of a cricket ball from Wikipedia/Wikimedia can easily be made into clip art.

There may be several Libraries on your Mac. The main Library is on your hard drive; don't move or rename any folders here. Duplicate images from this folder and use copies. Your personal library (one is created for each account on your machine) is in the Home folder. This may sound a bit confusing, but you don't have to wade through endless folders to find what you want; just use Spotlight to find relevant images on your computer, in the same way that you would use Google to search the Internet.

Character Palette has hundreds of very useful clip-art-like characters and symbols. You can find the Character Palette via Edit | Special Characters. Alternatively, open System Preferences | International | Input Menu and check the Character Palette and Show input menu in menu bar boxes; now you will be able to open the Character Palette from the screen-top menu. Character Palette can also be accessed through the Font Panel: open it with the Command + T keyboard shortcut, click on the action wheel at the bottom of the panel, and choose Characters... to open the Character Palette.

Images here range from the familiar Command symbol on the Mac keyboard, to zodiac symbols, chess pieces and card icons, mathematical and musical signs, and various daily life shapes, including icons of telephones, pens and pencils, scissors, airplanes, and so on. And there are Greek letters that can be used in scientific papers (for instance, the letter ∏). To import Character Palette symbols into an iWork document, just click-and-drag them into your project. The beauty of Character Palette characters is that they behave like letters: you can change the color and font size in the Format bar, and add shadows and other effects in the Graphics Inspector or via the Font Panel. To use Character Palette characters as clip art, we need to turn them into images in PDF, JPEG, or some other format.

How to do it...

Let's see how a character can be turned into a piece of clip art. This applies to both letters and symbols from the Character Palette.

1. Open Character Palette | Symbols | Miscellaneous Symbols. In this folder, we have a selection of scissors that can be used to show, with a dotted line, where to cut out coupons or forms from brochures, flyers, posters, and other marketing material.
2. Click on the scissors symbol with a snapped-off blade and drag it into an iWork document.
3. Select the symbol in the same way as you would select a letter, and enlarge it substantially. To enlarge, click on the Font Size drop-down menu in the Format bar and select a bigger size, or use the shortcut Command + plus sign (hit the plus key several times).
4. Next, turn the scissors into an image. Make a screenshot (Command + Shift + 4) or use the Print dialog to make a PDF or a JPEG.
5. You can crop the image in iPhoto or Preview before using it in iWork, or you can import it straight into your iWork project and remove the white background with the Alpha tool. If Alpha is not in your toolbar, you can find it under Format | Instant Alpha.
6. Move the scissors onto the dotted line of your coupon. Now, the blade that is snapped in half appears to be cutting through the dotted line. Remember that you can rotate the clip-art image to put the scissors on either the horizontal or the vertical sides of the coupon. Use other scissors symbols from the Character Palette if they are more suitable for your project.
7. Store the "scissors" clip art in iPhoto or another folder for future use if you are likely to need it again.

There's more...

There are other easily accessible sources of clip art.

MS Office clip art is compatible

If you have kept your old copy of MS Office, nothing is simpler than copy-pasting or dragging-and-dropping clip art from the Office folder right into your iWork project. When using clip art, it's worth remembering that some predrawn images quickly become dated. For example, if you put a clip-art image of an incandescent lamp in your marketing documents for electrical work, it may give the impression that you are not familiar with more modern and economical lighting technologies. Likewise, a clip-art image of an old-fashioned computer with a CRT display on your promotional literature for computer services can send the wrong message, because modern machines use flat-screen displays.

Wikipedia/Wikimedia

Look on Wikipedia for free generic images. Search for articles about tools, domestic appliances, furniture, houses, and various other objects. Most articles have downloadable images with no copyright restrictions on re-use. They can easily be made into clip art. The image of a hammer from Wikipedia, for example, can be used for any article about DIY (do-it-yourself) projects.

Create your own clip art

Above all, it is fun to create your own clip art in iWork. For example, take a few snapshots with your digital camera or cell phone, put them in one of iWork's shapes, and get an original piece of clip art. It could be a nice way to involve children in your project.


Importing videos and basic editing mechanics

Packt
01 Oct 2012
8 min read
Importing from a tapeless video camera

Chances are, if you've bought a video camera in the last few years, it doesn't record to tape; it records to some form of tapeless media. In most consumer and prosumer cameras, this is typically an SD card, but it could also be an internal drive, various other solid-state memory cards, or the thankfully short-lived trend of recordable mini DVDs. In the professional world, examples include Compact Flash, P2 cards (usually found in Panasonic models), SxS cards (many Sony and JVC models, Arri Alexa), or some other form of internal flash storage.

How to do it...

1. Plug your camera into your Mac's USB port, or if you're using a higher-end setup with a capture box, plug the box into your Mac's FireWire or Thunderbolt port. If your camera uses an SD card as its storage medium, you can also simply stick the SD card into your Mac's card reader or an external reader.
2. If you are plugging the camera in directly, turn it on and set it to the device's playback mode.
3. If FCPX is running, it should automatically launch the Import from Camera window. If it does not, click on the Import from Camera icon on the left of the toolbar.
4. You will see thumbnails of all of your camera's clips. You can easily scrub through them simply by passing your mouse over each one.
5. You can import clips one at a time by selecting a range and then clicking on Import Selected…, or you can simply highlight them all and click on Import All…. To select a range, move your mouse over a clip until you find the point where you want to start and hit I on your keyboard. Then scrub ahead until you reach where you want the clip to end and hit O.
6. Whether you chose to select one, a few, or all of your clips, once you click on the Import button you will arrive at the Import options screen. Choose what event you want your clips to live in, choose whether you want to transcode the clips, and select any analyses you want FCPX to perform on the clips as it imports them.
7. Click on Import. FCPX begins the import process. You can close the window and begin editing immediately!

How it works...

The reason you can edit so quickly, even if you're importing a massive amount of footage, is thanks to some clever programming on Apple's part. While it might take a few minutes or even longer to import all the media off of your camera or memory card, FCPX will access the media directly on the original storage device until it has finished its import process, and then switch over to the newly imported versions.

There's more...

Creating a camera archive

Creating a camera archive is the simplest and best way to make a backup of your raw footage. Tapeless cameras often store their media in strange-looking ways, with complex folder structures. In many cases, FCPX needs that exact folder structure in order to easily import the media. A camera archive essentially takes a snapshot or image of your camera's currently stored media and saves it to one simple file that you can access in FCPX over and over again. This, of course, also frees you to delete the contents of the memory card or media drive and reuse it for another shoot.

In the Camera Import window, make sure your camera is selected in the left column and click on the Create Archive button in the bottom-left corner. The resulting window will let you name the archive and pick a destination drive. Obviously, store your archive on an external drive if it's for backup purposes. If you were to keep it on the same drive as your FCPX system and the drive fails, you'd lose your backup as well!

The process creates a proprietary disk image with the original file structure of the memory card. FCPX needs the original file structure (not just the video files) in order to properly capture from the card. By default, it stores the archive in a folder called Final Cut Camera Archives on whatever drive you selected. Later, when you need to reimport from a camera archive, simply open the Camera Import window again, and if you don't see the archive you need under Camera Archives on the left, click on Open Archive… and find it in the resulting window.

To import all or not to import all

If you've got the time, there's nothing to stop you from looking at each and every clip one at a time in the Import from Camera window, selecting a range, and then importing that one clip. However, that's going to take you a while, as you'll have to deal with the settings window every time you click on the Import button. If you've got the storage space (and most of us do today), just import everything and worry about weeding out the trash later.

But what about XYZ format?

There are two web pages you should bookmark to keep up to date. The first is www.apple.com/finalcutpro/specs/, which lists most of the formats FCPX can work with. Expect this list to grow with future versions. The second is help.apple.com/finalcutpro/cameras/en/index.html, which lets you search camera models for compatibility with FCPX. Just because a format isn't listed on Apple's specs page doesn't mean it's impossible to work with. Many camera manufacturers release plugins that enhance a program's capabilities. One great example is Canon (www.canon.com), who released a plugin for FCPX allowing users to import MXF files from a wide variety of their cameras.

Importing MTS, M2TS, and M2T files

If you've ever browsed the file structure of a memory card pulled from an AVCHD camera, you'll have seen a somewhat complex system of files and folders and almost nothing resembling a normal video file. Deep inside, you're likely to find files with the extension .mts, .m2ts, or .m2t (on some HDV cameras). By themselves, these files are sitting ducks, unable to be read by most basic video playback software or imported directly by FCPX. But somehow, once you open up the Import from Camera window, FCPX is able to translate all that apparent gobbledygook from the memory card into movie files; FCPX needs that gobbledygook to import the footage. But what if someone has given you a hard drive full of nothing but these standalone files? You'll need to convert or rewrap (explained in the following section) the clips before heading into FCPX.

Getting ready

There are a number of programs out there that can tackle this task, but a highly recommended one is ClipWrap (http://www.divergentmedia.com/clipwrap). There is a trial, but you'll probably want to go ahead and buy the full version.

How to do it...

1. Open ClipWrap.
2. Drag-and-drop your video files (ending in .mts, .m2ts, or .m2t) into the main interface.
3. Set a destination for your new files under Movie Destination.
4. Click on the drop-down menu titled Output Format. You can choose to convert the files to a number of formats, including ProRes 422 (the same format that is created when you select the Create optimized media option in FCPX). A faster, space-saving option, however, is to leave the default setting, Rewrap (don't alter video samples).
5. Click on Convert.
6. When the process is done, you will have new video files that end in .mov and can be directly imported into FCPX via File | Import | Files.

How it works...

In the previous exercise, we chose not to transcode/convert the video files into another format. What we did was take the video and audio streams out of one container (.mts, .m2ts, or .m2t) and put them into another (QuickTime, seen as .mov). It may sound crazy at first, but we basically took the birthday present (the video and audio) out of an ugly gift box that FCPX won't even open and put it into a prettier one that FCPX likes.

There's more...

Other alternatives

ClipWrap is far from the only solution out there, but it is definitely one of the best. The appendix of this book covers the basics of Compressor, Apple's compression software, which can't convert raw AVCHD files in most cases but can convert just about any file that QuickTime can play. The software company iSkySoft (www.iskysoft.com) makes a large number of video conversion tools for a reasonable price. If you're looking for a fully featured video encoding software package, look no further than Telestream Episode (www.telestream.net) or Sorenson Squeeze (www.sorensonmedia.com). These two applications are expensive, but they can take just about any video file format out there and transcode it to almost anything else, with a wide variety of customizable settings.

Rewrapping or transcoding

As mentioned in the previous section, we could have chosen to transcode to ProRes 422 instead of rewrapping. This is a totally fine option; just know the differences: transcoding takes much longer and takes up much more file space, but on the plus side, ProRes is Final Cut Pro X's favorite format (it's native to FCPX, made by Apple for Apple), and you may save time in the actual editing process by working with a faster, more efficient codec once inside FCPX. If you choose to rewrap, you still have the option to transcode when you import into FCPX.


ExtGWT Rich Internet Application: Crafting UI Real Estate

Packt
14 Sep 2012
3 min read
(For more resources on ExtGwt, see here.)

Introduction

Layouts are a fundamental part of the GXT library. They provide the ability to create flexible and beautiful application UIs easily. However, with this power comes a level of complexity, and a solid understanding of layouts is the key to using the library effectively.

With GWT Panels, the panel itself is responsible for creating the panel's markup, inserting its children at the appropriate locations, and creating appropriate markup as changes are made. Unlike GWT Panels, LayoutContainer (a concrete GXT container with support for layouts) does not physically connect its child components to the container's DOM. (The Document Object Model is used to represent an HTML document in a tree-like structure in the browser's memory; we can dynamically change the content of the HTML page by manipulating the DOM.) Rather, it is the job of the layout both to build the internal structure of the container and to connect its child widgets.

In order for a GXT container's HTML to be rendered, the container's layout() method must execute. This is different from GWT panels, in which the HTML is rendered when the components are attached to the panel. There are several ways in which the layout can execute. For now, let's go with the simplest case, in which the layout executes when the container is attached. "Attached" is a GWT term that indicates that the widget is part of the browser's DOM. Attaching and detaching could be a subject on its own, so let's just assume it means when the widget is added to and removed from the page.

When we add a container to RootPanel (for example, RootPanel.get().add(container)), the container will be attached, and the container's layout will execute, generating the needed HTML markup. If we add another component to the now-rendered container (container.add(new Label("New Item"))), we will have to manually execute/refresh the container (container.layout()) for the additions (as well as removals) to take effect. This sort of lazy rendering is the default behavior of GXT as of 2.2.3, with GXT 3 planning to use the same approach as GWT itself.

Many GXT layouts can be used in conjunction with LayoutData: configuration objects assigned to each child widget within a container, which provide the layout object with additional information to be used when executing the layout.

Aside from a layout being executed when the container is attached, or when layout() is called manually on the container, there are two other ways in which a layout will be executed. First, after a container executes its layout, it checks whether any of its children are containers; when it finds a child container, it executes that container's layout. So, as long as there is a chain of containers, the execution of layouts will cascade to the child containers. This is a very important concept, as you can lay out a top-level container and the child containers will have a chance to adjust their layouts as well. Second, a container's layout will also execute when its size is adjusted. This is default behavior, and it can be disabled. This is another important concept, as it means that if a container's size is changed, the layout has a chance to update based on the container's new size.


Working with forms using Ext JS 4

Packt
31 Aug 2012
25 min read
Ext JS 4 is Sencha's latest JavaScript framework for developing cross-platform web applications. Built upon web standards, Ext JS provides a comprehensive library of user interface widgets and data manipulation classes to turbo-charge your application's development. In this article, written by Stuart Ashworth and Andrew Duncan, the authors of Ext JS 4 Web Application Development Cookbook, we will cover:

- Constructing a complex form layout
- Populating your form with data
- Submitting your form's data
- Validating form fields with VTypes
- Creating custom VTypes
- Uploading files to the server
- Handling exceptions and callbacks

This article introduces forms in Ext JS 4. We begin by creating a support ticket form in the first recipe. To get the most out of this article, you should be aware that this form is used by a number of recipes throughout the article. Instead of focusing on how to configure specific fields, we demonstrate more generic tasks for working with forms: populating forms, submitting forms, performing client-side validation, and handling callbacks/exceptions.

Constructing a complex form layout

In previous releases of Ext JS, complicated form layouts were quite difficult to achieve. This was due to the nature of the FormLayout, which was required to display labels and error messages correctly, and how it had to be combined with other nested layouts. Ext JS 4 takes a different approach and utilizes the Ext.form.Labelable mixin, which allows form fields to be decorated with labels and error messages without requiring a specific layout to be applied to the container. This means we can combine all of the layout types the framework has to offer without having to overnest components in order to satisfy the form fields' layout requirements.

We will describe how to create a complex form using multiple nested layouts and demonstrate how easy it is to get a form to look exactly as we want. Our example will take the structure of a Support Ticket Request form.

How to do it...

1. We start this recipe by creating a simple form panel that will contain all of the layout containers and their fields:

var formPanel = Ext.create('Ext.form.Panel', {
    title: 'Support Ticket Request',
    width: 650,
    height: 500,
    renderTo: Ext.getBody(),
    style: 'margin: 50px',
    items: []
});

2. Now, we will create our first set of fields — the FirstName and LastName fields. These will be wrapped in an Ext.container.Container component, which is given an hbox layout so our fields appear next to each other on one line:

var formPanel = Ext.create('Ext.form.Panel', {
    title: 'Support Ticket Request',
    width: 650,
    height: 500,
    renderTo: Ext.getBody(),
    style: 'margin: 50px',
    items: [{
        xtype: 'container',
        layout: 'hbox',
        items: [{
            xtype: 'textfield',
            fieldLabel: 'First Name',
            name: 'FirstName',
            labelAlign: 'top',
            cls: 'field-margin',
            flex: 1
        }, {
            xtype: 'textfield',
            fieldLabel: 'Last Name',
            name: 'LastName',
            labelAlign: 'top',
            cls: 'field-margin',
            flex: 1
        }]
    }]
});

3. We have added a CSS class (field-margin) to each field to provide some spacing between them. We can now add this style inside <style> tags in the head of our document:

<style type="text/css">
    .field-margin {
        margin: 10px;
    }
</style>

4. Next, we create a container with a column layout to position our e-mail address and telephone number fields.
We nest our telephone number fields in an Ext.form.FieldContainer class, which we will discuss later in the recipe:

items: [
    ...
    {
        xtype: 'container',
        layout: 'column',
        items: [{
            xtype: 'textfield',
            fieldLabel: 'Email Address',
            name: 'EmailAddress',
            labelAlign: 'top',
            cls: 'field-margin',
            columnWidth: 0.6
        }, {
            xtype: 'fieldcontainer',
            layout: 'hbox',
            fieldLabel: 'Tel. Number',
            labelAlign: 'top',
            cls: 'field-margin',
            columnWidth: 0.4,
            items: [{
                xtype: 'textfield',
                name: 'TelNumberCode',
                style: 'margin-right: 5px;',
                flex: 2
            }, {
                xtype: 'textfield',
                name: 'TelNumber',
                flex: 4
            }]
        }]
    }
    ...
]

5. The text area and checkbox group are created and laid out in a similar way to the previous sets, by using an hbox layout:

items: [
    ...
    {
        xtype: 'container',
        layout: 'hbox',
        items: [{
            xtype: 'textarea',
            fieldLabel: 'Request Details',
            name: 'RequestDetails',
            labelAlign: 'top',
            cls: 'field-margin',
            height: 250,
            flex: 2
        }, {
            xtype: 'checkboxgroup',
            name: 'RequestType',
            fieldLabel: 'Request Type',
            labelAlign: 'top',
            columns: 1,
            cls: 'field-margin',
            vertical: true,
            items: [{
                boxLabel: 'Type 1',
                name: 'type1',
                inputValue: '1'
            }, {
                boxLabel: 'Type 2',
                name: 'type2',
                inputValue: '2'
            }, {
                boxLabel: 'Type 3',
                name: 'type3',
                inputValue: '3'
            }, {
                boxLabel: 'Type 4',
                name: 'type4',
                inputValue: '4'
            }, {
                boxLabel: 'Type 5',
                name: 'type5',
                inputValue: '5'
            }, {
                boxLabel: 'Type 6',
                name: 'type6',
                inputValue: '6'
            }],
            flex: 1
        }]
    }
    ...
]

6. Finally, we add the last field, which is a file upload field, to allow users to provide attachments:

items: [
    ...
    {
        xtype: 'filefield',
        cls: 'field-margin',
        fieldLabel: 'Attachment',
        width: 300
    }
    ...
]

How it works...

All Ext JS form fields inherit from the base Ext.Component class and so can be included in all of the framework's layouts. For this reason, we can include form fields as children of containers with layouts (such as hbox and column layouts) and their position and size will be calculated accordingly.

Upgrade Tip: Ext JS 4 does not have a form layout, meaning a level of nesting can be removed and the form fields' labels will still be displayed correctly by just specifying the fieldLabel config.

The Ext.form.FieldContainer class used in step 4 is a special component that allows us to combine multiple fields into a single container, and it also implements the Ext.form.Labelable mixin. This allows the container itself to display its own label, applying to all of its child fields, while also giving us the opportunity to configure a layout for its child components.

Populating your form with data

After creating our beautifully crafted and user-friendly form, we will inevitably need to populate it with some data so users can edit it. Ext JS makes this easy, and this recipe will demonstrate four simple ways of achieving it. We will start by explaining how to populate the form on a field-by-field basis, then move on to ways of populating the entire form at once. We will also cover populating it from a simple object, a Model instance, and a remote server call.

Getting ready

We will be using the form created in this article's first recipe as our base for this section, and many of the subsequent recipes in this article, so please look back if you are not familiar with it. All the code we will write in this recipe should be placed under the definition of this form panel. You will also require a working web server for the There's more example, which loads data from an external file.

How to do it...
We'll demonstrate how to populate an entire form's fields in bulk, and also how to populate them individually.

Populating individual fields

1. We will start by grabbing a reference to the first name field using the items property's get method. The items property contains an instance of Ext.util.MixedCollection, which holds a reference to each of the container's child components. We use its get method to retrieve the component at the specified index:

var firstNameField = formPanel.items.get(0).items.get(0);

2. Next, we use the setValue method of the field to populate it:

firstNameField.setValue('Joe');

Populating the entire form

1. To populate the entire form, we must create a data object containing a value for each field. The property names of this object will be mapped to the corresponding form field by the field's name property. For example, the FirstName property of our requestData object will be mapped to a form field with a name property value of FirstName:

var requestData = {
    FirstName: 'Joe',
    LastName: 'Bloggs',
    EmailAddress: '[email protected]',
    TelNumberCode: '0777',
    TelNumber: '7777777',
    RequestDetails: 'This is some Request Detail body text',
    RequestType: {
        type1: true,
        type2: false,
        type3: false,
        type4: true,
        type5: true,
        type6: false
    }
};

2. We then call the setValues method of the form panel's Ext.form.Basic instance, accessed through the getForm method, passing it our requestData variable:

formPanel.getForm().setValues(requestData);

How it works...

Each field contains a method called setValue, which updates the field's value with the value that is passed in. We can see this in action in the first part of the How to do it section.

A form panel contains an internal instance of the Ext.form.Basic class (accessible through the getForm method), which provides all of the validation, submission, loading, and general field management that is required by a form. This class contains a setValues method, which can be used to populate all of the fields that are managed by the basic form class. This method works by simply iterating through all of the fields it contains and calling their respective setValue methods.

This method accepts either a simple data object, as in our example, whose properties are mapped to fields based on the field's name property, or an array of objects containing id and value properties, with the id mapping to the field's name property. The following code snippet demonstrates this usage:

formPanel.getForm().setValues([{id: 'FirstName', value: 'Joe'}]);
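Note that the positional lookup used earlier (formPanel.items.get(0).items.get(0)) breaks as soon as the layout is rearranged. As a more robust alternative (a minimal sketch, not part of the original recipe), Ext.form.Basic's findField method can look a field up by its name or id instead:

// Sketch: look the field up by its name rather than by its position
// in the container hierarchy.
var firstNameField = formPanel.getForm().findField('FirstName');
firstNameField.setValue('Joe');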
There's more...

Further to the two previously discussed methods, there are two others that we will demonstrate here.

Populating a form from a Model instance

Being able to populate a form directly from a Model instance is extremely useful and very simple to achieve. It allows us to easily translate our data structures into a form without having to manually map them to each field.

1. We initially define a Model and create an instance of it (using the data object we used earlier in the recipe):

Ext.define('Request', {
    extend: 'Ext.data.Model',
    fields: [
        'FirstName',
        'LastName',
        'EmailAddress',
        'TelNumberCode',
        'TelNumber',
        'RequestDetails',
        'RequestType'
    ]
});

var requestModel = Ext.create('Request', requestData);

2. Following this, we call the loadRecord method of the Ext.form.Basic class and supply the Model instance as its only parameter. This will populate the form, mapping each Model field to its corresponding form field based on the name:

formPanel.getForm().loadRecord(requestModel);

Populating a form directly from the server

It is also possible to load a form's data directly from the server through an AJAX call.

1. Firstly, we define a JSON file, containing our request data, which will be loaded by the form:

{
    "success": true,
    "data": {
        "FirstName": "Joe",
        "LastName": "Bloggs",
        "EmailAddress": "[email protected]",
        "TelNumberCode": "0777",
        "TelNumber": "7777777",
        "RequestDetails": "This is some Request Detail body text",
        "RequestType": {
            "type1": true,
            "type2": false,
            "type3": false,
            "type4": true,
            "type5": true,
            "type6": false
        }
    }
}

Notice the format of the data: we must provide a success property to indicate that the load was successful, and put our form data inside a data property.

2. Next, we use the basic form's load method and provide it with a configuration object containing a url property pointing to our JSON file:

formPanel.getForm().load({
    url: 'requestDetails.json'
});

This method automatically performs an AJAX request to the specified URL and populates the form's fields with the data that was retrieved. This is all that is required to successfully load the JSON data into the form. The basic form's load method accepts similar configuration options to a regular AJAX request.

Submitting your form's data

Having taken care of populating the form, it's now time to look at sending newly added or edited data back to the server. As with form population, you'll learn just how easy this is with the Ext JS framework. There are two parts to this example. Firstly, we will submit data using the options of the basic form that wraps the form panel. The second example will demonstrate binding the form to a Model and saving our data.

Getting ready

We will be using the form created in the first recipe as our base for this section, so refer to the Constructing a complex form layout recipe if you are not familiar with it.

How to do it...

1. Add a function to submit the form:

var submitForm = function(){
    formPanel.getForm().submit({
        url: 'submit.php'
    });
};

2. Add a button to the form that calls the submitForm function:

var formPanel = Ext.create('Ext.form.Panel', {
    ...
    buttons: [{
        text: 'Submit Form',
        handler: submitForm
    }],
    items: [
        ...
    ]
});

How it works...

As we learned in the previous recipe, a form panel contains an internal instance of the Ext.form.Basic class (accessible through the getForm method). The submit method in Ext.form.Basic is a shortcut to the Ext.form.action.Submit action. This class handles the form submission for us. All we are required to do is provide it with a URL and it will handle the rest. It's also possible to define the URL in the configuration for the Ext.form.Panel.

Before submitting, the action must first gather the data from the form. The Ext.form.Basic class contains a getValues method, which is used to gather the data values for each form field. It does this by iterating through all fields in the form, making a call to their respective getValue methods.
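The success and failure handling shown later in this article (in the Handling exceptions and callbacks recipe) expects the server's reply to carry a success flag and, optionally, a message. As a minimal sketch — the article does not show the body of submit.php, so the exact values here are assumptions — a successful response might look like this:

{
    "success": true,
    "message": "Your support ticket has been received."
}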
There's more...

The previous recipe demonstrated how to populate the form from a Model instance. Here we will take it a step further and use the same Model instance to submit the form as well.

Submitting a form from a Model instance

1. Extend the Model with a proxy and load the data into the form:

Ext.define('Request', {
    extend: 'Ext.data.Model',
    fields: ['FirstName', 'LastName', 'EmailAddress', 'TelNumberCode', 'TelNumber', 'RequestDetails', 'RequestType'],
    proxy: {
        type: 'ajax',
        api: {
            create: 'addTicketRequest.php',
            update: 'updateTicketRequest.php'
        },
        reader: {
            type: 'json'
        }
    }
});

var requestModel = Ext.create('Request', {
    FirstName: 'Joe',
    LastName: 'Bloggs',
    EmailAddress: '[email protected]'
});

formPanel.getForm().loadRecord(requestModel);

2. Change the submitForm function to get the Model instance, update the record with the form data, and save the record to the server:

var submitForm = function(){
    var record = formPanel.getForm().getRecord();
    formPanel.getForm().updateRecord(record);
    record.save();
};

Validating form fields with VTypes

In addition to form fields' built-in validation (such as allowBlank and minLength), we can apply more advanced and more extensible validation by using VTypes. A VType (contained in the Ext.form.field.VTypes singleton) can be applied to a field, and its validation logic will be executed as part of the field's periodic validation routine. A VType encapsulates a validation function, an error message (which will be displayed if the validation fails), and a regular expression mask to prevent any undesired characters from being entered into the field.

This recipe will explain how to apply a VType to the e-mail address field in our example form, so that only properly formatted e-mail addresses are deemed valid and an error is displayed if the value doesn't conform to this pattern.

How to do it...

1. We will start by defining our form and its fields. We will be using our example form that was created in the first recipe of this article as our base.
2. Now that we have a form, we can add the vtype configuration option to our e-mail address field:

{
    xtype: 'textfield',
    fieldLabel: 'Email Address',
    name: 'EmailAddress',
    labelAlign: 'top',
    cls: 'field-margin',
    columnWidth: 0.6,
    vtype: 'email'
}

That is all we have to do to add e-mail address validation to a field.

How it works...

When a field is validated, it runs through various checks. When a VType is defined, the associated validation routine is executed and will flag the field invalid or not. As previously mentioned, each VType has an error message coupled with it, which is displayed if the value is found to be invalid, and a mask expression that prevents unwanted characters from being entered. Unfortunately, only one VType can be applied to a field, so if multiple checks are required, a custom hybrid may need to be created. See the next recipe for details on how to do this.

There's more...

Along with the e-mail VType, the framework provides three other VTypes that can be applied straight out of the box. These are:

- alpha: restricts the field to only alphabetic characters
- alphanum: allows only alphanumeric characters
- url: ensures that the value is a valid URL
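Applying any of these works exactly like the email VType shown above. As a quick sketch (the Website field below is hypothetical and not part of the ticket form), the url VType attaches the same way:

{
    // Hypothetical extra field, shown only to illustrate the built-in url VType
    xtype: 'textfield',
    fieldLabel: 'Website',
    name: 'Website',
    vtype: 'url'
}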
Creating custom VTypes

We have seen in the previous recipe how to use VTypes to apply more advanced validation to our form's fields. The built-in VTypes provided by the framework are excellent, but we will often want to create custom implementations to impose more complex and domain-specific validation on a field. We will walk through creating a custom VType to be applied to our telephone number field, ensuring it is in the format a telephone number should be. Although our telephone number field is split into two (the first field for the area code and the second for the rest of the number), for this example we will combine them so our VType is more comprehensive. We will be validating a very simple, strict telephone number format of "0777-777-7777".

How to do it...

1. We start by defining our VType's structure. This consists of a simple object literal with three properties: a function called telNumber, and two strings called telNumberText (which will contain the error message text) and telNumberMask (which holds a regex restricting the characters allowed to be entered into the field):

var telNumberVType = {
    telNumber: function(val, field){
        // function executed when field is validated
        // return true when field's value (val) is valid
        return true;
    },
    telNumberText: 'Your Telephone Number must only include numbers and hyphens.',
    telNumberMask: /[\d-]/
};

2. Next, we define the regular expression that we will use to validate the field's value. We add this as a variable to the telNumber function:

telNumber: function(val, field){
    var telNumberRegex = /^\d{4}-\d{3}-\d{4}$/;
    return true;
}

3. Once this has been done, we can add the logic to the telNumber function that will decide whether the field's current value is valid. This is a simple call to the regular expression's test method, which returns true if the value matches or false if it doesn't:

telNumber: function(val, field){
    var telNumberRegex = /^\d{4}-\d{3}-\d{4}$/;
    return telNumberRegex.test(val);
}

4. The final step in defining our new VType is to apply it to the Ext.form.field.VTypes singleton, which is where all of the VTypes are located and where our field's validation routine will go to get its definition:

Ext.apply(Ext.form.field.VTypes, telNumberVType);

5. Now that our VType has been defined and registered with the framework, we can apply it to the field by using the vtype configuration option:

{
    xtype: 'textfield',
    name: 'TelNumber',
    flex: 4,
    vtype: 'telNumber'
}

How it works...

A VType consists of three parts:

- The validity checking function
- The validation error text
- A keystroke filtering mask (optional)

VTypes rely heavily on naming conventions so they can be executed dynamically within a field's validation routine. This means that each of these three parts must follow the standard convention. The validation function's name becomes the name used to reference the VType and forms the prefix for the other two properties. In our example, this name was telNumber, which can be seen referencing the VType in step 5. The error text property is then named with the VType's name prefixing the word Text (that is, telNumberText). Similarly, the filtering mask is the VType's name followed by the word Mask (that is, telNumberMask).

The final step in creating our VType is to merge it into the Ext.form.field.VTypes singleton, allowing it to be accessed dynamically during validation. The Ext.apply function does this by merging the VType's three properties into the Ext.form.field.VTypes class instance.

When the field is validated, and a vtype is defined, the VType's validation function is executed with the current value of the field and a reference to the field itself passed in. If the function returns true, then all is well and the routine moves on.
However, if it evaluates to false, the VType's Text property is retrieved and pushed onto the errors array, and this message is then displayed to the user. This process can be seen in the following code snippet, taken directly from the framework:

if (vtype) {
    if(!vtypes[vtype](value, me)){
        errors.push(me.vtypeText || vtypes[vtype +'Text']);
    }
}

There's more...

It is often necessary to validate fields based on the values of other fields as well as their own. We will demonstrate this by creating a simple VType for validating that a confirm password field's value matches the value entered in an initial password field.

1. We start by creating our VType structure, as we did before:

Ext.apply(Ext.form.field.VTypes, {
    password: function(val, field){
        return false;
    },
    passwordText: 'Your Passwords do not match.'
});

2. We then complete the validation logic. We use the field's up method to get a reference to its parent form. Using that reference, we get the values of all of the form's fields by using the getValues method:

password: function(val, field){
    var parentForm = field.up('form'); // get parent form
    // get the form's values
    var formValues = parentForm.getValues();
    return false;
}

3. The next step is to get the first password field's value. We do this by using an extra property (firstPasswordFieldName) that we will specify when we add our VType to the confirm password field. This property will contain the name of the initial password field (in this example, Password). We can then compare the confirm password's value with the retrieved value and return the outcome:

password: function(val, field){
    var parentForm = field.up('form'); // get parent form
    // get the form's values
    var formValues = parentForm.getValues();
    // get the value from the configured 'First Password' field
    var firstPasswordValue = formValues[field.firstPasswordFieldName];
    // return true if they match
    return val === firstPasswordValue;
}

4. The VType is added to the confirm password field in exactly the same way as before, but we must include the extra firstPasswordFieldName option to link the fields together:

{
    xtype: 'textfield',
    fieldLabel: 'Confirm Password',
    name: 'ConfirmPassword',
    labelAlign: 'top',
    cls: 'field-margin',
    flex: 1,
    vtype: 'password',
    firstPasswordFieldName: 'Password'
}

Uploading files to the server

Uploading files is very straightforward with Ext JS 4. This recipe will demonstrate how to create a basic file upload form and send the data to your server.

Getting ready

This recipe requires the use of a web server for accepting the uploaded file. A PHP file is provided to handle the file upload; however, you can integrate this Ext JS code with any server-side technology you wish.

How to do it...

1. Create a simple form panel:

Ext.create('Ext.form.Panel', {
    title: 'Document Upload',
    width: 400,
    bodyPadding: 10,
    renderTo: Ext.getBody(),
    style: 'margin: 50px',
    items: [],
    buttons: []
});

2. In the panel's items collection, add a file field:

Ext.create('Ext.form.Panel', {
    ...
    items: [{
        xtype: 'filefield',
        name: 'document',
        fieldLabel: 'Document',
        msgTarget: 'side',
        allowBlank: false,
        anchor: '100%'
    }],
    buttons: []
});

3. Add a button to the panel's buttons collection to handle the form submission:

Ext.create('Ext.form.Panel', {
    ...
    buttons: [{
        text: 'Upload Document',
        handler: function(){
            var form = this.up('form').getForm();
            if (form.isValid()) {
                form.submit({
                    url: 'upload.php',
                    waitMsg: 'Uploading...'
                });
            }
        }
    }]
});

How it works...
Your server-side code should handle these form submissions in the same way it would handle a regular HTML file upload form. You should not have to do anything special to make your server-side code compatible with Ext JS.

The example works by defining an Ext.form.field.File (xtype: 'filefield'), which takes care of the styling and the button for selecting local files. The form submission handler works the same way as any other form submission; however, behind the scenes, the framework tweaks how the form is submitted to the server. A form with a file upload field is not submitted using an XMLHttpRequest object—instead, the framework creates and submits a temporary hidden <form> element whose target references a temporary hidden <iframe>. The request header's Content-Type is set to multipart/form. When the upload is finished and the server has responded, the temporary form and <iframe> are removed. A fake XMLHttpRequest object is then created, containing a responseText property (populated from the contents of the <iframe>), to ensure that event handlers and callbacks work as if we were submitting the form using AJAX. If your server is responding to the client with JSON, you must ensure that the response Content-Type header is text/html.

There's more...

It's possible to customize your Ext.form.field.File. Some useful config options are highlighted as follows:

- buttonOnly: Boolean. Setting buttonOnly: true removes the visible text field from the file field.
- buttonText: String. If you wish to change the text in the button from the default of "Browse…", you can do so by setting the buttonText config option.
- buttonConfig: Object. Changing the entire configuration of the button is done by defining a standard Ext.button.Button config object in the buttonConfig option. Anything defined in the buttonText config option will be ignored if you use this.
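As a short sketch combining these options (the label text below is an illustrative assumption, not taken from the recipe), a button-only file field might be configured like this:

{
    xtype: 'filefield',
    name: 'document',
    // buttonOnly hides the read-only text field, leaving just the button
    buttonOnly: true,
    // buttonText replaces the default "Browse..." label
    buttonText: 'Select a Document...'
}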
Handling exceptions and callbacks

This recipe demonstrates how to handle callbacks when loading and submitting forms. This is particularly useful for two reasons: you may wish to carry out further processing once the form has been submitted (for example, display a thank you message to the user), and, in the unfortunate event that the submission fails, it's good to be ready to inform the user that something has gone wrong and perhaps perform extra processing.

The recipe shows you what to do in the following circumstances:

- The server responds informing you the submission was successful
- The server responds with an unusual status code (for example, 404, 500, and so on)
- The server responds informing you the submission was unsuccessful (for example, there was a problem processing the data)
- The form is unable to load data because the server has sent an empty data property
- The form is unable to submit data because the framework has deemed the values in the form to be invalid

Getting ready

The following recipe requires you to submit values to a server. An example submit.php file has been provided. However, please ensure you have a web server for serving this file.

How to do it...

1. Start by creating a simple form panel:

var formPanel = Ext.create('Ext.form.Panel', {
    title: 'Form',
    width: 300,
    bodyPadding: 10,
    renderTo: Ext.getBody(),
    style: 'margin: 50px',
    items: [],
    buttons: []
});

2. Add a field to the form and set allowBlank to false:

var formPanel = Ext.create('Ext.form.Panel', {
    ...
    items: [{
        xtype: 'textfield',
        fieldLabel: 'Text field',
        name: 'field',
        allowBlank: false
    }],
    buttons: []
});

3. Add a button to handle the form submission, and add success and failure handlers to the submit method's only parameter:

var formPanel = Ext.create('Ext.form.Panel', {
    ...
    buttons: [{
        text: 'Submit',
        handler: function(){
            formPanel.getForm().submit({
                url: 'submit.php',
                success: function(form, action){
                    Ext.Msg.alert('Success', action.result.message);
                },
                failure: function(form, action){
                    if (action.failureType === Ext.form.action.Action.CLIENT_INVALID) {
                        Ext.Msg.alert('CLIENT_INVALID', 'Something has been missed. Please check and try again.');
                    }
                    if (action.failureType === Ext.form.action.Action.CONNECT_FAILURE) {
                        Ext.Msg.alert('CONNECT_FAILURE', 'Status: ' + action.response.status + ': ' + action.response.statusText);
                    }
                    if (action.failureType === Ext.form.action.Action.SERVER_INVALID) {
                        Ext.Msg.alert('SERVER_INVALID', action.result.message);
                    }
                }
            });
        }
    }]
});

4. When you run the code, watch for the different failure types or the success callback:

- CLIENT_INVALID is fired when there is no value in the text field.
- The success callback is fired when the server returns true in the success property.
- Switch the response in the submit.php file and watch for the SERVER_INVALID failureType, which is fired when the success property is set to false.
- Finally, edit url: 'submit.php' to url: 'unknown.php' and CONNECT_FAILURE will be fired.

How it works...

The Ext.form.action.Submit and Ext.form.action.Load classes both have a failure and a success function, one of which will be called depending on the outcome of the action. The success callback is called when the action is successful and the success property is true. The failure callback, on the other hand, can be extended to look for specific reasons why the failure occurred (for example, there was an internal server error, or the form did not pass client-side validation). This is done by looking at the failureType property of the action parameter. Ext.form.action.Action has four failureType static properties: CLIENT_INVALID, SERVER_INVALID, CONNECT_FAILURE, and LOAD_FAILURE, which can be compared with what has been returned by the server.

There's more...

A number of additional options are described as follows:

Handling form population failures

The Ext.form.action.Action.LOAD_FAILURE static property can be used in the failure callback when loading data into your form. LOAD_FAILURE is returned as the action parameter's failureType when the success property is false or the data property contains no fields. The following code shows how this failure type can be caught inside the failure callback function:

failure: function(form, action){
    ...
    if(action.failureType == Ext.form.action.Action.LOAD_FAILURE){
        Ext.Msg.alert('LOAD_FAILURE', action.result.message);
    }
    ...
}

An alternative to CLIENT_INVALID

The isValid method in Ext.form.Basic is an alternative method for handling client-side validation before the form is submitted. isValid will return true when client-side validation passes:

handler: function(){
    if (formPanel.getForm().isValid()) {
        formPanel.getForm().submit({
            url: 'submit.php'
        });
    }
}

Further resources on this subject:

- Ext JS 4: Working with the Grid Component [Article]
- Ext JS 4: Working with Tree and Form Components [Article]
- Infinispan Data Grid: Infinispan and JBoss AS 7 [Article]


Publishing the project in various formats using Adobe Captivate 6

Packt
27 Aug 2012
16 min read
Publishing to Flash

In the history of Captivate, publishing to Flash has always been the primary publishing option. Even though HTML5 publishing is a game changer, publishing to Flash is still an important capability of Captivate. Remember that this publishing format is currently the only one that supports every single feature, animation, and object of Captivate. In the following exercise, we will publish our movie in Flash format using the default options:

1. Return to the Chapter06/encoderDemo_800.cptx file.
2. Click on the Publish icon situated right next to the Preview icon. Alternatively, you can also use the File | Publish menu. The Publish dialog box opens.

The Publish dialog box is divided into four main areas:

- The Publish Format area (1): This is where we choose the format in which we want to publish our movies. Basically, we can choose between three options: SWF/HTML5, Media, and Print. The other options (E-mail, FTP, and so on) are actually suboptions of the SWF/HTML5, Media, and Print formats.
- The Output Format Options area (2): The content of this area depends on the format chosen in the Publish Format (1) area.
- The Project Information area (3): This area is a summary of the main project preferences and metadata. Clicking on the links of this area will bring us back to the various project preferences boxes.
- The Advanced Options area (4): This area provides some additional advanced publishing options.

We will now move on to the actual publication of the project in Flash format.

3. In the Publish Format area, make sure the chosen format is SWF/HTML5.
4. In the Flash(.swf) Options area, change the Project Title to encoderDemo_800_flash.
5. Click on the Browse button situated just below the Folder field and choose to publish your movie in the Chapter06/Publish folder of your exercises folder.
6. Make sure the Publish to Folder checkbox is selected.
7. Take a quick look at the remaining options, but leave them all at their current settings.
8. Click on the Publish button at the bottom-right corner of the Publish dialog box.
9. When Captivate has finished publishing the movie, an information box appears on the screen asking if you want to view the output. Click on No to discard the information box and return to Captivate.

We will now use the Finder (Mac) or Windows Explorer (Windows) to take a look at the files Captivate has generated.

10. Browse to the Chapter06/Publish folder of your exercises. Because we selected the Publish to Folder checkbox in the Publish dialog, Captivate has automatically created the encoderDemo_800_flash subfolder in the Chapter06/Publish folder.
11. Open the encoderDemo_800_flash subfolder to inspect its content:

- encoderDemo_800_flash.swf: The main Flash file containing the compiled version of the .cptx project
- encoderDemo_800_flash.html: An HTML page used to embed the Flash file
- standard.js: A JavaScript file used to make the Flash player work well within the HTML page
- demo_en.flv: The video file used on slide 2 of the movie
- captivate.css: Provides the necessary style rules to ensure the proper formatting of the HTML page

If we want to embed the compiled Captivate movie in an existing HTML page, only the .swf file (plus, in this case, the .flv video) is needed. The HTML editor (such as Adobe Dreamweaver) will recreate the necessary HTML, JavaScript, and CSS files.
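For hand-coded pages, one widely used way to embed a published .swf is the open source SWFObject library. This is a sketch of that approach, not something Captivate generates for you; the placeholder div id, the height, and the minimum Flash Player version below are assumptions based on this 800-pixel-wide exercise:

// Minimal SWFObject embed sketch (assumes swfobject.js is loaded on the page).
swfobject.embedSWF(
    'encoderDemo_800_flash.swf', // the published movie
    'flashContent',              // hypothetical id of the <div> to replace
    '800', '600',                // width and height (height is assumed)
    '9.0.0'                      // assumed minimum Flash Player version
);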
Captivate and Dreamweaver

Adobe Dreamweaver CS6 is the HTML editor of the Creative Suite and the industry-leading solution for authoring professional web pages. Inserting a Captivate file in a Dreamweaver page is dead easy! First, move or copy the main Flash file (.swf) as well as the needed support files (in our case, the .flv video file), if any, somewhere in the root folder of the Dreamweaver site. When done, use the Files panel of Dreamweaver to drag-and-drop the main .swf file on the HTML page. That's it!

We will now test the movie in a web browser. This is an important test as it recreates the conditions in which our students will experience our movie once in production.

1. Double-click on the encoderDemo_800_flash.html file to open it in a web browser.
2. Enjoy the final version of the demonstration that we have created together!

Now that we have experienced the workflow of publishing our project to Flash with the default options, we will add some changes into the mix and create a scalable version of our project.

Scalable HTML content

One solution to choosing the right size for our project is to use the new Scalable HTML content option of Captivate 6. Thanks to this new option, our eLearning content will be automatically resized to fit the screen on which it is viewed. Let's experiment with this option hands-on, using the following steps:

1. If needed, return to the Chapter06/encoderDemo_800.cptx file.
2. Click on the Publish icon situated right next to the Preview icon. Alternatively, you can also use the File | Publish menu.
3. In the Publish Format area, make sure the chosen format is SWF/HTML5.
4. In the Flash(.swf) Options area, change the Project Title to encoderDemo_800_flashScalable.
5. Click on the Browse button situated just below the Folder field and ensure that the publish folder is still the Chapter06/Publish folder of your exercises.
6. Make sure the Publish to Folder checkbox is selected.
7. In the Advanced Options section (lower-right corner of the Publish dialog), select the Scalable HTML content checkbox.
8. Leave the remaining options at their current value and click on the Publish button at the bottom-right corner of the Publish dialog box. A message informs you that object reflection is not supported in scalable content. We used object reflection on slide 3 to enhance the AMELogo image.
9. Click on Yes to discard the message and start the publishing process.
10. When Captivate has finished publishing the movie, an information box appears on the screen asking if you want to view the output. Click on Yes to discard the information box and open the published movie in the default web browser.
11. During the playback, use your mouse to resize your browser window and notice how our movie is also resized in order to fit the browser window. Also notice that the reflection effect we used on the AMELogo image has been discarded.

Publishing to HTML5

Publishing to HTML5 is the killer new feature of Captivate 6. One of the main goals of HTML5 is to provide a plugin-free paradigm. It means that the interactivity and strong visual experience brought to the Internet by plugins should now be supported natively by the browsers and their underlying technologies (mainly HTML, CSS, and JavaScript) without the need for an extra third-party plugin. Because a plugin is no longer necessary to deliver rich interactive content, any modern browser should be capable of rendering our interactive eLearning courses. And that includes the browsers installed on mobile devices, such as tablets and smartphones.
This is an enormous change, not only for the industry, but also for us, the Captivate users and eLearning developers. Thanks to HTML5, our students will be able to enjoy our eLearning content across all their devices. The door is open for the next revolution of our industry: the mLearning (for Mobile Learning) revolution.

Blog posts
To get a better idea of what's at stake with HTML5 in eLearning and mLearning, I recommend these two blog posts: http://blogs.adobe.com/captivate/2011/11/the-how-why-of-ipads-html5-mobile-devices-in-elearning-training-education.html by Allen Partridge on the official Adobe Captivate blog and http://rjacquez.com/the-m-in-mlearning-means-more/ by RJ Jacquez.

Using the HTML5 Tracker

At the time of this writing (June 2012), HTML5 is still under development. Some parts of the HTML5 specification are already final and well implemented in the browsers, while other parts of the specification are still under discussion. Consequently, some features of Captivate that are supported in Flash are not yet supported in HTML5. In the following exercise, we will use the HTML5 Tracker to better understand which features of our Encoder Demonstration are supported in HTML5:

1. If needed, return to the encoderDemo_800.cptx file.
2. Use the Window | HTML5 Tracker menu to open the HTML5 Tracker floating panel. The HTML5 Tracker informs us that some features that we used in this project are not (yet) supported in HTML5, as shown in the following screenshot: On slide 1 and slide 22, the Text Animations are not supported in HTML5. The same goes for the three orange arrow animations we inserted on slide 5.
3. Close the HTML5 Tracker panel.

A comprehensive list of all the objects and features that are not yet supported in the HTML5 output is available in the official Captivate Help at http://help.adobe.com/en_US/captivate/cp/using/WS16484b78be4e1542-74219321367c91074e-8000.html. Make sure you read that page before publishing your projects in HTML5. In the next exercise, we will publish a second version of our Encoder Demonstration using the new HTML5 publishing option.

Publishing the project in HTML5

The process of publishing the project to HTML5 is very similar to the process of publishing the project to Flash. Perform the following steps to publish the project in HTML5:

1. If needed, return to the encoderDemo_800.cptx file.
2. Click on the Publish icon or use the File | Publish menu item to open the Publish dialog box.
3. In the left-most column of the Publish dialog, make sure you are using the SWF/HTML5 option.
4. Change the Project Title to encoderDemo_800_HTML5.
5. Click on the Browse button and choose the Chapter06/publish folder of the exercises as the publish location.
6. Make sure the Publish to Folder checkbox is selected.
7. In the Output Format Option section, select the HTML5 checkbox. Once done, uncheck the SWF checkbox. This is the single most important setting of the entire procedure. Note that you can also select both the SWF and the HTML5 options.
8. In the Advanced Options area of the Publish dialog, deselect the Scalable HTML content checkbox.
9. Leave the other options at their current settings and click on the Publish button. Captivate informs us that some features used in this project are not supported in HTML5.
10. Click on Yes to discard the message and start the publication to HTML5.

The process of publishing to HTML5 is much longer than the publication to Flash.
One of the reasons is that Captivate needs to open the Adobe Media Encoder to convert the .flv video used on slide 2 and the Full Motion Recording of slide 13 to the .mp4 format. When the publish process is complete, a second message appears asking if you want to view the output. Click on No to discard the message and return to the standard Captivate interface.

We will now use the Windows Explorer (Windows) or the Finder (Mac) to take a closer look at the generated files. Go to the Chapter06/publish/encoderDemo_800_HTML5 folder of the exercises. You should find a bunch of files and folders in the publish/encoderDemo_800_HTML5 folder, as follows:
- index.html – the main HTML file. This is the file to load in the web browser to play the course.
- The /ar folder – contains all the needed sound clips in .mp3 format.
- The /dr folder – contains all the needed images. Notice that the mouse pointers, the slide backgrounds, as well as all the Text Captions are exported as .png images.
- The /vr folder – contains the needed video files in .mp4 format.
- The /assets folder – contains the needed CSS and JavaScript files.

We will now test this version of the project in a web browser.

Supported browsers and OS for HTML5
On the desktop, the HTML5 version of our eLearning project requires Internet Explorer 9 or later, Safari 5.1 or later, or Google Chrome 17 or later. For mobile devices, HTML5 is supported on iPads with iOS 5 or later. Make sure you use one of the browsers mentioned for the testing phase of this exercise.

Open the index.html file in one of the supported browsers. When testing the HTML5 version of the project in a web browser, notice that the unsupported Text Animations of slides 1 and 22 have been replaced by a standard Text Caption with a Fade In effect. On slide 3, the effect we added on the AMELogo image is not reproduced in the HTML5 output. Surprisingly, this was not mentioned in the HTML5 Tracker panel. On slide 5, the unsupported orange arrow animations have been replaced by static images. On slide 16, the zooming animation is supported, but Text Captions that should be invisible are showing in the Zoom Destination area.

Apart from the few problems mentioned in the previous list, Captivate 6 does a pretty good job of converting our demonstration to HTML5. That being said, HTML5 publishing is still an emerging technology. The room for improvement is enormous. In the coming years, more parts of the HTML5 specification will be finalized, and new techniques, tools, and frameworks will emerge. We will then be able to better implement HTML5 across devices, both in Captivate and throughout the entire Internet.

Publishing to PDF

Another publishing option available in Captivate is to publish our project as an Adobe PDF document. This process is very close to the Flash publishing process we covered previously. When converting to PDF, Captivate first converts the project to Flash and then embeds the resulting .swf file in a PDF document. To read the Flash file embedded in the PDF document, the free Adobe Acrobat Reader simply contains a copy of the Flash player. Publishing the Captivate project to PDF is a great way to make the eLearning course available offline. The students can, for example, download the PDF file from a website and take the course in a train or in an airplane where no Internet connection is available.
On the other hand, as the Captivate movie can be viewed offline, any Captivate feature that requires an Internet connection (such as reporting scores to an LMS (Learning Management System)) will not work! In the following exercise, we will publish the Encoder Demonstration to PDF:

1. Return to the Chapter06/encoderDemo_800.cptx file.
2. Click on the Publish icon situated right next to the Preview icon. Alternatively, you can use the File | Publish menu item.
3. In the Publish Format area, make sure the chosen format is SWF/HTML5.
4. If needed, deselect the HTML5 checkbox and make sure the SWF checkbox is the only one selected.
5. In the Flash(.swf) Options area, change the Project Title to encoderDemo_800_pdf.
6. Make sure the publish folder is still the Chapter06/Publish folder of the exercises.
7. Make sure the Publish to Folder checkbox is still selected.
8. At the end of the Output Format Options area, select the Export PDF checkbox.
9. Click on the Publish button situated in the lower-right corner of the Publish dialog.
10. When the publishing process is complete, a message tells you that Acrobat 9 or higher is required to read the generated PDF file. Click on OK to acknowledge the message.
11. A second information box opens. Click on No to discard the second message and close the Publish dialog.
12. Use the Finder (Mac) or the Windows Explorer (Windows) to browse to the Chapter06/publish/encoderDemo_800_pdf folder. There should be six additional files in the Chapter06/publish/encoderDemo_800_pdf folder. Actually, publishing to PDF is an extra option of the standard publish-to-Flash feature.
13. Delete all but the PDF file from the Chapter06/publish/encoderDemo_800_pdf folder.
14. Double-click on the encoderDemo_800_pdf.pdf file to open it in Adobe Acrobat. Notice that the file plays normally in Adobe Acrobat. This proves that all the necessary files and assets have been correctly embedded into the PDF file.

In the next section, we will explore the third publishing option of Captivate: publishing as a standalone application.

Publishing as a standalone application

When publishing as a standalone application, Captivate generates an .exe file for playback on Windows or an .app file for playback on Macintosh. The .exe (Windows) or .app (Mac) file contains the compiled .swf file plus the Flash player. The advantages and disadvantages of a standalone application are similar to those of a PDF file. That is, the file can be viewed offline in a train, in an airplane, or elsewhere, but the features requiring an Internet connection will not work. In the following exercise, we will publish the Captivate file as a standalone application using the following steps:

1. If needed, return to the Chapter06/encoderDemo_800.cptx file.
2. Click on the Publish icon situated right next to the Preview icon. Alternatively, you can use the File | Publish menu item.
3. Click on the Media icon situated in the left-most column of the Publish dialog box. The middle area is updated.
4. Open the Select Type drop-down list. If you are on a Windows PC, choose Windows Executable (*.exe); if you are using a Mac, choose MAC Executable (*.app).
5. If needed, change the Project Title to encoderDemo_800.
6. In the Folder field, make sure that the Chapter06/Publish folder is still the chosen value.
7. Take some time to inspect the other options of the Publish dialog. One of them allows us to choose a custom icon for the generated .exe (Windows) or .app (Mac) file.
8. Leave the other options at their current value and click on the Publish button.
When the publish process is complete, an information box will ask you if you want to see the generated output. Click on No to clear the information message and close the Publish dialog. Now that the standalone application has been generated, we will use the Finder (Mac) or the Windows Explorer (Windows) to take a look at the Chapter06/Publish folder. Browse to the Chapter06/Publish folder of the exercises and double-click on the encoderDemo_800.exe (Windows) or on the encoderDemo_800.app (Mac) file to open the generated application. Our Captivate movie opens as a standalone application in its own window. Notice that no browser is necessary to play the movie. This publish format is particularly useful when we want to burn the movie onto a CD-ROM. When generating a Windows executable (.exe), Captivate can even generate an autorun.ini file so that the movie automatically plays when the CD-ROM is inserted into the computer.

Enabling Plugin Internationalization

Packt
10 Aug 2012
5 min read
In this article by Yannick Lefebvre, the author of WordPress Plugin Development Cookbook, we will learn about plugin localization through the following topics:
- Changing the WordPress language configuration
- Adapting default user settings for translation
- Making admin page code ready for translation
- Modifying shortcode output for translation
- Translating text strings using Poedit
- Loading a language file in the plugin initialization

Introduction

WordPress is a worldwide phenomenon, with users embracing the platform all around the globe. To create a more specific experience for users in different locales, WordPress offers the ability to translate all of its user- and visitor-facing content, resulting in numerous localizations becoming available for download online. Like most other functionality in the platform, internationalization is also available to plugin developers through a set of easy-to-use functions. The main difference is that plugin translations are typically included with the extension, instead of being downloaded separately as is the case with WordPress. To prepare their plugin to be localized, developers must use special internationalization functions when dealing with text elements. Once this structure is in place, any user can create localizations by themselves for languages that they know and submit them back to the plugin author for inclusion in a future update to the extension. This article explains how to prepare a plugin to be translated and shows how to use the Poedit tool to create a new language file for a simple plugin.

Changing the WordPress language configuration

The first step to translating a plugin is to configure WordPress to a language setting other than English. This will automatically trigger mechanisms in the platform to look for alternate language content for any internationalized string. In this recipe we will set the site to French.

Getting ready
You should have access to a WordPress development environment.

How to do it...
1. Navigate to the root of your WordPress installation.
2. Open the file called wp-config.php in a code editor.
3. Change the line that declares the site language from define('WPLANG', ''); to define('WPLANG', 'fr_FR');.
4. Save and close the configuration file.

How it works...
Whenever WordPress renders a page for visitors or site administrators, it executes the contents of the wp-config.php file, which declares a number of site-wide constants. One of these constants is the site language. By default, this constant has no value, indicating that WordPress should display all content in U.S. English. If defined, the system tries to find a translation file under the wp-content/languages or wp-includes/languages directories of the site to locate translation strings for the target language. In this case, it will try to find a file called fr_FR.mo. While it will not actually find this file in a default installation, setting this configuration option will facilitate the creation and testing of a plugin translation file in later recipes. To learn more about translation files and find out where to download them from, visit the WordPress Codex available at http://codex.wordpress.org/WordPress_in_Your_Language.

Adapting default user settings for translation

As mentioned in the introduction, plugin code needs to be specifically written to allow text items to be translated.
This work starts in the plugin's activation routine, where default plugin option values are set, so that alternate values are found when a language other than English is specified in the site's configuration file. This recipe shows how to assign a translated string to a plugin's default options array on initialization.

Getting ready
You should have already followed the Changing the WordPress language configuration recipe to have a specified translation language for the site.

How to do it...
1. Navigate to the WordPress plugin directory of your development installation.
2. Create a new directory called hello-world.
3. Navigate to the directory and create a text file called hello-world.php.
4. Open the new file in a code editor and add an appropriate header at the top of the plugin file, naming the plugin Hello World.
5. Add the following line of code before the plugin's closing ?> PHP command to register a function to be called when the plugin is activated: code 1 (see the sketch after this recipe).
6. Insert the following block of code to provide an implementation for the hw_set_default_options_array function: code 2 (see the sketch after this recipe).
7. Save and close the plugin file.
8. Navigate to the Plugins management page and activate the Hello World plugin.
9. Using phpMyAdmin or the NetBeans IDE, find the options table entry where the option_name field has a value of hw_options to see the newly-created option.

How it works...
The __ function (that's two underscores) is a WordPress utility function that tries to find a translation for the text that it receives in its first argument, within the text domain specified in the second argument. A text domain is essentially a subsection of the global translation table that is managed by WordPress. In this example, the text to be translated is the string Hello World, for which the system tries to find a translation in the hw_hello_world domain. Since this domain is not available at this time, the function returns the original string that it received as its first parameter. The plugin code assigns the value it receives to the default configuration array. It should be noted that the __ function is actually an alias for the translate function. While both functions have the same functionality, using __ makes the code shorter when it contains a lot of text elements to be translated. While it may be tempting for developers to use a variable or constant in the first parameter of the __ function if they need to display the same text multiple times, this should not be done as it will cause problems with the translation lookup mechanism.

See also
The Changing the WordPress language configuration recipe.
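The listings referenced above as code 1 and code 2 are not reproduced in this excerpt. Based on the steps and the How it works... explanation, a minimal sketch could look like the following; the hw_options option name, the hw_hello_world text domain, and the hw_set_default_options_array function name come from the recipe itself, while the default_text array key is a hypothetical choice:

// code 1 – run hw_set_default_options_array when the plugin is activated
register_activation_hook( __FILE__, 'hw_set_default_options_array' );

// code 2 – create the hw_options entry with a translatable default value
function hw_set_default_options_array() {
    // Only create the option if it does not exist yet
    if ( get_option( 'hw_options' ) === false ) {
        $options = array();
        // __() looks up 'Hello World' in the 'hw_hello_world' text domain;
        // with no translation loaded, it simply returns 'Hello World'
        $options['default_text'] = __( 'Hello World', 'hw_hello_world' );
        update_option( 'hw_options', $options );
    }
}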

Mastering the Newer Prezi Features

Packt
25 Jul 2012
10 min read
Templates

There will always be time constraints put on us when building any business presentation. Mostly these will be pretty unrealistic time constraints as well. If you do find yourself up against the clock when building a Prezi, why not give yourself a slight advantage and use one of Prezi's templates to get your design started? There are lots of templates you can choose from, and here's how to make the most of them when the clock is ticking.

The templates

When you create any new Prezi online or in the desktop editor, you'll be presented with a choice of template as shown in the following screenshot: Before you decide which one to choose, you can explore them by simply selecting one and clicking the Preview button. You can see in the following screenshot that we've selected the Our Project template. Rolling your mouse over a template's thumbnail will show you some more details as well to help you choose. At the top of the screen, you'll see the options to either Start Editing or go Back to the templates screen. Before you make your choice, have a look around the template preview and check out all of the various objects available to you. Zoom in and out of certain areas that look interesting and use the arrows in the bottom right to go through the template's path and see how it flows. In the following screenshot, you can see that we've zoomed in to take a closer look at the assets included in this template: As you can see in the preceding screenshot, the Our Project template has some lovely assets included. The assets you'll be able to use in the template are images and sketches such as the Doodles that you can see in the top right of the screenshot. All of these assets can be moved around and used anywhere on your canvas. If you preview a template and decide it's the right one for you to use, just click the Start Editing button to go into edit mode and begin building your Prezi.

Getting the most from templates

Once you go into edit mode, don't think that you're stuck with how everything is laid out. You can (and should) move things around to fit with the message you're trying to deliver to your audience.

Paths
The very first thing we'd suggest is clicking on the Paths button and taking a look at how the Prezi flows. The whole reason you're using a template is because you're pushed for time, but you should know how many frames you need and how many different areas you'll want to focus on in your presentation before you get started. If you do, then you can adjust the paths, add new path points, or delete some that are already there.

Assets
All of the templates, especially Our Project, will come with various assets included. Use them wherever you can. It'll save you lots of time searching for your own imagery if you can just move the existing assets around. As shown in the preceding screenshot, you are totally free to resize any asset in a template. Make the most of them and save yourself a whole heap of time.

Branding
The only downside of using templates is that they of course won't have any of your company colors, logo, or branding on them. This is easily fixed by using the Colors & Fonts | Theme Wizard found in the bubble menu. On the very first screen of the wizard, click the Replace Logo button to add your company logo. The logo must be a JPEG file no bigger than 250 pixels wide and 100 pixels high. Clicking the button will allow you to search for your logo, and it will then be placed in the bottom left-hand corner of your Prezi at all times.
On this screen, you can also change the background color of your entire canvas. On the next screen of the wizard, we recommend you switch to Manual mode by clicking the option in the bottom-left corner. In this screen, you can select the fonts to use in your Prezi. At the present time, Prezi still has only a limited number of fonts, but we're confident you can find something close to the one your company uses. The reason we suggest switching to manual mode is that you'll be able to use your corporate colors for the fonts you select, and also on the frames and shapes within the Prezi. You'll need to know the RGB color values specified in your corporate branding. By using this final step, you'll get all the benefits of having an already designed Prezi without getting told off by your marketing team for going against their strict branding guidelines.

Shapes

A very simple element of the Prezi bubble menu which gets overlooked a lot is the Insert | Shapes option. In this part of the article, we'll look at some things you may not have known about how shapes work within Prezi.

Shortcut for shapes
To quickly enter the Shapes menu when working in the Prezi canvas, just press the S key on your keyboard.

Get creative
In the first part of this chapter, we looked at the assets from a template called Our Project. Some of those assets were the line drawings shown below the male and female characters. When you see these "Doodles", as they're titled, you might think they've been drawn in some kind of graphics package and inserted into the Prezi canvas as you would anything else. On closer inspection in edit mode, you can see that each of the characters is actually made up of different lines from the Shapes menu. This is a great use of the line tool, and we'd encourage you to try and create your own simple drawings wherever you can. These can then be reused over time, and will in turn save you lots of time searching for imagery via the Google image insert. Let's say that we want to add some more detail to the male character. Maybe we'll give him a more exciting hairstyle to replace the boring one that he has at the moment. First select the current hairline and delete it from the character's head. Now select the line tool from the Shapes menu and let's give this guy a flat top straight from the 80s. One of our lines is too long on the right. To adjust it, simply double-click to enter edit mode and drag the points to the right position as shown in the following screenshot: So there we have a great example of how to quickly draw your own image on the Prezi canvas by just using lines. It's an excellent feature of Prezi and, as you can see, it's given our character a stunning new look. It's a shame his girlfriend doesn't think so too!

Editing shapes
In step three of giving our character a new haircut, you saw the edit menu, which is accessed by a simple double-click. You can use the edit function on all items in the Shapes menu apart from the Pencil and Highlighter tools. Any shape can be double-clicked to change its size and color as shown in the following screenshot. You can see that all of the shapes on the left have been copied and then edited to change their color and size. The edited versions on the right have all been double-clicked, and one of the five extra available colors has been selected. The points of each shape have also been clicked on and dragged to change the dimensions of the shape. Holding the Shift key will not keep your shapes to scale.
If you want to scale the shapes up or down, we recommend you use the transformation zebra by clicking the plus (+) or minus (-) signs.

Editing lines
When editing lines or arrows, you can change them from being straight to curved by dragging the center point in any direction. This is extremely useful when creating the line drawings we saw earlier. It's also useful to get arrows pointing at various objects on your canvas.

Highlighter
The highlighter tool from the Shapes menu is extremely useful for pointing out key pieces of information, like the interesting fact shown in the following screenshot: Just drag it across the text you'd like to highlight. Once you've done that, the highlighter marks become objects in their own right, so you can use the transformation zebra to change their size or position as shown in the following screenshot:

Pencil
The pencil tool can be used to draw freehand sketches like the one shown in the following screenshot. If you hadn't guessed it yet, our drawing is supposed to represent a brain, which links to the interesting fact about ants. The pencil tool is great if you're good at sketching things out with your mouse. But if, like us, your art skills need a little more work, you might want to stick to using the lines and shapes to create imagery! To change the color of your highlighter or pencil drawings, you will need to go into the Theme Wizard and edit the RGB values. This will help you keep things within your corporate branding guidelines again.

Drawings and diagrams

Another useful new feature and a big time saver within the Prezi Insert menu is drawings and diagrams. You can locate the drawings and diagrams templates by clicking the button in between YouTube and File on the Insert menu. There are twelve templates to choose from, and each has been given a name that best describes its purpose. Rolling over each thumbnail will show you a little more detail to help you choose the right one. Once you have chosen, double-click the thumbnail and then decide where to place your drawing on the canvas. You can see in the following screenshot that the drawing or diagram is grouped together and will not become active until you click the green tick. Once you make the drawing active, you can access all of its frames, text, and any other elements that are included. In the following screenshot, you can see that we've zoomed into a section of the tree diagram. You can see in the preceding screenshot that the diagram uses lines, circular frames, and text, which can all be edited in any way you like. This is the case for all of the diagrams and drawings available from the menu. Using these diagrams and drawings gives you a great chance to explain concepts and ideas to your colleagues with ease. You can see from the preceding screenshot that there's a good range of useful drawings and diagrams that you're used to seeing in business presentations. You can easily create organograms, timelines for projects, or business processes and cycles, simply by using the templates available and inserting your own content and imagery. By using the Theme Wizard explained earlier in this chapter, you can make sure your drawings and diagrams use your corporate colors.

Building a Site Directory with SharePoint Search

Packt
17 Jul 2012
8 min read
(For more resources on Microsoft SharePoint, see here.)

Site Directory options

There are two main approaches to providing a Site Directory feature:
- A central list that has to be maintained
- A search-based tool that can provide the information dynamically

List-based Site Directory

With a list-based Site Directory, a list is provisioned in a central site collection, such as the root of a portal or intranet. As with all lists, site columns can be defined to help describe the site's metadata. Since it is stored in a central list, the information can easily be queried, which makes it easy to show a listing of all sites and to perform filtering, sorting, and grouping, as with all SharePoint lists. It is important to consider the overall site topology within the farm. If everything of relevance is stored within a single site collection, a list-based Site Directory, accessible throughout that site collection, may be easy to implement. But as soon as you have a large number of site collections or web applications, you will no longer be able to easily use that Site Directory without creating custom solutions that can access the central content and display it on those other sites. In addition, you will need to ensure that all users have access to read from that central site and list. Another downside to this approach is that a list-based Site Directory has to be maintained to be effective, and in many cases it is very difficult to keep up with this. It is possible to add new sites to the directory programmatically, using an event receiver, or as part of a process that automates the site creation. However, through the site's life cycle, changes will inevitably have to be made, and in many cases sites will be retired, archived, or deleted. While this approach tends to work well in small, centrally controlled environments, it does not work well at all in most of the large, distributed environments where the number of sites is expected to be larger and the rate of change is typically more frequent.

Search-based site discovery

An alternative to the list-based Site Directory is a completely dynamic site discovery based on the search system. In this case the content is completely dynamic and requires no specific maintenance. As sites are created, updated, or removed, the changes will be reflected in the index as the scheduled crawls complete. For environments with a large number of sites, and with a high frequency of new sites being created, this is the preferred approach. The content can also be accessed throughout the environment without having to worry about site collection boundaries, and it can be leveraged using out-of-the-box features. The downside to this approach is that there is a limit to the metadata you can associate with a site. Standard metadata related to the site includes the site's name, description, URL, and, to a lesser extent, the managed path used to configure the site collection. From these items you can infer keyword relevance, but there is no support for extended properties that could help correlate the site with categories, divisions, or other specific attributes.

How to leverage search

Most users are familiar with how to use the Search features to find content, but are not familiar with some of the capabilities that can help them pinpoint specific content or specific types of content. This section will provide an overview of how to leverage search to provide features that help users find results that are related only to sites.
Content classes

SharePoint Search includes an object classification system that can be used to identify specific types of items, as shown in the next table. It is stored in the index as a property of the item, making it available for all queries.

Content Class – Description
STS_Site – Site Collection objects
STS_Web – Subsite/Web objects
STS_list_[templatename] – List objects, where [templatename] is the name of the template, such as Announcements or DocumentLibrary
STS_listitem_[templatename] – List Item objects, where [templatename] is the name of the template, such as Announcements or DocumentLibrary
SPSPeople – User Profile objects (requires a User Profile Service Application)

The contentclass property can be included as part of an ad hoc search performed by a user, included in the search query within a customization, or, as we will see in the next section, used to provide a filter to a Search Scope.

Search Scopes

Search Scopes provide a way to filter down the entire search index. As the index grows and is filled with potentially similar information, it can be helpful to define Search Scopes that put a specific set of rules in place to reduce the initial index that the search query is executed against. This allows you to execute a search within a specific context. The rules can be set based on the specific location, specific property values, or the crawl source of the content. Search Scopes can be defined either centrally within the Search service application by an administrator or within a given Site Collection by a Site Collection administrator. If the scope is going to be used in multiple Site Collections, it should be defined in the Search service application. Once defined, it is available in the Search Scopes dropdown box for any ad hoc queries, within custom code, or within the Search Web Parts.

Defining the Site Directory Search Scope

To support dynamic discovery of the sites, we will configure a Search Scope that will look at just site collections and subsites. As we saw above, this will enable us to separate out the site objects from the rest of the content in the search index. This Search Scope will serve as the foundation for all of the solutions in this article. To create a custom Search Scope:

1. Navigate to the Search Service Application.
2. Click on the Search Scopes link on the Quick Launch menu under the Queries and Results heading.
3. Set the Title field to Site Directory.
4. Provide a Description.
5. Click on the OK button as shown in the following screenshot:
6. From the View Scopes page, click on the Add Rules link next to the new Search Scope.
7. For the Scope Rule Type, select the Property Query option.
8. For the Property Query, select the contentclass option.
9. Set the property value to STS_Site.
10. For the Behavior section, select the Include option.
11. From the Scope Properties page, select the New Rule link.
12. For the Scope Rule Type section, select the Property Query option.
13. For the Property Query, select the contentclass option.
14. Set the property value to STS_Web.
15. For the Behavior section, select the Include option.

The end result will be a Search Scope that includes all Site Collection and subsite entries. There will be no user-generated content included in the search results of this scope. After finishing the configuration of the rules, there will be a short delay before the scope is available for use. A scheduled job will need to compile the search scope changes.
Once compiled, the View Scopes page will list out the currently configured search scopes, their status, and how many items in the index match the rules within each search scope.

Enabling the Search Scope on a Site Collection

Once a Search Scope has been defined, you can associate it with the Site Collection(s) you would like to use it from. Associating the Search Scope with a Site Collection will allow the scope to be selected from within the Scopes dropdown on applicable search forms. This can be done by a Site Collection administrator one Site Collection at a time, or it can be set via a PowerShell script on all Site Collections. To associate the search scope manually:

1. Navigate to the Site Settings page.
2. Under the Site Collection Administration section, click on the Search Scopes link.
3. In the menu, select the Display Groups action.
4. Select the Search Dropdown item.
5. You can now select the Site Directory scope for display and adjust its position within the list.
6. Click on the OK button when complete.

Testing the Site Directory Search Scope

Once the scope has been associated with the Site Collection's search settings, you will be able to select the Site Directory scope and perform a search, as shown in the following screenshot: Any matching Site Collections or subsites will be displayed. As we can see from the results shown in the next screenshot, the ERP Upgrade project site collection comes back, as well as the Project Blog subsite.
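The contentclass values behind this scope can also be used directly in ad hoc keyword queries, without any scope being selected, which is a quick way to check what site objects the index currently contains. The following example queries are a sketch and assume the default contentclass property mapping is in place:

contentclass:STS_Site
contentclass:STS_Site upgrade

The first query returns all crawled Site Collection objects; the second narrows the results to Site Collections whose indexed metadata matches the keyword "upgrade" (which, assuming it has been crawled, would include the ERP Upgrade project site collection from the example above).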

Installing and Configuring Drupal

Packt
11 Jul 2012
7 min read
(For more resources on Drupal 7, see here.)

Installing Drupal

There are a number of different ways to install Drupal on a web server, but in this recipe we will focus on the standard, most common installation, which is to say, Drupal running on an Apache server with PHP and a MySQL database. In order to do this we will download the latest Drupal release and walk you through all of the steps required to get it up and running.

Getting ready
Before beginning, you need to ensure that you meet the following minimal requirements:
- Web hosting with FTP access (or file access through a control panel).
- A server running PHP 5.2.5+ (5.3+ recommended).
- A blank MySQL database and the login credentials to access it.
- Register globals set to off in the php.ini file. You may need to contact your hosting provider to do this.

How to do it...
1. The first step is to download the latest Drupal 7 release from the Drupal download page, which is located at http://drupal.org/project/drupal. This page displays the most recent and recommended releases for both Drupal 6 and 7. It also displays the most recent development versions, but be sure to download the recommended release (development versions are for developers who want to stay on the cutting edge).
2. When the file is downloaded, extract it and upload the files to your chosen web server document root directory on the server. This may take some time.
3. Configure your web server document root and server name (usually through a vhost directive).
4. When the upload is complete, open your browser and, in the address bar, type in the server name configured in the previous step to begin the installation wizard.
5. Select the Standard option and then select Save and continue.
6. The next screen that you will see is the language selection screen; there should only be one language available at this point. Ensure that English is selected before proceeding.
7. Following a requirements check, you will arrive at the database settings page. Enter your database name, username, and password in the required fields. Unless your database details have been supplied with a specific host name and port, you should leave the advanced options as they are and continue.
8. You will now see the Site configuration page. Under Site information, enter the name you would like to appear as the site's name. For Site e-mail address, enter an e-mail address. Under the SITE MAINTENANCE ACCOUNT box, enter a username for the admin user (also known as user 1), followed by an e-mail address and password.
9. In the Server settings box, select your country from the drop-down, followed by your local time zone.
10. Finally, in the Update notification box, ensure that both options are selected. Click on Save and continue to complete the installation. You will be presented with the congratulations page with a link to your new site.

How it works...
On the server requirements page, Drupal will carry out a number of tests. It is a requirement that PHP register globals is set to off or disabled. Register globals is a feature of PHP which allows global variables to be set from the contents of the Environment, GET, POST, Cookie, and Server variables. It can be a major security risk, as it enables potential hackers to overwrite important variables and gain unauthorized access. The Configure site page is where you specify the site name and e-mail addresses for the site and the admin user.
The admin e-mail address will be used to contact the administrator with notifications from the site, and the site e-mail address is used as the originating e-mail address when the site sends e-mails to users. You can change these settings later on the Site information page in the Configuration section. It's important to select the options to receive the site notifications so that you are aware when software updates are available for your site's core and contrib modules; important security updates are released from time to time.

There's more...
In this recipe we have seen a regular Drupal installation procedure. There are various different ways to install and configure Drupal. We will explore some of these alternatives in the following sections. We will also cover some of the potential pitfalls you may come across with the requirements page.

Uploading through a control panel
If your web-hosting provider gives you web access to your files through a control panel such as cPanel, you can save time by uploading the compressed Drupal installation package and running the unzip function on the file, if that functionality is provided. This will dramatically reduce the amount of time taken to perform the installation.

Auto-installers
There are other ways in which Drupal can be installed. Your hosting may come with an auto-installer such as Fantastico De Luxe or Softaculous. Both of these services provide a simple way to achieve the same results without the need to use FTP or to configure a database.

Database table prefixes
At the database setup screen there is an option to use a table prefix. Any prefix entered into the field will be added to the start of all table names in the database. This means that you could run multiple installations of Drupal, or possibly other CMSs, from the same database by setting a different prefix. This method, however, has implications for performance and maintenance.

Installing on a Windows environment
This recipe deals with installing Drupal on a Linux server. However, Drupal runs perfectly well on an IIS (Windows) server. Using Microsoft's WebMatrix software, it's easy to set up a Drupal site: http://www.microsoft.com/web/drupal

Alternative languages
Drupal supports many different languages. You can view and download the language packs at http://localize.drupal.org/download. You then need to upload the file to Drupal root/profiles/standard/translations. You will then see the option for that new language on the language selection page of the installation.

Verifying the requirements page
If all goes to plan, and the server is already configured correctly, the server requirements page will be skipped. However, you may come across problems in a few areas:
- Register globals: This should be set to off in the php.ini file. This is very important in securing your site. If you find that register globals is turned on, then you will need to consult your hosting provider's documentation on this feature in order to switch it off.
- Drupal will attempt to create the following folder: Drupal root/sites/default/files. If it fails, you may have to manually create this folder on the server and give it the permission 755.
- Drupal will attempt to create a settings.php file by copying the default.settings.php file. If Drupal has trouble doing this, copy the default.settings.php file in the Drupal root/sites/default/ directory and rename the copied file as settings.php. Give settings.php full write access (chmod 777).
After Drupal finishes the installation process, it will try to set the permission of this file to 444; you must check that this has been done, and manually set the file to 444 if it has not.

See also
See Installing Drupal distributions for more installation options using a preconfigured Drupal distribution. For more information about installing Drupal, see the installation guide at Drupal.org: http://drupal.org/documentation/install
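As a point of reference, the database credentials entered on the database settings page are written by the installer into settings.php as a PHP array. The following is a representative Drupal 7 entry; the database name, username, and password shown are placeholders, not values from this recipe:

$databases = array(
  'default' => array(
    'default' => array(
      'driver' => 'mysql',
      'database' => 'drupal_db',   // the blank database created beforehand
      'username' => 'db_user',
      'password' => 'db_pass',
      'host' => 'localhost',
      'prefix' => '',              // optional table prefix, as discussed above
    ),
  ),
);

Knowing what this block looks like makes it easier to spot problems when the installer cannot write to settings.php and the array has to be added by hand.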

Setting up the Basics for a Drupal Multilingual site: Languages and UI Translation

Packt
25 Jun 2012
10 min read
(For more resources on Drupal, see here.)

Getting up and running

Before we get started, we obviously need a Drupal 7 website to work on. This section gives you two options, namely, roll your own or install the demo.

Using your own site
You can use your own Drupal 7 site. It can be an existing site or one you create from scratch. If you are creating a brand new site and weren't planning on using a particular installation profile, you can get a head start by using the Localized Drupal Distribution install profile at drupal.org/project/l10n_install. It is probably obvious, but it's best to run the site on a development machine and not in a production environment. Once all the basic Drupal core modules are configured, you will also want to set up the following additional modules to get the most out of the exercises:
- Panels: A tool for creating pages with custom layouts
- Pathauto: Settings for creating path aliases automatically
- Views: A tool for creating custom pages and blocks

Using the demo site
If you'd prefer a jump-start, a full demo website can be created using a special install profile. Instructions for downloading and installing the demo website are included on the Drupal project page available at drupal.org/project/multilingual_book_demo. The demo site contains additional modules, including the modules listed previously as well as the following:
- Administration Menu: Toolbar for quick access to the site configuration
- Views Bulk Operations: Extra functionality for Views forms
- Views Slideshow: Slideshows of content coming from Views

These modules provide us with a starting point. As more modules are needed for particular exercises, they will be listed so you can add them.

Roles, users, and permissions
Although you might already have multiple users on your test site, for simplicity it will be assumed that you are logged in as the super admin (user ID 1).

Working with languages

If we want a multilingual site, the logical first step is to add more languages! In this section, we will add languages to our site, configure how our languages are detected, and set up ways to go between these languages.

Adding languages with the Locale module
Drupal has language support built into the core, but it's not fully turned on by default. If you go to your site right now and navigate to Configuration | Regional and language, you will see the Regional settings and Date and time configuration pages for setting the default country, time zone, and date/time formats: To get our languages hooked in, let's enable the core module, Locale. Now go back to Configuration | Regional and language to see more options: Click on Languages and you'll see we only have English in our list so far: Now let's add a language by clicking on the Add language link. You can add a predefined language such as German, or you can create a custom language. For our purposes, we will work with predefined languages. So choose a language and click on the Add language button. Drupal will then redirect you to the main language admin page and your new language will be added to the list. Now you can simply repeat the process for each language. In my case, I've added three new languages, namely, Arabic, German, and Polish: The overview table shows the language's name (English and native), its code, and its directionality. A language's direction can be Left to right (LTR) or Right to left (RTL), with most languages using the former.
'Right to left' just means that you start at the right side of the page and move towards the left when you are writing. RTL languages include Arabic, Hebrew, and Syriac, which are written in their own alphabets. You can choose which languages to enable, order them, and set the site default. Links are provided to edit and delete each language. English only has an edit link, since it is the system language and cannot be deleted, but English can be disabled if you use a non-English default. If we edit a language, we can modify all the information from the overview table except for the language's code, since we need that as a consistent reference string. Do not change the default language once you have started translating, or translations might break. Install String Translation from the Internationalization package (drupal.org/project/i18n), go to Configuration | Regional and language | Multilingual settings | Strings, select the Source language, and click on Save configuration. Do not change this setting once it's configured.

Detecting languages

We have our languages, so now what? If you click around your site, nothing looks different. That's because we are looking at the English version of the site and we haven't told Drupal how we want to deal with the other languages. We'll do that now. Navigate to Configuration | Regional and language | Languages | Detection and selection and you'll see we have a number of choices available to us: The Default detection method is enabled for us, but we can also enable the URL, Session, User, and Browser options. If you want a cookie-based option, check out the Language Cookie and Locale Cookie modules. Let's go over the core options in more detail.

URL
If you enable this method, users can navigate to URLs such as example.com/de/news or example.com/deutsch/news (when using the path prefix option) and example.de/news, deutschexample.com/news, or deutsch.example.com/news (when using the domain option). Configuring domains requires web server changes, but using path prefixes does not. This is a common configuration for multilingual sites, and one we'll use shortly. The language's path prefix can be changed when editing the language. If you want to use path-prefixed URLs, then you should decide on your path prefixes before translating content, as changing path prefixes might break links (unless you set up proper redirects). If desired, you can choose one language that does not have any path prefix. This is common for the site's default language. For example, if German is the default language and no path prefix is used, the news page would be accessed as example.com/news, whereas other languages would be accessed using a path prefix (for example, example.com/en/news).

Session
The Session option is available if you want to store a user's language preference inside their user session. It was actually proposed by some Drupal community members that this method be removed from the set of choices, as it caused a number of issues in other code. One reason you may not want to use this option is the possible inconsistency between the content and the URL language. For example, you could enable both the URL and Session methods and order them so that the Session method is first. If a user is at example.com/de and the session is set to French, the user will see French content even though the URL corresponds to German. My advice is to just skip this one, or, if you need it, at least make sure that it's ordered below the URL option.
User
Once the Locale module is enabled, users can specify their preferred language when they edit their account profile. If you enable the User method in the detection settings, the user's profile language will be checked when deciding what language to display. Note that the user profile language defaults to the site's default language.

Browser
Users can configure their browsers to specify which languages they prefer. If the Browser method is enabled, Drupal will check the browser's request to find out the language setting and use it for the language choice. This option may or may not be useful depending on your site audience.

Default
The default site language is configured on the Configuration | Regional and language | Languages settings page, and is used for the Default detection method. Although you can't disable this method, you can reorder it if you choose. But it makes the most sense to keep it at the bottom of the list and use it as the fallback language.

Detection method order
It is important to note that the detection method order is critical to how detection works. If you were to drag the Default method to the top of the list, then none of the other methods would be used and the site would only use the default language. Similarly, if you allow a user profile language and drag User to the top of the list, then the URL method would not matter even if it's enabled. Also, if URL detection is ordered below the Session, User, and Browser options, the user might see a UI language that does not match up with the URL language, which could be confusing. Make sure to think carefully about the order of these settings. If you use the URL method, it's likely you will want it first. The Default method should be last. The other detection method positions depend on your preference. When using path-prefixed URLs, if one language does not have a prefix, then detection for that language will work differently. For example, if the URL method is first, then no other detection methods will trigger for any URLs with no path prefix, such as example.com/news or example.com/about-us.

Our choice
For our purposes, let's stick with URL detection and use the path-prefix option, as this is the easiest to configure (it doesn't require extra domains). This choice will keep our URLs in sync with our interface language, which is also user- and SEO-friendly. Check Enabled for the URL method and press the Save settings button. Now click on Configure for that method and you'll see options for Path prefix and Domain. We'll use the default option, that is, Path prefix (for example, example.com/de). Don't panic on the next step. You won't see anything different in the UI until we finish our interface translation process later in the article. Now change the URL in your browser to include the path prefix for one of your languages. In my case, I'll try German and go to example.com/de. You should be able to use the path prefixes for each of your configured languages.

Switching between languages

Most likely you don't want your users to have to manually type in a different URL to switch between languages. Drupal core provides a language switcher block that you can put somewhere convenient for your users. To use the block, navigate to Structure | Blocks, find the Language switcher (User interface text) block, position it where you'd like, and save your block settings. The order of the languages in the block is based on the order configured at Configuration | Regional and language | Languages.
Once enabled, the language switcher block looks like the following screenshot: You can now easily switch between your site languages, and the language chosen is highlighted. The UI won't look different when switching until we finish the next section. Two alternatives to the core language switcher block are provided by the Language Switcher and Language Switcher Drop-down modules. Also, if you want country flag icons added next to each language, you can install the Language Icons module.
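If you also need language-aware links in your own code, rather than through the switcher block, the sketch below shows one way to build them in Drupal 7. This snippet is illustrative and not part of the original recipe: language_list() and url() are core Drupal 7 functions, while the function name and the 'news' path are placeholders.

<?php
// Illustrative Drupal 7 snippet: build one link per enabled language
// for a given path. url() applies each language's path prefix (or
// domain) automatically when passed a language object.
function mymodule_language_links($path = 'news') {
  $links = array();
  foreach (language_list() as $langcode => $language) {
    if (!empty($language->enabled)) {
      $links[$langcode] = url($path, array('language' => $language));
    }
  }
  return $links;
}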
Supporting hypervisors by OpenNebula

Packt
25 May 2012
7 min read
(For more resources on Open Source, see here.)

A host is a server that has the ability to run virtual machines using a special software component called a hypervisor, managed by the OpenNebula frontend. The hosts do not all need to have a homogeneous configuration; it is possible to use different hypervisors on different GNU/Linux distributions in a single OpenNebula cluster. Using different hypervisors in your infrastructure is not just a technical exercise but assures you greater flexibility and reliability. A few examples where having multiple hypervisors would prove to be beneficial are as follows:

- A bug in the current release of hypervisor A does not permit the installation of a virtual machine with a particular legacy OS (let's say, for example, Windows 2000 Service Pack 4), but you can execute it with hypervisor B without any problem.
- You have a production infrastructure that is running a closed source, free-to-use hypervisor, and during the next year the software house developing that hypervisor will request a license payment or declare bankruptcy due to economic crisis.

The current version of OpenNebula will give you great flexibility regarding hypervisor usage, since it natively supports KVM/Xen (which are open source) and VMware ESXi. In the future it will probably support both VirtualBox (Oracle) and Hyper-V (Microsoft).

Configuring hosts

The first thing to do before starting with the installation of a particular hypervisor on a host is to perform some general configuration steps. They are as follows:

- Create a dedicated oneadmin UNIX account, which should have sudo privileges for executing particular tasks (for example, iptables/ebtables and the network hooks that we have configured).
- The frontend and hosts' hostnames should be resolved by a local DNS or a shared /etc/hosts file.
- The oneadmin user on the frontend should be able to connect remotely through SSH to the oneadmin user on the hosts without a password.
- Configure the shared network bridge that will be used by VMs to reach the physical network.

The oneadmin account and passwordless login

Every host should have a oneadmin UNIX account that will be used by the OpenNebula frontend to connect and execute commands. If you did not create it during the operating system install, create a oneadmin user on the host by using the following command:

youruser@host1 $ sudo adduser oneadmin

You can configure any password you like (even blank) because we are going to set up a passwordless login from the frontend:

oneadmin@front-end $ ssh-copy-id oneadmin@host1

Now if you connect from the oneadmin account on the frontend to the oneadmin account of the host, you should get the shell prompt without entering any password, by using the following command:

oneadmin@front-end $ ssh oneadmin@host1

Uniformity of oneadmin UID number

Later, we will learn about the possible storage solutions available with OpenNebula. However, keep in mind that if we are going to set up a shared storage, we need to make sure that the UID number of the oneadmin user is homogeneous between the frontend and every other host. In other words, check with the id command that the oneadmin UID is the same both on the frontend and the hosts.
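To verify that the UID matches everywhere, a quick check such as the following can help. This is an illustrative sketch assuming the host names used later in this recipe (kvm01, xen01, esx01):

# Run as oneadmin on the frontend: print the oneadmin UID locally
# and on every host; all of the numbers should match.
id -u oneadmin
for h in kvm01 xen01 esx01; do
  ssh oneadmin@"$h" id -u oneadmin
done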
Verifying the SSH host fingerprints

The first time you connect to a remote SSH server from a particular host, the SSH client will show you the fingerprint of the remote server's key and ask for your permission to continue, with the following message:

The authenticity of host host01 (192.168.254.2) can't be established.
RSA key fingerprint is 5a:65:0f:6f:21:bb:fd:6a:4a:68:cd:72:58:5c:fb:9f.
Are you sure you want to continue connecting (yes/no)?

Knowing the fingerprint of the remote SSH key and saving it to the local SSH client fingerprint cache (saved in ~/.ssh/known_hosts) should be good enough to prevent man-in-the-middle attacks. For this reason, you need to connect from the oneadmin user on the frontend to every host in order to save the fingerprints of the remote hosts in the oneadmin known_hosts file for the first time. Not doing this will prevent OpenNebula from connecting to the remote hosts.

In large environments, this requirement may be a slow-down when configuring new hosts. However, it is possible to bypass this check by instructing the SSH client used by OpenNebula not to verify the remote SSH key, adding the following to ~/.ssh/config:

Host *
    StrictHostKeyChecking no

If you do not have a local DNS (or you cannot/do not want to set it up), you can manually manage the /etc/hosts file on every host, using the following IP addresses:

127.0.0.1 localhost
192.168.66.90 on-front
192.168.66.97 kvm01
192.168.66.98 xen01
192.168.66.99 esx01

Now you should be able to remotely connect from one node to another with your hostname, using the following command:

$ ssh oneadmin@kvm01

Configuring a simple DNS with dnsmasq

If you do not have a local DNS and manually managing the plain hosts file on every host does not excite you, you can try to install and configure dnsmasq. It is a lightweight, easy-to-configure DNS forwarder (optionally DHCP and TFTP can be provided within it) that serves a small-scale network well. The OpenNebula frontend may be a good place to install it. For an Ubuntu/Debian installation use the following command:

$ sudo apt-get install dnsmasq

The default configuration should be fine. You just need to make sure that the /etc/resolv.conf configuration details look similar to the following:

# dnsmasq
nameserver 127.0.0.1
# another local DNS
nameserver 192.168.0.1
# ISP or public DNS
nameserver 208.67.220.220
nameserver 208.67.222.222

The /etc/hosts configuration details will look similar to the following:

127.0.0.1 localhost
192.168.66.90 on-front
192.168.66.97 kvm01
192.168.66.98 xen01
192.168.66.99 esx01

Configure any other hostname in the hosts file on the frontend running dnsmasq. Configure the /etc/resolv.conf configuration details on the other hosts using the following code:

# ip where dnsmasq is installed
nameserver 192.168.0.2

Now you should be able to remotely connect from one node to another using your plain hostname, with the following command:

$ ssh oneadmin@kvm01

When you add new hosts, simply add them to /etc/hosts on the frontend and they will automatically work on every other host, thanks to dnsmasq.

Configuring sudo

To give administrative privileges to the oneadmin account on the hosts, add it to the sudo or admin group depending on your /etc/sudoers configuration, using the following code:

# /etc/sudoers
Defaults env_reset
root ALL=(ALL) ALL
%sudo ALL=NOPASSWD: ALL

With this simple sudo configuration, every user in the sudo group can execute any command with root privileges, without being asked for the user password before each command. Now add the oneadmin user to the sudo group with the following command:

$ sudo adduser oneadmin sudo

Giving full administrative privileges to the oneadmin account might be considered inappropriate by most security-focused people. However, I can assure you that if you are taking your first steps with OpenNebula, having full administrative privileges could save some headaches. This is a suggested configuration, but it is not required to run OpenNebula.
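If blanket sudo rights feel too broad for your environment, a narrower sudoers entry is one possible compromise. The command list below is only an illustration based on the iptables/ebtables tasks mentioned earlier; the exact set of commands depends on the drivers and hooks you actually use:

# Illustrative /etc/sudoers.d/oneadmin: allow only specific network
# commands instead of blanket root access. Paths may differ per distro.
oneadmin ALL=(ALL) NOPASSWD: /sbin/iptables, /sbin/ebtables, /sbin/ip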
Configuring network bridges

Every host should have its bridges configured with the same name. Check the following /etc/network/interfaces code as an example:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
iface eth0 inet manual

auto lan0
iface lan0 inet static
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    address 192.168.66.97
    netmask 255.255.255.0
    gateway 192.168.66.1
    dns-nameservers 192.168.66.1

You can have as many bridges as you need, bound or not bound to a physical network. By eliminating the bridge_ports parameter you get a purely virtual network for your VMs, but remember that without a physical network, different VMs on different hosts cannot communicate with each other.
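As an illustration of that last point, a host-only bridge can be declared by omitting physical ports altogether. The stanza below is a sketch to append to /etc/network/interfaces; the bridge name lan1 and the addressing are placeholder assumptions:

# A bridge with no physical ports (bridge_ports none): VMs attached to
# it share a purely virtual network local to this host.
auto lan1
iface lan1 inet static
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    address 192.168.100.1
    netmask 255.255.255.0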
Magento: Designs and Themes

Packt
19 May 2012
13 min read
(For more resources on e-Commerce, see here.)

The Magento theme structure

The same holds true for themes. You can specify the look and feel of your stores at the Global, Website, or Store levels (themes can be applied to individual store views relating to a store) by assigning a specific theme. In Magento, a group of related themes is referred to as a design package. Design packages contain files that control various functional elements that are common among the themes within the package. By default, Magento Community installs two design packages:

- Base package: A special package that contains all the default elements for a Magento installation (we will discuss this in more detail in a moment)
- Default package: This contains the layout elements of the default store (look and feel)

Themes within a design package contain the various elements that determine the look and feel of the site: layout files, templates, CSS, images, and JavaScript. Each design package must have at least one default theme, but can contain other theme variants. You can include any number of theme variants within a design package and use them, for example, for seasonal purposes (that is, holidays, back-to-school, and so on). The following image shows the relationship between design packages and themes:

A design package and theme can be specified at the Global, Website, or Store levels. Most Magento users will use the same design package for a website and all descendant stores. Usually, related stores within a website business share very similar functional elements, as well as similar style features. This is not mandatory; you are free to specify a completely different design package and theme for each store view within your website hierarchy.

The theme structure

Magento divides themes into two groups of files: templating and skin. Templating files contain the HTML, PHTML, and PHP code that determines the functional aspects of the pages in your Magento website. Skin files are made up of CSS, image, and JavaScript files that give your site its outward design. Ingeniously, Magento further separates these areas by putting them into different directories of your installation:

- Templating files are stored in the app/design directory, where the extra security of this section protects the functional parts of your site design
- Skin files are stored within the skin directory (at the root level of the installation), and can be granted a higher permission level, as these are the files that are delivered to a visitor's browser for rendering the page

Templating hierarchy

Frontend theme template files (the files used to produce your store's pages) are stored within three subdirectories:

- layout: This contains the XML files that define the various areas of a page. These files also contain meta and encoding information.
- template: This stores the PHTML files (HTML files that contain PHP code and are processed by the PHP server engine) used for constructing the visual structure of the page.
- locale: Add files within this directory to provide additional language translations for site elements, such as labels and messages.

Magento has a distinct path for storing the templating files used for your website: app/design/frontend/[Design Package]/[Theme]/.
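Putting those pieces together, the templating files for a single theme might be laid out as follows; the package and theme names are placeholders:

app/design/frontend/[Design Package]/[Theme]/
    layout/      XML files defining page areas, meta and encoding information
    template/    PHTML files building the visual structure of each page
    locale/      additional translation files for labels and messages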
Skin hierarchy

The skin files for a given design package and theme are subdivided into the following:

- css: This stores the CSS stylesheets and, in some cases, related image files that are called by the CSS files (this is not an acceptable convention, but I have seen some designers do this)
- images: This contains the JPG, PNG, and GIF files used in the display of your site
- js: This contains the JavaScript files that are specific to a theme (JavaScript files used for core functionality are kept in the js directory at the root level)

The path for the frontend skin files is: skin/frontend/[Design Package]/[Theme]/.

The concept of theme fallback

A very important and brilliant aspect of Magento is what is called the Magento theme fallback model. Basically, this concept means that when building a page, Magento first looks to the assigned theme for a store. If the theme is missing any necessary templating or skin files, Magento then looks to the required default theme within the assigned design package. If the file is not found there, Magento finally looks into the default theme of the Base design package. For this reason, the Base design package is never to be altered or removed; it is the failsafe for your site. The following flowchart outlines the process by which Magento finds the necessary files for fulfilling a page rendering request.

This model also gives designers some tremendous assistance. When a new theme is created, it only has to contain those elements that are different from what is provided by the Base package. For example, if all parts of a desired site design are similar to the Base theme except for the graphic appearance of the site, a new theme can be created simply by adding new CSS and image files to the new theme (stored within the skin directory). Any new CSS files will need to be included in the local.xml file for your theme (we will discuss the local.xml file later in this article; a minimal sketch follows at the end of this section). If the design requires different layout structures, only the changed layout and template files need to be created; everything that remains the same need not be duplicated.

While previous versions of Magento were built with fallback mechanisms, only in the current versions has this become a true and complete fallback. In the earlier versions, the fallback was to the default theme within a package, not to the Base design package. Therefore, each default theme within a package had to contain all the files of the Base package. If Magento base files were updated in subsequent software versions, these changes had to be redistributed manually to each additional design package within a Magento installation. With Magento CE 1.4 and above, upgrades to the Base package automatically enhance all design packages. If you are careful not to alter the Base design package, then future upgrades to the core functionality of Magento will not break your installation. You will have access to the new improvements based on your custom design package or theme, making your installation virtually upgrade-proof. For the same reason, never install a custom theme inside the Base design package.
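As a preview of that local.xml discussion, the following is a minimal sketch of how a theme-specific stylesheet might be registered in Magento 1.x layout XML. The file lives in your theme's layout directory, and css/custom.css is a placeholder name, not a file from this article:

<?xml version="1.0"?>
<!-- app/design/frontend/[Design Package]/[Theme]/layout/local.xml -->
<layout version="0.1.0">
    <default>
        <!-- Add the theme's own stylesheet to the page head. -->
        <reference name="head">
            <action method="addCss">
                <stylesheet>css/custom.css</stylesheet>
            </action>
        </reference>
    </default>
</layout>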
Default installation design packages and themes

In a new, clean Magento Community installation, you are provided with the following design packages and themes:

Depending on your needs, you could add additional custom design packages, or custom themes within the default design package:

- If you're going to install a group of related themes, you should probably create a new design package, containing a default theme as your fallback theme
- On the other hand, if you're using only one or two themes based on the features of the default design package, you can install the themes within the default design package hierarchy

I like to make sure that whatever I customize can be undone, if necessary. It's difficult for me to make changes to the core, installed files; I prefer to work on duplicate copies, preserving the originals in case I need to revert back. After re-installing Magento for the umpteenth time because I had altered too many core files, I learned the hard way!

As Magento Community installs a basic variety of good theme variants from which to start, the first thing you should do before adding or altering theme components is to duplicate the default design package files, renaming the duplicate to an appropriate name, such as a description of your installation (for example, Acme or Sports); a shell sketch of this appears at the end of this section. Any changes you make within this new design package will not alter the originally installed components, thereby allowing you to revert any or all of your themes to the originals. Your new theme hierarchy might now look like this:

When creating new packages, you also need to create new folders in the /skin directory to match your directory hierarchy in the /app/design directory. Likewise, if you decide to use one of the installed default themes as the basis for designing a new custom theme, duplicate and rename the theme to preserve the original as your fallback.

The new Blank theme

A fairly recent default installed theme is Blank. If your customization of your Magento stores is primarily one of colors and graphics, this is not a bad theme to use as a starting point. As the name implies, it has a pretty stark layout, as shown in the following screenshot. However, it does give you all the basic structures and components. Using images and CSS styles, you can go a long way towards creating a good-looking, functional website, as shown in the next screenshot for www.aviationlogs.com:

When duplicating any design package or theme, don't forget that each of them is defined by directories under /app/design/frontend/ and /skin/frontend/.

Installing third-party themes

In most cases, Magento users who are beginners will explore hundreds of the available Magento themes created by third-party designers. There are many free ones available, but most are sold by dedicated designers.

Shopping for themes

One of the great good/bad aspects of Magento is the third-party themes. The architecture of the Magento theme model gives knowledgeable theme designers tremendous abilities to construct themes that are virtually upgrade-proof, while possessing powerful enhancements. Unfortunately, not all designers have either upgraded older themes properly or created new themes fully honoring the fallback model. If the older fallback model is still used for current Magento versions, upgrades to the Base package could adversely affect your theme. Therefore, as you review third-party themes, take time to investigate how the designers construct their themes. Most provide some type of site demo.
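The duplication advice above boils down to two copy operations. This is an illustrative shell sketch, run from the Magento root, with acme standing in for your own package name:

# Copy the default design package (templating) and its skin to a new
# package named "acme"; the copies become your editable, fallback-safe set.
cp -R app/design/frontend/default app/design/frontend/acme
cp -R skin/frontend/default skin/frontend/acme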
As you learn more about using themes, you'll find it easier to analyze third-party themes. Apart from a few free themes offered through the Magento website, most of them require that you install the necessary files manually, by FTP or SFTP to your server. Every third-party theme I have ever used has included some instructions on how to install the files to your server. However, allow me to offer the following helpful guidelines:

- When using FTP/SFTP to upload theme files, use the merge function so that only additional files are added to each directory, instead of replacing entire directories. If you're not sure whether your FTP client provides merge capabilities, or not sure how to configure it for merge, you will need to open each directory in the theme and upload the individual files to the corresponding directories on your server.
- If you have set your CSS and JavaScript files to merge, under System | Configuration | Developer, you should turn merging off while installing and modifying your theme.
- After uploading themes or any component files (for example, templates, CSS, or images), clear the Magento caches under System | Cache Management in your backend.
- Disable your Magento cache while you install and configure themes. While not critical, it will allow you to see changes immediately instead of having to constantly clear the Magento cache. You can disable the cache under System | Cache Management in the backend.
- If you wish to make any changes to a theme's individual file, make a duplicate of the original file before making your changes. That way, if something goes awry, you can always re-install the duplicated original.
- If you have followed the earlier advice to duplicate the Default design package before customizing, instructions to install files within /app/design/frontend/default/ and /skin/frontend/default/ should be interpreted as /app/design/frontend/[your design package name]/ and /skin/frontend/[your design package name]/, respectively. As most new Magento users don't duplicate the Default design package, it's common for theme designers to instruct users to install new themes and files within the Default design package. (We know better now, don't we?)

Creating variants

Let's assume that we have created a new design package called outdoor_package. Within this design package, we duplicate the Blank theme and call it outdoor_theme. Our new design package file hierarchy, in both /app/design/ and /skin/frontend/, might resemble the following:

app/
  design/
    frontend/
      default/
        blank/
        modern/
        iphone/
      outdoor_package/
        outdoor_theme/
skin/
  frontend/
    default/
      blank/
      blue/
      french/
      german/
      modern/
      iphone/
    outdoor_package/
      outdoor_theme/

However, let's also take one more customization step here. Since Magento separates the template structure from the skin structure—the layout from the design, so to speak—we could create variations of a theme that are controlled simply by CSS and images, by creating more than one skin. We might want to have our English language store in a blue color scheme, but our French language store in a green color scheme. We could take the outdoor_theme skin directory and duplicate it, renaming the copies for the new colors:

app/
  design/
    frontend/
      default/
        blank/
        modern/
        iphone/
      outdoor_package/
        outdoor_theme/
skin/
  frontend/
    default/
      blank/
      blue/
      french/
      german/
      modern/
      iphone/
    outdoor_package/
      outdoor_blue/
      outdoor_green/

Before we continue, let's go over something which is especially relevant to what we just created.
For our outdoor theme, we created two skin variants: blue and green. However, what if the difference between the two is only one or two files? If we make changes to other files that would affect both color schemes, but which are otherwise the same for both, this would create more work to keep both color variations in sync, right? Remember, with the Magento fallback method, if your site calls on a file, it first looks into the assigned theme, then the default theme within the same design package, and, finally, within the Base design package. Therefore, in this example, you could use the default skin, under /skin/frontend/outdoor_package/default/, to contain all files common to both blue and green. Only include those files that will forever remain different within their respective skin directories.

Assigning themes

As mentioned earlier, you can assign design packages and themes at any level of the GWS hierarchy. As with any configuration, the choice depends on the level at which you wish to assign control. Global configurations affect the entire Magento installation. Website-level choices set the default for all subordinate store views, which can also have their own theme specifics, if desired. Let's walk through the process of assigning a custom design package and themes. For the sake of this exercise, let's continue with our Outdoor theme, as described earlier. Refer to the following screenshot:

We're going to now assign our Outdoor theme to an Outdoor website and its store views. Our first task is to assign the design package and theme to the website as the default for all subordinate store views:

1. Go to System | Configuration | General | Design in your Magento backend.
2. In the Current Configuration Scope drop-down menu, choose Outdoor Products.
3. As shown in the following screenshot, enter the name of your design package, template, layout, and skin. You will have to uncheck the boxes labeled Use Default beside each field you wish to use.
4. Click on the Save Config button.

The reason you enter default in the fields, as shown in the previous screenshot, is to provide the fallback protection I described earlier. Magento needs to know where to look for any files that may be missing from your theme files.
Microsoft Silverlight 5: Working with Services

Packt
23 Apr 2012
11 min read
(For more resources on Silverlight, see here.)

Introduction

Looking at the namespaces and classes in the Silverlight assemblies, it's easy to see that there are no ADO.NET-related classes available in Silverlight. Silverlight does not contain a DataReader, a DataSet, or any option to connect to a database directly. Thus, it's not possible to simply define a connection string for a database and let Silverlight applications connect with that database directly. The solution adds a layer on top of the database in the form of services. The services that talk directly to a database (or, more preferably, to a business and data access layer) can expose the data so that Silverlight can work with it. However, the data that is exposed in this way does not always have to come from a database. It can come from a third-party service, from reading a file, or be the result of an intensive calculation executed on the server.

Silverlight has a wide range of options to connect with services. This is important, as it's the main way of getting data into our applications. In this article, we'll look at the concepts of connecting with several types of services and external data. We'll start our journey by looking at how Silverlight connects and works with a regular service. We'll see the concepts that we use here recur for other types of service communications as well. One of these concepts is cross-domain service access. In other words, this means accessing a service on a domain that is different from the one where the Silverlight application is hosted. We'll see why Microsoft has implemented cross-domain restrictions in Silverlight and what we need to do to access externally hosted services.

Next, we'll talk about working with the Windows Azure Platform. More specifically, we'll talk about how we can get our Silverlight application to get data from a SQL Azure database, how to communicate with a service in the cloud, and even how to host the Silverlight application in the cloud, using a hosted service or serving it from Azure Storage. Finally, we'll finish this article by looking at socket communication. This type of communication is rare, and chances are that you'll never have to use it. However, if your application needs the fastest possible access to data, sockets may provide the answer.

Connecting and reading from a standardized service

Applies to Silverlight 3, 4 and 5

If we need data inside a Silverlight application, chances are that this data resides in a database or another data store on the server. Silverlight is a client-side technology, so when we need to connect to data sources, we need to rely on services. Silverlight has a broad spectrum of services to which it can connect. In this recipe, we'll look at the concepts of connecting with services, which are usually very similar for all types of services Silverlight can connect with. We'll start by creating an ASMX web service—in other words, a regular web service. We'll then connect to this service from the Silverlight application, invoke it, and read its response.

Getting ready

In this recipe, we'll build the application from scratch. However, the completed code for this recipe can be found in the Chapter07/SilverlightJackpot_Read_Completed folder in the code bundle that is available on the Packt website.

How to do it...

We'll start to explore the usage of services with Silverlight using the following scenario.
Imagine we are building a small game application in which a unique code belonging to a user needs to be checked to find out whether or not it is a winning code for some online lottery. The collection of winning codes is present on the server, perhaps in a database or an XML file. We'll create and invoke a service that will allow us to validate the user's code against the collection on the server. The following are the steps we need to follow:

1. We'll build this application from scratch. Our first step is creating a new Silverlight application called SilverlightJackpot. As always, let Visual Studio create a hosting website for the Silverlight client by selecting the Host the Silverlight application in a new Web site checkbox in the New Silverlight Application dialog box. This will ensure that we have a website created for us, in which we can create the service as well.

2. We need to start by creating a service. For the sake of simplicity, we'll create a basic ASMX web service. To do so, right-click on the project node in the SilverlightJackpot.Web project and select Add | New Item... in the menu. In the Add New Item dialog, select the Web Service item. We'll call the new service JackpotService. Visual Studio creates an ASMX file (JackpotService.asmx) and a code-behind file (JackpotService.asmx.cs).

3. To keep things simple, we'll mock the data retrieval by hardcoding the winning numbers. We'll do so by creating a new class called CodesRepository.cs in the web project. This class returns a list of winning codes. In real-world scenarios, this code would go out to a database and get the list of winning codes from there. The code in this class is very easy. The following is the code for this class:

public class CodesRepository
{
    private List<string> winningCodes;

    public CodesRepository()
    {
        FillWinningCodes();
    }

    private void FillWinningCodes()
    {
        if (winningCodes == null)
        {
            winningCodes = new List<string>();
            winningCodes.Add("12345abc");
            winningCodes.Add("azertyse");
            winningCodes.Add("abcdefgh");
            winningCodes.Add("helloall");
            winningCodes.Add("ohnice11");
            winningCodes.Add("yesigot1");
            winningCodes.Add("superwin");
        }
    }

    public List<string> WinningCodes
    {
        get { return winningCodes; }
    }
}

4. At this point, we need only one method in our JackpotService. This method should accept the code sent from the Silverlight application, check it against the list of winning codes, and return whether or not the user is lucky enough to have a winning code. Only the methods that are marked with the WebMethod attribute are made available over the service. The following is the code for our service:

[WebService(Namespace = "http://tempuri.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[System.ComponentModel.ToolboxItem(false)]
public class JackpotService : System.Web.Services.WebService
{
    List<string> winningCodes;

    public JackpotService()
    {
        winningCodes = new CodesRepository().WinningCodes;
    }

    [WebMethod]
    public bool IsWinningCode(string code)
    {
        if (winningCodes.Contains(code))
            return true;
        return false;
    }
}

5. Build the solution at this point to ensure that our service will compile and can be connected to from the client side.

6. Now that the service is ready and waiting to be invoked, let's focus on the Silverlight application. To make the service known to our application, we need to add a reference to it. This is done by right-clicking on the SilverlightJackpot project node and selecting the Add Service Reference... item. In the dialog that appears, we have the option to enter the address of the service ourselves.
However, we can click on the Discover button, as the service lives in the same solution as the Silverlight application. Visual Studio will search the solution for the available services. If there are no errors, our freshly created service should show up in the list. Select it and rename the Namespace: as JackpotService, as shown in the following screenshot. Visual Studio will now create a proxy class:

7. The UI for the application is kept quite simple. An image of the UI can be seen a little further ahead. It contains a TextBox, where the user can enter a code, a Button that will invoke a check, and a TextBlock that will display the result. This can be seen in the following code:

<StackPanel>
    <TextBox x:Name="CodeTextBox" Width="100" Height="20">
    </TextBox>
    <Button x:Name="CheckForWinButton" Content="Check if I'm a winner!"
            Click="CheckForWinButton_Click">
    </Button>
    <TextBlock x:Name="ResultTextBlock">
    </TextBlock>
</StackPanel>

8. In the Click event handler, we'll create an instance of the proxy class that was created by Visual Studio, as shown in the following code:

private void CheckForWinButton_Click(object sender, RoutedEventArgs e)
{
    JackpotService.JackpotServiceSoapClient client =
        new SilverlightJackpot.JackpotService.JackpotServiceSoapClient();
}

9. All service communications in Silverlight happen asynchronously. Therefore, we need to provide a callback method that will be invoked when the service returns:

client.IsWinningCodeCompleted +=
    new EventHandler<SilverlightJackpot.JackpotService.IsWinningCodeCompletedEventArgs>
        (client_IsWinningCodeCompleted);

10. To actually invoke the service, we need to call the IsWinningCodeAsync method, as shown in the following line of code. This method will make the actual call to the service. We pass in the value that the user entered:

client.IsWinningCodeAsync(CodeTextBox.Text);

11. Finally, in the callback method, we can work with the result of the service via the Result property of the IsWinningCodeCompletedEventArgs instance. Based on the value, we display a different message, as shown in the following code:

void client_IsWinningCodeCompleted(object sender,
    SilverlightJackpot.JackpotService.IsWinningCodeCompletedEventArgs e)
{
    bool result = e.Result;
    if (result)
        ResultTextBlock.Text = "You are a winner! Enter your data below and we will contact you!";
    else
        ResultTextBlock.Text = "You lose... Better luck next time!";
}

We now have a fully working Silverlight application that uses a service for its data needs. The following screenshot shows the result from entering a valid code:

How it works...

As it stands, the current version of Silverlight does not have support for using a local database. Silverlight thus needs to rely on external services for getting external data. Even if we had local database support, we would still need to use services in many scenarios. The sample used in this recipe is a good example of data that would need to reside in a secure location (meaning on the server). In any case, we should never store the winning codes in a local database that would be downloaded to the client side.

Silverlight has the necessary plumbing on board to connect with the most common types of services. Services such as ASMX, WCF, REST, RSS, and so on don't pose a problem for Silverlight. While the implementation of connecting with different types of services differs, the concepts are similar. In this recipe, we used a plain old web service. Only the methods that are attributed with the WebMethodAttribute are made available over the service.
This means that even if we create a public method on the service, it won't be available to clients if it's not marked as a WebMethod. In this case, we only create a single method called IsWinningCode, which retrieves a list of winning codes from a class called CodesRepository. In real-world applications, this data could be read from a database or an XML file. Thus, this service is the entry point to the data.

For Silverlight to work with the service, we need to add a reference to it. When doing so, Visual Studio will create a proxy class. Visual Studio can do this for us because the service exposes a Web Service Description Language (WSDL) file. This file contains an overview of the methods supported by the service. A proxy can be considered a copy of the server-side service class, but without the implementations. Instead, each copied method contains a call to the actual service method. The proxy creation process carried out by Visual Studio is the same as adding a service reference in a regular .NET application. However, invoking the service is somewhat different. All communication with services in Silverlight is carried out asynchronously. If this weren't the case, Silverlight would have to wait for the service to return its result. In the meantime, the UI thread would be blocked and no interaction with the rest of the application would be possible.

To support the asynchronous service call inside the proxy, the IsWinningCodeAsync method as well as the IsWinningCodeCompleted event is generated. The IsWinningCodeAsync method is used to make the actual call to the service. To get access to the results of a service call, we need to define a callback method. This is where the IsWinningCodeCompleted event comes in. Using this event, we define which method should be called when the service returns (in our case, the client_IsWinningCodeCompleted method). Inside this method, we have access to the results through the Result property, which is always of the same type as the return type of the service method.

See also

Apart from reading data, we also have to persist data. In the next recipe, Persisting data using a standardized service, we'll do exactly that.
Setting up a BizTalk Server Environment

Packt
09 Apr 2012
18 min read
Gathering requirements by asking the right questions

Although this is not an exact recipe, asking questions to obtain requirements for your BizTalk environment is important. Having a clear view and understanding of the requirements enables you to deploy a BizTalk environment that meets the customer's expectations. What are the right questions, you may ask yourself? Well, there is quite a large area you need to cover with questions. These questions will be around the following topics:

- BizTalk workloads (functional)
- Non-functional requirements (high availability, scalability, and so on)
- Licensing (software)
- Hardware
- Virtualization
- Development, Test, Acceptance, and Production (DTAP) environment
- Tracking/Tracing
- Hosting
- Security

Getting ready

Organize the sessions and/or workshop(s) to discuss the BizTalk architecture (environment), functionality, and non-functional requirements, where you do a series of interviews with the appropriate stakeholders. This way you will be able to retrieve the necessary requirements and information for a BizTalk environment. You will need to focus on business first and IT later. You will notice that each business will have a different set of requirements on integration of data and processes. Some of these are listed as follows:

- The business is able to access information from anywhere, at any time
- Present the proper information to the proper people
- Have the necessary information available when needed
- Manage knowledge efficiently and be able to share it with the business
- Change the information when needed
- Automate business processes that are error-prone
- Automate business processes to reduce the processing time of orders, invoices, and so on

Regarding the business requirements, BizTalk will have certain workloads, and with the business you determine whether you want BizTalk to aid in automating processes, exchanging information with partners, maintaining business rules, providing visibility of physical events, and/or integrating with different systems. One important factor to reckon with when bringing BizTalk into an organization is the risk associated with transitioning to its platform. This risk can be of a technical, operational, political, or financial nature. BizTalk solutions have to operate correctly, meet the business requirements, be accepted by stakeholders within the organization, and should not be too expensive.

With IT, you focus more on the technical side of the BizTalk environment, such as, "What messages in size, format, and encoding are sent to the BizTalk system, and what does it need to output?" You should consider security around it when information going to or coming from trading partners is confidential; encryption and decryption of data can come into play, as can questions such as "What automated processes need to interact with internal and external systems?" or "How are you going to monitor messages that are going in and out?" Support needs to be set up properly to keep BizTalk and its solutions healthy. Solutions need to be developed and tested, preferably using different environments such as test and acceptance. For that, you will need a deployment process agreed with IT. These are factors to reckon with that need to be addressed when interviewing or talking to IT stakeholders within the organization.

How to do it…

1. Categorize your stakeholders into two categories—business and IT.
2. Create a communication plan and a list of questions related to the areas mentioned earlier.
With the list of questions, you can assign each question to a person you think can answer it. This way you ask the right questions to the right people. The following shows a sample of roles belonging to business and/or IT. It could be that you identify more roles depending on your situation:

- Business: CEO, CIO, Security Officer, Business Analyst, Enterprise Architect, and Solution Architect
- IT: IT Manager, Enterprise Architect, Solution Architect, System/Application Architect, System Analyst, Developer, System Engineer, and DBA

Having clarified which roles belong to business, IT, or both, you will then need a list of questions, each assigned to the appropriate role. You can find an example list of questions, with the roles they belong to, as follows:

- Will BizTalk integrate with systems in the enterprise? Which consumers and host systems will it integrate with? (Enterprise Architect, Solution Architect)
- What are the applicable workloads? (Enterprise Architect)
- Is BizTalk going to be strategic for integration with internal/external systems? (CEO, CIO, Enterprise Architect, and Business Analyst)
- Number of messages a day/hour (Enterprise Architect)
- What are the candidate processes to automate with BizTalk? (Business Analyst, Solution Architect)
- What communication protocols are required? (Enterprise Architect, Solution Architect)
- Choice of Microsoft platform: Operating System, SQL Server database (Enterprise Architect, Security Officer, Solution Architect, System Engineer, and DBA)
- Encryption algorithm for data (Enterprise Architect, Security Officer, Solution Architect, and System Engineer)
- Is Secure Socket Layer required for communication? (Enterprise Architect, Security Officer, Solution Architect, and System Engineer)
- What kind of certificate store is there? (Enterprise Architect, Security Officer, Solution Architect, and System Engineer)
- Is the support for BizTalk going to be outsourced? (CEO, IT Manager)

There's more…

The best approach to gathering the requirements is to view it as a project or a part of a project. You can use a methodology such as PRINCE2.

PRINCE2

Projects in Controlled Environments (PRINCE) is a project management method. It covers the management, control, and organization of a project. PRINCE2 is the second major release of it. More information is available at http://www.prince2.com/.

Microsoft BizTalk Server website

The Microsoft BizTalk Server website provides a lot of information. In particular, the Product Information section provides detailed information on system requirements, the roadmap, and the FAQs, with further details on pricing, licensing, and so on. Go to http://www.microsoft.com/biztalk/en/us/default.aspx.

Analyzing requirements and creating a design

Analyzing requirements and creating a design for the BizTalk landscape is the next step forward before planning and installing. With the gathered requirements, you can make decisions on how to design a BizTalk environment. If BizTalk is being used for the first time in an enterprise environment, capacity planning and server allocation are things to focus on. Once you gather requirements and ask questions, you will have a clear picture of where the platform will be hosted and whether it needs to be scaled up or out. If everything gets placed on one big server, it will introduce a serious single point of failure. You should try to avoid this scenario.
Therefore, separating BizTalk from the SQL Server is the first thing you will do in your design, each preferably on separate hardware. Depending on availability requirements, you will probably cluster the SQL Server. Besides that, you can choose to scale out BizTalk into a multiserver group, because of availability requirements or because the expected load cannot be handled by one BizTalk instance. You can opt for installing BizTalk and SQL separately first and then scale out after performing benchmark tests. You can scale vertically (scale up) by increasing the number of processors and the amount of memory each server uses, or you can scale horizontally (scale out) by adding more servers to your BizTalk Server configuration. Other options you can consider during your design are as follows:

- Having multiple MessageBox databases
- Separating the BizTalk databases

These options are best visualized by the scale-out poster from Microsoft (http://www.microsoft.com/download/en/details.aspx?id=13103). Based on the requirements, you can consider isolating the BizTalk hosts to be able to manage BizTalk applications better and divide the load. By separating send, receive, and processing functionality into different hosts, you will benefit from better memory and thread management. If you expect a high load of large messages, or orchestrations that would consume large amounts of resources, you should isolate send and/or receive adapters. Another consideration is to separate out a host to handle tracking, relieving the processing hosts of it.

So far we have discussed scalability and design decisions you could consider. There are some other design considerations for a BizTalk environment, such as security, tracking, fault tolerance, load balancing, choice of license, and support for virtualization (http://support.microsoft.com/kb/842301). BizTalk security can be enhanced by deploying Secure Socket Layer (SSL), IPSec tunneling, the Internet Security and Acceleration (ISA) Server, and the certificate services included with Windows Server 2008. With the BizTalk Server, you can apply access control, implement least rights to limit access, and provide integrated security through Enterprise Single Sign-On (http://msdn.microsoft.com/en-us/library/aa577802%28v=bts.70%29.aspx). Furthermore, you can protect and secure applications and data by authenticating the sender of a message and authorizing the receiver of a message.

Tracking messages in BizTalk can be useful to see what messages come in and out of the system, or for auditing, troubleshooting, or archiving purposes. Tracking of messages within BizTalk is a process by which parts of a message, such as the body, properties, and metadata, are stored in a database. These parts can be viewed by running queries from the Group Hub page in the BizTalk Server Administration console. It is important that you decide, and take up into the design, what needs to be tracked based on the requirements. There are some considerations to make regarding tracking. Tracking everything is not the smart thing to do, as each time a message is touched in BizTalk, a copy is made and stored. Focus on scope by tracking only on specific ports, which is better for performance and keeps the database uncluttered. For the latter, it is important that the data purge and archive job is configured properly. As mentioned earlier, it is worth considering a dedicated host for tracking.
Fault tolerance and load balancing for BizTalk can be achieved by clustering, separating hosts as described earlier, implementing a Storage Area Network (SAN) to house the BizTalk Server databases, clustering the Enterprise Single Sign-On (SSO) Master Secret Server, and configuring the Internet Information Services (IIS) web server for isolated host instances and the BAM Portal web page to be highly available, using Network Load Balancing (NLB) or other load balancing devices. The best way to implement this is to follow the steps in the Checklist: Providing High Availability with Fault Tolerance or Load Balancing document found on MSDN (http://msdn.microsoft.com/en-us/library/gg634479%28v=bts.70%29.aspx).

Another important topic regarding your BizTalk environment is cost, and based on the requirements you will choose the Branch, Standard, or Enterprise Edition. The editions differ not only in price, but also in functionality. With the Standard Edition, it is not possible to support scenarios for high availability and fault tolerance, and it is limited in CPUs and applications. The Branch Edition is even more limited and is designed for hub-and-spoke deployment scenarios, including Radio Frequency Identification (RFID). With any version, you probably want to consider whether or not to virtualize. With virtualization in mind, licensing can be difficult. With the Standard Edition, you need a license for each virtual processor used by the virtual OS environment, regardless of whether the number of virtual processors is less than, or greater than, the number of physical processors on the server. With the Enterprise Edition, if you license all physical CPUs on the server, you can run any number of instances in the physical or virtual OS environment. With both of these, a virtual processor is assumed to have the same number of cores as the physical processor. Using less than the number of cores available in the physical processor still counts as a full virtual processor (http://www.microsoft.com/biztalk/en/us/editions.aspx).

Last, but not least, you need to consider how to support your BizTalk environment. It is worth considering System Center Operations Manager to monitor your BizTalk environment, using the management packs for the SQL Server, Windows Server, and BizTalk Server 2010. The management pack for the BizTalk Server 2010 provides two views, one for the enterprise IT administrator and one for the BizTalk Server administrator. The former will be monitoring the state and health of the various enterprise deployments, the machines hosting the SQL Server databases, the machines hosting the Enterprise SSO service, the host instance machines, IIS, and network services, and is interested in the overall health of the "physical deployment" of a BizTalk Server setup. The BizTalk Server administrator will be monitoring the state and health of various BizTalk Server application artifacts, such as orchestrations, send ports, and receive locations, and is interested in monitoring and tracking the BizTalk Server's health. If necessary, he/she can carry out corrective measures to keep applications running as expected.

What you have read so far are considerations that are useful while analyzing requirements and preparing your design. You need to take a considerable amount of time to analyze the requirements to be able to create a solid design for your BizTalk environment. There is a wealth of information provided by Microsoft on this subject.
It will be worth investing time now, as you will lose a lot of time and money if your applications do not perform or the system cripples under load.

How to do it...

To analyze the requirements, you will need to categorize them by the topics mentioned in the Gathering requirements by asking the right questions recipe. You will then go over each requirement and decide how it can best be met. For each requirement, you will consider what the best option is and capture that in your design for the BizTalk setup. The BizTalk design will be a Word document, where you capture your design, considerations, and decisions.

How it works...

During the analysis of each requirement, you will capture your considerations and decisions in a Word document. Besides that, you will also describe the situation at the enterprise where the BizTalk environment will be deployed. You will find an example structure of a design document for a Development, Test, Acceptance, and Production (DTAP) environment as follows, where you can place all the information:

- Introduction
  - Purpose
- Current situation
  - IT landscape
- Design
  - Decisions
  - Considerations/Issues
- Overview DTAP landscape
  - Scope
  - MS BizTalk and SQL Server editions
  - SQL Database Server
- ICT policy
  - Operating systems
  - Windows Server
  - Backup
  - Antivirus
  - Windows update
  - Security settings
- Backup and restore
  - Backup procedure
  - Restore procedure
- Development
  - Development environment
  - Development server
  - Developer machine
- Test
  - Test server
- Acceptance
  - SQL Server clustering
  - BizTalk group
  - Acceptance server
- Production
  - SQL Server clustering
  - BizTalk group (load balancing)
  - Production server
- Management and security
  - Groups and accounts
  - SCOM
  - Single Sign-On
  - Hosts
    - In-process hosts
    - Isolated hosts
    - Trusted and untrusted hosts
    - Hosts configuration DTAP
- Resources
- Appendix A: Redistributable CAB files

Design decisions are the important part of your document. Here, you summarize all your design decisions and reference them to each corresponding chapter/section in the document where a decision is described; you also note issues around your design.

There's more...

Analyzing requirements is an important task, which should not be taken lightly. Knowing architectural patterns, for instance, can help you choose the right technology and create the appropriate design. It could be that the BizTalk Server is not the right fit for the purpose. The following resources can aid you in analyzing the requirements:

- Architectural patterns: Packt has published a book called Applied Architecture Patterns on the Microsoft Platform that can aid you in analyzing the requirements by selecting the right technology.
- Wiki TechNet article: Refer to the Recommendations for Installing, Sizing, Deploying, and Maintaining a BizTalk Server Solution article at http://social.technet.microsoft.com/wiki/contents/articles/666.aspx.
- Microsoft BizTalk Server 2010 Operations Guide: Microsoft has created a BizTalk Server 2010 Operations Guide for anyone involved in the implementation and administration of a BizTalk solution, particularly IT professionals. You can find it online (http://msdn.microsoft.com/en-us/library/gg634499%28v=bts.70%29.aspx) or you can download it from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=4ef9eebb-b3f4-4534-b733-3eb2cb83d867&displaylang=en.
- Microsoft volume licensing brief: Licensing Microsoft Server Products in Virtual Environments is an interesting white paper from Microsoft. It describes licensing models under virtual environments for the server operating systems and server applications.
It can help you understand how to use Microsoft server products with virtualization technologies, such as Microsoft Hyper-V technology, Microsoft Virtual Server 2005 R2, or third-party virtualization solutions provided by VMware and Parallels. You can download it from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9ef7fc47-c531-40f1-a4e9-9859e593a1f1&displaylang=en.

- Microsoft poster, scale-out configurations: Microsoft has published a poster (normal or interactive) that can be downloaded, describing typical scenarios and commonly used options for scaling out the BizTalk Server 2010's physical configurations. This poster clearly illustrates how to scale for achieving high availability through load balancing and fault tolerance. It also shows how to configure for high-throughput scenarios. The normal poster can be obtained from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2b70cbfc-d158-45a6-8bbd-99782d6747dc. An interactive poster created in Silverlight can be obtained from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=7ef9ae69-9cc8-442a-8193-831a414dfc30.

Installing and using the BizTalk Best Practices Analyzer

The Best Practices Analyzer (BPA) examines a BizTalk Server 2010 deployment and generates a list of issues pertaining to best practice standards for BizTalk Server deployments. This tool is designed to assess the configuration of a BizTalk installation. The BPA performs configuration-level verification by gathering data from different information sources, such as Windows Management Instrumentation (WMI) classes, SQL Server databases, and registry entries, and presents a report to the user. Under the hood, it uses the data to evaluate the deployment configuration. It does not modify any system settings and is not a self-tuning tool. The tool is there to deliver support in achieving the best suitable configuration, and to report issues, or possible issues, that could potentially harm the BizTalk environment.

Getting ready

The latest version of the BPA tool (V1.2) can be obtained from the Microsoft download center (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=93d432fe-1370-4b6d-aaa8-a0c43c30f5ab&displaylang=en) and must be installed on the BizTalk machine. As a user, you need an account that has local administrative rights, is a member of the BizTalk Server Administrators group, and is a member of the SSO Administrators group to be able to run the BPA. You may need to explicitly set some WMI permissions before you can use the BPA in a distributed environment, where the SQL Server is not installed on the same computer as the BizTalk Server. This is because when the BPA tries to connect to a remote computer running the SQL Server, WMI may not have sufficient access to determine whether the SQL Server Agent is running. This may result in incorrect BPA evaluations.

How to do it...

To run the Best Practices Analyzer, perform one of the following:

- Start the BizTalk Server Best Practices Analyzer from the Start menu: go to Start | Programs | Microsoft BizTalk Server Best Practices Analyzer.
- Open Windows Explorer, navigate to the Best Practices Analyzer installation directory (by default, C:\Program Files\BizTalkBPA), and double-click on BizTalkBPA.exe.
- Open a command prompt, change to the installation directory, and then enter BizTalkBPACmd.exe.
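For the command-line option, the session might look like the following, assuming the default installation directory:

cd "C:\Program Files\BizTalkBPA"
BizTalkBPACmd.exe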
The user can decide whether or not to check for newer versions of the configuration.
If a newer version is found, you can download the latest updates.
Perform a scan by clicking on Start a scan.
After the scan starts, data is gathered from the different information sources described earlier.
After the scan has completed, you can view the report of the performed scan: click View a report of this Best Practices scan and the report will be generated.
After the report has been generated, several tabs will appear:

Critical Issues
All Issues
Non-Default Settings
Recent Changes
Baseline
Informational Items

How it works...

When the BPA is running, it gathers information and evaluates it against best practice rules from the Microsoft product group and support. A report is presented to the user providing information on issues, non-default settings, recent changes, and so on. The report enables you to take action and apply the necessary changes to resolve identified issues. The BPA can then be run again to verify that the environment adheres to all the necessary best practices. This is where the tool shows its value: assessing the deployed BizTalk environment before it is operational. Once BizTalk becomes operational, the MessageBox Viewer (MBV) has more value.

There's more...

The BPA is very useful and gives you information that helps you to tune BizTalk and to keep it healthy. There are more tools that can help in sustaining a healthy environment overall. The Microsoft SQL Server 2008 R2 BPA is a diagnostic tool that provides information about a server and a Microsoft SQL Server 2008 or Microsoft SQL Server 2008 R2 instance installed on that server. It can be downloaded from http://www.microsoft.com/download/en/details.aspx?id=15289.

There are a couple of analyzers provided by Microsoft that do a good job of helping you and the system engineer deliver a healthy, robust, and stable environment:

Best Practices Analyzer: http://technet.microsoft.com/en-us/library/dd759260.aspx
Microsoft Baseline Configuration Analyzer 2.0: http://www.microsoft.com/download/en/details.aspx?id=16475
Microsoft Baseline Security Analyzer 2.1.1: http://www.microsoft.com/download/en/details.aspx?id=19892
Creating Views 3 Programmatically

Packt
21 Mar 2012
18 min read
(For more resources on Drupal, see here.)

Programming a view

Creating a view with a module is a convenient way to have a predefined view available with Drupal. As long as the module is installed and enabled, the view will be there to be used. If you have never created a module in Drupal, or even never written a line of Drupal code, you will still be able to create a simple view using this recipe.

Getting ready

Creating a module involves the creation of the following two files at a minimum:

An .info file that gives Drupal the information needed to add the module
A .module file that contains the PHP script

More complex modules will consist of more files, but those two are all we will need for now.

How to do it...

Carry out the following steps:

1. Create a new directory named _custom inside your contributed modules directory (so, probably sites/all/modules/_custom).

2. Create a subdirectory inside that directory; we will name it d7vr (Drupal 7 Views Recipes).

3. Open a new file with your editor and add the following lines:

; $Id:
name = Programmatic Views
description = Provides supplementary resources such as programmatic views
package = D7 Views Recipes
version = "7.x-1.0"
core = "7.x"
php = 5.2

4. Save the file as d7vrpv.info.

5. Open a new file with your editor and add the following lines. Feel free to download this code from the author's web site rather than typing it, at http://theaccidentalcoder.com/content/drupal-7-views-cookbook:

<?php
/**
 * Implements hook_views_api().
 */
function d7vrpv_views_api() {
  return array(
    'api' => 2,
    'path' => drupal_get_path('module', 'd7vrpv'),
  );
}

/**
 * Implements hook_views_default_views().
 */
function d7vrpv_views_default_views() {
  return d7vrpv_list_all_nodes();
}

/**
 * Begin view
 */
function d7vrpv_list_all_nodes() {
  /*
   * View 'list_all_nodes'
   */
  $view = views_new_view();
  $view->name = 'list_all_nodes';
  $view->description = 'Provide a list of node titles, creation dates, owner and status';
  $view->tag = '';
  $view->view_php = '';
  $view->base_table = 'node';
  $view->is_cacheable = FALSE;
  $view->api_version = '3.0-alpha1';
  $view->disabled = FALSE; /* Edit this to true to make a default view disabled initially */

  /* Display: Defaults */
  $handler = $view->new_display('default', 'Defaults', 'default');
  $handler->display->display_options['title'] = 'List All Nodes';
  $handler->display->display_options['access']['type'] = 'role';
  $handler->display->display_options['access']['role'] = array(
    '3' => '3',
  );
  $handler->display->display_options['cache']['type'] = 'none';
  $handler->display->display_options['exposed_form']['type'] = 'basic';
  $handler->display->display_options['pager']['type'] = 'full';
  $handler->display->display_options['pager']['options']['items_per_page'] = '15';
  $handler->display->display_options['pager']['options']['offset'] = '0';
  $handler->display->display_options['pager']['options']['id'] = '0';
  $handler->display->display_options['style_plugin'] = 'table';
  $handler->display->display_options['style_options']['columns'] = array(
    'title' => 'title',
    'type' => 'type',
    'created' => 'created',
    'name' => 'name',
    'status' => 'status',
  );
  $handler->display->display_options['style_options']['default'] = 'created';
  $handler->display->display_options['style_options']['info'] = array(
    'title' => array(
      'sortable' => 1,
      'align' => 'views-align-left',
      'separator' => '',
    ),
    'type' => array(
      'sortable' => 1,
      'align' => 'views-align-left',
      'separator' => '',
    ),
    'created' => array(
      'sortable' => 1,
      'align' => 'views-align-left',
      'separator' => '',
    ),
    'name' => array(
      'sortable' => 1,
      'align' => 'views-align-left',
      'separator' => '',
    ),
    'status' => array(
      'sortable' => 1,
      'align' => 'views-align-left',
      'separator' => '',
    ),
  );
  $handler->display->display_options['style_options']['override'] = 1;
  $handler->display->display_options['style_options']['sticky'] = 0;
  $handler->display->display_options['style_options']['order'] = 'desc';

  /* Header: Global: Text area */
  $handler->display->display_options['header']['area']['id'] = 'area';
  $handler->display->display_options['header']['area']['table'] = 'views';
  $handler->display->display_options['header']['area']['field'] = 'area';
  $handler->display->display_options['header']['area']['empty'] = TRUE;
  $handler->display->display_options['header']['area']['content'] = '<h2>Following is a list of all non-page nodes.</h2>';
  $handler->display->display_options['header']['area']['format'] = '3';

  /* Footer: Global: Text area */
  $handler->display->display_options['footer']['area']['id'] = 'area';
  $handler->display->display_options['footer']['area']['table'] = 'views';
  $handler->display->display_options['footer']['area']['field'] = 'area';
  $handler->display->display_options['footer']['area']['empty'] = TRUE;
  $handler->display->display_options['footer']['area']['content'] = '<small>This view is brought to you courtesy of the D7 Views Recipes module</small>';
  $handler->display->display_options['footer']['area']['format'] = '3';

  /* Field: Node: Title */
  $handler->display->display_options['fields']['title']['id'] = 'title';
  $handler->display->display_options['fields']['title']['table'] = 'node';
  $handler->display->display_options['fields']['title']['field'] = 'title';
  $handler->display->display_options['fields']['title']['alter']['alter_text'] = 0;
  $handler->display->display_options['fields']['title']['alter']['make_link'] = 0;
  $handler->display->display_options['fields']['title']['alter']['trim'] = 0;
  $handler->display->display_options['fields']['title']['alter']['word_boundary'] = 1;
  $handler->display->display_options['fields']['title']['alter']['ellipsis'] = 1;
  $handler->display->display_options['fields']['title']['alter']['strip_tags'] = 0;
  $handler->display->display_options['fields']['title']['alter']['html'] = 0;
  $handler->display->display_options['fields']['title']['hide_empty'] = 0;
  $handler->display->display_options['fields']['title']['empty_zero'] = 0;
  $handler->display->display_options['fields']['title']['link_to_node'] = 0;

  /* Field: Node: Type */
  $handler->display->display_options['fields']['type']['id'] = 'type';
  $handler->display->display_options['fields']['type']['table'] = 'node';
  $handler->display->display_options['fields']['type']['field'] = 'type';
  $handler->display->display_options['fields']['type']['alter']['alter_text'] = 0;
  $handler->display->display_options['fields']['type']['alter']['make_link'] = 0;
  $handler->display->display_options['fields']['type']['alter']['trim'] = 0;
  $handler->display->display_options['fields']['type']['alter']['word_boundary'] = 1;
  $handler->display->display_options['fields']['type']['alter']['ellipsis'] = 1;
  $handler->display->display_options['fields']['type']['alter']['strip_tags'] = 0;
  $handler->display->display_options['fields']['type']['alter']['html'] = 0;
  $handler->display->display_options['fields']['type']['hide_empty'] = 0;
  $handler->display->display_options['fields']['type']['empty_zero'] = 0;
  $handler->display->display_options['fields']['type']['link_to_node'] = 0;
  $handler->display->display_options['fields']['type']['machine_name'] = 0;

  /* Field: Node: Post date */
  $handler->display->display_options['fields']['created']['id'] = 'created';
  $handler->display->display_options['fields']['created']['table'] = 'node';
  $handler->display->display_options['fields']['created']['field'] = 'created';
  $handler->display->display_options['fields']['created']['alter']['alter_text'] = 0;
  $handler->display->display_options['fields']['created']['alter']['make_link'] = 0;
  $handler->display->display_options['fields']['created']['alter']['trim'] = 0;
  $handler->display->display_options['fields']['created']['alter']['word_boundary'] = 1;
  $handler->display->display_options['fields']['created']['alter']['ellipsis'] = 1;
  $handler->display->display_options['fields']['created']['alter']['strip_tags'] = 0;
  $handler->display->display_options['fields']['created']['alter']['html'] = 0;
  $handler->display->display_options['fields']['created']['hide_empty'] = 0;
  $handler->display->display_options['fields']['created']['empty_zero'] = 0;
  $handler->display->display_options['fields']['created']['date_format'] = 'custom';
  $handler->display->display_options['fields']['created']['custom_date_format'] = 'Y-m-d';

  /* Field: User: Name */
  $handler->display->display_options['fields']['name']['id'] = 'name';
  $handler->display->display_options['fields']['name']['table'] = 'users';
  $handler->display->display_options['fields']['name']['field'] = 'name';
  $handler->display->display_options['fields']['name']['label'] = 'Author';
  $handler->display->display_options['fields']['name']['alter']['alter_text'] = 0;
  $handler->display->display_options['fields']['name']['alter']['make_link'] = 0;
  $handler->display->display_options['fields']['name']['alter']['trim'] = 0;
  $handler->display->display_options['fields']['name']['alter']['word_boundary'] = 1;
  $handler->display->display_options['fields']['name']['alter']['ellipsis'] = 1;
  $handler->display->display_options['fields']['name']['alter']['strip_tags'] = 0;
  $handler->display->display_options['fields']['name']['alter']['html'] = 0;
  $handler->display->display_options['fields']['name']['hide_empty'] = 0;
  $handler->display->display_options['fields']['name']['empty_zero'] = 0;
  $handler->display->display_options['fields']['name']['link_to_user'] = 0;
  $handler->display->display_options['fields']['name']['overwrite_anonymous'] = 0;

  /* Field: Node: Published */
  $handler->display->display_options['fields']['status']['id'] = 'status';
  $handler->display->display_options['fields']['status']['table'] = 'node';
  $handler->display->display_options['fields']['status']['field'] = 'status';
  $handler->display->display_options['fields']['status']['alter']['alter_text'] = 0;
  $handler->display->display_options['fields']['status']['alter']['make_link'] = 0;
  $handler->display->display_options['fields']['status']['alter']['trim'] = 0;
  $handler->display->display_options['fields']['status']['alter']['word_boundary'] = 1;
  $handler->display->display_options['fields']['status']['alter']['ellipsis'] = 1;
  $handler->display->display_options['fields']['status']['alter']['strip_tags'] = 0;
  $handler->display->display_options['fields']['status']['alter']['html'] = 0;
  $handler->display->display_options['fields']['status']['hide_empty'] = 0;
  $handler->display->display_options['fields']['status']['empty_zero'] = 0;
  $handler->display->display_options['fields']['status']['type'] = 'true-false';
  $handler->display->display_options['fields']['status']['not'] = 0;

  /* Sort criterion: Node: Post date */
  $handler->display->display_options['sorts']['created']['id'] = 'created';
  $handler->display->display_options['sorts']['created']['table'] = 'node';
  $handler->display->display_options['sorts']['created']['field'] = 'created';
  $handler->display->display_options['sorts']['created']['order'] = 'DESC';

  /* Filter: Node: Type */
  $handler->display->display_options['filters']['type']['id'] = 'type';
  $handler->display->display_options['filters']['type']['table'] = 'node';
  $handler->display->display_options['filters']['type']['field'] = 'type';
  $handler->display->display_options['filters']['type']['operator'] = 'not in';
  $handler->display->display_options['filters']['type']['value'] = array(
    'page' => 'page',
  );

  /* Display: Page */
  $handler = $view->new_display('page', 'Page', 'page_1');
  $handler->display->display_options['path'] = 'list-all-nodes';

  $views[$view->name] = $view;
  return $views;
}
?>

6. Save the file as d7vrpv.module.

7. Navigate to the modules admin page at admin/modules.

8. Scroll down to the new module and activate it.

9. Navigate to the Views Admin page (admin/structure/views) to verify that the view appears in the list.

10. Finally, navigate to list-all-nodes to see the view.

How it works...

The module we have just created could have many other features associated with it beyond simply a view; enabling the module will make those features and the view available, while disabling it will hide those same features and view.

When compiling the list of installed modules, Drupal looks first in its own modules directory for .info files, and then in the site's modules directories. As can be deduced from the fact that we put our .info file in a second-level directory of sites/all/modules and it was found there, Drupal will traverse the modules directory tree looking for .info files. We created a .info file that provided Drupal with the name and description of our module, its version, the version of Drupal it is meant to work with, and a list of files used by the module, in our case just one. We saved the .info file as d7vrpv.info (Drupal 7 Views Recipes programmatic view); the name of the directory in which the module files appear (d7vr) has no bearing on the module itself.

The module file contains the code that will be executed, at least initially. Drupal does not "call" the module code in an active way. Instead, there are events that occur during Drupal's creation of a page, and modules can elect to register with Drupal to be notified of such events when they occur, so that the module can provide the code to be executed at that time; think of registering with a business to receive an e-mail in the event of a sale. Just as you are free to act on that e-mail or not while the sales go on regardless, so too Drupal continues whether or not the module decides to do something when given the chance.

Our module 'hooks' the views_api and views_default_views events in order to establish the fact that we have a view to offer. The latter hook tells the Views module which function in our code builds our view: d7vrpv_list_all_nodes(). The first thing that function does is create a view object by calling a function provided by the Views module. Having instantiated the new object, we then proceed to provide the information it needs, such as the name of the view, its description, and all the information that we would otherwise have selected through the Views UI.

As we are specifying the view options in code, we need to provide the information that is needed by each handler of the view functionality. The net effect of the code is that once we have cleared the cache and enabled our module, Drupal includes it in its list of modules to poll during events. When we navigate to the Views Admin page, an event occurs in which any module wishing to include a view in the list on the admin screen does so, including ours. One of the things our module does is define a path for the page display of our view, which is then used to establish a callback. When that path, list-all-nodes, is requested, the function in our module is invoked, which in turn provides all the information necessary for our view to be rendered and presented.
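To make the event/hook mechanism concrete, here is a minimal sketch of another Views event a module can register for, hook_views_pre_render(), which fires just before a view is rendered. The module name mymodule and the status message are purely illustrative assumptions, not part of the recipe:

<?php
/**
 * Implements hook_views_pre_render().
 *
 * Drupal and Views locate this function purely by its name (module
 * name plus hook name); nothing in the module calls Views directly.
 */
function mymodule_views_pre_render(&$view) {
  // React only when the view from this recipe is about to be rendered.
  if ($view->name == 'list_all_nodes') {
    drupal_set_message(t('Rendering the list of all nodes.'));
  }
}

As with our d7vrpv module, simply enabling a module containing such a function is enough; Drupal polls it when the event occurs and carries on regardless of what it does.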
There's more...

The details of the code provided to each handler are outside the scope of this book, but you don't really need to understand it all in order to use it. You can enable the Views Bulk Export module (it comes with Views), create a view using the Views UI in admin, and choose to Bulk Export it. Give the exporter the name of your new module and it will create a file and populate it with nearly all the code necessary for you.

Handling a view field

As you may have noticed in the preceding code that you typed or pasted, Views makes tremendous use of handlers. What is a handler? It is simply a script that performs a special task on one or more elements. Think of a house being built: the person who comes in to tape, mud, and sand the wallboard is a handler. In Views, one type of handler is the field handler, which handles any number of things, from providing settings options in the field configuration dialog, to facilitating the field being retrieved from the database if it is not part of the primary record, to rendering the data. In this recipe we will create a field handler that adds to the display of a zip code a string showing how many other nodes have the same zip code, and we will add some formatting options to it in the next recipe.

Getting ready

A handler lives inside a module, so we will create one:

1. Create a directory in your contributed modules path for this module.

2. Open a new text file in your editor and paste the following code into it:

; $Id:
name = Zip Code Handler
description = Provides a view handler to format a field as a zip code
package = D7 Views Recipes
; Handler
files[] = d7vrzch_handler_field_zip_code.inc
files[] = d7vrzch.views.inc
version = "7.x-1.0"
core = "7.x"
php = 5.2

3. Save the file as d7vrzch.info.

4. Create another text file and paste the following code into it:

<?php
/**
 * Implements hook_field_views_data_alter().
 */
function d7vrzch_field_views_data_alter(&$data, $field) {
  if (array_key_exists('field_data_field_zip_code', $data)) {
    $data['field_data_field_zip_code']['field_zip_code']['field']['handler'] = 'd7vrzch_handler_field_zip_code';
  }
}

5. Save the file as d7vrzch.views.inc.

6. Create another text file and paste the following into it:

<?php
/**
 * Implements hook_views_api().
 */
function d7vrzch_views_api() {
  return array(
    'api' => 3,
    'path' => drupal_get_path('module', 'd7vrzch'),
  );
}

7. Save the file as d7vrzch.module.

How to do it...

Carry out the following steps:
1. Create another text file and paste the following into it:

<?php
// $Id: $

/**
 * Field handler to format a zip code.
 *
 * @ingroup views_field_handlers
 */
class d7vrzch_handler_field_zip_code extends views_handler_field_field {
  function option_definition() {
    $options = parent::option_definition();
    $options['display_zip_totals'] = array(
      'contains' => array(
        'display_zip_totals' => array('default' => FALSE),
      )
    );
    return $options;
  }

  /**
   * Provide the checkbox option in the field settings form.
   */
  function options_form(&$form, &$form_state) {
    parent::options_form($form, $form_state);
    $form['display_zip_totals'] = array(
      '#title' => t('Display Zip total'),
      '#description' => t('Appends in parentheses the number of nodes containing the same zip code'),
      '#type' => 'checkbox',
      '#default_value' => !empty($this->options['display_zip_totals']),
    );
  }

  function pre_render(&$values) {
    if (isset($this->view->build_info['summary']) || empty($values)) {
      return parent::pre_render($values);
    }
    static $entity_type_map;
    if (!empty($values)) {
      // Cache the entity type map for repeat usage.
      if (empty($entity_type_map)) {
        $entity_type_map = db_query('SELECT etid, type FROM {field_config_entity_type}')->fetchAllKeyed();
      }
      // Create an array mapping the Views values to their object types.
      $objects_by_type = array();
      foreach ($values as $key => $object) {
        // Derive the entity type. For some field types, etid might be empty.
        if (isset($object->{$this->aliases['etid']}) && isset($entity_type_map[$object->{$this->aliases['etid']}])) {
          $entity_type = $entity_type_map[$object->{$this->aliases['etid']}];
          $entity_id = $object->{$this->field_alias};
          $objects_by_type[$entity_type][$key] = $entity_id;
        }
      }
      // Load the objects.
      foreach ($objects_by_type as $entity_type => $oids) {
        $objects = entity_load($entity_type, $oids);
        foreach ($oids as $key => $entity_id) {
          $values[$key]->_field_cache[$this->field_alias] = array(
            'entity_type' => $entity_type,
            'object' => $objects[$entity_id],
          );
        }
      }
    }
  }

  function render($values) {
    $value = $values->_field_cache[$this->field_alias]['object']->{$this->definition['field_name']}['und'][0]['safe_value'];
    $newvalue = $value;
    if (!empty($this->options['display_zip_totals'])) {
      $result = db_query("SELECT count(*) AS recs FROM {field_data_field_zip_code} WHERE field_zip_code_value = :zip", array(':zip' => $value));
      foreach ($result as $item) {
        $newvalue .= ' (' . $item->recs . ')';
      }
    }
    return $newvalue;
  }
}

2. Save the file as d7vrzch_handler_field_zip_code.inc.

3. Navigate to admin/modules and enable the new module, which shows up as the Zip Code Handler.

We will test the handler in a quick view:

4. Navigate to admin/structure/views.

5. Click on the +Add new view link, enter test as the View name, check the box for description and enter Zip code handler test; clear the Create a page checkbox, and click on the Continue & edit button.

6. On the Views edit page, click on the add link in the Filter Criteria pane, check the box next to Content: Type, and click on the Add and configure filter criteria button.

7. In the Content: Type configuration box, select Home and click on the Apply button.

8. Click on the add link next to Fields, check the box next to Content: Zip code, and click on the Add and configure fields button.

9. Check the box at the bottom of the Content: Zip code configuration box titled Display Zip total and click on the Apply button.

10. Click on the Save button and see the result of our custom handler in the Live preview.

How it works...

The Views field handler is simply a set of functions that provide support for populating and formatting a field for Views, much in the way a printer driver does for the operating system.
We created a module in which our handler resides, and whenever that field is requested within a view, our handler will be invoked. We also added a display option to the configuration options for our field which, when selected, takes each zip code value to be displayed, determines how many nodes have the same zip code, and appends the parenthesized total to the output.

The three functions, two in the views.inc file and one in the module file, are very important. Their result is that our custom handler file is used for field_zip_code instead of the default handler used for entity text fields. In the next recipe, we will add zip code formatting options to our custom handler.
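Before moving on, if you want to spot-check the totals the handler produces, you can run the same count query that render() issues as a standalone snippet (for example, with drush php-eval or in a test script). The zip value 90210 here is only an illustrative sample:

<?php
// Sample zip code; substitute a value that exists in your content.
$zip = '90210';

// The same parameterized query render() runs for each row.
$count = db_query(
  'SELECT COUNT(*) FROM {field_data_field_zip_code} WHERE field_zip_code_value = :zip',
  array(':zip' => $zip)
)->fetchField();

// Mirrors the handler output, for example: 90210 (3)
print $zip . ' (' . $count . ')';

Because the query counts every row matching the raw field value, the parenthesized total includes the node currently being displayed as well.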