
How-To Tutorials - CMS & E-Commerce

830 Articles

Customizing the Default Expression Engine (v1.6.7) website template

Packt
22 Oct 2009
5 min read
Introduction

The blogging revolution has changed the nature of the internet. As more and more individuals and organizations choose the blog format as their preferred web solution (that is, content-driven websites which are regularly updated by one or more users), the question of which blogging application to use becomes an increasingly important one. I have tested all the popular blogging solutions, and the one I have found to be the most impressive for designers with a little CSS and XHTML know-how is Expression Engine.

After spending a little time on the Expression Engine support forums, I have noticed that a high number of the support questions relate to solving CSS problems within EE. This suggests that many designers who choose EE are used to working from within Adobe Dreamweaver (or other WYSIWYG applications), and although there are no compatibility issues between the two systems, it is clear that they are making the transition from graphics-based web design to CSS-based web design.

When you install Expression Engine 1.6.7 for the first time, you are asked to choose a theme from a drop-down menu, which you might expect to offer a satisfactory selection of pre-installed themes. Sadly, this is not the case in Expression Engine (EE) version 1.6.7, and if you have not downloaded a theme and manually saved it inside the right folder within the EE system file structure, you will be given only one theme to choose from: the dreaded "default weblog theme". Once you complete the installation, no other themes can be imported or installed, so you can either restart the installation, select the default weblog theme, or start from a blank canvas (sorry about that). The good news is that the next release of EE (v2.0) ships with a very nice default template; the bad news is that you will have to wait a few months longer to get a copy, and you will need to renew your license to upgrade, for a fee. Even then, you will probably want to modify the new and improved default template in EE 2.0, or your needs may lead you to choose another EE template altogether, so that your website does not look like a thousand other websites based on the same template (not good).

This article will demonstrate how to use Cascading Style Sheets (CSS) to improve the layout of the default EE website/blog template, and how to keep the structure (XHTML) and the presentation (CSS) entirely separate, which is best practice in web development. It is intended as a practical guide for intermediate-level web designers who want to work with CSS more effectively within Expression Engine and create better websites that adhere to current W3C web standards. It will not attempt to teach you the fundamentals of CSS and XHTML, or how to install, use, or develop a website with EE. If you are new to EE, it is recommended that you consult the book "Building a Website with Expression Engine 1.6.7", published by Packt Publishing, and visit www.expressionengine.com to consult the EE user guide. If you get stuck at any time when using Expression Engine, you can visit the EE support forums via the main EE site to get help with EE, XHTML, and CSS issues; for regular updates, the EllisLab EE feed within EE is an excellent source of news from the EE community.

The trouble with templates

Let's open up EE's default "weblog" template in Firefox.
I have the very useful Web Developer toolbar add-on installed, and you can see its many options lined up across the bottom of the Firefox window. When you select Outline > Outline Current Element, Firefox renders an outline around the block of the selected element and, by default, displays the element's name. This add-on features many other timesaving and task-facilitating functions, ranging from DOM tools for JavaScript development to nifty layout tools such as displaying a ruler inside the Firefox window.

The default template is a useful guide to some basic EE tags embedded in the XHTML, but the CSS should render a cleaner, simpler-to-customize design, so let's make some much-needed changes. I will not be looking into the EE tags themselves, because they are very powerful and beyond the scope of this article.

Inside the Templates module, create a new template group and call it "site_extended". Template groups in EE organize your templates into virtual folders. We will make a copy of the existing template group and templates, so all the changes we make are non-destructive. Choose not to duplicate any existing template group, but do select "Make the index template in this group your site's home page?", and press Submit. Easy.

Next, create a new template, call it "site_extended_css", and duplicate the site/site_css template. This powerful feature instructs EE to clone an existing template with a new name and location. Now let's create a copy of the default site weblog and call it "index_extended". Select "duplicate an existing template" and choose "site/index" from the options drop-down list; the first part of the location, "site", is the template group, and "index" is the actual template. Now your template management tab should look like the following screenshot. Notice that the index template has a red star next to it.
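With the duplicated templates in place, index_extended can pull its presentation from site_extended_css rather than from the original stylesheet. As a minimal sketch, assuming EE 1.x's {stylesheet=} and {site_name} template variables and the template names we just created, the head of index_extended would link the stylesheet like this:

<head>
<title>{site_name}</title>
<link rel="stylesheet" type="text/css" media="all"
      href="{stylesheet=site_extended/site_extended_css}" />
</head>

From here on, every layout change we make touches only site_extended_css and leaves the XHTML structure untouched, which is exactly the separation of structure and presentation we are after.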

Content Modeling

Packt
22 Oct 2009
12 min read
Organizing content in a meaningful way is nothing new. We have been doing it for centuries in our libraries, the Dewey decimal system being a perfect example. So, why can't we take known approaches and apply them to the Web? The main reason is that a web page has more than two dimensions. A page in a book might have footnotes or refer to other pages, but the content only appears in one place. On a web page, content can directly link to other content and even show a summary of it. It goes way beyond just the content that appears on the page: links, related content, reviews, ratings, and so on all bring extra dimensions to the core content of the page and how it is displayed. This is why it's so important to ensure your content model is sound. However, there is no such thing as the "right" content model. Each content model can only be judged on how well it achieves the goals of the website, now and in the future.

The Purpose of a Content Model

The idea of a content model is new, but it has similarities to both a database design and an object model. The purpose of both of these is to provide a foundation for the logic of the operation. With a database design, we want to structure the data in a meaningful way to make storage and retrieval effective. With an object model, we define the objects and how they relate to each other, so that accessing and managing objects is efficient and effective. The same applies to a content model: it's about structuring the content and the relationships between the classes to allow the content to be accessed and displayed easily. The following diagram is a simple content model that shows the key content classes and how they relate to each other. In this diagram, we see that resources belong to a collection, which in turn belongs to a context. Also, a particular resource can belong to more than one collection.

As stated before, there is no such thing as the "right" model. What we are trying to achieve is the most "effective" model for the project at hand. This means coming up with a model that provides the most effective way of organizing content, so that it can be easily displayed in the manner defined in the functional specification. The way a content model is defined will have an impact on how easy it is to code templates, how quickly the code will run, how easy it is for the editors to input content, and also how easy it is to change down the track. From experience, rarely is a project completed and then never touched again. Usually, there are changes, modifications, and updates down the track. If the model is well structured, these changes will be easy; if not, they can require a significant amount of work to implement. In some cases, the project has to be rebuilt entirely and content re-entered to achieve the goals of the client. This is why the model is so important. If done well, it means the client pays less and has a better-running solution. A poor model will take longer to implement, and changes will be more difficult to make.

What Makes a Good Model?

It's not easy to define exactly what makes a good model. Like any form of design, simplicity is the key: the more elements there are, the more complex the model gets. Ideally, a model should be technology independent, but there are certain ways in which eZ publish operates that can influence how we structure the content model. Do we always need a content model? No, it depends on the scale of the project. Smaller projects don't really need a formal model.
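Even on paper, such a model stays small. Here is a minimal PHP sketch of the resource/collection/context relationships from the diagram above; the class and property names are hypothetical, since in eZ publish the model would actually be expressed as content classes and object relations rather than hand-written code:

<?php
// Hypothetical sketch of the content model described above.
class Context {
    public $name;
    public $collections = array(); // each Collection belongs to one Context
}

class Collection {
    public $name;
    public $context;               // the Context this Collection belongs to
    public $resources = array();   // Resources in this Collection
}

class Resource {
    public $title;
    public $collections = array(); // a Resource may belong to several Collections
}

// A resource can belong to more than one collection:
$report = new Resource();
$report->title = 'Research report';
$reports = new Collection();
$archive = new Collection();
$report->collections = array($reports, $archive);
$reports->resources[] = $report;
$archive->resources[] = $report;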
It's only when there are specific relationships between content classes that we need to go to the effort of creating a model. For example, a basic website that has a number of sections, e.g., About Us, Services, Articles, Contact, etc., doesn't need a model. There's no need for an underlying structure; it's just content added to sections. The built-in content classes in eZ publish will be enough to cater for that type of site. It's when the content itself has specific relationships, e.g., a book belongs to a category, or a product belongs to a product group, which belongs to a division of the business, that you need to create a model to capture the objects and the relationships between them.

To start with, we need to understand the content we are dealing with. The broad categories are existing/known content and new content. If we know the structure of the content we are dealing with and it already exists, this can help to shape the model. If we are dealing with content that doesn't exist yet (i.e., it is to be written or created for this project), it's harder to know if we are on the right track. For example, when dealing with products, generally the product data will already exist in a database or ERP system. This gives us a basis from which to work. We can establish the structure of the content and the relationships from the existing data. That doesn't mean that we simply copy what's there, but it can guide us in the right direction. Sometimes the structure of the data isn't effective for the way it's to be displayed on the Web, or it's missing elements. (As a typical example, in a recent project, the product data was stored in three places: the core details were in the point-of-sale system, product details and categorization were in a spreadsheet, and the images were stored on a file system.)

So, the first step is to get an understanding of all the content we are dealing with. If the content doesn't exist yet, at least get some examples of what it is likely to be. Without knowing what you are dealing with, you can't be sure your model will accommodate everything. This means you'll need to allow for modifications down the track. Of course, we want to minimize this, but it's not always possible. Clients change their minds, so the best we can do is hope that our model will accommodate what we think are the likely changes. This can really only be done through experience. There are patterns in content, as well as in how it's displayed. Through these patterns, e.g., a related-content box on each page, we can try to foresee the way things might alter and build room for this into the model.

A good example: on a recent project, each object had its main content, but there were also a number of related objects (widgets) to be displayed in the right-hand column of the page. Initially, the content class defined the specific widgets to be associated with the object. The table below contains the details of a particular resource (as shown in the previous content model). It captures the details of the "research report" resource content class.
Attribute           Type                     Notes
Title*              Text line
Short Title         Text line                If present, will be used in menus and URLs
Flash               Flash Navigator object
Hero Image          Image                    Displays if no Flash
Caption             Rich text
Body*               Rich text
Free Form Widgets   Related objects          Select one or more
Multimedia Widget   Related object           Select one

This would mean that when the editor added content, they would pick the free-form widgets and then the multimedia widget to be associated with the research report. Displaying the content would be straightforward, as from the parent object we would have the object IDs for each widget. The idea is sound but lacks flexibility. It would mean that the order in which the objects were added would dictate the order in which they were displayed. It also means that if the editor wanted to add a different type of widget, they couldn't unless the model was changed, i.e., another attribute was added to the content class. We updated the content class as follows:

Attribute           Type                     Notes
Title*              Text line
Short Title         Text line                If present, will be used in menus and URLs
Flash               Flash Navigator object
Hero Image          Image                    Displays if no Flash
Caption             Rich text
Body*               Rich text
Widgets             Related objects          Select one or more

This approach is less strict and provides more flexibility. The editor can choose any widget and also select the order. In terms of programming the template, there's the same amount of work, but if we decide to add another widget type down the track, there's no need to update the content class to accommodate it. Does this mean that any time we have a related object we should use the latter approach? No. The reason we did it in this situation is that the content was still being written as we were creating the model, and there was a good chance that once the content was entered and we saw the end result, the client was going to say something like "can we add widget x to the right-hand column of a context object?" In a different project, in which a particular widget should only be related to a particular content class, it's better to enforce the rule by only allowing that widget to be associated with that content class.

Defining a Content Model

The process of creating a content model requires a number of steps. It's not just a matter of analyzing the content; the modeler also needs to take into consideration the domain, users, groups, and the relationships between different classes within the model. To do this, we start with a walkthrough of the domain.

Step 1: Domain Walkthrough

The client and domain experts walk us through the entire project. This is a vital part of the process. We need to get an understanding of the entire system, not just the part that is captured in the final solution. The model that we end up creating may need to interact with other systems, and knowing what they are and how they work will inform the shape of the model. A good example is e-commerce systems: any information captured on a sale will eventually need to be entered into the existing financial system (whether that is automated or manual). Without an understanding of the bigger picture, we lack an understanding of how the solution we are creating will fit in with what the business does. That is assuming there is an existing business process. Sometimes there is no business process and the client is making things up as they go along; e.g., they have decided to do online shopping, but they have never dealt with overseas orders, so they don't know how that will work and have no idea how they would deal with shipping costs.
One of the typical problems that will surface during the domain walkthrough is that the client will try to tell you how they want the solution to work. By doing this, they are actually defining the model and interactions. This is something to be wary of. It is unlikely that they would be aware of how best to structure a solution; what you want to be asking is what they currently do, that is, what their current business process is. You want to deal with facts that are in existence so that you can decide how best to model the solution. To get the client back on track, ask questions like: How do you currently do "it" (i.e., the business process)? What information do you currently capture? How do you capture that information? What format is that information in? How often is the information updated? Who updates it?

This gives you a picture of what is currently happening. Then you can start to shape the model to ensure that you are dealing with the real world, not what the client thinks they want. Sometimes they won't be able to answer the question, and you'll have to get the right person from the business involved to get the answers you want. Sometimes you discover that what the client thought was happening is not really what happens.

Another benefit of this process is gaining a common understanding. If both you and the client are in the room when the process for calculating shipping costs is being explained by the shipping manager, you'll both appreciate how complex the process is. If the client thinks it's easy, they won't expect it to cost much. If they are in the room when the shipping manager explains that there are five different shipping methods, and each has its own way of calculating the costs for a shipment based on its own set of international zones, you'll both know that modeling that part of the system is not going to be as straightforward as the client initially thought. What this means is that the domain walkthrough gives you a sense of what's real, not what people think the situation is. It's the most important part of the process. Assuming that "shipping costs" are straightforward, so you don't need to worry about them, can be a disaster later down the track when you find out that's not the case.

Also, don't necessarily rely on requirements documents (unless you have written them yourself). A statement in a requirements document may not reflect what really happens; that's why you want to make sure you go through everything to confirm that you have all the facts. Sometimes a particular requirement is stated in the document, but when you go through it in more detail, ask a few questions, and pose a few scenarios, the client changes their mind about what it is they really want, as they realize that what they thought they wanted is going to be difficult or expensive to implement. Or you put an alternative approach to them, and they are happy to achieve the same result in a different manner that is easier to implement. This is a valuable way to work out what's real and what really matters.

Working with the Report Builder in Microsoft SQL Server 2008: Part 2

Packt
22 Oct 2009
3 min read
Enabling and reviewing My Reports

As described in Part 1, the My Reports folder needs to be enabled in order to use the folder or display it in the Open Report dialog. The RC0 version had a documentation bug, which has since been rectified (https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=366413).

Getting ready

In order to enable the My Reports folder, you need to carry out a few tasks. This will require authentication and working with SQL Server Management Studio. The tasks are: make sure the Report Server has started; make sure you have adequate permissions to access the servers; open Microsoft SQL Server Management Studio as described previously; connect to Reporting Services after making sure the service has started; and right-click the Report Server node. The Server Properties window is displayed, with a navigation list on the left consisting of the following pages: General, Execution, History, Logging, Security, and Advanced.

In the General page, the name, version, edition, authentication mode, and URL of the Reporting Service are displayed. Download of an ActiveX client print control is enabled by default. In order to work with Report Builder effectively and provide a My Reports folder for each user, you need to place a check mark in the "Enable a My Reports folder for each user" check box. The My Reports feature is then turned on, as shown in the next screenshot. In the Execution page, there is a choice for report execution timeout, with the default set such that report execution expires after 1800 seconds. In the History page, there is a choice between keeping an unlimited number of snapshots in the report history (the default) or limiting the copies, allowing you to specify how many are to be kept. In the Logging page, report execution logging is enabled, and log entries older than 60 days are removed by default. This can be changed if desired. In the Security page, both Windows integrated security for report data sources and ad hoc report executions are enabled by default. The Advanced page shows several more items, including the ones described thus far, as shown in the next figure.

In the General page, enable the My Reports feature by placing a check mark. Then click on the Advanced list item on the left, and the Advanced page is displayed as shown. Now expand the Security node of Reporting Services, and you will see that the My Reports role is present in the list of roles, as shown. This is also added to the ReportServer database. The description of everything that a user with the My Reports role assignment can do is as follows: may publish reports and linked reports, and manage folders, reports, and resources in a user's My Reports folder.

Now bring up Report Builder 2.0 by clicking Start | All Programs | Microsoft SQL Server 2008 Report Builder | Report Builder 2.0. Report Builder 2.0 is displayed. Click on Office Button | Open. The Open Report dialog appears as shown. When the report server is offline, the default location is My Documents, as in Microsoft Excel and Access. Choose Recent Sites and Servers; the report server that is active should be displayed here, as shown. Highlight the server URL and click Open. All the folders and files on the server become accessible, as shown. Open the Report Manager by providing its URL address, and verify that a My Reports folder has been created for the current user.
There could be slight differences in the look of the interface depending on whether you are using a pre-release (such as RC0) or the final RTM version of SQL Server 2008 Enterprise edition.

Customer Management in Joomla! and VirtueMart

Packt
22 Oct 2009
7 min read
Note that all VirtueMart customers must be registered with Joomla!. However, not all Joomla! users need to be VirtueMart customers. Within the first few sections of this article, you will get a clear picture of user management in Joomla! and VirtueMart.

Customer management

Customer management in VirtueMart includes registering customers to the VirtueMart shop, assigning them to user groups for appropriate permission levels, managing fields in the registration form, viewing and editing customer information, and managing the user groups. Let's dive into these activities in the following sections.

Registration/Authentication of customers

Joomla! has a very strong user registration and authentication system. One core component in Joomla! is com_users, which manages user registration and authentication. However, VirtueMart needs some extra information for customers. VirtueMart collects this information through its own customer registration process and stores it in separate tables in the database. The extra information required by VirtueMart is stored in a table named jos_vm_user_info, which is related to the jos_users table by the user id field. Usually, when a user registers on the Joomla! site, they also register with VirtueMart. This depends on some global settings. In the following sections, we are going to learn how to enable user registration and authentication for VirtueMart.

Revisiting registration settings

We configure the registration settings from VirtueMart's administration panel, on the Admin | Configuration | Global screen. There is a section titled User Registration Settings, which defines how user registration will be handled. Ensure that your VirtueMart shop has been configured as shown in the screenshot above. The first field to configure is the User Registration Type. Selecting Normal Account Creation in this field creates both a Joomla! and a VirtueMart account during user registration. For our example shop, we will be using this setting. Joomla!'s new user activation should be disabled when we are using VirtueMart. That means the "Joomla! New account activation necessary?" field should read No.

Enabling VirtueMart login module

There is a default module in Joomla! which is used for user registration and login. When we are using this default Login Form (the mod_login module), it does not collect the information required by VirtueMart, and does not create customers in VirtueMart. By default, when published, the mod_login module looks like the following screenshot. As you see, registered users can log in to Joomla! through this form, recover a forgotten password by clicking on the Forgot your password? link, and create a new user account by clicking on the Create an account link. When a user clicks on the Create an account link, they get the form shown in the following screenshot. We see that normal registration in Joomla! requires only four pieces of information: Name, Username, Email, and Password. It does not collect the information needed by VirtueMart, such as the billing and shipping address, to make the user a customer. Therefore, we need to disable the mod_login module and enable the mod_virtuemart_login module. We have already learned how to enable and disable a module in Joomla!, and how to install modules. By default, the mod_virtuemart_login module's title is VirtueMart Login. You may prefer to show this title as Login only. In that case, click on the VirtueMart Login link in the Module Name column.
This brings up the Module:[Edit] screen. In the Title field, type Login (or any other text you want to show as the title of this module). Make sure the module is enabled and its position is set to left or right. Click on the Save icon to save your settings. Now browse to your site's front page (for example, http://localhost/bdosn/), and you will see the login form as shown in the following screenshot. As you can see, this module has the same functionality as the mod_login module of Joomla!.

Let us test account creation in this module. Click on the Register link. It brings up the following screen. The registration form has three main sections: Customer Information, Bill To Information, and Send Registration. At the end, there is the Send Registration button for submitting the form data. In the Customer Information section, type your email address, the desired username, and password. Confirm the password by typing it again in the Confirm password field. In the Bill To Information section, type the address details where bills are to be sent. Throughout the form, required fields are marked with an asterisk (*); you must provide information for these fields. In the Send Registration section, you need to agree to the Terms of Service. Click on the Terms of Service link to read it. Then check the "I agree to the Terms of Service" checkbox and click on the Send Registration button to submit the form data.

If you have provided all of the required information and submitted a unique email address, the registration will be successful. On successful completion of registration, you get the following screen notification and will be logged in to the shop automatically. If you scroll down to the Login module, you will see that you are logged in and greeted by the store. You also see the User Menu on this screen. Both the User Menu and the Login modules contain a Logout button. Click on either of these buttons to log out from the Joomla! site.

In fact, the links in the User Menu module are for Joomla! only. Let us try the Your Details link. Click on it, and you will see the information shown in the following screenshot. As you see in the screenshot, you can change your full name, email, password, frontend language, and time zone. You cannot view any information regarding the billing address or other customer details; this information is for regular Joomla! users. We can only get full customer information by clicking on the Account Maintenance link in the Login module. Let us try it. Click on the Account Maintenance link, and it shows the following screen. The Account Maintenance screen has three sections: Account Information, Shipping Information, and Order Information. Click on the Account Information link to see what happens. It shows the following screen. This shows the Customer Information and Bill To Information that were entered during user registration. The last section on this screen is Bank Information, where the customer can add bank account information. This section looks like the following screenshot. As you can see, from the Bank Account Info section, customers can enter their bank account information, including the account holder's name, account number, bank sorting code number, bank name, account type, and IBAN (International Bank Account Number). Entering this information is important when you are using a Bank Account Debit payment method.
Now, let us go back to the Account Maintenance screen and see the other sections. Click on the Shipping Information link, and you get the following screen. There is one default shipping address, which is the same as the billing address. Customers can create additional shipping addresses. To create a new shipping address, click on the Add Address link. It shows the following screen.
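As noted earlier, VirtueMart keeps the extra customer fields in jos_vm_user_info, tied to jos_users by the user id. If you ever need to inspect a customer's combined record outside the administration panel, a direct query is the quickest route. This is a minimal sketch using PDO; the jos_ table prefix, the user_id column name, and the credentials are assumptions based on a default installation, so adjust them to your setup:

<?php
// Hypothetical look-up of a VirtueMart customer's combined record.
$conn = new PDO('mysql:host=localhost;dbname=joomla', 'dbuser', 'dbpass');
$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$sql = 'SELECT u.id, u.username, u.email, i.*
          FROM jos_users AS u
          JOIN jos_vm_user_info AS i ON i.user_id = u.id
         WHERE u.username = ?';
$stmt = $conn->prepare($sql);
$stmt->execute(array('customer1'));   // the username to look up
print_r($stmt->fetch(PDO::FETCH_ASSOC));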

Adding Newsletters to a Web Site Using Drupal 6

Packt
22 Oct 2009
4 min read
Creating newsletters

A newsletter is a great way of keeping customers up to date without them needing to visit your web site. Customers appreciate well-designed newsletters because they allow the customer to keep tabs on their favorite places without needing to check every web site on a regular basis.

Creating a newsletter

Good Eatin' Goal: Create a new newsletter on the Good Eatin' site, which will contain relevant news about the restaurant and will be delivered quarterly to subscribers.

Additional modules needed: Simplenews (http://drupal.org/project/simplenews).

Basic steps

Newsletters are containers for individual issues. For example, you could have a newsletter called Seasonal Dining Guide, which would have four issues per year (Summer, Fall, Winter, and Spring). A customer subscribes to the newsletter, and each issue is sent to them as it becomes available.

Begin by installing and activating the Simplenews module, as shown below. At this point, we only need to enable the Simplenews module; the Simplenews action module can be left disabled. Next, select Content management and then Newsletters, from the Administer menu. Drupal will display an administration area divided into the following sections: a) Sent issues, b) Drafts, c) Newsletters, and d) Subscriptions.

Click on the Newsletters tab and Drupal will display a page similar to the following. As you can see, a default newsletter with the name of our site has been automatically created for us. We can either edit this default newsletter or click on the Add newsletter link to create a new one. Let's click the Add newsletter option to create our seasonal newsletter. Drupal will display a standard form where we can enter the name, description, and relative importance (weight) of the newsletter. Click Save to save the newsletter. It will now appear in the list of available newsletters. If you want to modify the Sender information for the newsletter, to use a name or email address other than your site's defaults, you can either expand the Sender information section when adding the newsletter, or click Edit newsletter and modify the Sender information, as shown in the following screenshot.

Allowing users to sign up for the newsletter

Good Eatin' Goal: Demonstrate how registered and unregistered users can sign up for a newsletter, and configure the registration process.

Additional modules needed: Simplenews (http://drupal.org/project/simplenews).

Basic steps

To allow customers to sign up for the newsletter, we will begin by adding a block to the page. Open the Block Manager by selecting Site building and then Blocks, from the Administer menu. Add the block for the newsletter that you want to allow customers to subscribe to, as shown in the following screenshot. We will now need to give users permission to subscribe to newsletters by selecting User management and then Permissions, from the Administer menu. We will give all users permission to subscribe to newsletters and to view newsletter links, as shown below.

If the customer does not have permission to subscribe to newsletters, the block will appear as shown in the following screenshot. However, if the customer has permission to subscribe to newsletters and is logged in to the site, the block will appear as shown in the following screenshot. If the customer has permission to subscribe but is not logged in, the block will appear as follows. To subscribe to the newsletter, the customer simply clicks on the Subscribe button.
Once they have subscribed, the Subscribe button will change to Unsubscribe so that the user can easily opt out of the newsletter. If the user does not have an active account with the site, they will need to confirm that they want to subscribe.
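Subscriptions do not have to come through the block: for migrations or custom signup forms, they can also be created in code. The sketch below assumes the Drupal 6 Simplenews module's simplenews_subscribe_user() function (check the module's own code for the exact signature in your release) and a hypothetical newsletter term ID:

<?php
// Hypothetical programmatic signup for a Simplenews newsletter (Drupal 6).
$mail = 'customer@example.com'; // the subscriber's email address
$tid  = 1;                      // assumed taxonomy term ID of our newsletter
// TRUE asks Simplenews to send a confirmation email before activating.
simplenews_subscribe_user($mail, $tid, TRUE);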

Writing Tips for Bloggers

Packt
22 Oct 2009
9 min read
The first thing to look at regarding content is the quality of your writing itself. Good writing takes practice. The best way to learn is to study the work of other good writers and bloggers, and by doing so, develop an ear for a good sentence. However, there are guidelines to bear in mind that apply specifically to blogs, and we'll look at some of these here.

Killer Headlines

Ask any newspaper sub-editor and he or she will tell you that writing good headlines is an art to be mastered. This is equally true for blogs. Your headlines are the post titles, and it's very important to get them right. Your headlines should be concise and to the point. You should try to convey the essence of the post in its title. Remember that blogs are often consumed quickly, and readers will use your post titles to decide whether they want to carry on reading. People tend to scan through blogs, so the titles play a big part in helping them pick which posts they might be interested in. Your post titles also have a part to play in search engine optimization; many search engines will use them to index your posts. As more and more people use RSS feeds to subscribe to blogs, it becomes even more important to make your post titles as descriptive and informative as possible. Many RSS readers and aggregators display only the post title, so it's essential that you convey as much information as possible whilst keeping it short and snappy. For example, "The World's Best Salsa Recipe" is a better post title than "A new recipe".

Length of Posts

Try to keep your posts manageable in terms of their word count. It's difficult to be prescriptive about post lengths; there's no one-size-fits-all rule in blogging. You need to gauge the length of your posts based on your subject matter and target audience. There may be an element of experimentation to see how posts of different lengths are received by your readership. As with headlines, bear in mind that most people tend to read blogs fairly quickly, and they may be put off by an overly long post. WordPress 2.6 includes a useful word count feature, as shown in the screenshot. An important factor in controlling the length of your posts is your writing skill. You will find that as you improve as a writer, you will be able to get your points across using fewer words. Good writing is all about making your point as quickly and concisely as possible. Inexperienced writers often feel the urge to embellish their sentences and use long, complicated phrases. This is usually unnecessary, and when you read back that long sentence, you might see a few words that can be cut. Editing your posts is an important process. At the very least, you should always proofread them before clicking the Publish button. Better still, try to get into the habit of actively editing everything you write. If you know someone who is willing to act as an editor for you, that's great; it's always useful to get some feedback on your writing. If, after re-reading and editing your post, it still seems very long, it might be a good idea to split the post in two and publish the second installment a few days later.

Post Frequency

Again, there are no rules set in stone about how frequently you should post. You will probably know from your own experience of other blogs that this varies tremendously from blogger to blogger. Some bloggers post several times a day and others just once a week or less. Figuring out the correct frequency of your posts is likely to take some trial and error. It will depend on your subject matter and how much you have to say about it.
The length of your posts may also have a bearing on this. If you like to write short posts that make just one main point, you may find yourself posting quite regularly. Or you may prefer to save up your thoughts and get them down in one longer post. As a general rule of thumb, try to post at least once per week. Any less than this, and there is a danger your readers will lose interest in your blog. However, it's extremely important not to post just for the sake of it. This is likely to annoy readers, and they may very well delete your feed from their news reader. As with many issues in blogging, post frequency is a personal thing. You should aim to strike a balance between posting once in a blue moon and subjecting your readers to 'verbal diarrhea'.

Almost as important as getting the post frequency right is fine-tuning the timing of your posts, that is, the time of day you publish them. Once again, you can achieve this by knowing your target audience. Who are they, and when are they most likely to sit down in front of their computers and read your blog? If most of your readers are office workers, then it makes sense to have your new posts ready for them when they switch on their workstations in the morning. Maybe your blog is aimed at stay-at-home moms, in which case a good time to post might be mid-morning, when the kids have been dropped off at school, the supermarket run is over, and the first round of chores is done. If you blog about gigs, bars, and nightclubs in your local area, the readers may well include twenty-something professionals who access your blog on their iPhones whilst riding the subway home; a good time to post for them might be late afternoon.

Links to Other Blogs

Links to other bloggers and websites are an important part of your content. Not only are they great for your blog's search engine findability, they also help to establish your place in the blogosphere. Blogging is all about linking to others and the resulting 'conversations'. Try to avoid over-using popular links that appear all over the Web, and instead introduce your readers to new websites and blogs that they may not have heard of. Admittedly, this is difficult nowadays with so many bloggers linking to each other's posts, but the more original you can be, the better. This may take quite a bit of research and trawling through the lower-ranked pages on search engines and indices, but it could be time well spent if your readers come to appreciate you as a source of new content beyond your own blog. Try to focus on finding blogs in your niche or key topic areas.

Establishing Your Tone and Voice

Tone and voice are two concepts that professional writers are constantly aware of and attempting to improve. An in-depth discussion isn't necessary here, but it's worth being aware of them. The concept of 'tone' can seem rather esoteric to the non-professional writer, but as you write more and more, it's something you will become increasingly aware of. For our purposes, we could say the 'tone' of a blog post is all about the way it feels or the way the blogger has pitched it. Some posts may seem very informal; others may be straight-laced, or appear overly complex and technical. Some may seem quite simplistic, while others come across as advanced material. These are all matters of tone. It can be quite subtle, but as far as most bloggers are concerned, it's usually a matter of formal versus informal. How you pitch your writing boils down to understanding your target audience.
Will they appreciate informal, first-person prose, or should you keep it strictly third person, with no slang or casual language? On blogs, a conversational tone is often the most appropriate.

As regards 'voice', this is what makes your writing distinctly yours. Writers who develop a distinct voice become instantly recognizable to readers who know them. It takes a lot of practice to develop and is not something you can consciously aim for; it just happens as you gain more experience. The only thing you can do to help it along is step back from your writing and ask yourself if any of your habits stand in the way of clarity. While you read back your blog posts, imagine yourself as one of your target readers and consider whether they would appreciate the language and style you've used. Employing tone and voice well is all about getting inside their heads and producing content they can relate to. Developing a distinctive voice can also be an important aspect of your company's brand identity. Your marketing department may already have brand guidelines which allude to the tone and voice that should be used in written communications. Or you may wish to develop such guidelines yourself as a way of focusing your use of tone and voice.

The Structure of a Post

This may not apply to very short posts that don't go further than a couple of brief paragraphs, but for anything longer, it's worth thinking about structure. The classic form is 'beginning, middle, and end'. Consider what your main point or argument is, and get it down in the first paragraph. In the middle section, expand on it and back it up with secondary arguments. At the end, reinforce it, and leave no doubt in the reader's mind what it is you've been trying to say. As we've already mentioned, blogs are often read quickly or even just scanned through. Using this kind of structure, which most people are subconsciously aware of, can help readers extract your main points quickly and easily.

A Note about Keywords

It's worth noting here that your writing has a big impact on search engine findability. This is what adds an extra dimension to writing for the Web. As well as all the usual considerations of style, tone, content, and so on, you also need to optimize your content for the search engines. This largely comes down to identifying your keywords and ensuring they're used with the right frequency. In the meantime, hold this thought.

PHP Data Objects: Error Handling

Packt
22 Oct 2009
11 min read
In this article, we will extend our application so that we can edit existing records as well as add new ones. As we will deal with user input supplied via web forms, we have to take care of its validation. Also, we may add error handling so that we can react to non-standard situations and present the user with a friendly message. Before we proceed, let's briefly examine the sources of errors mentioned above and see what error handling strategy should be applied in each case. Our error handling strategy will use exceptions, so you should be familiar with them. If you are not, you can refer to Appendix A, which will introduce you to the new object-oriented features of PHP5. We have consciously chosen to use exceptions, even though PDO can be instructed not to use them, because there is one situation where they cannot be avoided: the PDO constructor always throws an exception when the database object cannot be created, so we may as well use exceptions as our main error-trapping method throughout the code.

Sources of Errors

To create an error handling strategy, we should first analyze where errors can happen. Errors can happen on every call to the database, and although this is rather unlikely, we will look at this scenario. Before doing so, let's check each of the possible error sources and define a strategy for dealing with them.

The first source is failure to connect to the database server. This can happen on a really busy server, which cannot handle any more incoming connections; for example, there may be a lengthy update running in the background. The outcome is that we are unable to get any data from the database, so we should do the following: if the PDO constructor fails, we present a page displaying a message which says that the user's request could not be fulfilled at this time and that they should try again later. Of course, we should also log this error, because it may require immediate attention. (A good idea would be emailing the database administrator about the error.) The problem with this error is that, while it usually manifests itself before a connection is established with the database (in a call to the PDO constructor), there is a small risk that it can happen after the connection has been established (on a call to a method of the PDO or PDOStatement object while the database server is being shut down). In this case, our reaction will be the same: present the user with an error message asking them to try again later.

Improper Configuration of the Application

This error can only occur when we move the application across servers where database access details differ; this may be when we are uploading from a development server to a production server where the database setups differ. This is not an error that can happen during normal execution of the application, but care should be taken while uploading, as this may interrupt the site's operation. If this error occurs, we can display another error message like: "This site is under maintenance". In this scenario, the site maintainer should react immediately, because without correcting the connection string the application cannot operate normally.

Improper Validation of User Input

This is an error which is closely related to SQL injection vulnerability. Every developer of database-driven applications must take proper measures to validate and filter all user input. This error may lead to two major consequences: either the query will fail due to malformed SQL (so that nothing particularly bad happens), or an SQL injection may occur and application security may be compromised.
While their consequences differ, both of these problems can be prevented in the same way. Let's consider the following scenario: we accept some numeric value from a form and insert it into the database. To keep our example simple, assume that we want to update a book's year of publication. To achieve this, we can create a form that has two fields: a hidden field containing the book's ID, and a text field to enter the year. We will skip the implementation details here, and see how using a poorly designed script to process this form could lead to errors and put the whole system at risk. The form processing script will examine two request variables: $_REQUEST['book'], which holds the book's ID, and $_REQUEST['year'], which holds the year of publication. If there is no validation of these values, the final code will look similar to this:

$book = $_REQUEST['book'];
$year = $_REQUEST['year'];
$sql = "UPDATE books SET year=$year WHERE id=$book";
$conn->query($sql);

Let's see what happens if the user leaves the year field empty. The final SQL would then look like:

UPDATE books SET year= WHERE id=1;

This SQL is malformed and will lead to a syntax error. Therefore, we should ensure that both variables hold numeric values. If they don't, we should redisplay the form with an error message. Now, let's see how an attacker might exploit this to delete the contents of the entire table. To achieve this, they could just enter the following into the year field:

2007; DELETE FROM books;

This turns a single query into three queries:

UPDATE books SET year=2007; DELETE FROM books; WHERE id=1;

Of course, the third query is malformed, but the first and second will execute, and the database server will report an error. To counter this problem, we could use simple validation to ensure that the year field contains four digits. However, if we have text fields which can contain arbitrary characters, the fields' values must be escaped prior to creating the SQL.

Inserting a Record with a Duplicate Primary Key or Unique Index Value

This problem may happen when the application is inserting a record with duplicate values for the primary key or a unique index. For example, in our database of authors and books, we might want to prevent the user from entering the same book twice by mistake. To do this, we can create a unique index on the ISBN column of the books table. As every book has a unique ISBN, any attempt to insert the same ISBN will generate an error. We can trap this error and react accordingly, by displaying an error message asking the user to correct the ISBN or cancel its addition.

Syntax Errors in SQL Statements

This error may occur if we haven't properly tested the application. A good application must not contain these errors, and it is the responsibility of the development team to test every possible situation and check that every SQL statement performs without syntax errors. If this type of error occurs, we trap it with exceptions and display a fatal error message; the developers must correct the situation at once. Now that we have learned a bit about the possible sources of errors, let's examine how PDO handles errors.

Types of Error Handling in PDO

By default, PDO uses the silent error handling mode. This means that any error that arises when calling methods of the PDO or PDOStatement classes goes unreported. With this mode, one would have to call PDO::errorInfo(), PDO::errorCode(), PDOStatement::errorInfo(), or PDOStatement::errorCode() every time an error could have occurred, to see whether one really did occur.
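For instance, in silent mode a failed query gives no outward sign at all; the calling code has to check for itself. Here is a minimal sketch, using the books table from the example above:

// Silent mode: query() returns false on failure and reports nothing.
$stmt = $conn->query('SELECT * FROM books');
if ($stmt === false) {
    // errorInfo() returns array(SQLSTATE, driver error code, error message)
    $info = $conn->errorInfo();
    die('Query failed: ' . $info[2]);
}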
Note that this mode is similar to traditional database access: usually, the code calls mysql_errno() and mysql_error() (or the equivalent functions for other database systems) after calling functions that could cause an error, after connecting to a database, and after issuing a query. Another mode is the warning mode. Here, PDO acts identically to traditional database access: any error that happens during communication with the database raises an E_WARNING error. Depending on the configuration, an error message could be displayed or logged to a file. Finally, PDO introduces a modern way of handling database connection errors: using exceptions. Every failed call to any of the PDO or PDOStatement methods will throw an exception.

As we have previously noted, PDO uses the silent mode by default. To switch to a desired error handling mode, we have to specify it by calling the PDO::setAttribute() method. Each of the error handling modes is specified by one of the following constants, which are defined in the PDO class:

PDO::ERRMODE_SILENT – the silent strategy.
PDO::ERRMODE_WARNING – the warning strategy.
PDO::ERRMODE_EXCEPTION – use exceptions.

To set the desired error handling mode, we have to set the PDO::ATTR_ERRMODE attribute in the following way:

$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

To see how PDO throws an exception, edit the common.inc.php file by adding the above statement after line 46. If you want to test what happens when PDO throws an exception, change the connection string to specify a nonexistent database. Now point your browser to the books listing page. You should see output similar to the following screenshot. This is PHP's default reaction to uncaught exceptions: they are regarded as fatal errors, and program execution stops. The error message reveals the class of the exception, PDOException, the error description, and some debug information, including the name and line number of the statement that threw the exception. Note that if you want to test SQLite, specifying a non-existent database may not work, as the database will get created if it does not exist already. To see that it does work for SQLite, change the $connStr variable on line 10 so that there is an illegal character in the database name:

$connStr = 'sqlite:/path/to/pdo*.db';

Refresh your browser and you should see something like the previous screenshot: a similar message is displayed, specifying the cause and the location of the error in the source code.

Defining an Error Handling Function

If we know that a certain statement or block of code can throw an exception, we should wrap that code within a try…catch block to prevent the default error message being displayed, and present a user-friendly error page instead. But before we proceed, let's create a function that will render an error message and exit the application. As we will be calling it from different script files, the best place for this function is, of course, the common.inc.php file. Our function, called showError(), will do the following: render a heading saying "Error"; render the error message, escaping the text with the htmlspecialchars() function and processing it with the nl2br() function so that we can display multi-line messages (this function converts all line break characters to <br /> tags); and call the showFooter() function to close the opening <body> and <html> tags. The function will assume that the application has already called the showHeader() function. (Otherwise, we would end up with broken HTML.)
We will also have to modify the block that creates the connection object in common.inc.php to catch the possible exception. With all these changes, the new version of common.inc.php will look like this:

<?php
/**
 * This is a common include file
 * PDO Library Management example application
 * @author Dennis Popel
 */

// DB connection string and username/password
$connStr = 'mysql:host=localhost;dbname=pdo';
$user = 'root';
$pass = 'root';

/**
 * This function will render the header on every page,
 * including the opening html tag,
 * the head section and the opening body tag.
 * It should be called before any output of the page.
 */
function showHeader($title)
{
?>
<html>
<head><title><?php echo htmlspecialchars($title); ?></title></head>
<body>
<?php
}

/**
 * This function will 'close' the body and html
 * tags opened by the showHeader() function
 */
function showFooter()
{
?>
</body>
</html>
<?php
}

/**
 * This function will display an error message, call the
 * showFooter() function and terminate the application
 * @param string $message the error message
 */
function showError($message)
{
  echo "<h2>Error</h2>";
  echo nl2br(htmlspecialchars($message));
  showFooter();
  exit();
}

// Create the connection object
try
{
  $conn = new PDO($connStr, $user, $pass);
  $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
}
catch(PDOException $e)
{
  showHeader('Error');
  showError("Sorry, an error has occurred. Please try your request later\n" .
    $e->getMessage());
}

As you can see, the newly created function is pretty straightforward. The more interesting part is the try…catch block that we use to trap the exception. With these modifications in place, we can test how a real exception will be processed. To do that, make sure your connection string is wrong (so that it specifies a wrong database name for MySQL or contains an invalid file name for SQLite). Point your browser to books.php and you should see the following window:
Google Earth, Google Maps and Your Photos: a Tutorial

Packt
22 Oct 2009
38 min read
Introduction

A scholar who never travels but stays at home is not worthy to be accounted a scholar. From my youth on I had the ambition to travel, but could not afford to wander over the three hundred counties of Korea, much less the whole world. So, carrying out an ancient practise, I drew a geographical atlas. And while gazing at it for long stretches at a time I feel as though I was carrying out my ambition... Morning and evening while bending over my small study table, I meditate on it and play with it and there in one vast panorama are the districts, the prefectures and the four seas, and endless stretches of thousands of miles.

WON HAK-SAENG, Korean student. Preface to his untitled manuscript atlas of China during the Ming Dynasty, dated 1721.

To say that maps are important is an understatement almost as big as the world itself. Maps quite literally connect us to the world in which we all live, and by extension, they link us to one another. The oldest preserved maps date back nearly 4500 years. In addition to connecting us to our past, they chart much of human progress and expose the relationships among people through time. Unfortunately, as a work of humankind, maps share many of the same shortcomings of all human endeavors. They are to some degree inaccurate and they reflect the bias of the map maker. Advancements in technology help to address the former issue, and offer us the opportunity to resist the latter. To the extent that it's possible for all of us to participate in the map making, the bias of a select few becomes less meaningful. Google Earth and Google Maps are two applications that allow each of us to assert our own place in the world and contribute our own unique perspective. I can think of no better way to accomplish this than by combining maps and photography. Photos reveal much about who we are, the places we have been, the people with whom we have shared those experiences, and the significant events in our lives. Pinning our photos to a map allows us to see them in their proper geographic context, a valuable way to explore and share them with friends and family. Photos can reveal the true character of a place, and afford others the opportunity to experience these destinations, perhaps faraway and unreachable for some of us, from the perspective of someone who has actually been there. In this tutorial I'll show you how to 'pin' your photos using Google Earth and Google Maps. Both applications are free, and available for Windows, Mac OS X, and Linux. In the case of Google Earth there are requirements associated with installing and running what is a local application. Google Maps has its own requirements: primary among them a compatible web browser (the highly regarded Firefox is recommended). In Google Earth, your photos show up on the map within the application complete with titles, descriptions, and other relevant information. You can choose to share your photos with everyone, only people you know, or even reserve them strictly for yourself. Google Maps offers the flexibility to present maps outside of a traditional application. For example, you can embed a map on a webpage pinpointing the location of one particular photo, or mapping a collection of photos to present along with a photo gallery, or even collecting all of your digital photos together on one dynamic map. Over the course of a short couple of articles we'll cover everything you need to take advantage of both applications. I've put together two scripts to help us accomplish that goal.
The first is a Perl script that works through your photos and generates a file in the proper format with all of the data necessary to include those photos in Google Earth. The second is a short bit of Javascript that works with the first file and builds a dynamic Google Map of those same photos. Both scripts are available for you to download, after which you are free to use them as is, or modify them to suit your own projects. I've purposefully kept the code as simple as possible to make it accessible to the widest audience, even those of you reading this who may be new to programming or unfamiliar with Perl, Javascript, or both. I've taken care to comment the code generously so that everything is plainly obvious. I'm hopeful that you will be surprised at just how simple it is. There are a couple of preliminary topics to examine briefly before we go any further. In the preceding paragraph I mentioned that the result of the first of our two scripts is a 'file in the proper format...'. This file, or more to the point the file format, is a very important part of the project. KML (Keyhole Markup Language) is a fairly simple XML-based format that can be considered the native "language" of Google Earth. That description raises the question, 'What is XML?'. To oversimplify, because even a cursory discussion of XML is outside the scope of this article, XML (Extensible Markup Language) is an open data format (in contrast to proprietary formats) which allows us to present information in such a way that we communicate not only the data itself, but also descriptive information about the data and the relationships among elements of the data. One of the technical terms that applies is 'metalanguage', which approximately translates to mean a language that makes it possible to describe other languages. If you're unfamiliar with the concept, it may be difficult to grasp at first or it may not seem impressive. However, metalanguages, and specifically XML, are an important innovation. (I don't mean to suggest that it's a new concept; in fact XML has roots that are quite old, relative to the brief history of computing.) These metalanguages make it possible for us to imbue data with meaning such that software can make use of that data. Let's look at an example taken from the Wikipedia entry for KML:

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
<Placemark>
  <description>New York City</description>
  <name>New York City</name>
  <Point>
    <coordinates>-74.006393,40.714172,0</coordinates>
  </Point>
</Placemark>
</kml>

Ignore all of the pro forma stuff before and after the <Placemark> tags and you might be able to guess what this bit of data represents. More importantly, a computer can be made to understand what it represents in some sense. Without the tags and structure, "New York City" is just a string, i.e. a sequence of characters. Considering the tags, we can see that we're dealing with a place (a Placemark), and that "New York City" is the name of this place (and in this example also its description). With all of this formal structure, programs can be made to roughly understand the concept of a Placemark, which is a useful thing for a mapping application. Let's think about this for a moment. There are a very large number of recognizable places on the planet, and a provably infinite number of strings. Given a block of text, how could a computer be made to identify a place from, for example, the scientific name of a particular flower, or a person's name? It would be extremely difficult.
We could try to create a database of every recognizable place and have the program check the database every time it encounters a string. That assumes it's possible to agree on a 'complete list of every place', which is almost certainly untrue. Keep in mind that we could be talking about informal places that are significant only to a small number of people or even a single person, e.g. 'the park where, as a child, I first learned to fly a kite'. It would be a very long list if we were going to include these sorts of places, and incredibly time-consuming to search. Relying on the structure of the fragment above, we can easily write a program that can identify 'New York City' as the name of a place, or for that matter 'The park where I first learned to fly a kite'. In fact, I could write a program that pulls all of the place names from a file like this, along with a description and a pair of coordinate points for each, and includes them on a map. That's exactly what we're going to do. KML makes it possible. If I haven't made it clear, the structural bits of the file must be standardized. KML supports a limited set of elements (e.g. 'Placemark' is a supported element, as are 'Point' and 'coordinates'), and all of the elements used in a file must adhere to the standard for it to be considered valid. The second point we need to address before we begin is, appropriately enough... where to begin? Lewis Carroll famously tells us to "Begin at the beginning and go on till you come to the end: then stop." Of course, Mr. Carroll was writing a book at the time. If "Alice's Adventures in Wonderland" were an article, he might have had different advice. From the beginning to the end there is a lot of ground to cover. We're going to start somewhere further along, and make up the difference with the following assumptions. For the purposes of this discussion, I am going to assume that you have:

- Perl
- Access to Phil Harvey's excellent ExifTool, a Perl library and command-line application for reading, writing, and editing metadata in images (among other file types). We will be using this library in our first script
- A publicly accessible web server. Google requires the use of an API key by which it can monitor the use of its map services. Google must be able to validate your key, and so your site must be publicly available. Note that this is a requirement for Google Maps only
- Photos, preferably in a photo management application. Essentially, all you need is an app capable of generating both thumbnails and reduced size copies of your original photos. An app that can export a nice gallery for use on the web is even better
- Coordinate data as part of the EXIF metadata embedded in your photos. If that sounds unfamiliar to you, then most likely you will have to take some additional steps before you can make use of this tutorial. I'm not aware of any digital cameras that automatically include this information at the time the photo is created. There are devices that can be used in combination with digital cameras, and there are a number of ways that you can 'tag' your photos with geographic data much the same way you would add keywords and other annotations

Let's begin! Part 1: Google Earth Section 1: Introduction to Part 1 Time to begin the project in earnest.
As I've already mentioned, we'll spend the first half of this tutorial looking at Google Earth and putting together a Perl script that, given a collection of geotagged photos, will build a set of KML files so that we can browse our photos in Google Earth. These same files will serve as the data source for our Google Maps application later on. Section 2: Some Advice Before We Begin Take your time to make sure you understand each topic before you move on to the next. Think of this as the first step in debugging your completed code. If you go slowly enough to be able to identify aspects of the project that you don't quite understand, then you'll have some idea where to start looking for problems should things not go as expected. Furthermore, going slowly will give you the opportunity to identify those parts that you may want to modify to better fit the script to your own needs. If this is new to you, follow along as faithfully as possible with what I do here the first time through. Feel free to make notes for yourself as you go, but making changes on the first pass may make it difficult for you to catch back on to the narrative and piece together a functional script. After you have a working solution, it will be a simple matter to implement changes one at a time until you have something that works for you. Following this approach, it will be easy to identify the silly mistakes that tend to creep in once you start making changes. There is also the issue of trust. This is probably the first time we're meeting each other, in which case you should have some doubt that my code works properly to begin with. If you minimize the number of changes you make, you can confirm that this works for you before blaming yourself or your changes for my mistakes. I will tell you up front that I'm building this project myself as we go. You can be certain at least that it functions as described for me as of the date attached to the article. I realize that this is quite different from being certain that the project will work for you, but at least it's something. The entirety of my project is available for you to download. You are free to use all of it for any legal purpose whatsoever, including my photos in addition to all of the code, icons, etc. This is so you have some samples to use before you involve your own content. I don't promise that they are the most beautiful images you have ever seen, but they are all decent photos, properly annotated with the necessary metadata, including geographic tags. Section 3: Photos, Metadata and ExifTool To begin, we must have a clear understanding of what the Perl script will require from us. Essentially, we need to provide it with a selection of annotated image files, and information about how to reference those files. A simple folder of files is sufficient, and will be convenient for us, both as programmer and end user. The script will be capable of negotiating nested folders, and if a directory contains both images and other file types, the non-image types will be ignored. Typically, after a day of taking photos I'll have 100 to 200 that I want to keep. I delete the rest immediately after offloading them from the camera. For the files that are left, I preserve the original grouping, keeping all of the files together in a single folder. I place this folder of images in an appropriate location according to a scheme that serves to keep my complete collection of photos neatly organized. These are my digital 'negatives'.
I handle all subsequent organization, editing, enhancements, and annotations within my photo management application. I use Apple Inc.'s Aperture, but there are many others that do the job equally well. Annotating your photos is well worth the investment of time and effort, but it's important that you have some strategy in mind so that you don't create meaningless tags that are difficult to use to your advantage. For the purposes of this project the tags we'll need are quite limited, which means that going forward we will be able to continue adding photos to our maps with a reasonable amount of work. The annotations we need are:

- Caption
- Latitude
- Longitude
- Image Date *
- Location/Sub-Location
- City
- Province/State
- Country Name
- Event
- People
- ImageWidth *
- ImageHeight *

* Values for these Exif tags are generated by your camera.

Note that these are labels used in Aperture, and are not necessarily consistent from one application to the next. Some of them are more likely than others to be used reliably. 'City', for example, should be dependable, while the labels 'People', 'Events', and 'Location', among others, are more likely to differ. One explanation for these variations is that the meaning of these fields is more open to interpretation. Location, for example, is likely to be used to narrow down the area where the photo was taken within a particular city, but it is left to the person who is annotating the photo to decide whether the field should name a specific street address, an informal place (e.g. 'home' or 'school'), or a larger area, for example a district or neighborhood. Fortunately things aren't so arbitrary as they seem. Each of these fields corresponds to a specific tag name that adheres to one of the common metadata formats (Exif, IPTC, XMP, and there are others). These tag names are consistent, as required by the standards. The trick is in determining the labels used in your application that correspond to the well-defined tag names. Our script relies on these metadata tags, so it is important that you know which fields to use in your application. This gives us an excuse to get acquainted with ExifTool. From the project's website, we have this description of the application: ExifTool is a platform-independent Perl library plus a command-line application for reading, writing, and editing meta information in image, audio, and video files... ExifTool can seem a little intimidating at first. Just keep in mind that we will need to understand just a small part of it for this project, and then be happy that such a useful and flexible tool is freely available for you to use. The brief description above states in part that ExifTool is a Perl library and command-line application that we can use to extract metadata from image files. With a single short command, we can have the app print all of the metadata contained in one of our image files. First, make sure ExifTool is installed. You can test for this by typing the name of the application at the command line:

$ exiftool

If it is installed, then running it with no options should prompt the tool to print its documentation. If this works, there will be more than one screen of text. You can page through it by pressing the spacebar. Press the 'q' key at any time to stop. If the tool is not installed, you will need to add it before continuing. See the appendix at the end of this tutorial for more information.
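Since ExifTool is a Perl library as well as a command-line application, and it is the library that our script will use, here is a minimal sketch of reading tags programmatically. This snippet is mine, not part of the tutorial's download, and the file name is hypothetical; Image::ExifTool and its ImageInfo() method are the library's documented interface:

use strict;
use warnings;
use Image::ExifTool;

# Create an ExifTool object and request only the tags we care about.
my $exifTool = Image::ExifTool->new();
my $info = $exifTool->ImageInfo('thumb-1.jpg',    # hypothetical file name
    'Caption-Abstract', 'GPSLatitude', 'GPSLongitude', 'City');

# ImageInfo() returns a reference to a hash of tag name => value.
foreach my $tag (sort keys %$info) {
    print "$tag: $info->{$tag}\n";
}

The command-line tool described next is the quickest way to discover which tag names your files actually contain; a call like the one above is how a script reads them.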
Having confirmed that ExifTool is installed, typing the following command will result in a listing of the metadata for the named image:

$ exiftool -f -s -g2 /path/image.jpg

where 'path' is an absolute path to image.jpg or a relative path from the current directory, and 'image.jpg' is the name of one of your tagged image files. We'll have more to say about ExifTool later, but because I believe that no tutorial should ask the reader to blindly enter commands as if they were magic incantations, I'll briefly describe each of the options used in the command above:

-f forces printing of tags even if their values are not found. This gives us a better idea about all of the available tag names, whether or not there are currently defined values for those tags.

-s prints tag names instead of descriptions. This is important for our purposes. We need to know the tag names so that we can request them in our Perl code. Descriptions, which are expanded, more human-readable forms of the same names, obscure details we need. For example, compare the tag name 'GPSLatitude' to the description 'GPS Latitude'. We can use the tag name, but not the description, to extract the latitude value from our files.

-g2 organizes output by category. All location-specific information is grouped together, as is all information related to the camera, date and time tags, etc. You may feel, as I do, that this grouping makes it easier to examine the output. Also, this organization is more likely to reflect the grouping of field names used by your photo management application.

If you prefer to save the output to a file, you can add ExifTool's -w option with a file extension:

$ exiftool -f -s -g2 -w txt path/image.jpg

This command will produce the same result but write the output to the file 'image.txt' in the current directory; again, 'image.jpg' is the name of the image file. The -w option appends the named extension to the image file's basename, creates a new file with that name, and sets the new file as the destination for output. The tag names that correspond to the list of Aperture fields presented above are:

Metadata tag name               Aperture field label
Caption-Abstract                Caption
GPSLatitude                     Latitude
GPSLongitude                    Longitude
DateTimeOriginal                Image Date
Sub-location                    Location/Sub-Location
City                            City
Province-State                  Province/State
Country-PrimaryLocationName     Country Name
FixtureIdentifier               Event
Contact                         People
ImageWidth                      Pixel Width
ImageHeight                     Pixel Height

Section 4: Making Photos Available to Google Earth We will use some of the metadata tags from our image files to locate our photos on the map (e.g. GPSLatitude, GPSLongitude), and others to describe the photos. For example, we will include the value of the People tag in the information window that accompanies each marker to identify friends and family who appear in the associated photo. Because we want to display and link to photos on the map, not just indicate their position, we need to include references to the location of our image files on a publicly accessible web server. You have some choice about how to do this, but for the implementation described here we will (1) display a thumbnail in the info window of each map marker and (2) include a link to the details page for the image in a gallery created in our photo management app. When a visitor clicks on a map marker they will see a thumbnail photo along with other brief descriptive information.
Clicking a link included as part of the description will open the viewer's web browser to a page displaying a large size image and additional details. Furthermore, because the page is part of a gallery, viewers can jump to an index page and step forward and back through the complete collection. This is a complementary way to browse our photos. Taking this one step further, we could add a comments section to the gallery pages, or replace the gallery altogether, instead choosing to display each photo as a weblog post, for example. The structure of the gallery created from my photo app is as follows:

/ (the root of the gallery directory)
    index.html
    large-1.html
    large-2.html
    large-3.html
    ...
    large-n.html
    assets/
        css/
        img/
    catalog/
    pictures/
        picture-1.jpg
        picture-2.jpg
        picture-3.jpg
        ...
        picture-n.jpg
    thumbnails/
        thumb-1.jpg
        thumb-2.jpg
        thumb-3.jpg
        ...
        thumb-n.jpg

The application creates a root directory containing the complete gallery. Assuming we do not want to make any manual changes to the finished site, publishing is as easy as copying the entire directory to a location within the web server's document root.

assets/ is a subfolder containing files related to the theme itself. We don't need to concern ourselves with this sub-directory.

catalog/ contains a single catalog.plist file, which is specific to Aperture and not relevant to this discussion.

pictures/ contains the large size images included on the detail gallery pages.

thumbnails/ contains the thumbnail images corresponding to the large size images in pictures/.

Finally, there are a number of files at the root of the gallery. These include index pages and files named 'large-n.html', where n is a number starting at 1 and increasing sequentially, e.g. large-1.html, large-2.html, large-3.html, ... The index files are the index pages of our gallery. The number of index pages generated will be dictated by the number of image files in the gallery, as well as the gallery's layout and design. index.html is always the first gallery page. The large-n.html files are the details pages of our gallery. Each page features an individual photo with links to the previous and next photos in sequence and a link back to the index. You can see the gallery I have created for this tutorial here: http://robreed.net/photos/tutorial_gallery/ If you take the time to look through the gallery, maybe you can appreciate the value of viewing these photos on a map. Web-based photo galleries like this one are nice enough, but the photos are more interesting when viewed in some meaningful context. There are a couple of things to notice about this gallery. Firstly, picture-1.jpg, thumb-1.jpg, and large-1.html all refer to the same image. So if we pick one of the three files, we can easily predict the names of the other two. This relationship will be useful when it comes to writing our script. There is another important issue I need to call to your attention, because it will not be apparent from looking only at the gallery structure. Aperture has renamed all of my photos in the process of exporting them. The name of the original file from which picture-1.jpg was generated (as well as large-1.html and thumb-1.jpg) is 'IMG_0277.JPG', which is the filename produced by my camera. Because I want to link to these gallery files, not my original photos which will stay safely tucked away on a local drive, I must run the script against the photos in the gallery.
I cannot run it against the original image files, because the filenames referenced in the output would be unrelated to the files in the gallery. If my photo management app provided the option of preserving the original filenames for the corresponding photos in the gallery, then I could run the script against either the original image files or the gallery photos, because all of the filenames would be consistent, but this is not the case. I don't have a problem as long as I run the script on the exported photos. However, if I'm running the script against the photos in the web gallery, either the pictures or thumbnail images must contain the same metadata as the original image files. Aperture preserves the metadata in both. Your application may not. A simple, dependable way to confirm that the metadata is present in the gallery files is to run ExifTool first against the original file and then against the same photo in the pictures/ and thumbnails/ directories in the gallery. If ExifTool reports identical metadata, then you will have no trouble using either pictures/ or thumbnails/ as your source directory. If the metadata is not present or not complete in the gallery files, you may need to use the script on your original image files. As has already been explained, this isn't a problem unless the gallery produces filenames that are inconsistent with the original filenames, as Aperture does. In this case you have a problem. You won't be able to run the script on the original image files because of the naming issue, or on the gallery photos because they don't contain metadata. Make sure that you understand this point. If you find yourself in this situation, then your best bet is to generate files to use with your maps from your original photos in some other way, bypassing your photo management app's web gallery features altogether in favor of a solution that preserves the filenames, the metadata, or both. There is another option, which involves setting up a relationship between the names of the original files and the gallery filenames. This tutorial does not include details about how to set up this association. Finally, keep in mind that though we've looked at the structure of a gallery generated by Aperture, virtually all photo management apps produce galleries with a similar structure. Regardless of the application used, you should find:

- A group of HTML files, including index and details pages
- A folder of large size image files
- A folder of thumbnails

Once you have identified the structure used by your application, as we have done here, it will be a simple task to translate these instructions. Section 5: Referencing Files Over the Internet Now we can talk about how to reference these files and gallery pages so that we can create a script to generate a KML file that includes these references. When we identify a file over the internet, it is not enough to use the filename, e.g. 'thumb-1.jpg', or even the absolute or relative path to the file on the local computer. In fact these paths are most likely not valid as far as your web server is concerned. Instead we need to know how to reference our files such that they can be accessed over the global internet, and the web more specifically. In other words, we need to be able to generate a URL (Uniform Resource Locator) which unambiguously describes the location of our files.
The formal details of exactly what comprises a URL are more complicated than may be obvious at first, but most of us are familiar with the typical URL, like this one: http://www.ietf.org/rfc/rfc1630.txt which describes the location of a document titled "Universal Resource Identifiers in WWW" that just so happens to define the formal details of what comprises a URL.

http://www.ietf.org
This portion of the address is enough to describe the location of a particular web server over the public internet. In fact it does a little more than just specify the location of a machine. The http:// portion is called the scheme, and it identifies a particular protocol (i.e. a set of rules governing communication) and a related application, namely the web. What I just said isn't quite correct; at one time, HTTP was used exclusively by the web, but that's no longer true. Many internet-based applications use the protocol because the popularity of the web ensures that data sent via HTTP isn't blocked or otherwise disrupted. You may not be accustomed to thinking of it as such, but the web itself is a highly-distributed, network-based application.

/rfc/
This portion of the address specifies a directory on the server. It is equivalent to an absolute path on your local computer. The leading forward slash is the root of the web server's public document directory. Assuming no trickiness on the part of the server, all content lives under the document root. This tells us that rfc/ is a sub-directory contained within the document root. Though this directory happens to be located immediately under the root, this certainly need not be the case. In fact these paths can get quite long. We have now discussed all of the URL except for rfc1630.txt, which is the name of a specific file. The filename is no different than the filenames on your local computer.

Let's manually construct a path to one of the large-n.html pages of the gallery we have created. The address of my server is robreed.net, so I know that the beginning of my URL will be:

http://robreed.net

I keep all of my galleries together within a photos/ directory, which is contained in the document root:

http://robreed.net/photos/

Within photos/, each gallery is given its own folder. The name of the folder I have created for this tutorial is 'tutorial_gallery'. Putting this all together, the following URL brings me to the root of my photo gallery:

http://robreed.net/photos/tutorial_gallery/

We've already gone over the directory structure of the gallery, so it should make sense to you that when referring to the 'large-1.html' detail page, the complete URL will be:

http://robreed.net/photos/tutorial_gallery/large-1.html

The URL of the image that corresponds to that detail page is:

http://robreed.net/photos/tutorial_gallery/pictures/picture-1.jpg

and the thumbnail can be found at:

http://robreed.net/photos/tutorial_gallery/thumbnails/thumb-1.jpg

Notice that the address of the gallery is shared among all of these resources. Also, notice that resources of each type (e.g. the large images, thumbnails, and html pages) share a more specific address with files of that same type.
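In the script, these addresses will be built by simple string concatenation; the next few paragraphs give the idea a name and work through it by hand. As a preview, here is a minimal sketch (the variable names are mine, not the tutorial's):

use strict;
use warnings;

# Base addresses for the example gallery shown above.
my $gallery_base    = 'http://robreed.net/photos/tutorial_gallery/';
my $pictures_base   = $gallery_base . 'pictures/';
my $thumbnails_base = $gallery_base . 'thumbnails/';

# Given a photo number, derive every URL we need.
my $n = 1;    # hypothetical photo number
my $page_url  = $gallery_base    . "large-$n.html";
my $image_url = $pictures_base   . "picture-$n.jpg";
my $thumb_url = $thumbnails_base . "thumb-$n.jpg";

print "$page_url\n$image_url\n$thumb_url\n";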
If we use the term 'base address' to refer to the shared portions of these URLs, then we can talk about several significant base addresses:

- The gallery base address: http://robreed.net/photos/tutorial_gallery/
- The html pages base address: http://robreed.net/photos/tutorial_gallery/
- The images base address: http://robreed.net/photos/tutorial_gallery/pictures/
- The thumbnails base address: http://robreed.net/photos/tutorial_gallery/thumbnails/

Note that given the structure of this particular gallery, the html pages base address and the gallery base address are identical. This need not be the case, and may not be for the gallery produced by your application. We can hard-code the base addresses into our script. For each photo, we need only append the associated filename to construct valid URLs to any of these resources. As the script runs, it will have access to the name of the file that it is currently evaluating, and so it will be a simple matter to generate the references we need as we go. At this point we have discussed almost everything we need to put together our script. We have:

- Created a gallery at our server, which includes our photos with metadata in tow
- Identified all of the metadata tags we need to extract from our photos with the script, and the corresponding field names in our photo management application
- Determined all of the base addresses we need to generate references to our image files

Section 6: KML The last thing we need to understand is the format of the KML files we want to produce. We've already looked at a fragment of KML. The full details can be found on Google's KML documentation pages, which include samples, tutorials, and a complete reference for the format. A quick look at the reference is enough to see that the language includes many elements and attributes, the majority of which we will not be including in our files. That statement correctly implies that it is not necessary for every KML file to include all elements and attributes. The converse however is not true, which is to say that every element and attribute contained in any KML file must be a part of the standard. A small subset of KML will get us most, if not all, of what you will typically see in Google Earth from other applications. Many of the features we will not be using deal with aspects of the language that are either:

- Not relevant to this project, e.g. ground overlays (GroundOverlay), which "draw an image overlay draped onto the terrain"
- Minute details for which the default values are sensible

There is no need to feel shortchanged because we are covering only a subset of the language. With the basic structure in place and a solid understanding of how to script the generation of KML, you will be able to extend the project to include any of the other components of the language as you see fit. The structure of our KML file is as follows:

 1.    <?xml version="1.0" encoding="UTF-8"?>
 2.    <kml xmlns="http://www.opengis.net/kml/2.2">
 3.        <Document>
 4.            <Folder>
 5.                <name>$folder_name</name>
 6.                <description>$folder_description</description>
 7.                <Placemark>
 8.                    <name>$placemark_name</name>
 9.                    <Snippet maxLines="1">
10.                        $placemark_snippet
11.                    </Snippet>
12.                    <description><![CDATA[
13.                        $placemark_description
14.                    ]]></description>
15.                    <Point>
16.                        <coordinates>$longitude,$latitude</coordinates>
17.                    </Point>
18.                </Placemark>
19.            </Folder>
20.        </Document>
21.    </kml>

Line 1: XML header. Every valid KML file must start with this line, and nothing else is allowed to appear before it. As I've already mentioned, KML is an XML-based language, and XML requires this header.

Line 2: Namespace declaration. More specifically, this is the KML namespace declaration, and it is another formality. The value of the xmlns attribute identifies the KML namespace that the file adheres to.

Line 3: <Document> is a container element representing the KML file itself. If we do not explicitly name the document by including a name element, then Google Earth will use the name of the KML file as the Document <name>. The Document container will appear on the Google Earth 'Sidebar' within the 'Places' panel. Optionally, we can control whether the container is closed or open by default. (This setting can be toggled in Google Earth using a typical disclosure triangle.) There are many other elements and attributes that can be applied to the Document element. Refer to the KML Reference for the full details.

Line 4: <Folder> is another container element. The files we produce will include a single <Folder> containing all of our Placemarks, where each Placemark represents a single image. We could create multiple Folder elements to group our Placemarks according to some significant criteria. Think of the Folder element as being similar to your operating system's concept of a folder. At this point, note the structure of the fragment. The majority of it is contained within the Folder element. Folder, in turn, is an element of Document, which is itself within the <kml> container. It should make sense that everything in the file that is considered part of the language must be contained within the kml element. From the KML reference: A Folder is used to arrange other Features hierarchically (Folders, Placemarks, NetworkLinks, or Overlays). A Feature is visible only if it and all its ancestors are visible.

Line 5: The name element identifies an object, in this case the Folder object. The text that appears between the name tags can be any plain text that will serve as an appropriate label.

Line 6: <description> is any text that seems to adequately describe the object. The description element supports both plain text and a subset of HTML. We'll consider issues related to using HTML in <description> in the discussion of Placemark, lines 12 - 14.

Lines 7 - 17 define a <Placemark> element. Note that Placemark contains a number of elements that also appear in Folder, including <name> (line 8) and <description> (lines 12 - 14). These elements serve the same purpose for Placemark as they do for the Folder element, but of course they refer to a different object. I've said that <description> can include a subset of HTML in addition to plain text. Under XML, some characters have special meaning. You may need to use these characters as part of the HTML included in your descriptions. Angle brackets (<, >), for example, surround tag names in HTML, but serve a similar purpose in XML. When they are used strictly as part of the content, we want the XML parser to ignore these characters. We can accomplish this in a few different ways. We can use entity references, either numeric character references or character entity references, to indicate that the symbol appears as part of the data and should not be treated as part of the syntax of the language. The character '<', which is required to include an image as part of the description (something we will be doing, e.g. <img src=...
/>), can safely be included as the character entity reference '&lt;' or the numeric character reference '&#60;'. The character entity references may be easier to remember and recognize on sight, but are limited to the small subset of characters for which they have been defined. The numeric references, on the other hand, can specify any ASCII character. Of the two types, numeric character references should be preferred. There are also Unicode entity references, which can specify any character at all. In the simple case of embedding short bits of HTML in KML descriptions, we can avoid the complication of these references altogether by enclosing the entire description in a CDATA section, which instructs the XML parser to ignore any special characters that appear until the end of the block set off by the CDATA tags. Notice the string '<![CDATA[' immediately after the opening <description> tag within <Placemark>, and the string ']]>' immediately before the closing </description> tag. If we simply place all of our HTML and plain text between those two strings, it will all be ignored by the parser and we are not required to deal with special characters individually. This is how we'll handle the issue.

Lines 9 - 11: <Snippet>. The KML reference does a good job of clearly describing this element. From the KML reference: In Google Earth, this description is displayed in the Places panel under the name of the feature. If a Snippet is not supplied, the first two lines of the <description> are used. In Google Earth, if a Placemark contains both a description and a Snippet, the <Snippet> appears beneath the Placemark in the Places panel, and the <description> appears in the Placemark's description balloon. This tag does not support HTML markup. <Snippet> has a maxLines attribute, an integer that specifies the maximum number of lines to display. The default for maxLines is 2. Notice that in the block above at line 9, I have included a 'maxLines' attribute with a value of 1. Of course, you are free to substitute your own value for maxLines, or you can omit the attribute entirely to use the default value. The only element we have yet to review is <Point>, and again we need only look to the official reference for a nice description. From the KML reference: A geographic location defined by longitude, latitude, and (optional) altitude. When a Point is contained by a Placemark, the point itself determines the position of the Placemark's name and icon. <Point> in turn contains the element <coordinates>, which is required. From the KML reference: A single tuple consisting of floating point values for longitude, latitude, and altitude (in that order). The reference also informs us that altitude is optional, and in fact we will not be generating altitude values. Finally the reference warns: Do not include spaces between the three values that describe a coordinate. This seems like an easy mistake to make. We'll need to be careful to avoid it. There will be a number of <Placemark> elements, one for each of our images. The question is how to handle these elements. The answer is that we'll treat KML as a 'fill in the blanks' style template. All of the structural and syntactic bits will be hard-coded, e.g. the XML header, namespace declaration, all of the element and attribute labels, and even the whitespace, which is not strictly required but will make it much easier for us to inspect the resulting files in a text editor. These components will form our template.
The blanks are all of the text and HTML values of the various elements and attributes. We will use variables as place-holders everywhere we need dynamic data, i.e. values that change from one file to the next or one execution of the script to the next. Take a look at the strings I've used in the block above: $folder_name, $folder_description, $placemark_name, etc. For those of you unfamiliar with Perl, these are all valid variable names, chosen to indicate where the variables slot into the structure. These are the same names used in the source file distributed with the tutorial. Section 7: Introduction to the Code At this point, having discussed every aspect of the project, we can succinctly describe how to write the code. We'll do this in 3 stages of increasing granularity. Firstly, we'll finish this tutorial with a natural language walk-through of the execution of the script. Secondly, if you look at the source file included with the project, you will notice immediately that comments dominate the code. Because instruction is as important an objective as the actual operation of the script, I use comments in the source to provide a running narration. For those of you who find this superabundant level of commenting a distraction, I'm distributing a second copy of the source file with much of the commenting removed. Finally, there is the code itself. After all, source code is nothing more than a rigorously formal set of instructions that describe how to complete a task. Most programs, including this one, are a matter of transforming input in one form to output in another.
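To make the 'fill in the blanks' idea concrete before the walk-through, here is a minimal sketch of filling the Placemark portion of the template in Perl. It is not the tutorial's actual script (the subroutine and the sample values are mine), but the placeholder variables are the ones named above:

use strict;
use warnings;

# Fill the Placemark portion of the KML template for one photo.
sub make_placemark {
    my ($placemark_name, $placemark_snippet, $placemark_description,
        $longitude, $latitude) = @_;

    # A heredoc keeps the hard-coded structure readable; the variables
    # are interpolated into the 'blanks'. No space is allowed inside
    # the coordinates tuple, per the KML reference.
    return <<"KML";
        <Placemark>
            <name>$placemark_name</name>
            <Snippet maxLines="1">$placemark_snippet</Snippet>
            <description><![CDATA[$placemark_description]]></description>
            <Point>
                <coordinates>$longitude,$latitude</coordinates>
            </Point>
        </Placemark>
KML
}

# Hypothetical values for a single photo:
print make_placemark('IMG_0277', 'A sample photo',
    '<img src="http://robreed.net/photos/tutorial_gallery/thumbnails/thumb-1.jpg" />',
    -74.006393, 40.714172);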
Building a CRUD Application with the ZK Framework

Packt
21 Oct 2009
5 min read
An Online Media Library There are some traditional applications that could be used to introduce a framework. One condition for the selection is that the application should be a CRUD (Create—Read—Update—Delete) application. Therefore, an 'Online Media Library', which has all four operations, would be appropriate. We start with the description of requirements, which is the beginning of most IT projects. The application will have the following features:

- Add new media
- Update existing media
- Delete media
- Search for the media (and show the results)
- User roles (administrator for maintaining the media, and user accounts for browsing the media)

In the first implementation round, the application should have only some basic functionality, which will be extended step by step. A media item should have the following attributes:

- A title
- A type (Song or Movie)
- An ID, which can be defined by the user
- A description
- An image

The most important thing at the start of a project is to name it. We will call our project ZK-Medialib. Setting up Eclipse to Develop with ZK We use version 3.3 of Eclipse, which is also known as the Europa release. You can download the IDE from http://www.eclipse.org/downloads/. We recommend using the version "Eclipse IDE for Java EE Developers". First we have to make a file association for the .zul files. For that, open the Preferences dialog with Window | Preferences. After that, do the following steps:

1. Type Content Types into the search dialog.
2. Select Content Types in the tree.
3. Select XML in the tree.
4. Click Add and type *.zul.
5. See the result.

The steps are illustrated in the picture below. With these steps, we have syntax highlighting for our files. However, to have content assist, we have to take care with the creation of new files. The easiest way is to set up Eclipse to work with zul.xsd. For that, open the Preferences dialog with Window | Preferences. After that, do the following steps:

1. Type XML Catalog into the search dialog.
2. Select XML Catalog in the tree.
3. Press Add and fill out the dialog (see the second dialog below).
4. See the result.

Now we can easily create new ZUL files with the following steps:

1. Select File | New | Other, and select XML.
2. Type in the name of the file (for example hello.zul).
3. Press Next.
4. Choose Create XML file from an XML schema file.
5. Press Next.
6. Select Select XML Catalog entry.
7. Now select zul.xsd.
8. Now select the Root Element of the page (e.g. window).
9. Select Finish.

Now you have a new ZUL file with content assist. Go into the generated attribute element and press Alt+Space. Setting up a New Project The first thing we will need for the project is the framework itself. You can download the ZK framework from http://www.zkoss.org. At the time of writing, the latest version of ZK is 2.3.0. After downloading and unzipping the ZK framework, we should define a project structure. A good structure for the project is the directory layout from the Maven project (http://maven.apache.org/). The structure is shown in the figure below. The directory lib contains the libraries of the ZK framework. At first, it's wise to copy all JAR files from the ZK framework distribution. If you unzip the distribution of version 2.3.0, the structure should look like the figure below, which shows the layout of the ZK distribution. Here you can get the files you need for your own application. For our example, you should copy all JAR files from lib, ext, and zkforge to the WEB-INF/lib directory of your application.
It's important that the libraries from ext and zkforge are copied directly to WEB-INF/lib. Additionally, copy the directories tld and xsd to the WEB-INF directory of your application. Now, after the copy process, we have to create the deployment descriptor (web.xml) for the web application. Here you can use the web.xml from the demo application, which is provided with the ZK framework. For our first steps, we need no zk.xml (that configuration file is optional in a ZK application). The application itself must run inside a JEE (Java Enterprise Edition) web container. For our example, we used the Tomcat container from the Apache project (http://tomcat.apache.org). However, you can run the application in any JEE container that follows the Java Servlet Specification 2.4 (or higher) and runs under a Java Virtual Machine 1.4 (or higher). We create the zk-media.xml file for Tomcat, which is placed in conf/Catalina/localhost of the Tomcat directory:

<Context path="/zk-media"
    docBase="D:/Development/workspaces/workspace-zk-medialib/ZK-Medialib/src/main/webapp"
    debug="0" privileged="true" reloadable="true" crossContext="false">
  <Logger className="org.apache.catalina.logger.FileLogger"
      directory="D:/Development/workspaces/workspace-zk-medialib/logs/ZK-Medialib"
      prefix="zkmedia-" suffix=".txt" timestamp="true"/>
</Context>

With the help of this context file, we can directly see the changes of our development, since we set the root of the web application to the development directory.
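To verify the setup end to end, you can drop a minimal ZUL page into src/main/webapp and request it through Tomcat (for example at http://localhost:8080/zk-media/hello.zul, assuming Tomcat's default port). This hello.zul is my own example, matching the file created in the Eclipse steps above with window as the root element:

<?xml version="1.0" encoding="UTF-8"?>
<!-- A minimal ZUL page: a window component containing plain text. -->
<window title="ZK-Medialib" border="normal" width="300px">
    Hello from the Online Media Library!
</window>

If the page renders as a bordered window, the libraries, the web.xml, and the Tomcat context file are all in place.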
Supporting an Editorial Team in Drupal 6

Packt
21 Oct 2009
16 min read
What you will do In this article, you will:

- Create a team
- Add Roles to support the team
- Define new Node Content types
- Configure permissions to support the Roles
- Handle a former (and disgruntled) team member

The Creative team Let's take a quick look at Drupal's jargon regarding teams:

- Users—the logins of the individuals that make up a team
- Roles—the different 'job descriptions' based on a person's responsibilities
- Permissions—the granting of authorization to perform a Drupal function

As the system administrator, you are authorized to perform any action within the Drupal environment, but you would not want every member of a team to have this absolute capability, or else you would soon have chaos. Let's first create a team. Then, we will look at assimilating that team into the Drupal environment. Our Creative team will be made up of individuals, each having one or more of the responsibilities mentioned below (note: the titles are not Drupal terms):

- Copy Writers—the writers of short articles
- Feature Writers—the writers of long pieces, in which style matters as much as content
- Ad Writers—the writers of internal and external advertising that will appear in blocks
- Proofreaders—the reviewers who check pieces for spelling, grammar, and usage errors
- Associate Editors—the reviewers who are concerned with style, readability, and continuity
- Style Editors—responsible for the formatting of content
- Graphic Artists—the creators of the illustrations and images that are used as copy
- Senior Editor—responsible for the quality of all of the above
- Moderator—manages postings by site visitors, such as comments and blog posts
- Blogger—creates blog entries
- Administrator—addresses the aspects of the site unrelated to content

With our team assembled, let's move on to creating the roles in our site. Roles Drupal comes with three roles installed: creator (also known as userID1), authenticated user, and anonymous user. Only the latter two are listed when assigning permissions, because the creator role can do everything, including things that you might not want the administrator to be able to do. It's best not to use the creator's login as the administrator login. A separate administrator role should be created and granted the appropriate permissions. So, looking at the list above, we will need to create roles for all of our team members. Creating roles in Drupal is a quick and easy process. Let's create them. Activity 1: Creating Roles The name of each role is assigned as per the responsibilities of the team member.

1. Log in as the administrator.
2. Select the User management option.
3. Select the Roles option.
4. Enter the name of the role in the text box, as shown in the following screenshot, and then click on the Add role button.

We'll add the rest of the roles in the same way. After a couple of minutes, we have the entire team added, as seen in the following screenshot. The edit role links are locked for anonymous user and authenticated user, because those roles should remain constant and never be edited or deleted. Node Content types The default installation of Drupal contains two Node Content types, namely Page and Story. Some modules, when activated, create additional Node Content types. One such example is the Blog entry; another is an Event, which is used with an event calendar. We're using the term Node Content to differentiate content nodes in Drupal, such as Pages and Stories, from other non-node types of content, such as Blocks, which is the generic term for anything on the page.
What is the purpose of having different Node Content types? If we want a feature writer to be able to create Features, then how do we accomplish that? Currently, we have Stories and Pages as our Node Content types. So, if we give the Feature writer the ability to create a Page, then what differentiates that Page from any other Page on our site? If we consider a Page to be a Feature, then anyone who can create a Page has created a Feature, but that's not right, because not every Page is a Feature. Activity 2: Node Content for our Roles Because we have role types that we want to limit to working with their respective Node Content types, we will need to create those Node Content types. We will assign a Node Content type of Feature for Feature Writers, Ads for Ad Writers, and so on. Let's create them.

1. From the admin menu, we'll select Content management.
2. On the Content management page, we'll choose Content types.
3. The Node Content types are listed, and from the top of the page we'll select Add content type.
4. We're going to start with the Feature writer, so in the Name field we'll enter Feature.
5. The next field, Type, determines the term that will be used to construct the default URL for this Node Content type. We'll enter feature as the text value for this field.
6. In the Description field, we'll enter a short description, which will appear next to the Node Content type's link on the admin page, as follows:
7. Next, we'll click on the Workflow settings link to display the additional workflow fields. When our Feature Writer completes a piece, it will not be published immediately. It will have to be proofread and undergo an editorial review. So, we'll deselect the Published and Promote to front page boxes.
8. At this point we've configured the new Node Content type as per our needs, so we'll click on the Save button, and then we can see it listed, as shown in the screenshot below.

We already have a Node Content type of Blog entry, which was created by the Blog module. The only other Role that requires its own Node Content type is the Ad Writer. This is because the other Roles defined will only edit existing Node Content, as opposed to creating it. It is here that we run into trouble. The pieces that are 'grabbed' by Drupal to appear (usually) at the center of the screen, which we have been referring to as Node Content, are nodes, whether a Page, a Story, or now a Feature. The small blocks that appear around the sides, or on top, or at the bottom, are Blocks. Because they are placed in those positions, and are not available for selection as Node Content, they are not nodes. The Benefit of Blocks: When looking at a typical web page of a CMS site, you will see a main body area with Node Content, such as articles, and also small blocks of information elsewhere on the page, such as in the left and right margins, or along the top or bottom. The main content, nodes, is limited as to where it appears. However, each of the blocks can be configured to appear on any or every page of the site. That is why ads are best created as blocks, so that they can be placed where they will be the most effective. Nodes are created via the Create content function, and that function is available from the front page to anyone who is granted the permission. Using the admin menu is not necessary. On the other hand, blocks are created and edited from the Block page, which is an admin function.
Although we can grant that capability to a user without granting any other admin capabilities, it would be much better if we could have an Ad Writer create ads in the same way that they create other Node Content. The reason for this is that with nodes, separate permissions can be given to create a node and to administer a node. With blocks, there is only one permission: you can create, edit, delete, and rearrange all of the blocks, or none. This opens the door to an accidental disaster. We don't want the Ad Writer doing anything but creating ad copy. So, in order to address this concern, we've added a module to our site: Node blocks. This module allows us to designate a Node Content type (other than Page and Story) to be used as a Block. With that in mind, let's create our final Node Content type. Where can you find this module? This module, as well as other modules, can be found at http://drupal.org/project/modules. Activity 3: Creating a Block Node Content type

1. We'll start by repeating Steps 1 to 3 from the previous activity.
2. In the Title field, we'll type in Ad.
3. In the Type field, we'll type in ad.
4. For the description, we'll enter Advertisement copy that will be used as blocks.
5. We'll click on Workflow settings and deselect Published and Promoted to front page, as we did with the Feature.
6. There is a new heading in this dialog, Available as Block, as seen in the following screenshot. This comes from the module that we've added. We'll select Enabled, which will make any piece created with this Node Content type available as a Block.
7. That's all we need to do, so now we'll save our new Node Content type.

Permissions The way that we enable one user to do something that another cannot is by creating different user types (which we have done), different Node Content types where necessary (which again has been done), and then assigning permissions to the user types (which we'll do now). The administrator will not be listed as a user type under Permissions, because if permissions were accidentally removed from the administrator, there might be no other user type that has the permissions to restore them. Activity 4: Granting Permissions Let's now assign to the members of the Creative team the Permissions that suit them best.

1. From the admin menu, we'll select User management.
2. On the User management page, we'll choose Permissions.

The screenshot below shows us the upper portion of the screen. There are numerous permissions, and we now have numerous user types, so the resulting grid is very large. Rather than step-by-step illustrations, I'll simply list each Role and the Permissions that should be enabled, in the form Heading→Permission.

Ad Writer
node module→access content
node module→create ad content
node module→delete any ad content
node module→delete own ad content
node module→edit any ad content
node module→edit own ad content
node module→view revisions
fckeditor module→access fckeditor

Because of the number of Node Content types, each having several permissions as seen above, combined with the permissions being alphabetical by verb within the heading, instead of by Content type, the necessary permissions are somewhat distant from each other and require scrolling to find them all.
Feature Writer
- node module→access content
- node module→create feature content
- node module→delete any feature content
- node module→delete own feature content
- node module→edit any feature content
- node module→edit own feature content
- node module→view revisions
- fckeditor module→access fckeditor

Blogger
- blog module→create blog entries
- blog module→delete own blog entries
- blog module→edit own blog entries
- node module→access content
- node module→view revisions
- fckeditor module→access fckeditor

Associate Editor
The Associate Editor is concerned with content, which means editing it. The ability to create or delete content, to affect where the content appears, and so on, is not required for this Role.
- fckeditor module→access fckeditor
- node module→access content
- node module→edit any ad content
- node module→edit any feature content
- node module→edit any page content
- node module→edit any story content
- node module→revert revisions
- node module→view revisions
- path module→create URL aliases

Copy Writer
- fckeditor module→access fckeditor
- node module→access content
- node module→create page content
- node module→create story content
- node module→delete own page content
- node module→delete own story content
- node module→edit own page content
- node module→edit own story content
- node module→view revisions

Graphic Artist
- blog module→edit any blog entry
- fckeditor module→access fckeditor
- fckeditor module→allow fckeditor file uploads
- node module→access content
- node module→edit any ad content
- node module→edit any feature content
- node module→edit any page content
- node module→edit any story content

Moderator
- blog module→edit any blog entry
- comment module→access comments
- comment module→administer comments
- fckeditor module→access fckeditor
- node module→access content
- node module→edit any ad content
- node module→edit any feature content
- node module→edit any page content
- node module→edit any story content

Proofreader
- blog module→edit any blog entry
- fckeditor module→access fckeditor
- node module→access content
- node module→edit any ad content
- node module→edit any feature content
- node module→edit any page content
- node module→edit any story content

Style Editor
- block module→administer blocks
- fckeditor module→access fckeditor
- fckeditor module→allow fckeditor file uploads
- node module→access content
- node module→edit any ad content
- node module→edit any feature content
- node module→edit any page content
- node module→edit any story content

Senior Editor
- block module→administer blocks
- blog module→delete any blog entry
- blog module→edit any blog entry
- comment module→access comments
- comment module→administer comments
- fckeditor module→access fckeditor
- fckeditor module→allow fckeditor file uploads
- node module→access content
- node module→delete any ad content
- node module→delete any feature content
- node module→delete any page content
- node module→delete any story content
- node module→delete revisions
- node module→edit any ad content
- node module→edit any feature content
- node module→edit any page content
- node module→edit any story content
- node module→revert revisions
- node module→view revisions
- path module→create URL aliases
- views module→access all views
- views module→administer views

With that, we have assigned the required permissions to all of our team members, which will allow them to do their jobs, but keep them out of trouble! However, what do you do when someone intentionally gets into trouble?

The disgruntled team member

So, we've been marching along as one big happy team, and then it happens.
Someone gets let go, and that someone isn't happy about it, to say the least. Of course, we'll remove that person's login, but there is public access to our site as well, in the form of comments. Is there a way for us to stop this person from looking for ways to annoy us, or worse? Yes!

Activity 5: Blocking

Let's now perform the tasks necessary to keep disgruntled former employees (and trouble-makers) at bay.

1. From the admin menu, we'll select User management.
2. On the User management page, we'll select the Access rules option.
3. We'll choose the Add rule option on the Access rules page.
4. On the Add rule page, we have the option to deny access by username, email address, or host. The username and email address options will block someone from registering, but will not affect someone who is already registered. A host rule will stop anyone from that host from accessing the site at all. Wildcards can be used: % will match any number of characters, and _ will match exactly one character. Allow rules can be used to give access to someone who would otherwise be blocked by a host or wildcard rule.
5. In our case, let's say that the disgruntled former team member is spamming our comments from a host called spamalot.com, and is doing it from many email addresses. The first thing we want to do is create a deny rule that denies access to anyone from that host, as shown in the following figure, and then click on the Add rule button.
6. We're also going to create an email deny rule for %@spamalot.com. We shouldn't have to (we've already denied the host, which in turn includes all of the email addresses from that host), but we need to, because the rule-testing logic currently ignores that hierarchy.
7. Let's also say that we've received an email from someone with an address at spamalot.com who would like to be a member of our site, and we have verified that this person is not our former team member. In that scenario, we need to create an Allow rule for that specific address, as shown in the following screenshot, so that this person can get past our previous Deny rule.
8. Our rules now appear as shown below when we click on the List button at the top of the page.
9. It's always good to check and make certain that we've created the rule(s) correctly; if we don't, we might inadvertently block the wrong users. Let's click on the Check rules tab at the top of the Access rules page.
10. In the email box, we'll first try an address at spamalot.com that should be denied, and then our new member's address, which should be allowed.

In this last activity, we created some access rules. Drupal uses these rules to determine who can and cannot access the site. In some cases, you may be having difficulty with a particular user adding comments to your site. Of course, if you set comments to require moderation, then the questionable ones won't appear, but it can still be a pain having to review a steady stream of them. In that case, you can block the specific user. You might be having difficulty with comments from more than one user at a given email domain; you can, if you like, block everyone from that location. On the other hand, your site might be meant only for users of a particular domain, perhaps a university. In that case, you can allow users from that domain, and only them.
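The % and _ wildcards behave like SQL LIKE patterns, and Drupal evaluates the masks in the database. As a rough illustration only (this is not Drupal's own code, and the function name is ours), the matching logic is equivalent to the following PHP sketch:

<?php
// Illustration: emulate an access-rule mask, where '%' matches any
// number of characters and '_' matches exactly one character.
function mask_matches($mask, $value) {
  // Escape regex metacharacters, then swap in the wildcard equivalents.
  $pattern = '/^' . strtr(preg_quote($mask, '/'), array('%' => '.*', '_' => '.')) . '$/i';
  return (bool) preg_match($pattern, $value);
}

var_dump(mask_matches('%@spamalot.com', 'troll@spamalot.com')); // bool(true)
var_dump(mask_matches('%@spamalot.com', 'jane@example.org'));   // bool(false)
?>

Running these two checks mirrors what the Check rules tab does for us in the browser: the first address falls under the deny mask, and the second does not.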
Summary

In this article we learned about:

- Roles: defining types of users
- Permissions: defining capabilities for each Role
- Node Content types: as they apply to Roles
- Access rules: for those pesky, misbehaving users

These features were explained with the help of activities, in which we:

- Created a team
- Added Roles to enable the team
- Defined new Node Content types to suit the requirements of some team members
- Configured Permissions to support the Roles and Node Content types
- Handled a former (and disgruntled) team member
Deploying Your DotNetNuke Portal

Packt
21 Oct 2009
7 min read
Acquiring a Domain Name

One of the most exciting parts of starting a website is acquiring a domain name. When selecting the perfect name, there are a few things that you need to keep in mind:

- Keep it brief: The more letters a user has to type to reach your site, the harder your address will be to remember. The name you select will help to brand your site; if it is catchy, people will remember it more readily.
- Have alternative names in mind: As time goes on, great domain names are becoming fewer and fewer. The first domain name you have in mind may already be taken, so having a few alternatives to choose from will help when you decide to purchase a name.
- Consider buying additional top-level domain names: Say you've already bought www.DanielsDoughnuts.com. You might want to purchase www.DanielsDoughnuts.net as well, to protect your name.

Once you have decided on the name you want for your domain, you will need to register it. There are dozens of different sites that allow you to register your domain name, as well as search to see whether it is available. Some of the better-known domain-registration sites are Register.com and NetworkSolutions.com. Both of these have been around a long time and have good reputations. You can also look into discount registrars such as BulkRegister (http://www.BulkRegister.com) or Enom (http://www.enom.com). After deciding on your domain name and having it registered, you will need to find a place to physically host your portal. Most registration services will also offer to host your site, but it is best to search for a provider that fits your site's needs.

Finding a Hosting Provider

When deciding on a provider to host your portal, you will need to consider a few things:

- Cost: This is, of course, one of the most important factors when looking for a provider. There are usually a few plans to select from. The basic plan usually allows you a certain amount of disk space for a very small price, but has you sharing the server with numerous other websites. Most providers also offer dedicated plans (you get the server all to yourself) and semi-dedicated plans (you share with a few others). It is usually best to start with the basic plan and move up if the traffic on your site requires it.
- Windows servers: The provider you select needs to be running Windows Server 2000/2003 with IIS (Internet Information Services). Some hosts run alternatives to Microsoft, such as Linux and the Apache web server.
- .NET framework: The provider's servers need to have version 1.1 of the .NET framework installed. Most hosts have installed the framework on their servers, but not all. Make sure this is available, because DotNetNuke needs it to run.
- Database availability: You will need a database server to run DotNetNuke, and Microsoft SQL Server is the preferred back-end. It is possible to run your site off Microsoft Access or MySQL (with a purchased data provider), but I would not suggest it. Access does not hold up well under multiple concurrent users and will slow down considerably as your traffic increases. MySQL can handle multiple users, but since most module developers target MS SQL, it does not have the same module support.
- FTP access: You will need a way to post your DotNetNuke portal files to your site, and the easiest way is to use FTP. Make sure that your host provides this option.
- E-mail server: A great deal of the functionality associated with the DotNetNuke portal relies on being able to send out e-mails to users. Make sure that you will have an e-mail server available.
- Folder rights: The ASPNET or NetworkService account (depending on the server) will need full permissions to the root folder and subfolders for your DotNetNuke application to run correctly. Make sure that your host either provides you with the ability to set this or is willing to set it up for you. We will discuss the exact steps later in this article.

The good news is that you will have plenty of hosting providers to choose from, and it should not break the bank. Try to find one that fits all of your needs. There are even some hosts (www.WebHost4life.com) that will install DotNetNuke for you free of charge. They host many DotNetNuke sites and are familiar with the needs of the portal.

Preparing Your Local Site

Once you have your domain name and a provider to host your portal, you will need to get your local site ready to be uploaded to the remote server. This is not difficult, but make sure that you cover all of the following steps for a smooth transition.

Modify the compilation debug setting in the web.config file: You will need to modify your web.config file to match the configuration of the server to which you will be sending your files. The first item that needs to be changed is the debug setting, which should be set to false. You should also rebuild your application in release mode before uploading. This will remove the debug tokens, perform optimizations in the code, and help the site to run faster:

<!-- set debug mode to false for running application -->
<compilation debug="false" />

Modify the data-provider information in the web.config file: You will need to change the information for connecting to the database so that it now points to the database server on your host. There are three things to look out for in this section (changes shown below). First, if you are using MS SQL, make sure SqlDataProvider is set up as the default provider. Second, change the connection string to reflect the database server address, the database name (if not DotNetNuke), and the user ID and password for the database that you received from your provider. Third, if you will be using an existing database to run the DotNetNuke portal, add an objectQualifier. Whatever you place in those quotation marks will be prepended to the names of all of the tables and procedures that are created for your database.

<data defaultProvider="SqlDataProvider">
  <providers>
    <clear/>
    <add name="SqlDataProvider"
         type="DotNetNuke.Data.SqlDataProvider, DotNetNuke.SqlDataProvider"
         connectionString="Server=MyServerIP;Database=DotNetNuke;uid=myID;pwd=myPWD;"
         providerPath="~\Providers\DataProviders\SqlDataProvider"
         objectQualifier="DE"
         databaseOwner="dbo"
         upgradeConnectionString="" />
  </providers>
</data>

Modify any custom changes in the web.config file: Since you set up YetAnotherForum for use on our site, we will need to make the modifications necessary to ensure that the forums connect to the hosted database.
Change the <connstr> element to point to the database on the server:

<yafnet>
  <dataprovider>yaf.MsSql,yaf</dataprovider>
  <connstr>
    user id=myID;password=myPwd;data source=myServerIP;initial catalog=DotNetNuke;timeout=90
  </connstr>
  <root>/DotNetNuke/DesktopModules/YetAnotherForumDotNet/</root>
  <language>english.xml</language>
  <theme>standard.xml</theme>
  <uploaddir>/DotNetNuke/DesktopModules/yetanotherforum.net/upload/</uploaddir>
  <!--logtomail>email=;server=;user=;pass=;</logtomail-->
</yafnet>

Add your new domain name to your portal alias: Since DotNetNuke has the ability to run multiple portals, we need to tell it which domain name is associated with our current portal. To do this, we need to sign on as host (not admin) and navigate to Admin | Site Settings on the main menu. When signed on as host, you will see a Portal Aliases section at the bottom of the page. Click on the Add New HTTP Alias link:
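As for the folder-rights requirement covered earlier: if your host gives you direct control over NTFS permissions, granting the worker-process account full control usually comes down to a single command. The following is only a sketch: the path is an assumption, and the account to grant is ASPNET on Windows 2000/XP (IIS 5) but NETWORK SERVICE on Windows Server 2003 (IIS 6):

cacls C:\Inetpub\wwwroot\DotNetNuke /T /E /G ASPNET:F

Here /T applies the change to the folder and everything beneath it, /E edits the existing ACL rather than replacing it, and /G ASPNET:F grants the ASPNET account Full control. Many shared hosts expose the same setting through their control panel instead.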
Comparing Asterisk and OpenSER

Packt
21 Oct 2009
4 min read
Introduction

If you work with IP telephony, it's quite possible that you have not heard about OpenSER, but you have certainly heard about Asterisk. Well, I love a polemic headline, and I have seen this question asked in the forums many times. So, I will dare to compare these two very popular software packages dedicated to the VoIP market. The idea here is not to show you which one is the best, but mainly how they differ from each other. Below is a comparison, topic by topic.

Architecture

Asterisk is a Back-to-Back User Agent (B2BUA), while OpenSER is a Session Initiation Protocol (SIP) proxy. This makes all the difference between them. The SIP proxy architecture is faster than a B2BUA because it deals only with signaling. On the other hand, the B2BUA architecture, while slower, handles the media and is therefore capable of several services not available in a SIP proxy, such as codec translation (for example, G.729<->G.711), protocol translation (SIP<->H.323), and media-related services such as IVR, queuing, text-to-speech, and voice recognition.

NAT Traversal

OpenSER deals with NAT traversal a lot better than Asterisk. In most cases (non-symmetric NAT), you can send the media from your customer directly to the provider using OpenSER. Manipulating the SIP protocol directly allows you to handle special cases, such as when you have two customers behind the same NAT device and want to send the media directly between them.

Load Balancing

OpenSER has specific load-balancing algorithms based on hashing, so it can balance by the "ruri", "username", "call-id", and some other properties. It can use redirect messages, consuming very few resources on the load-balancer machine, and failover is part of the solution. These are things you won't find in Asterisk. (A minimal configuration sketch appears at the end of this comparison.)

Low-Level Access to SIP Headers and Transactions

OpenSER gives you low-level access to the protocol. You can handle all the requests and responses, so it is usually possible to translate between two incompatible implementations of SIP by handling the SIP headers, requests, and responses directly. This is an important feature when you have SIP implementations from different manufacturers that are incompatible with each other.

Integration with RADIUS, Diameter, and LDAP

OpenSER has built-in integration with LDAP, RADIUS, and Diameter. While this is also possible with Asterisk, the implementation in OpenSER is developed in C, integrated as a module, and is part of the OpenSER distribution (no Perl, no Python, no third-party modules).

Carrier-Class Routing

The CARRIERROUTE module implements sophisticated algorithms for routing calls to the PSTN. VoIP providers sometimes have tables with more than 40,000 routes. In such cases, you will absolutely need a specific routing module capable of fallback, blacklists, and other features specific to VoIP providers.

Media Services

OpenSER is a SIP proxy and is not capable of any media-related services, so it is not possible to build systems such as voicemail, IVR, TTS, or voice recognition using OpenSER alone. However, it is possible to integrate any of these services into the platform using a separate media server, such as Asterisk, Yate, or FreeSWITCH. This is by design, and it is the way the SIP protocol is defined in the standards (RFC 3261).

Connectivity to the PSTN

OpenSER always needs a SIP gateway to connect to the PSTN; there is no possibility of installing telephony cards in the server. In several cases, Asterisk is used as the PSTN gateway for OpenSER.
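To make the load-balancing point concrete, here is a minimal sketch of call distribution with OpenSER's dispatcher module. Treat it as an illustration under assumptions rather than a working configuration: the module paths, the location of the gateway list file, and all error handling depend on your installation.

# openser.cfg fragment (sketch)
loadmodule "sl.so"
loadmodule "tm.so"
loadmodule "dispatcher.so"

# dispatcher.list holds lines such as:  1 sip:10.0.0.10:5060
modparam("dispatcher", "list_file", "/etc/openser/dispatcher.list")

route {
    # pick a gateway from set 1 using a hash over the Call-ID
    # (algorithm "0"); "4" would select round-robin instead
    if (!ds_select_dst("1", "0")) {
        sl_send_reply("500", "No gateway available");
        exit;
    };
    t_relay();
}

Because the hash over the Call-ID is stable for the life of a dialog, all messages belonging to one call land on the same gateway, which is what makes this lightweight, signaling-only distribution possible.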
Conclusion

I love this discussion, because Asterisk and OpenSER complete one another. OpenSER provides rock-solid SIP services to VoIP providers; it is capable of handling large volumes of calls, load balancing SIP, solving advanced NAT scenarios, and dealing with SIP signaling like no other. Asterisk is a B2BUA that is very strong in the PBX market. It is simpler to configure and can handle low to medium volumes. Asterisk can be used as a "single box does it all" solution, while OpenSER requires all the architectural components of SIP in order to work. OpenSER is a hit in the VoIP provider market and in universities. Asterisk is a success in the IP PBX market, and it is winning a share of the small-to-medium VoIP provider market. Usually, you start using OpenSER when you have a special need, such as load balancing, or when you handle large volumes, such as more than a thousand registered users. Choose wisely!

If you have read this article, you may be interested to view:

- Using Asterisk as a PSTN Gateway for OpenSER
- Building the User Portal with SerMyAdmin for OpenSER
Social Bookmarking - MediaWiki

Packt
21 Oct 2009
4 min read
Traditionally, bookmarking was done through Internet browsing software, such as Internet Explorer, Safari, Firefox, or Opera. With social bookmarking, your bookmarks are not confined to one browser, but are stored online. The following two options are available for enabling social bookmarking on your website:

- Link to each individual social bookmarking service that you wish to use
- Use a social bookmarking aggregator such as Socializer or AddThis

Even if you do not provide links that allow visitors to bookmark your website, many services let their users install browser toolbars through which your website can be added anyway. Adding these links will help your wiki, and its new skin, to spread very quickly.

Individual Social Bookmarking Services

There are huge numbers of social bookmarking services on the Internet, and quite a number of them have become reasonably popular. We will look at some of the more popular bookmarking services, such as:

- Mister Wong
- Furl
- Facebook

Your Wiki's Audience

One thing to consider before adding social bookmarking links to your wiki is your audience. For instance, if your wiki is technology-related, you may find it better to use Digg than Facebook, as Digg is more popular with your wiki's intended audience. If your wiki's visitors are primarily from Germany, you may find Mister Wong more useful than Furl, because Mister Wong is more popular with German users. There are many other social bookmarking services available, including Del.icio.us (http://del.icio.us) and StumbleUpon (http://stumbleupon.com), which you can adopt after polling your wiki's visitors, or after some simple experimentation.

Example of Audience

durhamStudent (http://durhamStudent.co.uk) is a niche website aimed at students of Durham University in the UK. The durhamStudent website uses both of the social bookmarking methods discussed earlier, providing links for eKstreme's Socializer and a link to an individual bookmarking service, Facebook. Facebook was linked individually here because it is incredibly popular among the students at Durham University (indeed, the Durham network is only open to those with a university email address). Although Facebook's bookmarking service is accessible through Socializer, it is also linked separately because the website's target audience is more likely to use that service than any other.

Mister Wong

Mister Wong (http://www.mister-wong.com) is popular with German and other European users (though not so much in Great Britain), and allows users to store their bookmarks while maintaining their privacy, with the ability to set bookmarks as "public" or "private". Generally, a social bookmarking service asks for two pieces of information when creating a link from your website: the URL (address) of the page or website you want to add, and the title of that page or website, as you wish it to be posted.

Linking to Mister Wong

To link to Mister Wong, create a link, where you want your social bookmark to be shown in your MediaWiki template, in the following format:

http://www.mister-wong.com/index.php?action=addurl&bm_url=www.example.com&bm_description=Your+Website

Spaces are automatically escaped with a "+" sign by the majority of social bookmarking services, so you don't have to worry about properly escaping spaces in the title of your link when linking to bookmarking services. To use these services with MediaWiki, we will be required to use some PHP.
In particular, we will need the following:

- The page's title, which we can get with <?php $this->text('pagetitle') ?>
- The website's address, retrievable with <?php $this->urlencode('serverurl') ?>

For simplicity, we will assume that our visitors will always want to bookmark JazzMeet's homepage, http://www.jazzmeet.com. Thus, the code in our MediaWiki template would appear as follows:

<a href="http://www.mister-wong.com/index.php?action=addurl&bm_url=www.jazzmeet.com&bm_description=JazzMeet"
   title="Bookmark JazzMeet with Mister Wong">Bookmark JazzMeet with Mister Wong</a>

What Mister Wong Users See

If a visitor to your wiki decides to bookmark your wiki with Mister Wong, they will be greeted with a screen similar to the following, with fields for the address of the website (URL), the title, related keywords ("tags"), and a comment about the website. They are also given the option to bookmark it as either "public", allowing other Mister Wong users to see it, or "private", which restricts the bookmarked website to their own account.
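As a small extension of the JazzMeet example, the link can be made partly dynamic using the two template values listed above. This is a sketch only: it still submits the site's base address as the bookmark URL, but uses the current page's title as the description, and it assumes the link sits at a point in the skin template where those calls are available:

<a href="http://www.mister-wong.com/index.php?action=addurl&bm_url=<?php $this->urlencode('serverurl') ?>&bm_description=<?php $this->text('pagetitle') ?>"
   title="Bookmark this page with Mister Wong">Bookmark this page with Mister Wong</a>

Since Mister Wong escapes spaces itself, passing the page title through unmodified is normally safe; if your titles contain characters such as "&", you would want to urlencode the description as well.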
Apache OFBiz Service Engine: Part 1

Packt
21 Oct 2009
6 min read
Defining a Service

We first need to define a service. Our first service will be named learningFirstService. In the folder ${component:learning}, create a new folder called servicedef. In that folder, create a new file called services.xml and enter into it the following:

<?xml version="1.0" encoding="UTF-8" ?>
<services xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:noNamespaceSchemaLocation="http://www.ofbiz.org/dtds/services.xsd">
    <description>Learning Component Services</description>
    <service name="learningFirstService" engine="java"
             location="org.ofbiz.learning.learning.LearningServices"
             invoke="learningFirstService">
        <description>Our First Service</description>
        <attribute name="firstName" type="String" mode="IN" optional="true"/>
        <attribute name="lastName" type="String" mode="IN" optional="true"/>
    </service>
</services>

In the file ${component:learning}/ofbiz-component.xml, add after the last <entity-resource> element this:

<service-resource type="model" loader="main" location="servicedef/services.xml"/>

That tells our learning component to look for service definitions in the file ${component:learning}/servicedef/services.xml. It is important to note that all service definitions are loaded at startup; therefore, any change to any of the service definition files will require a restart!

Creating the Java Code for the Service

In the package org.ofbiz.learning.learning, create a new class called LearningServices with one static method, learningFirstService:

package org.ofbiz.learning.learning;

import java.util.Map;

import org.ofbiz.service.DispatchContext;
import org.ofbiz.service.ServiceUtil;

public class LearningServices {

    public static final String module = LearningServices.class.getName();

    public static Map learningFirstService(DispatchContext dctx, Map context) {
        Map resultMap = ServiceUtil.returnSuccess("You have called on service 'learningFirstService' successfully!");
        return resultMap;
    }
}

Services must return a Map, and this Map must contain at least one entry. That entry must have the key responseMessage (see org.ofbiz.service.ModelService.RESPONSE_MESSAGE), with a value of one of the following:

- success or ModelService.RESPOND_SUCCESS
- error or ModelService.RESPOND_ERROR
- fail or ModelService.RESPOND_FAIL

By using ServiceUtil.returnSuccess() to construct the minimal return map, we do not need to bother adding the responseMessage key and value pair. Another entry that is often used is the one with the key successMessage (ModelService.SUCCESS_MESSAGE). By doing ServiceUtil.returnSuccess("Some message"), we get a return map with the entry successMessage set to "Some message". Again, ServiceUtil insulates us from having to learn the key-naming conventions.

Testing Our First Service

Stop OFBiz, recompile our learning component, and restart OFBiz, so that the modified ofbiz-component.xml and the new services.xml can be loaded.
In ${component:learning}/widget/learning/LearningScreens.xml, insert a new Screen Widget:

<screen name="TestFirstService">
    <section>
        <widgets>
            <section>
                <condition><if-empty field-name="formTarget"/></condition>
                <actions>
                    <set field="formTarget" value="TestFirstService"/>
                    <set field="title" value="Testing Our First Service"/>
                </actions>
                <widgets/>
            </section>
            <decorator-screen name="main-decorator" location="${parameters.mainDecoratorLocation}">
                <decorator-section name="body">
                    <include-form name="TestingServices" location="component://learning/widget/learning/LearningForms.xml"/>
                    <label text="Full Name: ${parameters.fullName}"/>
                </decorator-section>
            </decorator-screen>
        </widgets>
    </section>
</screen>

In the file ${component:learning}/widget/learning/LearningForms.xml, insert a new Form Widget:

<form name="TestingServices" type="single" target="${formTarget}">
    <field name="firstName"><text/></field>
    <field name="lastName"><text/></field>
    <field name="planetId"><text/></field>
    <field name="submit"><submit/></field>
</form>

Notice how the formTarget field is set in the screen and used in the form. For now, don't worry about the Full Name label we are setting from the screen; our service will eventually set that. In the file ${webapp:learning}/WEB-INF/controller.xml, insert a new request map:

<request-map uri="TestFirstService">
    <event type="service" invoke="learningFirstService"/>
    <response name="success" type="view" value="TestFirstService"/>
</request-map>

The control servlet currently has no way of knowing how to handle an event of type service, so in controller.xml we must add a new handler element immediately below the other <handler> elements:

<handler name="service" type="request" class="org.ofbiz.webapp.event.ServiceEventHandler"/>
<handler name="service-multi" type="request" class="org.ofbiz.webapp.event.ServiceMultiEventHandler"/>

We will cover service-multi services later. Finally, add a new view map:

<view-map name="TestFirstService" type="screen" page="component://learning/widget/learning/LearningScreens.xml#TestFirstService"/>

Fire an HTTP request for TestFirstService at the learning webapp, and see that we have successfully invoked our first service:

Service Parameters

Just like Java methods, OFBiz services can have input and output parameters, and just like Java methods, the parameter types must be declared.

Input Parameters (IN)

Our first service is defined with two parameters:

<attribute name="firstName" type="String" mode="IN" optional="true"/>
<attribute name="lastName" type="String" mode="IN" optional="true"/>

Any parameters sent to the service by the end user as form parameters, but not listed among the service's declared input parameters, will be dropped. The remaining parameters are converted to a Map by the framework and passed into our static method as the second argument. Add a new method, handleParameters, to our LearningServices class:
public static Map handleParameters(DispatchContext dctx, Map context) {
    String firstName = (String) context.get("firstName");
    String lastName = (String) context.get("lastName");
    String planetId = (String) context.get("planetId");

    String message = "firstName: " + firstName + "<br/>";
    message = message + "lastName: " + lastName + "<br/>";
    message = message + "planetId: " + planetId;

    Map resultMap = ServiceUtil.returnSuccess(message);
    return resultMap;
}

We can now make our service definition invoke this method instead of the learningFirstService method by opening our services.xml file and replacing:

<service name="learningFirstService" engine="java"
         location="org.ofbiz.learning.learning.LearningServices"
         invoke="learningFirstService">

with:

<service name="learningFirstService" engine="java"
         location="org.ofbiz.learning.learning.LearningServices"
         invoke="handleParameters">

Once again, shut down, recompile, and restart OFBiz. Enter for the fields First Name, Last Name, and Planet Id the values Some, Name, and Earth, respectively. Submit, and notice that only the first two parameters went through to the service. The planetId parameter was dropped silently, as it was not declared in the service definition. Modify the service learningFirstService in the file ${component:learning}/servicedef/services.xml, adding below the second parameter a third one like this:

<attribute name="planetId" type="String" mode="IN" optional="true"/>

Restart OFBiz and submit the same values for the three form fields, and see all three parameters go through to the service.
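The TestFirstService screen above prints a Full Name label from ${parameters.fullName}, which nothing populates yet. As a preview of output parameters (this sketch is our own illustration, not the book's eventual version), a service can declare an OUT attribute in its definition:

<attribute name="fullName" type="String" mode="OUT" optional="true"/>

and then place a matching entry in its result map, which the framework hands back to the caller:

public static Map handleParameters(DispatchContext dctx, Map context) {
    String firstName = (String) context.get("firstName");
    String lastName = (String) context.get("lastName");

    Map resultMap = ServiceUtil.returnSuccess();
    // entries keyed by declared OUT attributes are returned to the caller
    // and surface in the screen as parameters.fullName
    resultMap.put("fullName", firstName + " " + lastName);
    return resultMap;
}

With this in place, submitting the form would fill in the Full Name label that has so far remained empty.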
Implementing Document Management in Alfresco 3: Part 1

Packt
21 Oct 2009
5 min read
Managing spaces

A space in Alfresco is nothing but a folder that can contain content as well as sub-spaces. The space users are the users invited into a space to perform specific actions, such as editing content, adding content, discussing a particular document, and so on. The exact capability that a given user has within a space is a function of their role, or rights. Let's consider the capability of creating a sub-space. By default, in order to create a sub-space, one of the following must apply:

- The user is the administrator of the system
- The user has been granted the Contributor role
- The user has been granted the Coordinator role
- The user has been granted the Collaborator role

Similarly, to edit space properties, a user needs to be the administrator, or to have been granted a role that gives them rights to edit the space. These roles include Editor, Collaborator, and Coordinator.

Space is a smart folder

A space is a folder with additional features, such as security, business rules, workflow, notifications, local search capabilities, and special views. The additional features that make a space a smart folder are explained as follows:

- Space security: You can define security at the space level. You can designate a user, or a group of users, who can perform certain actions on the content in a space. For example, on the Marketing Communications space in the Intranet, you can specify that only users in the marketing group can add content, and that other users can only see the content.
- Space business rules: Business rules, such as transforming content from Microsoft Word to Adobe PDF, or sending notifications when content arrives in a space, can be defined at the space level.
- Space workflow: You can define and manage the content workflow on a space. Typically, you will create a space for the content that needs to be reviewed and a space for the content that has been approved. You will create various spaces for dealing with the different stages that the work flows through, and Alfresco will manage the movement of the content between those spaces.
- Space events: Alfresco triggers events when content moves into a space, when content moves out of a space, or when content is modified within a space. You can capture such events at the space level and trigger certain actions, such as sending email notifications to certain users.
- Space aspects: Aspects are additional properties and behavior that can be added to the content, based on the space in which it resides. For example, you can define a business rule to add customer details to all of the customer contract documents in your intranet's Sales space.
- Space search: Alfresco's search functions can be limited to a space. For example, if you create a space called Marketing, then you can limit a search to documents within the Marketing space, instead of searching the entire site.
- Space syndication: Content in a space can be syndicated by applying RSS feed scripts to the space. You can apply RSS feeds to your News space, so that other applications and websites can subscribe to the feed for news updates.
- Space content: Content in a space can be versioned, locked, checked in and checked out, and managed. You can specify that certain documents in a space are versioned, and others not.
- Space network folder: A space can be mapped to a network drive on your local machine, enabling you to work with its content locally. For example, by using the CIFS interface, a space can be mapped to a Windows network folder (see the command sketch at the end of this excerpt).
- Space dashboard view: Content in a space can be aggregated and presented using special dashboard views. For example, the Company Policies space can list all of the policy documents that have been updated in the past month or so. You can create different views for the Sales, Marketing, and Finance departmental spaces.

Why space hierarchy is important

Like regular folders, a space can have child spaces (called sub-spaces). These sub-spaces can have further sub-spaces of their own; there is no limit on the number of hierarchical levels. However, the space hierarchy is very important for all of the reasons given in the previous section: any business rules and security defined for a space apply to all of the content and sub-spaces within that space. Your space hierarchy should look similar to the following screenshot:

A space in Alfresco enables you to define business rules, a dashboard view, properties, workflow, and security for the content belonging to each department. You can decentralize the management of your content by granting departments access at the level of their individual spaces. The example Intranet space should contain sub-spaces, as shown in the preceding screenshot. You can create spaces by logging in as the administrator. It is also very important to set the security, by inviting groups of users to these spaces.
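A note on the network-folder feature mentioned in the list above: once Alfresco's CIFS server is enabled, a space can be mapped like any ordinary Windows share. The names below are assumptions to adjust for your installation: by default, the CIFS server advertises itself under the machine's host name with an "A" appended, and the share is named Alfresco.

net use Z: \\SERVERA\Alfresco

After mapping, the Intranet space and its sub-spaces appear as normal folders under the Z: drive, so users can add and edit documents with their usual desktop tools while Alfresco still applies the space's rules and security.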