
How-To Tutorials - Web Development


Designing the Facebook Clone and Creating Colony using Ruby

Packt
25 Aug 2010
14 min read
(For more resources on Ruby, see here.)

Main features

Online social networking services are complex applications with a large number of features. However, these features can be roughly grouped into a few common categories: user, community, content-sharing, and developer features. User features are features that relate directly to the user, for example, the ability to create and share a profile, and the ability to share status and activities. Community features are features that connect users with each other. An example of this is the friends list feature, which shows the number of friends a user has connected with in the social network. Content-sharing features are quite easy to understand. These are features that allow a user to share his self-created content with other users, for example photo sharing or blogging. Social bookmarking features are those that allow users to share content they have discovered with other users, such as sharing links and tagging items with labels. Finally, developer features are features that allow external developers to access the services and data in the social networks.

While the social networking services out in the market often try to differentiate themselves from each other in order to gain an edge over their competition, in this article we will be building a stereotypical online social networking service. We will be choosing only a few of the more common features in each category, except for developer features, which for practical reasons will not be implemented here. Let's look at the features we will implement in Colony, by category.

User

User features are features that relate directly to users:

- Users' activities on the system are broadcast to friends as an activity feed.
- Users can post brief status updates to all users.
- Users can add or remove friends by inviting them to link up. Friendship is mutual and needs to be approved by the recipient of the invitation.

Community

Community features connect users with each other:

- Users can post to a wall belonging to a user, group, or event. A wall is a place where any user can post and which can be viewed by all users.
- Users can send private messages to other users.
- Users can create events that represent an actual event in the real world. Events pull together users and content, and provide basic event management capabilities, such as RSVP.
- Users can form and join groups. Groups represent a grouping of like-minded people and pull together users and content. Groups are permanent.
- Users can comment on various types of shared and created content, including photos, pages, statuses, and activities. Comments are textual only.
- Users can indicate that they like most types of shared and created content, including photos, pages, statuses, and activities.

Content sharing

Content-sharing features allow users to share content, either self-generated or discovered, with other users:

- Users can create albums and upload photos to them.
- Users can create standalone pages belonging to them, or attached pages belonging to events and groups.

Online social networking services grew from existing communications and community services, often evolving and incorporating features and capabilities from those services.

Designing the clone

Now that we have the list of features that we want to implement for Colony, let's start designing the clone.
Authentication, access control, and user management

Authentication is done through RPX, which means we delegate authentication to a third-party provider such as Google, Yahoo!, or Facebook. Access control, however, is still done by Colony, while user management functions are shared between the authentication provider and Colony. Access control in Colony is applied to all data, which prevents users from accessing data that they are not allowed to see. This is done through control of the user account, to which all other data for a user belongs. In most cases a user is not allowed access to any data that does not belong to him or her (and that is not shared with everyone). In some cases, though, access is implicit; for example, an event is viewable only if you are the organizer of the event. As before, user management is a shared responsibility between the third-party provider and the clone. The provider handles password management and general security, while Colony stores a simple set of profile information for the user.

Status updates

Allowing you to send status updates about yourself is a major feature of all social networking services. This feature allows the user, a member of the social networking service, to announce and define his presence, as well as state of mind, to his network. In Colony, only the user's friends can read the statuses. Remember that a user's friend is someone validated and approved by the user, and not just anyone off the street who happens to follow that user. Status updates belong to a single user but are viewable by all friends as a part of the user's activity feed.

User activity feeds and news feeds

Activity feeds, activity streams, or life streams are continuous streams of information on a user's activities. Activity feeds go beyond just status updates; they are a digital trace of a user's activity in the social network, which includes his status updates. This includes public actions like posting to a wall, uploading photos, and commenting on content, but not private actions like sending messages to individuals. The user's activity feed is visible to all users who visit his user page. Activity feeds are a subset of news feeds, which aggregate the activity feeds of the user and his network. News feeds give an insight into the user's activities as well as the activities of his network. In the design of our clone, the user's activity feed is what you see when you visit the user page, for example http://colony.saush.com/user/sausheong, while the news feed is what you see when you first log in to Colony; that's the landing page. This design is quite common to many social networking services.

Friends list and inviting users to join

One of the reasons why social networking services are so wildly successful is the ability to reach out to old friends or colleagues, and also to see friends of your friends. To clone this feature we provide a standard friends list and an option to search for friends. Searching for friends allows you to find other users in the system by their nicknames or their full names. By viewing a user's page, we are able to see his friends and therefore see his friends' user pages as well. Another critical feature in social networking services is the ability to invite friends and spread the word around. In Colony we tap into the capabilities of Facebook and invite friends who are already on Facebook to use Colony.
While there is a certain amount of irony here (using another social networking service to implement a feature of your own social networking service), it makes a lot of practical sense, as Facebook is already one of the most popular social networking services on the planet. To implement this, we will use Facebook Connect. However, this means that if the user wants to reach out and get others to join him in Colony, he will need to log into Facebook to do so. As with most features, the implementation can be done in many ways, and Facebook Connect (or any other type of third-party integration, for that matter) is only one of them. Another popular strategy is to use web mail clients such as Yahoo! Mail or Gmail, and extract user contacts with the permission of the user. The e-mails extracted this way can be used as a mailing list to send to potential users. This is, in fact, a strategy used by Facebook.

Posting to the wall

A wall is a place where users can post messages. Walls are meant to be publicly read by all visitors. In a way it is like a virtual cork bulletin board that users can pin their messages on, to be read by anyone. Wall posts are meant to be short public messages; the Messages feature can be used to send private messages instead. A wall can belong to a user, an event, or a group, and each of these owning entities can have only one wall. This means any post sent to a user, event, or group is automatically placed on its one and only wall. A message on a wall is called a post, which in Colony is just a text message (Facebook's original implementation was text only but was later extended to other types of media). Posts can be remarked on but are not threaded. Posts are placed on the wall in reverse chronological order, so the latest post remains at the top of the wall.

Sending messages

The messaging feature of Colony is a private messaging mechanism. Messages are sent by senders and received by recipients. Messages that are received by a user are placed into an inbox, while messages that the user sent are placed into a sent box. For Colony we will not be implementing folders, so these are the only two message folders that every user has. Messages sent to and received from users are threaded and ordered by time. We thread the messages in order to group different messages sent back and forth as part of an ongoing conversation. Threaded messages are sorted in chronological order, where the last received message is at the bottom of the message thread.

Attending events

Events can be thought of as locations in time where people can come together for an activity. Social networking services often act as a nexus for a community, so organizing and attending events is a natural extension of the features of a social networking service. Events have a wall, a venue, and a date and time when the event is happening, and can have event-specific pages that allow users to customize and market their event. In Colony we categorize users who attend events by their attendance status. Confirmed users are users who have confirmed their attendance. Pending users are users who haven't yet decided to attend the event. Declined users are users who have declined to attend the event after they have been invited. Declining is explicit; there is an invisible group of users who fall into none of the above three types. Attracting users to events, or simply keeping them informed, is a critical part of making this or any feature successful. To do so, we suggest events to users and display the suggested events on the user's landing page.
The suggestion algorithm is simple: we just go through each of the user's friends, see which other events they have confirmed attending, and then suggest those events to the user. Besides suggestions, the other means of discovering events are through the activity feeds (whenever an event is created, it is logged as an activity and published on the activity feed) and through user pages, where the list of a user's events is also displayed. All events are public, as is content created within events, such as wall posts and pages.

Forming groups

Social networking services are made of people, and people have a tendency to form groups or categories based on common characteristics or interests. The idea of groups in Colony is to facilitate such grouping of people with a simple set of features. Conceptually, groups and events are very similar to each other, except that groups are not time-based like events and don't have a concept of attendance. Groups have members and a wall, and can have specific pages created by the group. Colony's capabilities for attracting users to groups are slightly weaker than for events: Colony only suggests groups on the groups page rather than on the landing page. However, groups also allow discovery through activity feeds and through user pages. Colony has only public groups, with no restriction on who can join them.

Commenting on and liking content

Two popular and common features in many consumer-focused web applications are reviews and ratings. Reviews and ratings allow users to provide reviews (or comments) or ratings for editorial or user-generated content. The stereotypical review-and-ratings feature is Amazon.com's book review and rating, which allows users to provide book reviews as well as rate the book from one to five stars. Colony's review feature is called comments. Comments are applicable to all user-generated content, such as status updates, wall posts, photos, and pages. Comments provide a means for users to review the content and give critique or encouragement to the content creator. Colony's rating feature is simple and follows Facebook's popular rating feature, called likes. While many rating features provide a range of one to five stars for the users to choose from, Colony (and Facebook) asks the user to indicate whether he likes the content. There is no dislike, though, so the fewer likes a piece of content has, the less popular it is. Colony's comments and likes are applicable to all user-generated content, such as statuses, photos, wall posts, activities, and pages.

Sharing photos

Photos are one of the most popular types of user-generated content shared online; with users uploading 3 billion photos a month to Facebook, it's an important feature to include in Colony. The basic concept of photo sharing in Colony is that each user can have one or more albums, and each album can have one or more photos. Photos can be commented on, liked, and annotated.

Blogging with pages

Colony's pages are a means of allowing users to create their own full-page content and attach it to their own accounts, an event, or a group. A user, event, or group can own one or more pages. Pages are meant to be user-generated content, so the entire content of the page is written by the user. However, in order to keep the look and feel consistent throughout the site, the page will be styled according to Colony's look and feel. To do this we only allow users to enter Markdown, a lightweight markup language that takes many cues from existing conventions for marking up plain text in e-mail.
Markdown converts its marked-up text input to valid, well-formed XHTML. We use it here in Colony to let users write content easily without worrying about layout or creating a consistent look and feel.

Technologies and platforms used

We use a number of technologies in this article, mainly revolving around the Ruby programming language and its various libraries. In addition to Ruby and its libraries, we also use mashups, which are described next.

Mashups

While the main features in the application are all provided for, sometimes we still depend on services provided by other providers. In this article we use four such external services: RPX for user web authentication, Gravatar for avatar services, Amazon Web Services S3 for photo storage, and Facebook Connect for reaching out to users on Facebook.

Facebook Connect

Facebook has a number of technologies and APIs used to interact and integrate with its platform, and Facebook Connect is one of them. Facebook Connect is a set of APIs that lets users bring their identity and information into the application itself. We use Facebook Connect to send out requests to a user's friends, inviting them to join our social network. Note that for the user invitation feature, once a user has logged in through Facebook with RPX, he is considered to have logged into Facebook Connect and can therefore send invitations immediately without logging in again.

Summary

In this article, we described some of the essential features of Facebook and categorized them into user, community, and content-sharing features. We then went into a high-level discussion of these various features and how we implement them in our Facebook clone, Colony. Finally, we looked briefly at the various technologies used in the clone. In the next article, we will be building the Facebook clone using Ruby.

Further resources on this subject:

- Building the Facebook Clone using Ruby [article]
- URL Shorteners – Designing the TinyURL Clone with Ruby [article]


Implementing Panels in Drupal 6

Packt
23 Aug 2010
4 min read
(For more resources on Drupal, see here.)

Introduction

This is an article centered on building website looks with Panels. We will create a custom front page for a website and a node edit form with a Panel. We will also see the process of generating a node override within Panels and take a look at Mini panels.

Making a new front page using Panels and Views (for dynamic content display)

We will now create a recipe with Views and Panels to make a custom front page.

Getting ready

We will need to install the Views module, if not done already. As the Views and Panels projects are both led by Earl Miles (merlinofchaos), the UIs are very similar. We will not discuss Views in detail, as it is out of the scope of this article; to use this recipe, a basic knowledge of Views is required. We will use our flexible layout recipe as the starting point for this recipe. To make a better layout and a custom website, I recommend using Adaptive Themes (http://adaptivethemes.com/), and in this recipe I have used that theme. Adaptive Themes is a starter theme and it includes several custom panels. There is also built-in support for skins, which helps to make theming a lot easier. We will be using Adaptive Themes in this recipe for demonstration, and will change our administration theme from Garland to Adaptive Themes. Adaptive Themes adds extra layouts, as shown in the following screenshot.

How to do it...

1. Go to the flexible layout recipe we created, or create your own layout using the same recipe. Now we will create the View to be included in our Panels layout.
2. Assuming that Views is installed, go to Site building | Views. Add a View.
3. In the View name, add storyblock1, and add a description of your choice.
4. Select the Row style as Node. Put in Items to display as 3. In the Style, we can select Unformatted or Grid, depending on how you want to display the nodes; I will use Grid.
5. In the Sort criteria, put in Node: Post date asc and Node: Type asc. In Filters, we just want the posts promoted to the front page. Do a live preview.
6. We will need to add a Block display for the default view, so that the View is available as a block, which we can select in our Panels. We could also put the View's default output in as a Panel pane, but using a block display of the View gives the "read more" links by default. In the View itself, we have to create the block display. Say we create a block, storyblock1, as shown in the following screenshot.
7. Now, we need to go to the Flexible Layouts UI, to the layout created by you. Go to the Content tab. Earlier, we had displayed a static block; now we will display a dynamic View.
8. Disable the earlier panes in the first static row. Select the gears in the first static row and select Add content | Miscellaneous. The custom view block will be there, as shown in the following screenshot. Select it.
9. Save and preview. We have now integrated the dynamic View into one of our Panel panes. Let's add sample content to each region now. You can select your own content as you want on your front page, as shown in the following screenshot.
10. Go to Site configuration | Site information. Change the default home page to the created Panels page. Your home page is now the custom Panel page.

How it works...

In this recipe, we implemented Panel panes with views and blocks to make a separate custom page and a separate display for the existing content in the website.


Upgrading OpenCart

Packt
23 Aug 2010
3 min read
This article is suggested reading even for an experienced user. It will show us the possible problems that might occur while upgrading, so we can avoid them.

Making backups of the current OpenCart system

One thing we should certainly do is back up our files and database before starting any upgrade process. This will allow us to restore the OpenCart system if the upgrade fails and we cannot solve the reason behind it.

Time for action – backing up OpenCart files and database

In this section, we will learn how to back up the necessary files and database of the current OpenCart system before starting the upgrade process. We will start with backing up the database. We have two choices to achieve this.

The first method is easier and uses the built-in OpenCart module in the administration panel:

1. Open the System | Backup / Restore menu.
2. On this screen, make sure that all modules are selected. If not, click on the Select All link first.
3. Click on the Backup button. A backup.sql file will be generated for us automatically. We will save the file on our local computer.

The second method to back up the OpenCart database is through the Backup Wizard in the cPanel administration panel, which most hosting services provide as a standard management tool for their clients. If you have already applied the first method, you can skip the following steps; still, it is useful to learn about the alternative Backup Wizard tool in cPanel:

1. Open the cPanel screen that our hosting service provided for us.
2. Click on the Backup Wizard item under the Files section.
3. On the next screen, click on the Backup button.
4. Click on the MySQL Databases button on the Select Partial Backup menu.
5. Right-click on our OpenCart database backup file and save it on our local computer by clicking on Save Link As.
6. Return to the cPanel home screen and open File Manager under the Files menu.
7. Browse to the web directory where our OpenCart store files are stored.
8. Right-click on the directory and then Compress it. We will compress the whole OpenCart directory as a Zip Archive file. As we can see from the following screenshot, the compressed store.zip file resides on the web server. We can also optionally download the file to our local computer.

What just happened?

We have backed up our OpenCart database using cPanel. After this, we also backed up our OpenCart files as a compressed archive file using File Manager in cPanel.
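For readers comfortable with the command line, the same two backups can also be scripted so that they run on a schedule. The following is only a rough sketch of that idea; the credentials and paths are placeholders to replace with your own, and it assumes shell access and the PHP Zip extension, which not every shared host provides.

<?php
// Placeholder credentials and paths -- replace with your own values.
$dbUser   = 'opencart_user';
$dbPass   = 'secret';
$dbName   = 'opencart';
$storeDir = '/home/account/public_html/store';

// 1. Dump the database, mirroring the backup.sql export from the admin panel.
$cmd = sprintf('mysqldump -u%s -p%s %s > backup.sql',
    escapeshellarg($dbUser), escapeshellarg($dbPass), escapeshellarg($dbName));
exec($cmd, $output, $status);
if ($status !== 0) {
    die('mysqldump failed');
}

// 2. Compress the store files, mirroring the Compress step in File Manager.
$zip = new ZipArchive();
$zip->open('store.zip', ZipArchive::CREATE | ZipArchive::OVERWRITE);
$iterator = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($storeDir, FilesystemIterator::SKIP_DOTS));
foreach ($iterator as $file) {
    if ($file->isFile()) {
        // Store each file under a path relative to the store directory.
        $zip->addFile($file->getPathname(),
            substr($file->getPathname(), strlen($storeDir) + 1));
    }
}
$zip->close();
echo "backup.sql and store.zip created\n";
?>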

Adding Features to your Joomla! Form using ChronoForms

Packt
20 Aug 2010
11 min read
(For more resources on ChronoForms, see here.)

Introduction

We have so far mostly worked with fairly standard forms where the user is shown some inputs, enters some data, and the results are e-mailed and/or saved to a database table. Many forms are just like this, and some have other features added. These features can be of many different kinds, and the recipes in this article are correspondingly a mixture. Some, like Adding a validated checkbox, change the way the form works. Others, like Signing up to a newsletter service, change what happens after the form is submitted. While you can use these recipes as they are presented, they are just as useful as suggestions for ways to use ChronoForms to solve a wide range of user interactions on your site.

Adding a validated checkbox

Checkboxes are less often used on forms than most of the other elements, and they have some slightly unusual behavior that we need to manage. ChronoForms will do a little to help us, but not everything that we need. In this recipe, we'll look at one of the most common applications: a standalone checkbox that the user is asked to click to confirm that they've accepted some terms and conditions. We want to make sure that the form is not submitted unless the box is checked.

Getting ready

We'll just add one more element to our basic newsletter form. It's probably going to be best to recreate a new version of the form using the Form Wizard, to make sure that we have a clean starting point.

How to do it...

1. In the Form Wizard, create a new form with two TextBox elements.
2. In the Properties box, add the Labels "Name" and "Email" and the Field Names "name" and "email" respectively.
3. Now drag in a CheckBox element. You'll see that ChronoForms inserts the element with three checkboxes, and we only need one. In the Properties box, remove the default values and type in "I agree". While you are there, change the label to "Terms and Conditions".
4. Lastly, we want to make sure that this box is checked, so check the Validation | One Required checkbox and add "please confirm your agreement" in the Validation Message box. Apply the changes to the Properties.
5. To complete the form, add the Button element; then save your form, publish it, and view it in your browser.
6. To test, click the Submit button without entering anything. You should find that the form does not submit, and an error message is displayed.

How it works...

The only special thing to notice here is that the validation we used was validate-one-required and not the more familiar required. Checkbox arrays, radio button groups, and select drop-downs will not work with the required option, as they always have a value set, at least from the perspective of the JavaScript that is running the validation.

There's more...

Validating the checkbox server-side

If the checkbox is really important to us, then we may want to confirm that it has been checked using the server-side validation box. We want to check and, if our box isn't checked, create the error message. However, there is a little problem: an unchecked checkbox doesn't return anything at all; there is just no entry in the form results array. Joomla! has some functionality that will help us out though; the JRequest::getVar() function that we use to get the form results allows us to set a default value. If nothing is found in the form results, then the default value will be used instead.
So we can add this code block to the server-side validation box:

<?php
$agree = JRequest::getString('check0[]', 'empty', 'post');
if ( $agree == 'empty' ) {
  return 'Please check the box to confirm your agreement';
}
?>

Note: To test this, we need to remove the validate-one-required class from the input in the Form HTML. Now when we submit the empty form, we see the ChronoForms error message.

Notice that the input name in the code snippet is check0[]. ChronoForms doesn't give you the option of setting the name of a checkbox element in the Form Wizard | Properties box. It assigns a check0, check1, and so on value for you. (You can edit this in the Form Editor if you like.) And because checkboxes often come in arrays of several linked boxes with the same name, ChronoForms also adds the [] to create an array name. If this weren't done, then only the value of the last checked box would be returned.

Locking the Submit button until the box is checked

If we want to make the point about terms and conditions even more strongly, then we can add some JavaScript to the form to disable the Submit button until the box is checked. We need to make one change to the Form HTML to make this task a little easier. ChronoForms does not add an ID attribute to the Submit button input, so open the form in the Form Editor, find the line near the end of the Form HTML, and alter it to read:

<input value="Submit" name="submit" id='submit' type="submit" />

Now add the following snippet into the Form JavaScript box:

// stop the code executing
// until the page is loaded in the browser
window.addEvent('load', function() {
  // function to enable and disable the submit button
  function agree() {
    if ( $('check00').checked == true ) {
      $('submit').disabled = false;
    } else {
      $('submit').disabled = true;
    }
  };
  // disable the submit button on load
  $('submit').disabled = true;
  // execute the function when the checkbox is clicked
  $('check00').addEvent('click', agree);
});

Apply or save the form and view it in your browser. Now, as you tick or untick the checkbox, the Submit button will be enabled and disabled. This is a simple example of adding a custom script to a form to add a useful feature. If you are reasonably competent in JavaScript, you will find that there is quite a lot more that you can do.

There are different styles of laying out both JavaScript and PHP, and sometimes fierce debates about where line breaks and spaces should go. We've adopted a style here that is hopefully fairly clear, reasonably compact, and more or less the same for both JavaScript and PHP. If it's not the style you are accustomed to, then we're sorry.

Adding an "other" box to a drop-down

Drop-downs are a valuable way of offering a list of choices to your user to select from. And sometimes it just isn't possible to make the list complete; there's always another option that someone will want to add. So we add an "other" option to the drop-down. But that tells us nothing, so we need to add an input to tell us what "other" means here.

Getting ready

We'll just add one more element to our basic newsletter form. We haven't used a drop-down before, but it is very similar to the checkbox element from the previous recipe.

How to do it...

1. Use the Form Wizard to create a form with two TextBox elements, a DropDown element, and a Button element.
2. The changes to make in the DropDown element are:
   - Add "I heard from" in the Label.
   - Change the Field Name to "hearabout".
   - Add some options to the Options box: "Google", "Newspaper", "Friend", and "Other".
   - Leave the Add Choose Option box checked, and leave Choose Option in the Choose Option Text box.
3. Apply the Properties box.
4. Make any other changes you need to the form elements; then save the form, publish it, and view it in your browser.

Notice that, as well as the four options we added, the Choose Option entry is at the top of the list. That comes from the checkbox and text field that we left with their default values. It's important to have a "null" option like this in a drop-down for two reasons. First, so that it is obvious to a user that no choice has been made; otherwise it's very easy for them to leave the first option showing, and this value (Google in this case) will be returned by default. Second, so that we can validate with select-one-required if necessary. The "null" option has no value set and so can be detected by the validation script.

Now we just need one more text box to collect details if Other is selected:

1. Open the form in the Wizard Edit; add one more TextBox element after the DropDown element. Give it the Label "please add details" and the name "other".
2. Even though we set the name to "other", ChronoForms will have left the input ID attribute as text_4 or something similar. Open the form in the Form Editor and change the ID to "other" as well. The same is true of the drop-down: the ID there is select_2; change that to "hearabout".
3. Now we need a script snippet to enable and disable the "other" text box when the Other option is selected in the drop-down. Here's the code to put in the Form JavaScript box:

window.addEvent('domready', function() {
  $('hearabout').addEvent('change', function() {
    if ($('hearabout').value == 'Other' ) {
      $('other').disabled = false;
    } else {
      $('other').disabled = true;
    }
  });
  $('other').disabled = true;
});

This is very similar to the code in the last recipe, except that it's been condensed a little more by merging the function directly into the addEvent(). When you view the form, you will see that the text box for please add details is grayed out and blocked until you select Other in the drop-down. Make sure that you don't make the please add details input required. It's an easy mistake to make, but it stops the form working correctly, as you would have to select Other in the drop-down to be able to submit the form.

How it works...

Once again, this is a little JavaScript that is checking for changes in one part of the form in order to alter the display of another part of the form.

There's more...

Hiding the whole input

It looks a little untidy to have the disabled box showing on the form when it is not required. Let's change the script a little to hide and unhide the input, instead of disabling and enabling it. To make this work, we need a way of recognizing the input together with its label. We could deal with both separately, but let's make our lives simpler.
In the Form Editor, open the Form HTML box and look near the end for the other input block:

<div class="form_item">
  <div class="form_element cf_textbox">
    <label class="cf_label" style="width: 150px;">please add details</label>
    <input class="cf_inputbox" maxlength="150" size="30" title="" id="other" name="other" type="text" />
  </div>
  <div class="cfclear">&nbsp;</div>
</div>

That <div class="form_element cf_textbox"> looks like it is just what we need, so let's add an ID attribute to make it visible to the JavaScript:

<div class="form_element cf_textbox" id="other_input">

Now we'll modify our script snippet to use this:

window.addEvent('domready', function() {
  $('hearabout').addEvent('change', function() {
    if ($('hearabout').value == 'Other' ) {
      $('other_input').setStyle('display', 'block');
    } else {
      $('other_input').setStyle('display', 'none');
    }
  });
  // initialise the display
  if ($('hearabout').value == 'Other' ) {
    $('other_input').setStyle('display', 'block');
  } else {
    $('other_input').setStyle('display', 'none');
  }
});

Apply or save the form and view it in your browser. Now the input is invisible (see the following screenshot labeled 1) until you select Other from the drop-down (see the following screenshot labeled 2). The disadvantage of this approach is that the form can appear to "jump around" as extra fields appear. You can overcome this with a little thought, for example by leaving an empty space.

See also

In some of the scripts here we are using shortcuts from the MooTools JavaScript framework. Version 1.1 of MooTools is installed with Joomla! 1.5 and is usually loaded by ChronoForms. You can find the documentation for MooTools v1.1 at http://docs111.mootools.net/. Version 1.1 is not the latest version of MooTools, and many of the more recent MooTools scripts will not run with the earlier version. Joomla! 1.6 is expected to use the latest release.


Getting Started with Drupal 6 Panels

Packt
20 Aug 2010
7 min read
(For more resources on Drupal, see here.)

Introduction

Drupal Panels are distinct pieces of rectangular content that create a custom layout of the page, where different Panels are more visible and presentable as a structured web page. Panels is a freely distributed, open source module developed for Drupal 6. With Panels, you can display various content in a customizable grid layout on one page. Each page created by Panels can include a unique structure and content. Using the drag-and-drop user interface, you select a design for the layout and position various kinds of content (or add custom content) within that layout. Panels integrates with other Drupal modules like Views and CCK. Permissions, deciding which users can view which elements, are also integrated into Panels. You can even override system pages, such as the display of keywords (taxonomy) and individual content pages (nodes).

In the next section, we will see what Panels can actually do, as defined on drupal.org (http://drupal.org/project/panels). Basically, Panels will help you to arrange a large amount of content on a single page. While Panels can be used to arrange a lot of content on a single page, it is equally useful for small amounts of related content and/or teasers.

Panels supports styles, which control how individual content panes, regions within a Panel, and the entire Panel will be rendered. While Panels ships with only a few styles, styles can be provided as plugins by modules, as well as by themes. The user interface is nice for visually designing a layout, but a real HTML guru doesn't want the somewhat weighty HTML that this will create. Modules and themes can provide custom layouts that can fit a designer's exacting specifications, but still allow the site builders to place content wherever they like.

Panels includes a pluggable caching mechanism: a single cache type is included, the 'simple' cache, which is time-based. Since most sites have very specific caching needs based upon their content and traffic patterns, this system was designed to let such sites devise their own triggers for cache clearing and implement plugins that will work with Panels. A cache mechanism can be defined for each pane or region within the Panel page. Simple caching is a time-based cache; this is a hard limit, and once cached, content will remain that way until the time limit expires. If "arguments" are selected, this content will be cached per individual argument to the entire display; if "contexts" are selected, this content will be cached per unique context in the pane or display; if "neither", there will be only one cache for this pane. Panels can also be cached as a whole, meaning the entire output of the Panel can be cached, or individual content panes that are heavy, like large views, can be cached.

Panels can be integrated with the Drupal module Organic Groups through the og_panels module, to allow individual groups to have their own customized layouts. Panels also integrates with Views, to allow administrators to add any view as content. We will discuss module integration in the coming recipes.

Shown in the previous screenshot is one of the example sites that use Panels 3 for their home page (http://concernfast.org). The home page is built using a custom Panels 3 layout with a couple of dedicated Content types that are used to build nodes to drop into the various Panels areas. The case study can be found at: http://drupal.org/node/629860.
Panels arrange your site content into an easy navigational pattern, which can be clearly seen in the following screenshot. There are several terms often used within Panels that administrators should become familiar with, as we will be using them throughout the recipes. The common terms in Panels are:

- Panels page: The page that will display your Panels. This could be the front page of a site, a news page, and so on. These pages are given a path just like any other node.
- Panel: A container for content. A Panel can have several pieces of content within it, and can be styled.
- Pane: A unit of content in a Panel. This can be a node, a view, arbitrary HTML code, and so on. Panes can be shifted up and down within a Panel and moved from one Panel to another.
- Layout: Provides a pre-defined collection of Panels that you can select from. A layout might have two columns, a header, a footer, three columns in the middle, or even seven Panels stacked like bricks.

Setting up Ctools and Panels

We will now set up Ctools, which is required for Panels. Chaos tools is a centralized library used by two of the most powerful Drupal modules, Panels and Views. Most functions in Panels are inherited from the Chaos tools library.

Getting ready

Download the Panels module from the Drupal website: http://drupal.org/project/Panels. You will also need Ctools as a dependency module, which can be downloaded from: http://drupal.org/project/ctools.

How to do it...

1. Upload both the files, Ctools and Panels, into /sites/all/modules. It is always a best practice to keep installed modules separate from the "core" (the files that install with Drupal) in the /sites/all/modules folder. This makes it easy to upgrade the modules at a later stage, when your site becomes complex and has many modules.
2. Go to the modules page in admin (Admin | Site Building | Modules) and enable Ctools, then enable Panels.
3. Go to permissions (Admin | User Management | Permissions) and give site builders permission to use Panels.
4. Enable the Page manager module in the Chaos tools suite. This module enables the page manager for Panels. To integrate views with Panels, enable the Views content panes module too. We will discuss more about views later on.
5. Enable Panels and set the permissions. You will need to enable Panel nodes, the Panel module, and Mini panels too (as shown in the following screenshot), as we will use them in our advanced recipes.
6. Go to administer by module in Site building | Modules. Here you find the Panels user interface.

There's more...

The Chaos tools suite includes the following tools, which form the base of the Panels module. You do not need to go into the details of them to use Panels, but it is good to know what the suite includes. This is the powerhouse that makes Panels the most efficient tool for designing complex layouts:

- Plugins: tools to make it easy for modules to let other modules implement plugins from .inc files.
- Exportables: tools to make it easier for modules to have objects that live in the database or live in code, such as 'default views'.
- AJAX responder: tools to make it easier for the server to handle AJAX requests and tell the client what to do with them.
- Form tools: tools to make it easier for forms to deal with AJAX.
- Object caching: a tool to make it easier to edit an object across multiple page requests and cache the editing work.
- Contexts: the notion of wrapping objects in a unified wrapper, and providing an API to create and accept these contexts as input.
- Modal dialog: a tool to make it simple to put a form in a modal dialog.
- Dependent: a simple form widget to make form items appear and disappear based upon the selections in another item.
- Content: pluggable Content types used as panes in Panels and other modules like Dashboard.
- Form wizard: an API to make multi-step forms much easier.
- CSS tools: tools to cache and sanitize CSS easily, to make user-input CSS safe.

How it works...

Now we have our Panels UI ready to generate layouts. We will discuss each of them in the following recipes. The Panels dashboard will help you to generate layouts for Drupal with ease.


Sessions and Users in PHP 5 CMS

Packt
17 Aug 2010
14 min read
(For more resources on PHP, see here.)

The problem

Dealing with sessions can be confusing, and is also a source of security loopholes, so we want our CMS framework to provide basic mechanisms that are robust. We want them to be easy to use by more application-oriented software. To achieve these aims, we need to consider:

- The need for sessions and how they work
- The pitfalls that can introduce vulnerabilities
- Efficiency and scalability considerations

Discussion and considerations

To see what is required for our session handling, we shall first review the need for sessions and consider how they work in a PHP environment. Then the vulnerabilities that can arise through session handling will be considered. Web crawlers for search engines and more nefarious activities can place a heavy and unnecessary load on session handling, so we shall look at ways to avoid this load. Finally, the question of how best to store session data is studied.

Why sessions?

The need for continuity was mentioned when we first discussed users, but it is worth reviewing the requirement in a little more detail. If Tim Berners-Lee and his colleagues had known all the developments that would eventually occur in the internet world, maybe the Web would have been designed differently. In particular, the basic web transport protocol, HTTP, might not have treated each request in isolation. But that is hindsight, and the Web was originally designed to present information in a computer-independent way. Simple password schemes were sufficient to control access to specific pages. Nowadays, we need to cater for complex user management, or to handle things like shopping carts, and for these we need continuity. Many people have recognized this, and introduced the idea of sessions. The basic idea is that a session is a series of requests from an individual website visitor, and the session provides access to enduring information that is available throughout the session. The shopping cart is an obvious example of information being retained across the requests that make up a session. PHP has its own implementation of sessions, and there is no point reinventing the wheel, so PHP sessions are the obvious tool for us to use to provide continuity.

How sessions work

There are three main choices available for handling continuity:

- Adding extra information to the URI
- Using cookies
- Using hidden fields in the form sent to the browser

All of them can be used at times. Which of them is most suitable for handling sessions? PHP uses either of the first two alternatives. Web software often makes use of hidden variables, but they do not offer a neat way to provide an unobtrusive general mechanism for maintaining continuity. In fact, whenever hidden variables are used, it is worth considering whether session data would be a better alternative. For reasons discussed in detail later, we shall consider only the use of cookies, and reject the URI alternative. There was a time when there were lots of scary stories about cookies, and people were inclined to block them. While there will always be security issues associated with web browsing, the situation has changed, and the majority of sites now rely on cookies. It is generally considered acceptable for a site to demand the use of cookies for operations such as user login, or for shopping carts and purchase checkout. The PHP cookie-based session mechanism can seem obscure, so it is worth explaining how it works. First we need to review the working of cookies.
A cookie is simply a named piece of data, usually limited to around 4,000 bytes, which is stored by the browser in order to help the web server retain information about a user. More strictly, the connection is with the browser, not the user. Any cookie is tied to a specific website, and optionally to a particular part of the website, indicated by a path. It also has a lifetime that can be specified explicitly as a duration; a zero duration means that the cookie will be kept for as long as the browser is kept open, and then discarded. The browser does nothing with cookies, except to save them and then return them to the server along with requests. Every cookie that relates to the particular website will be sent if either the cookie is for the site as a whole, or the optional path matches the path to which the request is being sent. So cookies are entirely the responsibility of the server, but the browser helps by storing and returning them. Note that, since cookies are only ever sent back to the site that originated them, there are constraints on access to information about other sites that were visited using the same browser.

In a PHP program, cookies can be written by calling the setcookie function, or implicitly through session handling. The name of the cookie is a string, and the value to be stored is also a string, although the serialize function can be used to turn more structured data into a string for storage as a cookie. Take care to keep cookies within the size limit. PHP makes available the cookies that have been sent back by the browser in the $_COOKIE super-global, keyed by their names. Apart from any cookies explicitly written by code, PHP may also write a session cookie. It will do so either as a result of calls to session handling functions, or because the system has been configured to automatically start or resume a session for each request. By default, session cookies do not use the option of setting an expiry time, and can be deleted when the browser is closed down. Commonly, browsers keep this type of cookie in memory, so that it is automatically lost on shutdown.

Before looking at what PHP is doing with the session cookie, let's note that there is an important general consideration for writing cookies. In the construction of messages between the server and the browser, cookies are part of the header. That means rules about headers must be obeyed. Headers must be sent before anything else, and once anything else has been sent, it is not permitted to send more headers. So, in the case of server-to-browser communication, the moment any part of the XHTML has been written by the PHP program, it is too late to send a header, and therefore too late to write a cookie. For this reason, a PHP session is best started early in the processing.

The only purpose PHP has in writing a session cookie is to allocate a unique key to the session, and retrieve it again on the next request. So the session cookie is given an identifying name, and its value is the session's unique key. The session key is usually called the session ID, and is used by PHP to pick out the correct set of persistent values that belong to the session. By default, the session name is PHPSESSID, but it can, in most circumstances, be changed by calling the PHP function session_name prior to starting the session. Starting, or more often restarting, a session is done by calling session_start; the session ID itself can be obtained by calling session_id.
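A minimal sketch may make the mechanics concrete. The cookie name, its 30-day lifetime, and the 'first_seen' key here are arbitrary examples for illustration, not a recommendation:

<?php
// Cookies are headers, so both calls below must come before any output.

// An explicit cookie, written with setcookie and read back via $_COOKIE.
$visits = isset($_COOKIE['visits']) ? (int) $_COOKIE['visits'] + 1 : 1;
setcookie('visits', (string) $visits, time() + 30 * 24 * 3600, '/');

// Starting the session makes PHP send its own cookie (PHPSESSID by default).
session_start();

// $_SESSION persists across requests that carry the same session ID.
if (!isset($_SESSION['first_seen'])) {
    $_SESSION['first_seen'] = time();
}

echo 'Visit number ', $visits, ', first seen at ', $_SESSION['first_seen'];
?>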
In a simple situation, you do not need the session ID, as PHP places any existing session data in another super-global, $_SESSION. In fact, we will have a use for the session ID, as you will soon see. The $_SESSION super-global is available once session_start has been called, and the PHP program can store whatever data it chooses in it. It is an array, initially empty, and naturally the keys need to be chosen carefully in a complex system to avoid any clashes. The neat part of the PHP session is that, provided it is restarted each time with session_start, the $_SESSION super-global will retain any values assigned during the handling of previous requests. The data is thus preserved until the program decides to remove it. The only exception to this would be if the session expired, but in a default configuration, sessions do not expire automatically. Later in this article, we will look at ways to deliberately kill sessions after a determinate period of inactivity. As it is only the session ID that is stored in the cookie, rules about the timing of output do not apply to $_SESSION, which can be read or written at any time after session_start has been called. PHP stores the contents of $_SESSION at the end of processing, or on request using the PHP function session_write_close. By default, PHP puts the data in a temporary file whose name includes the session ID. Whenever the session data is stored, PHP retrieves it again at the next session_start. Session data does not have to be stored in temporary files, and PHP permits the program to provide its own handling routines. We will look at a scheme for storing the session data in a database later in the article.

Avoiding session vulnerabilities

So far, the option to pass the session ID as part of the URI instead of as a cookie has not been considered. Looking at security will show why. The main security issue with sessions is that a cracker may find out the session ID for a user, and then hijack that user's session. Session handling should do its best to guard against that happening. PHP can pass the session ID as part of the URI. This makes it especially vulnerable to disclosure, since URIs can be stored in all kinds of places that may not be as inaccessible as we would like. As a result, secure systems avoid the URI option. It is also undesirable to find links appearing in search engines that include a session ID as part of the URI. These two points are enough to rule out the URI option for passing the session ID. It can be prevented by the following PHP calls:

ini_set('session.use_cookies', 1);
ini_set('session.use_only_cookies', 1);

These calls force PHP to use cookies for session handling, an option that is now considered acceptable. The extent to which the site will function without cookies depends on what a visitor can do with no continuity of data: user login will not stick, and anything like a shopping cart will not be remembered. It is best to avoid the default name of PHPSESSID for the session cookie, since that is something a cracker could look for in the network traffic. One step that can be taken is to create a session name that is the MD5 hash of various items of internal information. This makes it harder, though not impossible, to sniff messages to find out a session ID, since it is no longer obvious what to look for; the well-known name PHPSESSID is not used. It is important for the session ID to be unpredictable, but we rely on PHP to achieve that.
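Pulling those pieces together, a hardened session start might look like the following sketch. The salt and the items fed into the hash are illustrative choices only, not a formula the framework mandates:

<?php
// Refuse to pass the session ID in the URI; cookies only.
ini_set('session.use_cookies', 1);
ini_set('session.use_only_cookies', 1);

// Hide the well-known PHPSESSID name behind an MD5 hash of internal data.
// 'some-private-salt' stands in for whatever internal items you choose.
$hash = md5('some-private-salt' . $_SERVER['HTTP_HOST']);
session_name('s' . $hash); // leading letter keeps the name non-numeric

session_start();
?>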
It is also desirable that the ID be long, since otherwise it might be possible for an attacker to try out all possible values within the life of a session. PHP uses 32 hexadecimal digits, which is a reasonable defense for most purposes. The other main vulnerability, apart from session hijacking, is called session fixation. This is typically implemented by a cracker setting up a link that takes the user to your site with a session already established, and known to the cracker. An important security step employed by robust systems is to change the session ID at significant points. So, although a session may be created as soon as a visitor arrives at the site, the session ID is changed at login. This technique is used by Amazon, among others, so that people can browse for items and build up a shopping cart, but on purchase a fresh login is required. Doing this reduces the window available for a cracker to obtain, and use, the session ID. It also blocks session fixation, since the original session is abandoned at critical points. It is also advisable to change the ID on logout, so that although the session is continued, its data is lost and the ID is not the same. It is highly desirable to provide logout as an option, but this needs to be supplemented by time limits on inactive sessions. A significant part of session handling is devoted to keeping enough information to be able to expire sessions that have not been used for some time. It also makes sense to revoke a session that seems to have been used for any suspicious activity. Ideally, the session ID is never transmitted unencrypted, but achieving this requires the use of SSL and is not always practical. It should certainly be considered for high security applications.
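A sketch of the regeneration and expiry logic just described follows; the helper names, the user_id key, and the 30-minute limit are assumptions for illustration, not part of any particular framework:

<?php
session_start();

// Expire sessions idle for more than 30 minutes (an illustrative limit).
if (isset($_SESSION['last_active']) && time() - $_SESSION['last_active'] > 1800) {
    $_SESSION = array();
    session_destroy();
    session_start(); // continue with a fresh, empty session
}
$_SESSION['last_active'] = time();

// Hypothetical helper, called after credentials have been verified:
// discarding the pre-login ID blocks session fixation.
function promote_session($userId)
{
    session_regenerate_id(true); // true also deletes the old session data
    $_SESSION['user_id'] = $userId;
}

// On logout, change the ID and drop the data, as discussed above.
function demote_session()
{
    $_SESSION = array();
    session_regenerate_id(true);
}
?>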
Search engine bots

One aspect of website building is, perhaps unexpectedly, the importance of handling the bots that crawl the web. They are often gathering data for search engines, although some have more dubious goals, such as trawling for e-mail addresses to add to spam lists. The load they place on a site can be substantial. Sometimes, search engines account for half or more of the bandwidth being used by a site, which certainly seems excessive. If no action is taken, these bots can consume significant resources, often for very little advantage to the site owner. They can also distort information about the site, such as when the number of current visitors is displayed but includes bots in the counts. Matters are made worse by the fact that bots will normally fail to handle cookies. After all, they are not browsers and have no need to implement support for cookies. This means that every request by a bot is separate from every other, as our standard mechanism for linking requests together will not work. If the system starts a new session, it will have to do this for every new request from a bot. There will never be a logout from the bot to terminate the session, so each bot-related session will last for the time set for automatic expiry. Clearly it is inadvisable to bar bots, since most sites are anxious to gain search engine exposure. But it is possible to build session handling so as to limit the workload created by visitors who do not permit cookies, which will mostly be bots. When we move into implementation techniques, the mechanisms will be demonstrated.

Session data and scalability

We could simply let PHP take care of session data. It does that by writing a serialized version of any data placed into $_SESSION into a file in a temporary directory. Each session has its own file. But PHP also allows us to implement our own session data handling mechanism. There are a couple of good reasons for using that facility and storing the information in the database. One is that we can analyze and manage the data better, and especially limit the overhead of dealing with search engine bots. The other is that, by storing session data in the database, we make it feasible for the site to be run across multiple servers. There may well be other issues to resolve before that can be achieved, but providing session continuity is an essential requirement if load sharing is to be fully effective. Storing session data in a database is a reliable solution to this issue. Arguments against storing session data in a database include questions about the overhead involved, constraints on database performance, and the possibility of a single point of failure. While these are real issues, they can certainly be mitigated. Most database engines, including MySQL, have many options for building scalable and robust systems. If necessary, the database can be spread across multiple computers linked by a high-speed network, although this should never be done unless it is really needed. Design of such a system is outside the scope of this article, but the key point is that the arguments against storing session data in a database are not particularly strong.
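To make the database option concrete, here is a skeletal sketch using PHP's session_set_save_handler. The sessions table layout and the mysqli connection details are assumptions for illustration, and real code would also need error handling and attention to locking:

<?php
// Assumed table: sessions(id VARCHAR(32) PRIMARY KEY, data TEXT, updated INT)
$db = new mysqli('localhost', 'user', 'pass', 'cms'); // placeholder credentials

function sess_open($path, $name) { return true; }
function sess_close() { return true; }

function sess_read($id)
{
    global $db;
    $stmt = $db->prepare('SELECT data FROM sessions WHERE id = ?');
    $stmt->bind_param('s', $id);
    $stmt->execute();
    $stmt->bind_result($data);
    // Return the stored data, or an empty string for a new session.
    return $stmt->fetch() ? $data : '';
}

function sess_write($id, $data)
{
    global $db;
    $now = time();
    // REPLACE INTO is MySQL-specific: insert the row, or overwrite it.
    $stmt = $db->prepare('REPLACE INTO sessions (id, data, updated) VALUES (?, ?, ?)');
    $stmt->bind_param('ssi', $id, $data, $now);
    return $stmt->execute();
}

function sess_destroy($id)
{
    global $db;
    $stmt = $db->prepare('DELETE FROM sessions WHERE id = ?');
    $stmt->bind_param('s', $id);
    return $stmt->execute();
}

function sess_gc($maxlifetime)
{
    global $db;
    // Garbage collection: remove sessions idle longer than the limit.
    $cutoff = time() - $maxlifetime;
    $stmt = $db->prepare('DELETE FROM sessions WHERE updated < ?');
    $stmt->bind_param('i', $cutoff);
    return $stmt->execute();
}

session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                         'sess_write', 'sess_destroy', 'sess_gc');
session_start();
?>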

NetBeans Platform 6.9: Working with Window System

Packt
17 Aug 2010
12 min read
(For more resources on NetBeans, see here.)

Window System

Large desktop applications need to provide many different views for visualizing data. These views have to be managed and shown, and the NetBeans Platform handles these requirements for you out of the box via its docking framework. While it once might have been sufficient for a docking framework to provide static fixed window layouts, today the user expects far more flexibility. Windows should be able to be opened, moved, and generally customized at runtime. The user tends to assume that the positions of views are modifiable and that they persist across restarts of the application. Not only that, but applications are assumed to be so flexible that views can be detached from the application's main window, enabling them to be displayed on multiple monitors at the same time. While once the simple availability of menus and toolbars was sufficient, today far more dynamic handling is needed so that window content can be adapted at runtime. Connected to these expectations of flexibility, plugins are increasingly becoming a standard technology, with users assuming their windows to be pluggable, too. In short, the requirements for window management have become quite complex and can only be met by means of an external docking framework; otherwise all these various concerns would need to be coded (and debugged, tested, and maintained) by hand.

The NetBeans Platform provides all of these features via its docking framework, known as the NetBeans Window System. It also provides an API to let you programmatically access the window system. Together, the window system and its API fulfill all the requirements described above, letting you concentrate on your domain knowledge and business logic rather than on the work of creating a custom window management facility for each of your applications.

This part of the article teaches you the following:

How to define views
How to position views in the main window

The rest is covered in the second part of this article series.

Creating a window

The NetBeans Window System simplifies window management by letting you use a default component for displaying windows. The default component, that is, the superclass of all windows, is the TopComponent class, which is derived from the standard JComponent class. It defines many methods for controlling a window and handles notification of main window system events. The WindowManager is the central class controlling all the windows in the application. Though you can implement this class yourself, this is seldom done, as the default WindowManager is normally sufficient. Similarly, you typically use the standard TopComponent class rather than creating your own top-level Swing components. In contrast to the TopComponent class, the default WindowManager cannot manage your own top-level Swing components, so these cannot take advantage of the Window System API.

Now let's create a TopComponent and let it be an editor for working with tasks. This is done easily by using the New Window wizard:

In the Projects window, right-click the TaskEditor module project node and choose New | Window.
On the first page of the wizard, select editor for Window Position and check Open on Application Start. Click Next.
On the next page of the wizard, type TaskEditor in Class Name Prefix. This prefix is used for all the generated files. It is possible to specify an icon that will be displayed in the tab of the new window, but let's skip that for the moment.
Click Finish and all the files are generated into your module source structure. Next, open the newly created TaskEditorTopComponent and drag the TaskEditorPanel from the Palette, which is where you put it at the end of the last chapter, onto the form. The size of the component automatically adjusts to the required size of the panel. Position the panel with the preferred spacing to the left and top and activate the automatic resizing of the panel in the horizontal and vertical direction. The form should now look similar to the following screenshot:

Start the application. You now see a tab containing the new TaskEditor window, which holds your form.

Examining the generated files

You have used a wizard to create a new TopComponent. However, the wizard did more than that. Let's take a look at all the files that have been created and at all the files that have been modified, as well as how these files work together. The only Java class that was generated is the TopComponent that will contain the TaskEditor, shown as follows:

@ConvertAsProperties(dtd = "-//com.netbeansrcp.taskeditor//TaskEditor//EN", autostore = false)
public final class TaskEditorTopComponent extends TopComponent {

    private static TaskEditorTopComponent instance;

    /** path to the icon used by the component and its open action */
    // static final String ICON_PATH = "SET/PATH/TO/ICON/HERE";

    private static final String PREFERRED_ID = "TaskEditorTopComponent";

    public TaskEditorTopComponent() {
        initComponents();
        setName(NbBundle.getMessage(TaskEditorTopComponent.class, "CTL_TaskEditorTopComponent"));
        setToolTipText(NbBundle.getMessage(TaskEditorTopComponent.class, "HINT_TaskEditorTopComponent"));
        // setIcon(ImageUtilities.loadImage(ICON_PATH, true));
    }

    /** This method is called from within the constructor to
     * initialize the form.
     * WARNING: Do NOT modify this code. The content of this method is
     * always regenerated by the Form Editor.
     */
    // <editor-fold defaultstate="collapsed" desc="Generated Code">
    private void initComponents() {
        javax.swing.GroupLayout layout = new javax.swing.GroupLayout(this);
        this.setLayout(layout);
        layout.setHorizontalGroup(
            layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
            .addGap(0, 555, Short.MAX_VALUE));
        layout.setVerticalGroup(
            layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
            .addGap(0, 442, Short.MAX_VALUE)
        );
    }// </editor-fold>

    // Variables declaration - do not modify
    // End of variables declaration

    /**
     * Gets default instance. Do not use directly: reserved for *.settings files only,
     * i.e. deserialization routines; otherwise you could get a non-deserialized instance.
     * To obtain the singleton instance, use {@link #findInstance}.
     */
    public static synchronized TaskEditorTopComponent getDefault() {
        if (instance == null) {
            instance = new TaskEditorTopComponent();
        }
        return instance;
    }

    /**
     * Obtain the TaskEditorTopComponent instance. Never call {@link #getDefault} directly!
     */
    public static synchronized TaskEditorTopComponent findInstance() {
        TopComponent win = WindowManager.getDefault().findTopComponent(PREFERRED_ID);
        if (win == null) {
            Logger.getLogger(TaskEditorTopComponent.class.getName()).warning(
                "Cannot find " + PREFERRED_ID + " component. It will not be located properly in the window system.");
            return getDefault();
        }
        if (win instanceof TaskEditorTopComponent) {
            return (TaskEditorTopComponent) win;
        }
        Logger.getLogger(TaskEditorTopComponent.class.getName()).warning(
            "There seem to be multiple components with the '" + PREFERRED_ID
            + "' ID. That is a potential source of errors and unexpected behavior.");
        return getDefault();
    }

    @Override
    public int getPersistenceType() {
        return TopComponent.PERSISTENCE_ALWAYS;
    }

    @Override
    public void componentOpened() {
        // TODO add custom code on component opening
    }

    @Override
    public void componentClosed() {
        // TODO add custom code on component closing
    }

    void writeProperties(java.util.Properties p) {
        // better to version settings since initial version as advocated at
        // http://wiki.apidesign.org/wiki/PropertyFiles
        p.setProperty("version", "1.0");
        // TODO store your settings
    }

    Object readProperties(java.util.Properties p) {
        if (instance == null) {
            instance = this;
        }
        instance.readPropertiesImpl(p);
        return instance;
    }

    private void readPropertiesImpl(java.util.Properties p) {
        String version = p.getProperty("version");
        // TODO read your settings according to their version
    }

    @Override
    protected String preferredID() {
        return PREFERRED_ID;
    }
}

As expected, the class TaskEditorTopComponent extends the TopComponent class. Let's look at it more closely:

For efficient resource usage, the generated TopComponent is implemented as a singleton. A private constructor prohibits its incorrect usage from outside by disallowing direct instantiation of the class. The static attribute instance holds the only instance in existence. The static method getDefault creates and returns this instance on demand.
Typically, getDefault should never be called directly. Instead, you should use findInstance, which delegates to getDefault if necessary. findInstance tries to retrieve the instance using the WindowManager and the ID of the TopComponent before falling back to the singleton instance. This ensures the correct usage of persistent information.
The constructor creates the component tree for the TaskEditorTopComponent by calling the method initComponents(). This method contains only code generated via the NetBeans "Matisse" Form Builder and is read-only in the NetBeans Java editor. You can change the code in this method using the Form Builder's Property Sheet, as will be shown later.
The static property PREFERRED_ID holds the TopComponent ID used for identification of the TopComponent. As indicated by its name, the ID can be changed by the Window System if name clashes occur. The ID is used throughout all the configuration files.
The methods componentOpened() and componentClosed() are part of the lifecycle of the TopComponent. You learn about the method getPersistenceType() later, in the section about the persistence of TopComponents.

What does the Java code do and not do? The Java code only defines the visual aspects of the TaskEditorTopComponent and manages the singleton instance of this component. In no way does the code describe how and where the instance is shown. That's the task of the two XML files, described below.

Two small XML files are created by the wizard. The first is the TopComponent's settings file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE settings PUBLIC "-//NetBeans//DTD Session settings 1.0//EN"
          "http://www.netbeans.org/dtds/sessionsettings-1_0.dtd">
<settings version="1.0">
    <module name="com.netbeansrcp.taskeditor" spec="1.0"/>
    <instanceof class="org.openide.windows.TopComponent"/>
    <instanceof class="com.netbeansrcp.taskeditor.TaskEditorTopComponent"/>
    <instance class="com.netbeansrcp.taskeditor.TaskEditorTopComponent" method="getDefault"/>
</settings>

The settings file describes the persistent instance of the TopComponent.
As you can see, the preceding configuration describes that the TopComponent belongs to the module TaskEditor in specification version "1.0" and that it is an instance of the types TopComponent and TaskEditorTopComponent. Also described is that the instance is created using the method call TaskEditorTopComponent.getDefault().

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE tc-ref PUBLIC "-//NetBeans//DTD Top Component in Mode Properties 2.0//EN"
          "http://www.netbeans.org/dtds/tc-ref2_0.dtd">
<tc-ref version="2.0">
    <module name="com.netbeansrcp.taskeditor" spec="1.0"/>
    <tc-id id="TaskEditorTopComponent"/>
    <state opened="true"/>
</tc-ref>

The WSTCREF (window system creation) file describes the position of the TopComponent within the main window. This becomes clearer with the following file. The other important information in the WSTCREF file is the opened state at application start.

Typically, you do not have to change these two configuration files by hand. This is not true for the following file, the layer.xml, which you often need to change manually to register new folders and files in the filesystem.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE filesystem PUBLIC "-//NetBeans//DTD Filesystem 1.2//EN"
          "http://www.netbeans.org/dtds/filesystem-1_2.dtd">
<filesystem>
    <folder name="Actions">
        <folder name="Window">
            <file name="com-netbeansrcp-taskeditor-TaskEditorAction.instance">
                <attr name="component" methodvalue="com.netbeansrcp.taskeditor.TaskEditorTopComponent.findInstance"/>
                <attr name="displayName" bundlevalue="com.netbeansrcp.taskeditor.Bundle#CTL_TaskEditorAction"/>
                <attr name="instanceCreate" methodvalue="org.openide.windows.TopComponent.openAction"/>
            </file>
        </folder>
    </folder>
    <folder name="Menu">
        <folder name="Window">
            <file name="TaskEditorAction.shadow">
                <attr name="originalFile" stringvalue="Actions/Window/com-netbeansrcp-taskeditor-TaskEditorAction.instance"/>
            </file>
        </folder>
    </folder>
    <folder name="Windows2">
        <folder name="Components">
            <file name="TaskEditorTopComponent.settings" url="TaskEditorTopComponentSettings.xml"/>
        </folder>
        <folder name="Modes">
            <folder name="editor">
                <file name="TaskEditorTopComponent.wstcref" url="TaskEditorTopComponentWstcref.xml"/>
            </folder>
        </folder>
    </folder>
</filesystem>

The layer.xml is integrated into the central registry (also known as the SystemFileSystem) via a registration entry in the module's manifest file. The SystemFileSystem is a virtual filesystem for user settings. Each module can supply a layer file for merging configuration data from the module into the SystemFileSystem. The Window System API and the Actions API reserve a number of folders in the central registry for holding their configuration data. These folders enable specific subfolders and files relating to window system registration to be added to the filesystem.

Let's have a look at the folder Windows2. Windows2 contains a folder named Components, which contains a virtual file with the name of the TopComponent and the extension .settings. This .settings file redirects to the real settings file. It is used to make the configuration known to the Window System. In addition, the Windows2 folder contains a folder named Modes, which contains a folder named editor. Modes represent the possible positions at which TopComponents can be shown in the application. The editor folder contains a .wstcref file for our TopComponent, which refers to the real WSTCREF file.
This registers the TopComponent in the editor mode, so it shows up where editor windows are typically opened: the central part of the main window.

Next, take a look at the folder Actions. It contains a folder named Window, which contains a file declaring the action that opens the TaskEditorTopComponent. The name typically follows Java class naming conventions, with dots replaced by dashes, and ends in .instance. The declaration of the virtual file itself consists of three critical parts. The attribute component describes how to create the component (methodvalue declares which method to call). The attribute displayName describes the default action name as shown, for example, in menu items; it is declared here as a bundlevalue, which names the bundle and the key used to retrieve the display name. The attribute instanceCreate uses a static method call to create the real action to use.

The folder Menu describes the application main menu. The folder Window contains a .shadow file. The attribute originalFile uses the full path in the SystemFileSystem to delegate to the original action declaration. As described above, .shadow files are used as symbolic links to virtual files defined elsewhere. This declaration adds the action to the real menu bar of the application.

As a result, important parts of the Window System API are not called programmatically, but are simply used declaratively. Declarative aspects include configuration and the positioning of windows, as well as the construction of the menu. In addition, you discovered that the wizard for creating TopComponents always creates singleton views. If you would like to change that, you need to adapt the code created by the wizard. For the time being, it is sufficient to use the singleton approach, particularly as it is more resource-friendly.


NetBeans Platform 6.9: Advanced Aspects of Window System

Packt
17 Aug 2010
5 min read
(For more resources on NetBeans, see here.)

Creating custom modes

You can get quite far with the standard modes provided by the NetBeans Platform. Still, sometimes you may need to provide a custom mode in order to provide a new position for the TopComponents within the application. A custom mode is created declaratively in XML files, rather than programmatically in Java code. In the following example, you create two new modes that are positioned side by side in the lower part of the application, using a specific location relative to each other.

Create a new module named CustomModes, with Code Name Base com.netbeansrcp.custommodes, within the existing WindowSystemExamples application. Right-click the module project and choose New | Other to open the New File dialog. Then choose Other | Empty File, as shown in the following screenshot:

Type mode1.wsmode as the new filename and file extension, as shown in the following screenshot. Click Finish.

Define the content of the new mode1.wsmode as follows:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mode PUBLIC "-//NetBeans//DTD Mode Properties 2.3//EN"
          "http://www.netbeans.org/dtds/mode-properties2_3.dtd">
<mode version="2.3">
    <name unique="mode1" />
    <kind type="view" />
    <state type="joined" />
    <constraints>
        <path orientation="vertical" number="20" weight="0.2"/>
        <path orientation="horizontal" number="20" weight="0.5"/>
    </constraints>
</mode>

Create another file to define the second mode and name it mode2.wsmode. Add this content to the new file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mode PUBLIC "-//NetBeans//DTD Mode Properties 2.3//EN"
          "http://www.netbeans.org/dtds/mode-properties2_3.dtd">
<mode version="2.3">
    <name unique="mode2" />
    <kind type="view" />
    <state type="joined" />
    <constraints>
        <path orientation="vertical" number="20" weight="0.2"/>
        <path orientation="horizontal" number="40" weight="0.5"/>
    </constraints>
</mode>

Via the two wsmode files described above, you have defined two custom modes. The first mode has the unique name mode1, the second mode2. Both are created for normal TopComponents (view instead of editor) that are integrated into the main window, rather than being undocked by default (joined instead of separated). The constraints elements in the files are comparable to a GridBagLayout, with a relative horizontal and vertical position, as well as a relative horizontal and vertical weight. You place mode1 at position 20/20 with a weighting of 0.5/0.2, while mode2 is placed at position 20/40 with the same weighting. If all the other defined modes have TopComponents opened within them, the TopComponents in the two new modes should lie side by side, right above the status bar, taking up 20% of the available vertical space, with the horizontal space shared between them.

Let us now create two new TopComponents and register them in the layer.xml file so that they will be displayed in your new modes. Do this by using the New Window wizard twice in the CustomModes module, first creating a window with Class Name Prefix Red and then a window with Class Name Prefix Blue.

What should I set the window position to? In the wizard, in both cases, it does not matter what you set as the window position, as you are going to change that setting manually afterwards. Let both of them open automatically when the application starts.

In the Design mode of both TopComponents, add a JPanel to each of the TopComponents. Change the background property of the panel in the RedTopComponent to red and in the BlueTopComponent to blue.
Edit the layer.xml of the CustomModes module, registering the two .wsmode files and ensuring that the two new TopComponents open in the new modes:

<folder name="Windows2">
    <folder name="Components">
        <file name="BlueTopComponent.settings" url="BlueTopComponentSettings.xml"/>
        <file name="RedTopComponent.settings" url="RedTopComponentSettings.xml"/>
    </folder>
    <folder name="Modes">
        <file name="mode1.wsmode" url="mode1.wsmode"/>
        <file name="mode2.wsmode" url="mode2.wsmode"/>
        <folder name="mode1">
            <file name="RedTopComponent.wstcref" url="RedTopComponentWstcref.xml"/>
        </folder>
        <folder name="mode2">
            <file name="BlueTopComponent.wstcref" url="BlueTopComponentWstcref.xml"/>
        </folder>
    </folder>
</folder>

As before, perform a Clean and Build on the application project node and then start the application again. It should look as shown in the following screenshot:

In summary, you defined two new modes in XML files and registered them in the module's layer.xml file. To confirm that the modes work correctly, you used the layer.xml file to register two new TopComponents so that they open by default into the new modes. As a result, you now know how to extend the default layout of a NetBeans Platform application with new modes.


Implementing a Microsoft .NET Application using the Alfresco Web Services

Packt
17 Aug 2010
6 min read
(For more resources on Alfresco, see here.)

For the first step, you will see how to set up the .NET project in the development environment. Then, when we take a look at the sample code, we will learn how to perform the following operations from your .NET application:

How to authenticate users
How to search contents
How to manipulate contents
How to manage child associations

Setting up the project

In order to execute the samples included with this article, you need to download and install the following software components in your Windows operating system:

Microsoft .NET Framework 3.5
Web Services Enhancements (WSE) 3.0 for Microsoft .NET
SharpDevelop 3.2 IDE

The Microsoft .NET Framework 3.5 is the main framework used to compile the application, and you can download it using the following URL: http://www.microsoft.com/downloads/details.aspx?familyid=333325fd-ae52-4e35-b531-508d977d32a6&displaylang=en.

Before importing the code into the development environment, you need to download and install the Web Services Enhancements (WSE) 3.0, which you can find at this address: http://www.microsoft.com/downloads/details.aspx?FamilyID=018a09fd-3a74-43c5-8ec1-8d789091255d.

You can find more information about the Microsoft .NET framework on the official site at the following URL: http://www.microsoft.com/net/. From this page, you can access the latest news and the Developer Center, where you can find the official forum and the developer community.

SharpDevelop 3.2 IDE is an open source IDE for C# and VB.NET, and you can download it using the following URL: http://www.icsharpcode.net/OpenSource/SD/Download/#SharpDevelop3x.

Once you have installed all the mentioned software components, you can import the sample project into SharpDevelop IDE in the following way:

Click on File | Open | Project/Solution
Browse and select this file in the root folder of the samples: AwsSamples.sln

Now you should see a similar project tree in your development environment:

More information about SharpDevelop IDE can be found on the official site at the following address: http://www.icsharpcode.net/opensource/sd/. From this page, you can download different versions of the product; which SharpDevelop IDE version you choose depends on the .NET version that you would like to use. You can also visit the official forum to interact with the community of developers.

Also, notice that all the source code included with this article was implemented by extending an existing open source project named dotnet. The dotnet project is available in the Alfresco Forge community, and it is downloadable from the following address: http://forge.alfresco.com/projects/dotnet/.

Testing the .NET sample client

Once you have set up the .NET solution in SharpDevelop, as explained in the previous section, you can execute all the tests to verify that the client is working correctly. We have provided a batch file named build.bat to allow you to build and run all the integration tests. You can find this batch file in the root folder of the sample code. Notice that you need to use a different version of msbuild for each different version of the .NET framework. If you want to compile using the .NET Framework 3.5, you need to set the following path in your environment:

set PATH=%PATH%;%WinDir%\Microsoft.NET\Framework\v3.5

Otherwise, you have to set .NET Framework 2.0 using the following path:

set PATH=%PATH%;%WinDir%\Microsoft.NET\Framework\v2.0.50727

We are going to assume that Alfresco is running correctly and that it is listening on host localhost and on port 8080.
Once executed, the build.bat program should start compiling and executing all the integration tests included in this article. After a few seconds have elapsed, you should see the following output in the command line:

.........
****************** Running tests ******************
NUnit version 2.5.5.10112
Copyright (C) 2002-2009 Charlie Poole.
Copyright (C) 2002-2004 James W. Newkirk, Michael C. Two, Alexei A. Vorontsov.
Copyright (C) 2000-2002 Philip Craig.
All Rights Reserved.

Runtime Environment -
   OS Version: Microsoft Windows NT 5.1.2600 Service Pack 2
  CLR Version: 2.0.50727.3053 ( Net 2.0 )

ProcessModel: Default    DomainUsage: Single
Execution Runtime: net-2.0
............
Tests run: 12, Errors: 0, Failures: 0, Inconclusive: 0, Time: 14.170376 seconds
  Not run: 0, Invalid: 0, Ignored: 0, Skipped: 0
********* Done *********

As you can see from the project tree, you have some of the following packages:

Search
Crud
Association

The Search package shows you how to perform queries against the repository. The Crud package contains samples related to all the CRUD operations that show you how to perform basic operations; namely, how to create/get/update/remove nodes in the repository. The Association package shows you how to create and remove association instances among nodes.

Searching the repository

Once you have authenticated a user, you can start to execute queries against the repository. In the following sample code, we will see how to perform a query using the RepositoryService of Alfresco:

RepositoryService repositoryService = WebServiceFactory.getRepositoryService();

Then we need to create a store where we would like to search contents:

Store spacesStore = new Store(StoreEnum.workspace, "SpacesStore");

Now we need to create a Lucene query. In this sample, we want to search the Company Home space, and this means that we have to execute the following query:

String luceneQuery = "PATH:\"/app:company_home\"";

In the next step, we need to use the query method available from the RepositoryService. In this way, we can execute the Lucene query and we can get all the results from the repository:

Query query = new Query(Constants.QUERY_LANG_LUCENE, luceneQuery);
QueryResult queryResult = repositoryService.query(spacesStore, query, false);

You can retrieve all the results from the queryResult object, iterating the ResultSetRow objects in the following way:

ResultSet resultSet = queryResult.resultSet;
ResultSetRow[] results = resultSet.rows;

// your custom list
IList<CustomResultVO> customResultList = new List<CustomResultVO>();

// retrieve results from the resultSet
foreach (ResultSetRow resultRow in results)
{
    ResultSetRowNode nodeResult = resultRow.node;

    // create your custom value object
    CustomResultVO customResultVo = new CustomResultVO();
    customResultVo.Id = nodeResult.id;
    customResultVo.Type = nodeResult.type;

    // retrieve properties from the current node
    foreach (NamedValue namedValue in resultRow.columns)
    {
        if (Constants.PROP_NAME.Equals(namedValue.name))
        {
            customResultVo.Name = namedValue.value;
        }
        else if (Constants.PROP_DESCRIPTION.Equals(namedValue.name))
        {
            customResultVo.Description = namedValue.value;
        }
    }

    // add the current result to your custom list
    customResultList.Add(customResultVo);
}

In the last sample, we iterated all the results and we created a new custom list with our custom value object CustomResultVO. More information about how to build Lucene queries can be found at this URL: http://wiki.alfresco.com/wiki/Search.

Performing operations

We can perform various operations on the repository.
They are documented as follows:

Authentication

For each operation, you need to authenticate users before performing all the required operations on nodes. The class that provides the authentication feature is named AuthenticationUtils, and it allows you to invoke the startSession and endSession methods:

String username = "johndoe";
String password = "secret";
AuthenticationUtils.startSession(username, password);
try
{
}
finally
{
    AuthenticationUtils.endSession();
}

Remember that the startSession method requires the user credentials: the username as the first argument and the password as the second.

Notice that the default endpoint address of the Alfresco instance is as follows:

http://localhost:8080/alfresco

If you need to change the endpoint address, you can use the WebServiceFactory class, invoking the setEndpointAddress method to set the new location of the Alfresco repository.


Error Handling in PHP 5 CMS

Packt
17 Aug 2010
10 min read
(For more resources on PHP, see here.)

The problem

Errors will happen whether we like it or not. Ideally, the framework can help in their discovery, recording, and handling by:

Trapping different kinds of errors
Making a record of errors with sufficient detail to aid analysis
Supporting a structure that mitigates the effect of errors

Discussion

There are three main kinds of errors that can arise. Many situations can crop up within PHP code that count as errors, such as an attempt to use a method on a variable that turns out not to be an object, or is an object but does not implement the specified method. The database will sometimes report errors, such as an attempt to retrieve information from a non-existent table, or to ask for a field that has not been defined for a table. And the logic of applications can often lead to situations that can only be described as errors. What resources do we have to handle these error situations?

PHP error handling

If nothing else is done, PHP has its own error handler. But developers are free to build their own handlers, so that is the first item on our to-do list. Consistently with our generally object oriented approach, the natural thing to do is to build an error recording class, and then to tell PHP that one of its methods is to be called whenever PHP detects an error. Once that is done, the error handler must deal with whatever PHP passes, as it has taken over full responsibility for error handling.

It has been a common practice to suppress the lowest levels of PHP error, such as notices and warnings, but this is not really a good idea. Even these relatively unimportant messages can reveal more serious problems. It is not difficult to write code that avoids them, so that if a warning or notice does arise, it indicates something unexpected and therefore worth investigating. For example, the PHP foreach statement expects to work on something iterable and will generate a warning if it is given, say, a null value. But this is easily avoided by making sure that methods which return arrays always return an array, even if it is an array of zero items, rather than a null value. Failing that, the foreach can be protected by a preceding test.

So it is safest to assume that a low level error may be a symptom of a bigger problem, and have our error handler record every error that is passed to it. The database is the obvious place to put the error, and the handler receives enough information to make it possible to save only the latest occurrence of the same error, thus avoiding a bloated table of many more or less identical errors.
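By way of illustration, here is a minimal sketch of handing error handling over to a recording class; ErrorRecorder is an invented name, and error_log stands in for the database table a real framework would write to:

<?php
class ErrorRecorder
{
    // Signature required of a PHP error handler callback
    public function record($errno, $errstr, $errfile, $errline)
    {
        // Record everything, notices and warnings included; even a
        // "minor" message can be the first symptom of a real problem.
        error_log(sprintf('[%d] %s in %s on line %d',
            $errno, $errstr, $errfile, $errline));

        // Returning true tells PHP the error has been dealt with,
        // so its default handler stays out of the way.
        return true;
    }
}

set_error_handler(array(new ErrorRecorder(), 'record'));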
The other important mechanism offered by PHP, new to version 5, is the try, catch, and throw construct. A section of code can be put within a try and followed by one or more catch specifications that define what is to be done if a particular kind of problem arises. The problems are triggered by using throw. This is a valuable mechanism for errors that need to break the flow of program execution, and is particularly helpful for dealing with database errors. It also has the advantage that try sections can be nested, so if a large area of code, such as an entire component, is covered by a try, it is still possible to write a try of narrower scope within that code.

In general, it is better to be cautious about giving information about errors to users. For one thing, ordinary users are simply irritated by technically oriented error messages that mean nothing to them. Equally important is the issue of cracking, and the need to avoid displaying any weaknesses too clearly. It is bad enough that an error has occurred, without giving away details of what is going wrong. So a design assumption for error handling should be that the detail of errors is recorded for later analysis, but that only a very simple indication of the presence of an error is given to the user, with a message that it has been noted for rectification.

Database errors

Errors in database operations are a particular problem for developers. Within the actual database handling code, it would be negligent to ignore the error indications that are available through the PHP interfaces to database systems. Yet within applications, it is hard to know what to do with such errors. SQL is very flexible, and a developer has no reason to expect any errors, so in the nature of things, any error that does arise is unexpected, and therefore difficult to handle. Furthermore, if there have to be several lines of error handling code every time the database is accessed, then the overhead in code size and loss of clarity is considerable.

The best solution therefore seems to be to utilize the PHP try, catch, and throw structure. A special database error exception can be created by writing a suitable class, and the database handling code will then deal with an error situation by "throwing" a new error with an exception of that class. The CMS framework can have a default try and catch in place around most of its operation, so that individual applications within the CMS are not obliged to take any action. But if an application developer wants to handle database errors, it is always possible to do so by coding a nested try and catch within the application.

One thing that must still be remembered by developers is that SQL easily allows some kinds of error situation to go unnoticed. For example, a DELETE or UPDATE SQL statement will not generate any error if nothing is deleted or updated. It is up to the developer to check how many rows, if any, were affected. This may not always be worth doing, but issues of this kind need to be kept in mind when considering how software will work. A good error handling framework makes it easier for a developer to choose between different checking options.
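A minimal sketch of this pattern, using mysqli directly for brevity; DatabaseException and run_query are illustrative names, not Aliro's actual API:

<?php
class DatabaseException extends Exception
{
    public $query;

    public function __construct($query, $dbError)
    {
        parent::__construct('Query failed: ' . $dbError);
        // Keep the offending SQL for the error record; the backtrace
        // is captured here, at the throw point (see the discussion of
        // tracing later in this article).
        $this->query = $query;
    }
}

// Inside the database handling code, every failure becomes a throw:
function run_query($connection, $sql)
{
    $result = mysqli_query($connection, $sql);
    if (false === $result) {
        throw new DatabaseException($sql, mysqli_error($connection));
    }
    return $result;
}

// The framework wraps most processing in a default try/catch, so an
// application only adds its own if it wants special handling:
try {
    $result = run_query($connection, 'SELECT name FROM example_table');
} catch (DatabaseException $e) {
    // record the detail, show the user only a simple notice
}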
Application errors

Even without there being a PHP or database error, an application may decide that an error situation has arisen. For some reason, normal processing is impossible, and the user cannot be expected to solve the problem. There are two main choices that will fit with the error handling framework we are considering.

One is to use the PHP trigger_error statement. It raises a user error, and allows an error message to be specified. The error that is created will be trapped and passed to the error handler, since we have decided to have our own handler. This mechanism is best used for wholly unexpected errors that nonetheless could arise out of the logic of the application.

The other choice is to use a complete try, catch, and throw structure within the application. This is most useful when there are a number of fatal errors that can arise and are somewhat expected. The CMS extension installer uses this approach to deal with the various possible fatal errors that can occur during an attempt to install an extension. They are mostly related to errors in the XML packaging file, or to problems with accessing the file system. These are errors that need to be reported to help the user in resolving the problem, but they also involve abandoning the installation process. Whenever a situation of this kind arises, try, catch, and throw is a good way to deal with it.

Exploring PHP—Error handling

PHP provides quite a lot of control over error handling in its configuration. One question to be decided is whether to allow PHP to send any errors to the browser. This is determined by setting the value of display_errors in the php.ini configuration file. It is also possible to determine whether errors will be logged by setting log_errors, and to decide where they should be logged by setting error_log. (Often there are several copies of this file, and it is important to find the one that is actually used by the system.)

The case against sending errors is that it may give away information useful to crackers, or it may look bad to users. On the other hand, it makes development and bug fixing harder if errors have to be looked up in a log file rather than being visible on the screen. And if errors are not sent to the screen, then in the event of a fatal error, the user will simply see a blank screen. This is not a good outcome either. Although the general advice is that errors should not be displayed on production systems, I am still rather inclined to show them. It seems to me that an error message, even if it is a technical one that is meaningless to the user, is rather better than a totally blank screen. The information given is only a bare description of the error, with the name and line number of the file containing the error. It is unlikely to be a great deal of use to a cracker, especially since the PHP script simply terminates on a fatal error, not leaving any clear opportunity for intrusion. You should make your own decision on which approach is preferable.

Without any special action in the PHP code, an error will be reported by PHP, giving details of where it occurred. Providing our own error handler by using the PHP set_error_handler function gives us far more flexibility to decide what information will be recorded and what will be shown to the user. A limitation is that PHP will still immediately terminate on a fatal error, such as attempting a method on something that is not an object. Termination also occurs whenever a parsing error is found, that is to say, when the PHP program code is badly formed. In these cases, it is not possible to have control transferred to a user provided error handler, which is an unfortunate limitation on what can be achieved.

However, an error handler can take advantage of knowledge of the framework to capture relevant information. Quite apart from special information on the framework, the handler can make use of the useful PHP debug_backtrace function to find out the route that was followed before the error was reached. This gives information about what called the current code. It can then be used again to find what called that, and so on, until no further trace information is available. A trace greatly increases the value of error reporting, as it makes it much easier to find the route that led to the error. When an error is trapped using PHP's try and catch, it is best to trace the route to the error at the point the exception is thrown. Otherwise, the error trace will only show the chain of events from the exception to the error handler.
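A minimal sketch of gathering such a trace; the output format here is invented for illustration:

<?php
function format_trace()
{
    $lines = array();
    foreach (debug_backtrace() as $i => $frame) {
        $lines[] = sprintf('#%d %s%s%s() called at %s:%d',
            $i,
            isset($frame['class']) ? $frame['class'] : '',
            isset($frame['type']) ? $frame['type'] : '',
            $frame['function'],
            isset($frame['file']) ? $frame['file'] : '(unknown)',
            isset($frame['line']) ? $frame['line'] : 0);
    }
    // The first entry is the call to format_trace() itself; an error
    // handler would normally discard it before recording the rest.
    return implode("\n", $lines);
}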
There are a number of other PHP options that can further refine how errors are handled, but those just described form the primary tool box that we need for building a solid framework.

Database Considerations for PHP 5 CMS

Packt
17 Aug 2010
10 min read
(For more resources on PHP, see here.)

The problem

Building methods that:

Handle common patterns of data manipulation securely and efficiently
Help ease the database changes needed as data requirements evolve
Provide powerful data objects at low cost

Discussion and considerations

Relational databases provide an effective and readily available means to store data. Once established, they normally behave consistently and reliably, making them easier to use than file systems. And clearly a database can do much more than a simple file system!

Efficiency can quickly become an issue, both in relation to how often requests are made to a database and how long queries take. One way to offset the cost of database queries is to use a cache at some stage in the processing. Whatever the framework does, a major factor will always be the care developers of extensions take over the design of table structures and software; the construction of SQL can also make a big difference. Examples included here have been assiduously optimized so far as the author is capable, although suggestions for further improvement are always welcome!

Web applications are typically much less mature than more traditional data processing systems. This stems from factors such as speed of development and deployment. Also, techniques that are effective for programs that run for a relatively long time do not make sense for the brief processing that is applied to a single website request. For example, although PHP allows persistent database connections, thereby reducing the cost of making a fresh connection for each request, it is generally considered unwise to use this option, because it is liable to create large numbers of dormant processes and slow down database operations excessively. Likewise, prepared statements have advantages for performance and possibly security, but are more laborious to implement, so the advantages are diluted in a situation where a statement cannot be used more than once.

Perhaps even more than performance, security is an issue for web development, and there are well known routes for attacking databases. They need to be carefully blocked. The primary goal of a framework is to make further development easy. Writing web software frequently involves the same patterns of database access, and a framework can help a lot by implementing methods at a higher level than the basic PHP database access functions.

In an ideal world, an object-oriented system is developed entirely on the basis of OO principles. But if no attention is paid to how the objects will be stored, problems arise. An object database has obvious appeal but, for a variety of reasons, such databases are not widely used. Web applications have to be pragmatic, and so the aim pursued here is the creation of database designs that occasionally ignore strict relational principles, and objects that are sometimes simpler than idealized designs might suggest. The benefit of making these compromises is that it becomes practical to achieve a useful correspondence between database rows and PHP objects. It is possible that PHP Data Objects (PDO) will become very important in this area, but it is a relatively new development. Use of PDO is likely to pick up gradually as it becomes more commonly found in typical web hosting, and as developers get to understand what it can offer. For the time being, the safest approach seems to be for the framework to provide classes on which effective data objects can be built. A great deal can be achieved using this technique.
Database dependency

Lest this section create too much disappointment, let me say at the outset that this article does not provide any help with achieving database independence. The best that can be done here is to explain why not, and what can be done to limit dependency.

Nowadays, the most popular kind of database employs the relational model. All relational database systems implement the same theoretical principles, and even use more or less the same structured query language. People use products from different vendors for an immense variety of reasons, some better than others. For web development, MySQL is very widely available, although PostgreSQL is another highly regarded database system that is available without cost. There are a number of well-known proprietary systems, and existing databases often contain valuable information, which motivates attempts to link them into CMS implementations. In this situation, there are frequent requests for web software to become database independent. There are, sadly, practical obstacles to achieving this.

It is conceptually simple to provide the mechanics of access to a variety of different database systems, although the work involved is laborious. The result can be cumbersome, too. But the biggest problem is that SQL statements are inclined to vary across different systems. It is easy in theory to assert that only the common core of SQL that works on all database systems should be used. The serious obstacle here is that very few developers are knowledgeable about what comprises the common core. ANSI SQL might be thought to provide a system neutral language, but then not all of ANSI SQL is implemented by every system. So the fact is that developers become expert in one particular database system, or at best a handful.

Skilled developers are conscious of the standardization issue, and where there is a choice, they will prefer to write according to standards. For example, it is better to write:

SELECT username, userid, count(userid) AS number
FROM aliro_session AS s
INNER JOIN aliro_session_data AS d ON s.session_id = d.session_id
WHERE isadmin = 0
GROUP BY userid

rather than:

SELECT username, userid, count(userid) AS number
FROM aliro_session AS s, aliro_session_data AS d
WHERE s.session_id = d.session_id AND isadmin = 0
GROUP BY userid

This is because it makes the nature of the query clearer, and also because it is less vulnerable to detailed syntax variations across database systems.

Use of extensions that are only available in some database systems is a major problem for query standardization. Again, it is easy while theorizing to deplore the use of non-standard extensions. In practice, some of them are so tempting that few developers resist them. An older MySQL extension was the REPLACE command, which would either insert or update data depending on whether a matching key was already present in the database. This is now discouraged on the grounds that it achieves its result by deleting any matching data before doing an insertion, which can have adverse effects on linked foreign keys. But the newer INSERT ... ON DUPLICATE KEY UPDATE construction provides a very neat, efficient way to handle the case where data needs to go into the database allowing for what is already there. It is more efficient in every way than trying to read before choosing between INSERT and UPDATE, and also avoids the need for a transaction.

Similarly, there is no standard way to obtain a slice of a result set, for example starting with the eleventh item and comprising the next ten items. Yet this is exactly the operation that is needed to efficiently populate the second page of a list of items, ten per page. The MySQL LIMIT extension, with its optional offset, is ideal for this purpose.
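As a minimal illustration of these two MySQL extensions (the tables, columns, and $connection here are invented for the example, not taken from Aliro):

<?php
// Record a hit, creating the row the first time an item is seen and
// incrementing the counter thereafter: one statement, no prior read,
// and no transaction needed.
$sql = "INSERT INTO hit_counter (item_id, hits)
        VALUES (42, 1)
        ON DUPLICATE KEY UPDATE hits = hits + 1";
mysqli_query($connection, $sql);

// Fetch the second page of a ten-per-page list: LIMIT offset, count
// skips the first ten rows and returns the next ten.
$sql = "SELECT id, title FROM items ORDER BY title LIMIT 10, 10";
$result = mysqli_query($connection, $sql);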
Because of these practical issues, independence of database systems remains a desirable goal that is rarely fully achieved. The most practical policy seems to be to avoid dependencies wherever this is possible at reasonable cost.

The role of the database

We already noted that a database can be thought of as uncontrolled global data, assuming the database connection is generally available. So there should be policies on database access to prevent this becoming a liability. One policy adopted by Aliro is to use two distinct databases. The "core" database is reserved for tables that are needed by the basic framework of the CMS. Other tables, including those created by extensions to the CMS framework, use the "general" database.

Although it is difficult to enforce restrictions, one policy that is immediately attractive is that the core database should never be accessed by extensions. How data is stored is an implementation matter for the various classes that make up the framework, and a selection of public methods should make up the public interface. Confining access to those public methods that constitute the API for the framework leaves open the possibility of developing the internal mechanisms with little or no change to the API. If the framework does not provide the information needed by extensions, then its API needs further development; the solution should not be direct access to the core database.

Much the same applies to the general database, except that it may contain tables that are intended to be part of an API. By and large, extensions should restrict their database operations to their own tables, and provide object methods to implement interfaces across extensions. This is especially so for write operations, but should usually apply to all database operations.

Level of database abstraction

There have been some clues earlier in this article, but it is worth squarely addressing the question of how far the CMS database classes should go in insulating other classes from the database. All of the discussions here are based on the idea that currently the best available style of development is object oriented. But we have already decided that using a true object database is not usually a practical option for web development.

The next option to consider is building a layer to provide an object-relational transformation, so that outside of the database classes, nobody needs to deal with purely relational concepts or with SQL. An example of a framework that does this is Propel, which can be found at http://propel.phpdb.org/trac/. While developments of this kind are interesting and attractive in principle, I am not convinced that they provide an acceptable level of performance and flexibility for current CMS developments. There can be severe overheads on object-relational operations, and manual intervention is likely to be necessary if high performance is a goal. For that reason, it seems that for some while yet, CMS developments will be based on more direct use of a relational database.
Another complicating factor is the limitations of PHP in respect of static methods, which are obliged to operate within the environment of the class in which they are declared, irrespective of the class that was invoked in the call. This constraint is lifted in PHP 5.3 but, at the time of writing, reliance on PHP 5.3 would be premature, as it has not yet found its way into most stable software distributions. With more flexibility in the use of static methods and properties, it would be possible to create a better framework of database-related properties.

Given what is currently practical, and given experience of what is actually useful in the development of applications to run within a CMS framework, the realistic goals are as follows:

To create a database object that connects, possibly through a choice of different connectors, to a particular database and provides the ability to run SQL queries
To enable the creation of objects that correspond to database rows and have the ability to load themselves with data or to store themselves in the database

Some operations, such as the update of a single row, are best achieved through the use of a database row object. Others, such as deletion, are often applied to a number of rows, chosen from a list by the user, and are best effected through a SQL query.

You can obtain powerful code for achieving the automatic creation of HTML by downloading the full Aliro project. Unfortunately, experience in use has been disappointing. Often, so much customization of the automated code is required that the gains are nullified, and the automation becomes just an overhead. This topic is therefore given little emphasis.


URL Shorteners – Designing the TinyURL Clone with Ruby

Packt
16 Aug 2010
12 min read
(For more resources on Ruby, see here.)

We start off with an easy application: a simple yet very useful Internet application, the URL shortener. We will take a quick tour of URL shorteners before jumping into the design of a simple URL shortener, followed by an in-depth discussion of how we clone our own URL shortener, Tinyclone.

All about URL shorteners

Internet applications don't always need to be full of features or cover all aspects of your Internet life to be successful. Sometimes it's OK to be simple and just focus on providing a single feature. It doesn't even need to be earth-shatteringly important—it should be just useful enough for its target users. The archetypical and probably most extreme example of this is the URL shortening application, or URL shortener. This service offers a very simple but surprisingly useful feature: it provides a shorter URL that represents a normally longer URL. When a user goes to the short URL, he will be redirected to the original URL.

For this simple feature, the top three most popular URL shortening services (TinyURL, bit.ly, and is.gd) collectively had about 11 million unique visitors, 110 million page views, and a reach of about one percent of the Internet in June 2009. In 2008, the most popular URL shortener at that time, TinyURL, was made one of Time Magazine's Top 50 Best Websites.

The idea of shortening long and unwieldy URLs into shorter, more manageable ones has been around for some time. One of the earlier attempts to make it a public service is Make A Shorter Link (MASL), which appeared around July 2001. MASL did just that, though the usefulness was debatable, as the domain name was long and the shortened URL could potentially be longer than the original. However, the pioneering site that popularized this concept (and subsequently bought over MASL and a few other similar sites) is TinyURL. TinyURL was launched in January 2002 by Kevin Gilbertson to help him link directly to newsgroup postings, which frequently had long URLs. It rapidly became one of the most popular URL shorteners around. By 2008, an estimated 100 similar services had come into existence in various forms.

URLs, or Uniform Resource Locators, are resource identifiers that specify where identified resources are available and how they can be retrieved. A popular term for URL is a Web address. Every URL is made up of the following:

<resource type>://<username>:<password>@<domain>:<port>/<file path name>?<query string>#<anchor>

Not all parts of the URL are required by a browser. If the resource type is missing, it is normally assumed to be http; if the port is missing, it is normally assumed to be 80 (for http). The username, password, query string, and anchor components are optional.

Initially, TinyURL and similar types of URL shorteners focused on simply providing a short representative URL to their users. Naturally, the competitive breadth for shortening URLs was rather, well, short. Many chose TinyURL over MASL because TinyURL had a shorter and easier to remember domain name (http://tinyurl.com over http://makeashorterlink.com). Subsequent competition over this space intensified and extended to providing various other features, including custom short URLs (TinyURL, bit.ly), analysis of click-through statistics (bit.ly), advertisements (Adjix, Linkbee), preview pages (TinyURL, is.gd), and so on.

The explosive growth of Twitter (from June 2008 to June 2009, Twitter grew 1,164%) opened a new chapter for URL shorteners.
Twitter chose a limit of 140 characters for each tweet to accommodate the 160 characters in an SMS message (Twitter was invented as a service for people to use SMS to tell small groups what they are doing). With Twitter's popularity skyrocketing came the need for users to shorten URLs to fit into the 140 character limit. Originally, Twitter used TinyURL as its default URL shortener, and this triggered a steep climb in the usage of TinyURL during the early days of Twitter. However, in May 2009, bit.ly replaced TinyURL as Twitter's default URL shortener, and the impact was immediate. For the first time in that period, TinyURL recorded a drop in the number of users, falling from 6.1 million to 5.3 million unique users in May 2009, while bit.ly jumped from 1.8 million to 2.9 million almost overnight.

That's not the end of the story though. In April 2010, during Twitter's Chirp conference, Twitter announced its own URL shortener (twt.tl). As of writing, it is still unclear how the market share will pan out, but it is clear that URL shorteners have good value and everyone is jumping into this market. In December 2009, Google came up with its own two URL shorteners, goo.gl and youtu.be. Amazon.com (amzn.to), Facebook (fb.me), and Wordpress (wp.me) all have their own URL shorteners as well.

Next, let's do a quick review of why URL shorteners are so popular and why they attract criticism as well. Here's a quick summary of the benefits:

Create short and easy to remember URLs
Allow passing of links in character-limited services such as Twitter
Create vanity URLs for marketing purposes
Can verbally pass URLs

The most obvious benefit of having a shortened URL is that it's, well, short. A typical example of a URL gone bad is a link to a location in Google Maps:

http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=singapore+flyer&vps=1&jsv=169c&sll=1.352083,103.819836&sspn=0.68645,1.382904&g=singapore&ie=UTF8&latlng=8354962237652576151&ei=Shh3SsSRDpb4vAPsxLS3BQ&cd=1&usq=Singapore+Flyer

Such URLs are meant to be clicked on, as it is virtually impossible to pass them around verbally. It might be justifiable if the URL is cut and pasted in documents, but sometimes certain applications will truncate parts of the URL while processing, making a long URL difficult to click on and even producing erroneous links. In fact, this was the main motivation in creating most of the earlier URL shorteners—older email clients tend to truncate URLs when they are more than 80 characters.

Short links are of course crucial in character-limited message passing systems like Twitter, Plurk, and SMS; passing long URLs is impossible without URL shorteners.

Short URLs are very useful as vanity URLs, where, for example, the Google Maps link above could be shortened to http://tinyurl.com/singapore-flyer. Such vanity URLs are useful when passing from one person to another, or even when used in mass marketing. Sticking to the maps theme in our examples, if you want to give a Google Maps link to your restaurant and put it up in catalogs and brochures, you will not want to give the long URL. Instead, you would want a nice, descriptive, and short URL. Short URLs are also useful in cases of accessibility. For example, reading out the Google Maps link above is almost impossible, but reading out the TinyURL link (vanity or otherwise) is much easier in comparison.

Many popular URL shorteners also provide some form of statistics and analytics on the usage of the links.
This feature allows you to track your short URLs to see how many clicks they received and what kind of patterns can be derived from the clicks. Although the metrics are usually not advanced, they do provide basic usefulness.

On the other hand, URL shorteners have their fair share of criticisms as well. Here is a summary of the bad side of URL shorteners:

They provide opportunities to spammers because they hide the original URLs
They can be unreliable if you depend on them for redirection
Undesirable or vulgar short URLs can be created

URL shorteners have security issues. When a URL shortener creates a short URL, it effectively hides the original link, and this gives spammers and other abusers an opportunity to redirect users to their sites. One relatively mild form of such abuse is 'rickrolling'. Rickrolling uses a classic bait-and-switch trick to redirect users to a Rick Astley music video of Never Gonna Give You Up. For example, you might expect the URL http://tinyurl.com/singapore-flyer to go to Google Maps, but when you click on it, you might be rickrolled and redirected to that Rick Astley music video instead. Also, because most short URLs are not customized, it is quite difficult to tell from the URL alone whether the link is genuine. Many prominent websites and applications have such concerns, including MySpace, Flickr, and even Microsoft Live Messenger, and have at one time or another banned or restricted the usage of TinyURL because of this problem. To combat spammers and fraud, URL shortening services came up with the idea of link previews, which let users preview a short URL before it redirects them to the long URL. For example, TinyURL shows the user the long URL on a preview page and requires the user to explicitly go on to the long URL.

Another problem is performance and reliability. When you access a website, your browser goes to a few DNS servers to resolve the address, and the URL shortener adds another layer of indirection. While DNS servers have redundancy and failsafe measures, there is no such assurance from URL shorteners. If the traffic to a particular link becomes too high, will the shortening service provider be able to add more servers to improve performance, or even prevent a meltdown altogether? The problem, of course, lies in over-dependency on the shortening service.

Finally, a negative side effect of random or even customized short URLs is that undesirable, vulgar, or embarrassing short URLs can be created. Early on, TinyURL's short URLs were predictable, and this was exploited: embarrassing short URLs were made to redirect to the White House websites of then U.S. Vice President Dick Cheney and Second Lady Lynne Cheney.

We have just covered significant ground on URL shorteners. If you are a programmer, you might be wondering, "Why do I need to know all this? I am really interested in the programming bits; the rest is just fluff to me." Background information on the application we want to clone is very important. It tells us why that application exists in the first place and gives us an idea of its main features (what makes it popular). It also tells us what problems the application faces, so that we are aware of them while programming, or can even avoid them altogether. This matters when we come to the design of the application. Finally, it gives us a better appreciation of the application, and of the motivations and issues faced by the product and technical people behind the application we wish to clone.
Main features

Next, let's list the features of a URL shortener. The intention in this section is to distill the basic features of the application, features that define the service. However, as much as possible we also want to explore some additional features that extend the application and are provided by many of its competitors. Most importantly, the features here are mostly those of the most popular and definitive web application in the category; in this article, that is TinyURL. These are the main features of a URL shortener:

Users can create a short URL that represents a long URL
Users who visit the short URL are redirected to the long URL
Users can preview a short URL to see what the long URL is
Users can provide a custom URL to represent the long URL
Undesirable words are not allowed in the short URL
Users are able to view various statistics involving the short URL, including the number of clicks and where the clicks come from (optional, not in TinyURL)

URL shorteners are simple web applications, and the one that we will design and build will also be simple.

Designing the clone

Cloning TinyURL is relatively simple, but there is some thought behind the design of the application. We will be building a clone of TinyURL called Tinyclone, which will be hosted at the domain http://tinyclone.saush.com.

Creating a short URL for each long URL

The domain of the short URL is fixed. What's left is the file path name. We need to represent the long URL with a unique file path name (a key), one for each long URL. This means we need to persist the relationship between the key and the URL. One way to associate the long URL with a unique key is to hash the long URL and use the resulting hash as the unique key. However, the resulting hash might be long, and hashing functions can be slow. The faster and easier way is to use a relational database's auto-incremented row ID as the unique key; the database will ensure the uniqueness of the ID. However, the running row ID is a base-10 number. Representing a million URLs would already require 7 digits, and representing 1 billion would take 10. In order to keep the number of characters smaller, we need a larger base numbering system. In this clone we will use base 36, made up of the 26 letters of the alphabet (case insensitive) and the 10 digits. Using this system, we will need only 4 characters to represent 1 million URLs:

1,000,000 base 36 = lfls

And 1 billion URLs can be represented in just 6 characters:

1,000,000,000 base 36 = gjdgxs
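Ruby makes this conversion trivial, since Integer#to_s and String#to_i both accept a radix argument. Here is a minimal sketch of the idea (the method names encode and decode are our own, not necessarily those used in Tinyclone's final code):

# Encode a database row ID as a base-36 key, and decode a key back to an ID.
def encode(id)
  id.to_s(36)
end

def decode(key)
  key.to_i(36)
end

puts encode(1_000_000)     # => "lfls" (4 characters)
puts encode(1_000_000_000) # => "gjdgxs" (6 characters)
puts decode("lfls")        # => 1000000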


Q Replication Components in IBM Replication Server

Packt
16 Aug 2010
8 min read
The individual stages for the different layers are shown in the following diagram:

The DB2 database layer

The first layer is the DB2 database layer, which involves the following tasks:

For unidirectional replication and all replication scenarios that use unidirectional replication as the base, we need to enable the source database (but not the target database) for archive logging.
For multi-directional replication, all the source and target databases need to be enabled for archive logging.
We need to identify which tables we want to replicate. One of the steps is to set the DATA CAPTURE CHANGES flag for each source table, which will be done automatically when the Q subscription is created. This setting of the flag will affect the minimum point-in-time recovery value for the table space containing the table, which should be carefully noted if table space recoveries are performed.

Before moving on to the WebSphere MQ layer, let's quickly look at the compatibility requirements for the database name, the table name, and the column names. We will also discuss whether or not we need unique indexes on the source and target tables.

Database/table/column name compatibility

In Q replication, the source and target database names and table names do not have to match on all systems. The database name is specified when the control tables are created. The source and target table names are specified in the Q subscription definition.

Now let's look at whether or not we need unique indexes on the source and target tables. We do not need to be able to identify unique rows in the source table, but we do need to be able to do this in the target table. Therefore, the target table should have one of:

Primary key
Unique constraint
Unique index

If none of these exist, then Q Apply will apply updates using all columns. However, the source table must have the same constraints as the target table, so any constraints that exist at the target must also exist at the source, as shown in the following diagram:

The WebSphere MQ layer

This is the second layer we should install and test—if this layer does not work, then Q replication will not work! We can either install the WebSphere MQ Server code or the WebSphere MQ Client code. Throughout this book, we will be working with the WebSphere MQ Server code. If we are replicating between two servers, then we need to install WebSphere MQ Server on both servers. If we are installing WebSphere MQ Server on UNIX, then during the installation process a user ID and group called mqm are created. If we, as DBAs, want to issue MQ commands, then we need to get our user ID added to the mqm group.

Assuming that WebSphere MQ Server has been successfully installed, we now need to create the Queue Managers and the queues that are needed for Q replication. This section also includes tests that we can perform to check that the MQ installation and setup are correct. The following diagram shows the MQ objects that need to be created for unidirectional replication:

The following figure shows the MQ objects that need to be created for bidirectional replication:

There is a mixture of Local Queues (QLOCAL/QL) and Remote Queues (QREMOTE/QR), in addition to Transmission Queues (XMITQ) and channels. Once we have successfully completed the installation and testing of WebSphere MQ, we can move on to the next layer—the Q replication layer.
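Before we do, here is a rough command-line sketch of the preparation steps for the two layers just described. Every object name in it (the SRCDB database, the MYSCHEMA.ORDERS table, Queue Managers QMA and QMB, and the host targethost) is an assumption for illustration only; substitute your own names and check the exact options against your DB2 and MQ versions.

# DB2 layer: enable archive logging on the source database. The database
# then enters backup-pending state, so take a backup immediately afterwards.
db2 UPDATE DB CFG FOR SRCDB USING LOGARCHMETH1 DISK:/db2/archivelogs
db2 BACKUP DB SRCDB
# Set the DATA CAPTURE CHANGES flag on a source table (normally done
# automatically when the Q subscription is created):
db2 "ALTER TABLE MYSCHEMA.ORDERS DATA CAPTURE CHANGES"

# WebSphere MQ layer: create and start the source Queue Manager, then
# define the unidirectional objects on it (queue names from the figures).
crtmqm QMA
strmqm QMA
runmqsc QMA <<'EOF'
DEFINE QLOCAL('CAPA.ADMINQ')
DEFINE QLOCAL('QMB.XMITQ') USAGE(XMITQ)
DEFINE QREMOTE('CAPA.TO.APPB.SENDQ.REMOTE') +
       RNAME('CAPA.TO.APPB.RECVQ') RQMNAME('QMB') XMITQ('QMB.XMITQ')
DEFINE CHANNEL('QMA.TO.QMB') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('targethost(1414)') XMITQ('QMB.XMITQ')
EOF

A matching receiver channel and the local CAPA.TO.APPB.RECVQ queue would then be defined on the target Queue Manager, QMB.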
The Q replication layer

This is the third and final layer, which comprises the following steps:

Create the replication control tables on the source and target servers.
Create the transport definitions. What we mean by this is that we somehow need to tell Q replication what the source and target table names are, what rows/columns we want to replicate, and which Queue Managers and queues to use.

Some of the terms that are covered in this section are:

Logical table
Replication Queue Map
Q subscription
Subscription group (SUBGROUP)

What is a logical table?

In Q replication, we have the concept of a logical table, which is the term used to refer to both the source and target tables in one statement. An example in a peer-to-peer three-way scenario is shown in the following diagram, where the logical table is made up of tables TABA, TABB, and TABC:

What is a Replication/Publication Queue Map?

The first part of the transport definitions mentioned earlier is the definition of a Queue Map, which identifies the WebSphere MQ queues on both servers that are used to communicate between the servers. In Q replication, the Queue Map is called a Replication Queue Map, and in Event Publishing the Queue Map is called a Publication Queue Map.

Let's first look at Replication Queue Maps (RQMs). RQMs are used by Q Capture and Q Apply to communicate. This communication consists of Q Capture sending Q Apply rows to apply, and Q Apply sending administration messages back to Q Capture. Each RQM is made up of three queues: a queue on the local server called the Send Queue (SENDQ), and two queues on the remote server—a Receive Queue (RECVQ) and an Administration Queue (ADMINQ), as shown in the preceding figures showing the different queues. An RQM can only contain one each of SENDQ, RECVQ, and ADMINQ. The SENDQ is the queue that Q Capture uses to send source data and informational messages. The RECVQ is the queue that Q Apply reads for transactions to apply to the target table(s). The ADMINQ is the queue that Q Apply uses to send control messages back to Q Capture. So, using the queues in the first "Queues" figure, the Replication Queue Map definition would be:

Send Queue (SENDQ): CAPA.TO.APPB.SENDQ.REMOTE on Source
Receive Queue (RECVQ): CAPA.TO.APPB.RECVQ on Target
Administration Queue (ADMINQ): CAPA.ADMINQ.REMOTE on Target

Now let's look at Publication Queue Maps (PQMs). PQMs are used in Event Publishing and are similar to RQMs, in that they define the WebSphere MQ queues needed to transmit messages between two servers. The big difference is that, because Event Publishing does not have a Q Apply component, the definition of a PQM is made up of only a Send Queue.

What is a Q subscription?

The second part of the transport definitions is a definition called a Q subscription, which defines a single source/target combination and which Replication Queue Map to use for this combination. We set up one Q subscription for each source/target combination. Each Q subscription needs a Replication Queue Map, so we need to make sure we have one defined before trying to create a Q subscription. Note that if we are using the Replication Center, then we can choose to create a Q subscription even though an RQM does not exist. The wizard will walk you through creating the RQM at the point at which it is needed.
The structure of a Q subscription is made up of a source section and a target section, and we have to specify:

The Replication Queue Map
The source and target table
The type of target table
The type of conflict detection and action to be used
The type of initial load, if any, that should be performed

If we define a Q subscription for unidirectional replication, then we can choose the name of the Q subscription—for any other type of replication we cannot. Q replication does not have the concept of a subscription set as there is in SQL Replication, where the subscription set holds all the tables that are related using referential integrity. In Q replication, we have to ensure that all the tables that are related through referential integrity use the same Replication Queue Map, which will enable Q Apply to apply the changes to the target tables in the correct sequence. In the following diagram, Q subscription 1 uses RQM1, Q subscription 2 also uses RQM1, and Q subscription 3 uses RQM3:

What is a subscription group?

A subscription group is the name for a collection of Q subscriptions that are involved in multi-directional replication, and it is set using the SET SUBGROUP command.

Q subscription activation

In unidirectional, bidirectional, and peer-to-peer two-way replication, when Q Capture and Q Apply start, the Q subscription can be activated automatically (if that option was specified). For peer-to-peer three-way replication and higher, when Q Capture and Q Apply are started, only a subset of the Q subscriptions in the subscription group starts automatically, so we need to manually start the remaining Q subscriptions.

Easy guide to understand WCF in Visual Studio 2008 SP1 and Visual Studio 2010 Express

Packt
12 Aug 2010
4 min read
(For more resources on Microsoft, see here.)

Creating your first WCF application in Visual Studio 2008

You start creating a WCF project by creating a new project from File | New | Project.... This opens the New Project window. You can see that there are four different templates available; we will be using the WCF Service Library template. Change the default name, provide a name for the project (here, JayWcf01), and click OK. The project JayWcf01 gets created with the folder structure shown in the next image.

If you were to expand the References node, you would notice that System.ServiceModel is already referenced. If it is not, for some reason, you can bring it in by using the Add Reference... window, which is displayed when you right-click the project in the Solution Explorer.

IService1.vb is a service interface file, as shown in the next listing. It defines the service contract and the operations expected of the service. If you change the interface name "IService1" here, you must also update the reference to "IService1" in App.config.

<ServiceContract()> _
Public Interface IService1

    <OperationContract()> _
    Function GetData(ByVal value As Integer) As String

    <OperationContract()> _
    Function GetDataUsingDataContract(ByVal composite As CompositeType) As CompositeType

    ' TODO: Add your service operations here
End Interface

' Use a data contract as illustrated in the sample below to add
' composite types to service operations.
<DataContract()> _
Public Class CompositeType

    Private boolValueField As Boolean
    Private stringValueField As String

    <DataMember()> _
    Public Property BoolValue() As Boolean
        Get
            Return Me.boolValueField
        End Get
        Set(ByVal value As Boolean)
            Me.boolValueField = value
        End Set
    End Property

    <DataMember()> _
    Public Property StringValue() As String
        Get
            Return Me.stringValueField
        End Get
        Set(ByVal value As String)
            Me.stringValueField = value
        End Set
    End Property
End Class

The Service Contract is a contract that will be agreed to between the Client and the Server. Both the Client and the Server should be working with the same service contract; the one shown above is on the server. Inside the service, data is handled as simple types (for example, GetData) or complex types (for example, GetDataUsingDataContract). Outside the service, however, these are handled as XML Schema Definitions. WCF data contracts provide a mapping between the data defined in the code and the XML Schema defined by the W3C, the standards organization.

The service performed when the terms of the contract are properly adhered to is in the listing of the Service1.vb file shown here:

' NOTE: If you change the class name "Service1" here, you must also
' update the reference to "Service1" in App.config.
Public Class Service1
    Implements IService1

    Public Function GetData(ByVal value As Integer) As String _
        Implements IService1.GetData
        Return String.Format("You entered: {0}", value)
    End Function

    Public Function GetDataUsingDataContract(ByVal composite As CompositeType) _
        As CompositeType Implements IService1.GetDataUsingDataContract
        If composite.BoolValue Then
            composite.StringValue = (composite.StringValue & "Suffix")
        End If
        Return composite
    End Function
End Class

Service1 defines the two methods of the service by way of Functions. GetData accepts a number and returns a string; for example, if the Client enters a value of 50, the Server's response will be "You entered: 50". The function GetDataUsingDataContract takes an input consisting of a Boolean and a string, and returns the Boolean and the string with 'Suffix' appended when the Boolean is True.
JayWcf01 is a complete program with a default example contract, IService1, and a defined service, Service1; it is complete in itself. It is good practice to provide your own names for these objects; nevertheless, the default names are accepted in this demo. In what follows, we test the program as-is and then slightly modify the contract and test it again. The testing in the next section will invoke a built-in client; later on, we will publish the service to localhost, which is an IIS 7 web server.

How to test this program

The program has a valid pair of contract and service, and we should be able to test the service. Windows Communication Foundation allows Visual Studio 2008 (and Visual Studio 2010 Express) to launch a host to test the service with a client. Build the program and, after the build succeeds, hit F5. The WcfSvcHost is spawned and stays in the taskbar, as shown. You can click WcfSvcHost to display the WCF Service Host window, which pops up as shown. The host gets started as shown here; the service is hosted on the development server. This is immediately followed by the WCF Test Client user interface popping up, as shown. In this harness you can test the service.
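If you would rather exercise the service from code than through the WCF Test Client, a console client can be sketched as follows. This is a minimal sketch, not part of the generated project: the endpoint address follows the default base-address pattern the WCF Service Library template writes to App.config (verify yours before running), the binding is assumed to be WSHttpBinding, and the client is assumed to have access to the IService1 contract definition.

Imports System
Imports System.ServiceModel

Module TestClient
    Sub Main()
        ' Address assumed from the template's App.config; verify before running.
        Dim address As New EndpointAddress( _
            "http://localhost:8732/Design_Time_Addresses/JayWcf01/Service1/")
        Dim factory As New ChannelFactory(Of IService1)(New WSHttpBinding(), address)
        Dim proxy As IService1 = factory.CreateChannel()
        Console.WriteLine(proxy.GetData(50)) ' expects "You entered: 50"
        factory.Close()
    End Sub
End Module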


Drupal 7 Preview

Packt
11 Aug 2010
3 min read
You'll need a localhost LAMP or XAMPP environment to follow along with the examples here. If you don't have one set up, I recommend using the Acquia Stack Drupal Installer: http://acquia.com/downloads. Once your testing environment is configured, download Drupal 7: http://drupal.org/drupal-7.0-alpha6.

Installing D7

Save the installer to your localhost Drupal /sites folder and extract it. Set up your MySQL database using your preferred method. Note to developers: D7's new database abstraction layer will theoretically support multiple database types, including SQLite, PostgreSQL, MSSQL, and Oracle, so if you are running Oracle you may be able to use D7.

Now load the installer page in your browser (note that I renamed my extracted folder to drupal7): http://localhost:8082/drupal7/install.php. The install process is about the same as D6's: you still need to copy your /sites/default/default.settings.php file and rename it to settings.php. Also make sure to create your /files folder, and make sure the file has write permissions for the install process (a command-line sketch of these steps appears at the end of this article). Once you have done this and have your database created, it's time to run the installer.

One immediate difference with the installer is that D7 now offers you a Standard or Minimal install profile. Standard installs D7 with the common Drupal functionality and features you are familiar with; Minimal is the choice for developers who want only the core Drupal functionality enabled. I'll leave it set to the Standard profile. Navigate through the installer screens, choosing a language and adding your database information.

Enhancements

With D7 installed, what are the immediately noticeable enhancements? The overall look and feel of the administrative interface now uses overlay windows to present links to sections and content. Navigation in the admin interface now runs horizontally along the top of the site. Directly under the toolbar navigation is a shortcut link navigation; you can customize this by adding your own shortcuts pointing to various admin functionality.

In the toolbar, Content points to your content lists. Structure contains links to Blocks, Content types, Menus, and Taxonomy. CCK is now built into Drupal 7, so you can create custom content types and manage custom fields without having to install modules. If you want to restore the user interface to look more like D6, you can do so by disabling the Overlay module or by tweaking role permissions for the Overlay module.

Content Types

Two content types are enabled with Drupal 7 core. Article replaces the D6 Story type, and Basic Page replaces the D6 Page type. Developers hope these more accurate names will help new Drupal users understand how to add content easily to their site.
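As promised in the installation section, here is a command-line sketch of the pre-install file steps. The web-root path is an assumption based on the renamed drupal7 folder; adjust it to your own LAMP/XAMPP layout, and tighten the permissions again once the installer has finished.

# Copy the default settings file and create the files folder,
# then give the installer write access to both.
cd /path/to/webroot/drupal7/sites/default
cp default.settings.php settings.php
mkdir files
chmod a+w settings.php files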