
How-To Tutorials - CMS & E-Commerce

830 Articles

Setting Payment Model in OpenCart

Packt
25 Aug 2010
8 min read
(For more resources on OpenCart, see here.)

Shopping cart system

The shopping cart is special software that allows customers to add products to, or remove them from, a basket built from the store catalogue, and then complete the order. The shopping cart also automatically updates the total amount the customer will pay as products are added or removed. OpenCart provides a built-in shopping cart system with all of this functionality, so you don't need to install or buy separate shopping cart software.

Merchant account

A merchant account is a special type of account that differs from a usual bank account; its sole purpose is to accept credit card payments. Opening a merchant account requires a contract with the credit card network providers. Authorized card payments on the store are transferred to the merchant account. Then, as the merchant, we can transfer the amount from the merchant account to a bank account (checking account). Since opening a merchant account can be a tiresome process for most businesses and individuals, there are various online businesses that provide this functionality. Curious readers can learn the details of merchant accounts at the following links:

http://en.wikipedia.org/wiki/Merchant_account
http://www.merchantaccount.com/

Payment gateway

A payment gateway is the online analogue of the physical credit card processing terminal we find in retail shops. Its function is to process credit card information and return the results to the store system. You can picture the payment gateway as the element sitting between the online store and the credit card network. The software part of this service is included in OpenCart, but we will have to use one of the payment gateway services.

Understanding online credit card processing

The following diagram shows the standard credit card processing flow in detail. Note that it is not essential to know every detail of the steps shown with a red background color. These parts are executed on our behalf by the payment system we use, so they are isolated from both the store and the customer. PayPal, which we will shortly study in detail, is one such system. Let's walk through the flowchart step by step to understand the whole process:

1. A customer reaches the checkout page after filling the shopping cart with products.
2. He/she enters the credit card information and clicks on the Pay button.
3. The store checkout page securely sends these details, along with the total amount, to the payment gateway.
4. The payment gateway starts a series of processes. First of all, the information is passed to the merchant's bank processor, where the merchant account was opened.
5. The processor sends the information to the credit card network; Visa and MasterCard are two of the most popular credit card networks.
6. The credit card network checks the validity of the credit card and forwards the information to the bank that issued the customer's card. The bank rejects or approves the transaction and sends the result back to the credit card network.
7. Through the same route in reverse, the payment result is finally returned to the online store with a special code.

All of this happens in a few seconds, and the information flow starting from the payment gateway is isolated from both the customer and the merchant. This means we don't have to deal with what goes on after sending information to the payment gateway.
As the merchant, we only need the result of the transaction. After the information is processed by the credit card network during Step 6, the transaction funds are transferred to the merchant account by the credit card network, as shown in Step a. The merchant can then transfer the funds from the merchant account to the usual checking bank account, automatically or manually, as shown in Step b.

OpenCart payment methods

The current OpenCart version supports many established payment systems, including PayPal services, Authorize.net, Moneybookers, 2Checkout, and so on, as well as basic payment options such as Cash on Delivery, Bank Transfer, and Check/Money Order. We can also get more payment gateway modules from the OpenCart extensions section by searching in Payment Methods: http://www.opencart.com/index.php?route=extension/extension

We will now briefly look at the most widely used methods and their differences from, and similarities to, each other.

PayPal

PayPal is one of the most popular and easiest-to-use systems for accepting credit cards in an online store. PayPal has two major products that can be used in OpenCart through built-in modules:

PayPal Website Payment Standard
PayPal Website Payment Pro

Both of these products provide payment gateway and merchant account functionality. Let's look at the details of each.

PayPal Website Payment Standard

This is the easiest way to start accepting credit card payments in an online store. For merchants, a simple bank account and a PayPal account are enough to take payments. There are no monthly fees or setup costs charged by PayPal. The only cost is a small fixed percentage taken by PayPal on each transaction, so you should take this into account when pricing the items in the store. Here is the link for the latest commission rates per transaction: http://merchant.paypal.com

When the customer clicks on the checkout button in OpenCart, he/she is redirected to the PayPal site to continue with the payment. As the following sample screenshot shows, a customer can either provide credit card information directly or log in to his/her PayPal account and pay from the account balance. In the next step, after the final review, the user clicks on the Pay Now button. Notice that PayPal automatically localizes the total amount according to the PayPal account owner's currency; in this case, the price is calculated according to the Dollar-Euro exchange rate. After the payment, the PayPal screen shows the result of the payment. The screen doesn't return to the merchant store automatically; there is a Return to Merchant button for that. Finally, the user is informed about the result of the purchase in the OpenCart store.

The main advantage of PayPal Website Payment Standard is that it is easy to implement, and many people online are familiar with using it. One minor disadvantage is that some people may abandon the purchase, since the customer temporarily leaves the store to complete the transaction on the PayPal website.
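Under the hood, the redirection used by Website Payment Standard is essentially a plain HTML form post to PayPal; OpenCart's built-in module generates the equivalent for you. The following is only a hedged sketch of PayPal's classic single-item payment form, with placeholder values for the merchant e-mail, product, and return URL:

    <form action="https://www.paypal.com/cgi-bin/webscr" method="post">
      <!-- _xclick is PayPal's classic single-item payment command -->
      <input type="hidden" name="cmd" value="_xclick" />
      <!-- the merchant's PayPal account e-mail (placeholder) -->
      <input type="hidden" name="business" value="merchant@example.com" />
      <input type="hidden" name="item_name" value="Sample product" />
      <input type="hidden" name="amount" value="19.99" />
      <input type="hidden" name="currency_code" value="USD" />
      <!-- where PayPal sends the customer back afterwards (placeholder) -->
      <input type="hidden" name="return" value="http://www.example.com/index.php?route=checkout/success" />
      <input type="submit" value="Pay with PayPal" />
    </form>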
PayPal Website Payment Pro

This is the paid PayPal solution for an online store, acting as both payment gateway and merchant account. The biggest difference from PayPal Website Payment Standard is that customers do not leave the website for credit card processing. The credit card information is processed entirely within the online store, which is the approach used by most established e-commerce websites; customers will not even know who processes the cards. Unless we add a PayPal logo ourselves, this information is well encapsulated.

Using this method also requires only a bank account and a PayPal account for the merchant. PayPal charges a monthly fee and a one-time setup fee for this service, and individual transactions are also commissioned by PayPal. This is a very professional way of processing credit cards online, but it can have a negative effect on some customers: some customers want to see an indication of trust from the store before making a purchase. So, depending on the store owner's choice, it may be wise to add a PayPal logo and a remark stating that "credit cards are processed by PayPal safely and securely".

For a beginner OpenCart administrator who wants to use PayPal for the online store, it is recommended to gain experience with the free Standard payment option first and then upgrade to the Pro option. We can get more information on the PayPal Website Payment Pro service at: http://merchant.paypal.com

At the time of writing this book, PayPal only charges a fixed monthly fee ($30) and commissions on each transaction; there are no other setup costs or hidden charges.

PayFlow Pro payment gateway

If we already have a merchant account, we don't need to pay extra for one by using PayPal Standard or PayPal Pro. PayFlow Pro is cheaper than the other PayPal services and allows us to accept credit card payments into an existing merchant account. Unfortunately, OpenCart currently does not support it with a built-in module, but both free and paid modules are available from the OpenCart official contributions page at: http://www.opencart.com/index.php?route=extension/extension


WebSphere MQ Sample Programs

Packt
25 Aug 2010
5 min read
(For more resources on IBM, see here.)

WebSphere MQ sample programs—server

There is a whole set of sample programs available to the WebSphere MQ administrator. We are interested in only a couple of them: the program to put a message onto a queue and the program to retrieve a message from a queue. The names of these sample programs depend on whether we are running WebSphere MQ as a server or as a client. There is a server sample program called amqsput, which puts messages onto a queue using the MQPUT call. This sample program comes as part of the WebSphere MQ samples installation package, not as part of the base package. The maximum length of message that we can put onto a queue using the amqsput sample program is 100 bytes. There is a corresponding server sample program to retrieve messages from a queue, called amqsget.

Use the sample programs only when testing that the queues are set up correctly—do NOT use the programs once Q Capture and Q Apply have been started. This is because Q replication uses dense numbering between Q Capture and Q Apply, and if we insert or retrieve a message, the dense numbering will not be maintained and Q Apply will stop. The usual response to this is to cold start Q Capture!

To put a message onto a Queue (amqsput)

The amqsput utility can be invoked from the command line or from within a batch program. If we are invoking the utility from the command line, the format of the command is:

    $ amqsput <Queue> <QM name> < <message>

We would issue this command from the system on which the Queue Manager sits. We have to specify the queue (<Queue>) that we want to put the message on, and the Queue Manager (<QM name>) that controls this queue. We then redirect (<) the message (<message>) into this. An example of the command is:

    $ amqsput CAPA.TO.APPB.SENDQ.REMOTE QMA < hello

We put these commands into an appropriately named batch file (say SYSA_QMA_TESTP_UNI_AB.BAT), which would contain the following:

Batch file—Windows example:

    call "C:\Program Files\IBM\WebSphere MQ\bin\amqsput" CAPA.TO.APPB.SENDQ.REMOTE QMA < QMA_TEST1.TXT

Batch file—UNIX example:

    "/opt/mqm/samp/bin/amqsput" CAPA.TO.APPB.SENDQ.REMOTE QMA < QMA_TEST1.TXT

Here, the QMA_TEST1.TXT file contains the message we want to send. Once we have put a message onto the Send Queue, we need to be able to retrieve it.

To retrieve a message from a Queue (amqsget)

The amqsget utility can be invoked from the command line or from within a batch program. The utility takes 15 seconds to run. We need to specify the Receive Queue that we want to read from and the Queue Manager that the queue belongs to:

    $ amqsget <Queue> <QM name>

An example of the command is shown here:

    $ amqsget CAPA.TO.APPB.RECVQ QMB

If we have correctly set up all the queues, and the Listeners and Channels are running, then when we issue the preceding command, we should see the message we put onto the Send Queue.
We can put the amqsget command into a batch file, as shown next:

Batch file—Windows example:

    @ECHO This takes 15 seconds to run
    call "C:\Program Files\IBM\WebSphere MQ\bin\amqsget" CAPA.TO.APPB.RECVQ QMB
    @ECHO You should see: test1

Batch file—UNIX example:

    echo This takes 15 seconds to run
    "/opt/mqm/samp/bin/amqsget" CAPA.TO.APPB.RECVQ QMB
    echo You should see: test1

Using these examples and putting messages onto the queues in a unidirectional scenario, the "get" message batch file for QMA (SYSA_QMA_TESTG_UNI_BA.BAT) contains:

    @ECHO This takes 15 seconds to run
    call "C:\Program Files\IBM\WebSphere MQ\bin\amqsget" CAPA.ADMINQ QMA
    @ECHO You should see: test2

From CLP-A, run the file as:

    $ SYSA_QMA_TESTG_UNI_BA.BAT

The "get" message batch file for QMB (SYSB_QMB_TESTG_UNI_AB.BAT) contains:

    @ECHO This takes 15 seconds to run
    call "C:\Program Files\IBM\WebSphere MQ\bin\amqsget" CAPA.TO.APPB.RECVQ QMB
    @ECHO You should see: test1

From CLP-B, run the file as:

    $ SYSB_QMB_TESTG_UNI_AB.BAT

To browse a message on a Queue

It is useful to be able to browse a queue, especially when setting up Event Publishing. There are three ways to browse the messages on a Local Queue. We can use the rfhutil utility or the amqsbcg sample program, both of which are WebSphere MQ entities, or we can use the asnqmfmt Q replication command.

Using the WebSphere MQ rfhutil utility: The rfhutil utility is part of the WebSphere MQ support pack available to download from the web—to find the current download website, simply type rfhutil into an internet search engine. The installation is very simple—unzip the file and run the rfhutil.exe file.

Using the WebSphere MQ amqsbcg sample program: The amqsbcg sample program displays messages, including message descriptors:

    $ amqsbcg CAPA.TO.APPB.RECVQ QMB
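Putting amqsput and amqsget together gives a quick end-to-end check of a unidirectional setup. The following is only a hedged sketch for UNIX, assuming the default /opt/mqm sample installation path and the queue names used above:

    #!/bin/sh
    # put a test message on QMA's remote send queue (amqsput reads stdin)
    echo "test1" | /opt/mqm/samp/bin/amqsput CAPA.TO.APPB.SENDQ.REMOTE QMA
    # give the channels a moment to move the message across
    sleep 5
    # read it back from QMB's receive queue (amqsget polls for about 15 seconds)
    /opt/mqm/samp/bin/amqsget CAPA.TO.APPB.RECVQ QMB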


Trapping Errors by Using Built-In Objects in JavaScript Testing

Packt
25 Aug 2010
6 min read
(For more resources on JavaScript, see here.)

The Error object

An Error is a generic exception, and it accepts an optional message that provides details of the exception. We can use the Error object with the following syntax:

    new Error(message); // message can be a string or an integer

Here's an example that shows the Error object in action. The source code for this example can be found in the file error-object.html:

    <html>
    <head>
    <script type="text/javascript">
    function factorial(x) {
      if (x == 0) {
        return 1;
      } else {
        return x * factorial(x - 1);
      }
    }
    try {
      var a = prompt("Please enter a positive integer", "");
      if (a < 0) {
        var error = new Error(1);
        alert(error.message);
        alert(error.name);
        throw error;
      } else if (isNaN(a)) {
        var error = new Error("it must be a number");
        alert(error.message);
        alert(error.name);
        throw error;
      }
      var f = factorial(a);
      alert(a + "! = " + f);
    } catch (error) {
      if (error.message == 1) {
        alert("value cannot be negative");
      } else if (error.message == "it must be a number") {
        alert("value must be a number");
      } else {
        throw error;
      }
    } finally {
      alert("ok, all is done!");
    }
    </script>
    </head>
    <body>
    </body>
    </html>

You may have noticed that the structure of this code is similar to the previous examples, in which we demonstrated try, catch, finally, and throw. In this example, instead of throwing the error directly, we have used the Error object. Notice that we have used an integer and a string as the message argument, namely new Error(1) and new Error("it must be a number"). We can use alert() to create a pop-up window informing the user of the error that has occurred and the name of the error, which here is Error, as it is an Error object. Similarly, we can use the message property to build program logic that shows the appropriate error message. It is important to see how the Error object works, because the built-in objects we are about to cover work in the same way.

The RangeError object

A RangeError occurs when a number is outside its appropriate range. The syntax is similar to what we have seen for the Error object:

    new RangeError(message); // message can be a string or an integer

We'll start with a simple example to show how this works. Check out the following code, which can be found in the source code folder, in the file rangeerror.html:

    <html>
    <head>
    <script type="text/javascript">
    try {
      var anArray = new Array(-1); // an array length must be positive
    } catch (error) {
      alert(error.message);
      alert(error.name);
    } finally {
      alert("ok, all is done!");
    }
    </script>
    </head>
    <body>
    </body>
    </html>

When you run this example, you should see an alert window informing you that the array is of an invalid length, followed by another alert window telling you that the error is RangeError, as this is a RangeError object. If you look at the code carefully, you will see that I have deliberately created this error by giving a negative value to the array's length (an array's length must be positive).

The ReferenceError object

A ReferenceError occurs when a variable, object, function, or array that you have referenced does not exist. The syntax is similar to what you have seen so far:

    new ReferenceError(message); // message can be a string or an integer
As this is pretty straightforward, I'll dive right into the next example. The code for the following example can be found in the source code folder, in the file referenceerror.html:

    <html>
    <head>
    <script type="text/javascript">
    try {
      x = y; // notice that y is not defined
    } catch (error) {
      alert(error);
      alert(error.message);
      alert(error.name);
    } finally {
      alert("ok, all is done!");
    }
    </script>
    </head>
    <body>
    </body>
    </html>

Take note that y is not defined, and we are expecting to catch this error in the catch block. Now try the previous example in your Firefox browser. You should receive four alert windows regarding the errors, each giving you a different message. The messages are as follows:

    ReferenceError: y is not defined
    y is not defined
    ReferenceError
    ok, all is done

If you are using Internet Explorer, you will receive slightly different messages:

    [object Error]
    y is undefined
    TypeError
    ok, all is done

The TypeError object

A TypeError is thrown when we try to access a value that is of the wrong type. The syntax is as follows:

    new TypeError(message); // message can be a string or an integer and it is optional

An example of TypeError is as follows:

    <html>
    <head>
    <script type="text/javascript">
    try {
      y = 1;
      var test = function weird() {
        var foo = "weird string";
      }
      y = test.foo(); // foo is not a function
    } catch (error) {
      alert(error);
      alert(error.message);
      alert(error.name);
    } finally {
      alert("ok, all is done!");
    }
    </script>
    </head>
    <body>
    </body>
    </html>

If you try running this code in Firefox, you should receive an alert box stating that it is a TypeError. This is because test.foo() is not a function, and this results in a TypeError; JavaScript is capable of working out what kind of error has been caught. Similarly, you can throw your own TypeError() in the traditional way by uncommenting the code. The remaining built-in objects are less used, so we'll just move through their syntax quickly.
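The remaining built-in error objects (such as SyntaxError, URIError, and EvalError) follow exactly the same constructor pattern, for example new URIError(message). As a brief hedged sketch (not part of the original excerpt), a URIError can be triggered and caught like this:

    <script type="text/javascript">
    try {
      decodeURIComponent("%"); // "%" is a malformed URI escape sequence
    } catch (error) {
      alert(error.name);    // URIError
      alert(error.message); // a browser-specific description
    }
    </script>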


Implementing Panels in Drupal 6

Packt
23 Aug 2010
4 min read
(For more resources on Drupal, see here.)

Introduction

This article is centered on building website looks with Panels. We will create a custom front page for a website and a node edit form with a Panel. We will also see the process of generating a node override within Panels and take a look at Mini panels.

Making a new front page using Panels and Views (for dynamic content display)

We will now create a recipe with Views and Panels to make a custom front page.

Getting ready

We will need to install the Views module, if not done already. As the Views and Panels projects are both led by Earl Miles (merlinofchaos), the UIs are very similar. We will not discuss Views in detail as it is out of the scope of this article; to use this recipe, a basic knowledge of Views is required. We will use our flexible layout recipe as the starting point for this recipe. To make a better layout and a custom website, I recommend using Adaptive Themes (http://adaptivethemes.com/), which is the theme I have used in this recipe. It is a starter theme and includes several custom panels. There is also built-in support for skins, which makes theming a lot easier. We will use it for demonstration and will change our administration theme from Garland to the adaptive theme. The adaptive themes add extra layouts, as shown in the following screenshot.

How to do it...

1. Go to the flexible layout recipe we created, or create your own layout using the same recipe.
2. Now we will create the Views to be included in our Panels layout. Assuming that Views is installed, go to Site building | Views and add a View.
3. In the View name, add storyblock1, and add a description of your choice.
4. Select the Row style as Node, and set Items to display to 3.
5. In the Style, we can select Unformatted or Grid, depending on how you want to display the nodes. I will use Grid.
6. In the Sort criteria, put in Node: Post date asc and Node: Type asc.
7. In Filters, we just want the posts promoted to the front page. Do a live preview.
8. Add a Block display for the default view, so that the View is available as a block that we can select in our Panels. We could also expose the View's default output as a Panel pane, but using a block display of the View gives the "read more" links by default. In the View, we create the block—storyblock1, as shown in the following screenshot.
9. Now go to the Flexible Layouts UI for the layout you created, and open the Content tab. Earlier, we displayed a static block; now we will display a dynamic View.
10. Disable the earlier panes in the first static row. Select the gears in the first static row and select Add content | Miscellaneous. The custom view block will be listed there, as shown in the following screenshot. Select it.
11. Save and preview. We have now integrated the dynamic View into one of our Panel panes.
12. Let's add sample content to each region now. You can select whatever content you want on your front page, as shown in the following screenshot.
13. Go to Site configuration | Site information and change the default home page to the created Panels page. Your home page is now the custom Panel page.

How it works...

In this recipe, we implemented Panel panes with Views and blocks to make a separate custom page and a separate display for the existing content in the website.


Upgrading OpenCart

Packt
23 Aug 2010
3 min read
This article is suggested reading even for an experienced user. It shows the problems that can occur while upgrading, so that we can avoid them.

Making backups of the current OpenCart system

One thing we should certainly do is back up our files and database before starting any upgrade process. This will allow us to restore the OpenCart system if the upgrade fails and we cannot solve the reason behind it.

Time for action – backing up OpenCart files and database

In this section, we will learn how to back up the necessary files and the database of the current OpenCart system before starting the upgrade process. We will start with backing up the database. We have two ways to achieve this.

The first method is easier and uses the built-in OpenCart module in the administration panel:

1. Open the System | Backup / Restore menu.
2. In this screen, make sure that all modules are selected. If not, click on the Select All link first.
3. Click on the Backup button. A backup.sql file will be generated for us automatically. Save the file on your local computer.

The second method for backing up the OpenCart database is through the Backup Wizard in the cPanel administration panel, which most hosting services provide as a standard management tool for their clients. If you have already applied the first method, you can skip the following steps; still, it is useful to know about the alternative Backup Wizard tool in cPanel:

1. Open the cPanel screen that our hosting service provides for us.
2. Click on the Backup Wizard item under the Files section.
3. On the next screen, click on the Backup button.
4. Click on the MySQL Databases button on the Select Partial Backup menu.
5. Right-click on our OpenCart database backup and save it on our local computer by clicking on Save Link As.

Let's return to the cPanel home screen and open File Manager under the Files menu. Browse to the web directory where our OpenCart store files are stored. Right-click on the directory and then Compress it; we will compress the whole OpenCart directory as a Zip Archive file. As we can see from the following screenshot, the compressed store.zip file resides on the web server. We can also optionally download the file to our local computer.

What just happened?

We backed up our OpenCart database using cPanel. After this, we also backed up our OpenCart files as a compressed archive file using File Manager in cPanel.
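For readers whose host provides SSH access, the same backup can also be taken from the command line. The following is only a rough sketch; the database name, credentials, and directory path are placeholders that you would replace with your own:

    # dump the OpenCart database to a file (placeholder credentials and name)
    mysqldump -u store_user -p store_db > backup.sql
    # archive the OpenCart web directory (placeholder path)
    zip -r store.zip /home/user/public_html/store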


Getting Started with Drupal 6 Panels

Packt
20 Aug 2010
7 min read
(For more resources on Drupal, see here.)

Introduction

Drupal Panels are distinct pieces of rectangular content that create a custom layout for a page, making content more visible and presentable as a structured web page. Panels is a freely distributed, open source module developed for Drupal 6. With Panels, you can display various content in a customizable grid layout on one page. Each page created with Panels can have a unique structure and content. Using the drag-and-drop user interface, you select a design for the layout and position various kinds of content (or add custom content) within that layout. Panels integrates with other Drupal modules like Views and CCK. Permissions, which decide which users can view which elements, are also integrated into Panels. You can even override system pages such as the display of keywords (taxonomy) and individual content pages (nodes).

In the next section, we will see what Panels can actually do, as described on drupal.org (http://drupal.org/project/panels). Basically, Panels helps you arrange a large amount of content on a single page. While Panels can be used to arrange a lot of content on a single page, it is equally useful for small amounts of related content and/or teasers.

Panels supports styles, which control how individual content panes, regions within a Panel, and the entire Panel will be rendered. While Panels ships with only a few styles, styles can be provided as plugins by modules, as well as by themes. The User Interface is nice for visually designing a layout, but a real HTML guru doesn't want the somewhat weighty HTML that this will create. Modules and themes can provide custom layouts that can fit a designer's exacting specifications, but still allow the site builders to place content wherever they like.

Panels includes a pluggable caching mechanism. A single cache type is included: the 'simple' cache, which is time-based. Since most sites have very specific caching needs based upon their content and traffic patterns, this system was designed to let sites devise their own triggers for cache clearing and implement plugins that will work with Panels. A cache mechanism can be defined for each pane or region within the Panel page. Simple caching is time-based and a hard limit: once cached, content remains cached until the time limit expires. If "arguments" are selected, the content is cached per individual argument to the entire display; if "contexts" are selected, the content is cached per unique context in the pane or display; if "neither", there is only one cache for the pane. Panels can also be cached as a whole, meaning the entire output of the Panel is cached; alternatively, individual content panes that are heavy, like large views, can be cached on their own.

Panels can be integrated with the Organic Groups module (through the og_panels module) to allow individual groups to have their own customized layouts. Panels also integrates with Views to allow administrators to add any view as content. We will discuss module integration in the coming recipes.

Shown in the previous screenshot is one of the example sites that uses Panels 3 for its home page (http://concernfast.org). The home page is built using a custom Panels 3 layout with a couple of dedicated content types that are used to build nodes to drop into the various Panels areas. The case study can be found at: http://drupal.org/node/629860.
Panels arranges your site content into an easy navigational pattern, which can be clearly seen in the following screenshot. There are several terms often used within Panels that administrators should become familiar with, as we will be using them throughout the recipes. The common terms in Panels are:

Panels page: The page that will display your Panels. This could be the front page of a site, a news page, and so on. These pages are given a path just like any other node.

Panel: A container for content. A Panel can have several pieces of content within it, and can be styled.

Pane: A unit of content in a Panel. This can be a node, view, arbitrary HTML code, and so on. Panes can be shifted up and down within a Panel and moved from one Panel to another.

Layout: Provides a pre-defined collection of Panels that you can select from. A layout might have two columns, a header, footer, or three columns in the middle, or even seven Panels stacked like bricks.

Setting up Ctools and Panels

We will now set up Ctools, which is required by Panels. "Chaos tools" is a centralized library used by the most powerful Drupal modules, Panels and Views. Most functions in Panels are inherited from the Chaos tools library.

Getting ready

Download the Panels module from the Drupal website: http://drupal.org/project/Panels

You will need Ctools as a dependency, which can be downloaded from: http://drupal.org/project/ctools

How to do it...

1. Upload both modules, Ctools and Panels, into /sites/all/modules. (A command-line alternative using Drush is sketched at the end of this recipe.) It is always best practice to keep installed modules separate from the "core" (the files that install with Drupal) by placing them in the /sites/all/modules folder. This makes it easy to upgrade the modules at a later stage, when your site becomes complex and has many modules.
2. Go to the modules page in admin (Admin | Site Building | Modules) and enable Ctools, then enable Panels.
3. Go to permissions (Admin | User Management | Permissions) and give site builders permission to use Panels.
4. Enable the Page manager module in the Chaos tools suite. This module enables the page manager for Panels.
5. To integrate Views with Panels, enable the Views content panes module too. We will discuss Views in more detail later on.
6. Enable Panels and set the permissions. You will need to enable Panel nodes, the Panel module, and Mini panels too (as shown in the following screenshot), as we will use them in our advanced recipes.
7. Go to administer by module in Site building | Modules. Here you will find the Panels User Interface.

There is more

The Chaos tools suite includes the following tools, which form the base of the Panels module. You do not need to know them in detail to use Panels, but it is good to know what the suite includes. This is the powerhouse that makes Panels the most efficient tool for designing complex layouts:

Plugins—tools to make it easy for modules to let other modules implement plugins from .inc files.

Exportables—tools to make it easier for modules to have objects that live in the database or live in code, such as 'default views'.

AJAX responder—tools to make it easier for the server to handle AJAX requests and tell the client what to do with them.

Form tools—tools to make it easier for forms to deal with AJAX.

Object caching—a tool to make it easier to edit an object across multiple page requests and cache the editing work.

Contexts—the notion of wrapping objects in a unified wrapper and providing an API to create and accept these contexts as input.

Modal dialog—a tool to make it simple to put a form in a modal dialog.
Dependent—a simple form widget to make form items appear and disappear based upon the selections in another item.

Content—pluggable content types used as panes in Panels and other modules like Dashboard.

Form wizard—an API to make multi-step forms much easier.

CSS tools—tools to cache and sanitize CSS easily, to make user-input CSS safe.

How it works...

Now we have our Panels UI ready to generate layouts. We will discuss each layout in the following recipes. The Panels dashboard will help you to generate the layouts for Drupal with ease.
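As promised in the setup steps, here is a hedged sketch of the same download-and-enable sequence using the Drush command-line tool, for readers who have it installed on the server (the module machine names are the usual ones, but verify them against your own Drupal 6 installation):

    # download Ctools and Panels into sites/all/modules
    drush dl ctools panels
    # enable the library, the page manager, and the Panels module
    drush en ctools page_manager panels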

Adding Features to your Joomla! Form using ChronoForms

Packt
20 Aug 2010
11 min read
(For more resources on ChronoForms, see here.)

Introduction

So far we have mostly worked with fairly standard forms, where the user is shown some inputs and enters some data, and the results are e-mailed and/or saved to a database table. Many forms are just like this, but some have other features added. These features can be of many different kinds, and the recipes in this article are correspondingly a mixture. Some, like Adding a validated checkbox, change the way the form works. Others, like Signing up to a newsletter service, change what happens after the form is submitted. While you can use these recipes as they are presented, they are just as useful as suggestions for ways to use ChronoForms to solve a wide range of user interactions on your site.

Adding a validated checkbox

Checkboxes are used less often on forms than most of the other elements, and they have some slightly unusual behavior that we need to manage. ChronoForms will do a little to help us, but not everything that we need. In this recipe, we'll look at one of the most common applications—a standalone checkbox that the user is asked to click to confirm that they've accepted some terms and conditions. We want to make sure that the form is not submitted unless the box is checked.

Getting ready

We'll just add one more element to our basic newsletter form. It's probably best to recreate a new version of the form using the Form Wizard, to make sure that we have a clean starting point.

How to do it...

1. In the Form Wizard, create a new form with two TextBox elements. In the Properties box, add the Labels "Name" and "Email" and the Field Names "name" and "email" respectively.
2. Now drag in a CheckBox element. You'll see that ChronoForms inserts the element with three checkboxes, and we only need one. In the Properties box, remove the default values and type in "I agree". While you are there, change the label to "Terms and Conditions".
3. Lastly, we want to make sure that this box is checked, so check the Validation | One Required checkbox and add "please confirm your agreement" in the Validation Message box. Apply the changes to the Properties.
4. To complete the form, add the Button element; then save your form, publish it, and view it in your browser.
5. To test, click the Submit button without entering anything. You should find that the form does not submit and an error message is displayed.

How it works...

The only special thing to notice here is that the validation we used was validate-one-required and not the more familiar required. Checkbox arrays, radio button groups, and select drop-downs will not work with the required option, as they always have a value set, at least from the perspective of the JavaScript that runs the validation.

There's more...

Validating the checkbox server-side

If the checkbox is really important to us, then we may want to confirm that it has been checked using the server-side validation box. We want to check and, if our box isn't checked, create the error message. However, there is a little problem—an unchecked checkbox doesn't return anything at all; there is just no entry in the form results array. Joomla! has some functionality that will help us out though: the JRequest::getVar() function that we use to get the form results allows us to set a default value. If nothing is found in the form results, then the default value will be used instead.
So we can add this code block to the server-side validation box:

    <?php
    $agree = JRequest::getString('check0[]', 'empty', 'post');
    if ( $agree == 'empty' ) {
      return 'Please check the box to confirm your agreement';
    }
    ?>

Note: To test this, we need to remove the validate-one-required class from the input in the Form HTML. Now, when we submit the empty form, we see the ChronoForms error message.

Notice that the input name in the code snippet is check0[]. ChronoForms doesn't give you the option of setting the name of a checkbox element in the Form Wizard | Properties box; it assigns a check0, check1, and so on, value for you. (You can edit this in the Form Editor if you like.) And because checkboxes often come in arrays of several linked boxes with the same name, ChronoForms also adds the [] to create an array name. If this weren't done, then only the value of the last checked box would be returned.

Locking the Submit button until the box is checked

If we want to make the point about terms and conditions even more strongly, we can add some JavaScript to the form to disable the Submit button until the box is checked. We need to make one change to the Form HTML to make this task a little easier. ChronoForms does not add an ID attribute to the Submit button input, so open the form in the Form Editor, find the line near the end of the Form HTML, and alter it to read:

    <input value="Submit" name="submit" id='submit' type="submit" />

Now add the following snippet into the Form JavaScript box:

    // stop the code executing
    // until the page is loaded in the browser
    window.addEvent('load', function() {
      // function to enable and disable the submit button
      function agree() {
        if ( $('check00').checked == true ) {
          $('submit').disabled = false;
        } else {
          $('submit').disabled = true;
        }
      };
      // disable the submit button on load
      $('submit').disabled = true;
      // execute the function when the checkbox is clicked
      $('check00').addEvent('click', agree);
    });

Apply or save the form and view it in your browser. Now, as you tick or untick the checkbox, the Submit button will be enabled and disabled. This is a simple example of adding a custom script to a form to add a useful feature. If you are reasonably competent in JavaScript, you will find that there is quite a lot more that you can do.

There are different styles of laying out both JavaScript and PHP, and sometimes fierce debates about where line breaks and spaces should go. We've adopted a style here that is hopefully fairly clear, reasonably compact, and more or less the same for both JavaScript and PHP. If it's not the style you are accustomed to, then we're sorry.

Adding an "other" box to a drop-down

Drop-downs are a valuable way of offering a list of choices to your user to select from. And sometimes it just isn't possible to make the list complete; there's always another option that someone will want to add. So we add an "other" option to the drop-down. But that tells us nothing, so we also need to add an input to tell us what "other" means here.

Getting ready

We'll just add one more element to our basic newsletter form. We haven't used a drop-down before, but it is very similar to the checkbox element from the previous recipe.

How to do it...

1. Use the Form Wizard to create a form with two TextBox elements, a DropDown element, and a Button element.
2. The changes to make in the DropDown element are:
   - Add "I heard from" in the Label
   - Change the Field Name to "hearabout"
   - Add some options to the Options box—"Google", "Newspaper", "Friend", and "Other"
   - Leave the Add Choose Option box checked and leave Choose Option in the Choose Option Text box
3. Apply the Properties box.
4. Make any other changes you need to the form elements; then save the form, publish it, and view it in your browser.

Notice that, as well as the four options we added, the Choose Option entry is at the top of the list. That comes from the checkbox and text field that we left with their default values. It's important to have a "null" option like this in a drop-down for two reasons. First, so that it is obvious to a user that no choice has been made; otherwise it's very easy for them to leave the first option showing, and this value—Google in this case—will be returned by default. Second, so that we can validate with select-one-required if necessary. The "null" option has no value set and so can be detected by the validation script.

Now we just need one more text box to collect details if Other is selected:

1. Open the form in the Wizard Edit and add one more TextBox element after the DropDown element. Give it the Label "please add details" and the name "other".
2. Even though we set the name to "other", ChronoForms will have left the input ID attribute as text_4 or something similar. Open the form in the Form Editor and change the ID to "other" as well. The same is true of the drop-down: the ID there is select_2; change that to "hearabout".

Now we need a script snippet to enable and disable the "other" text box when the Other option is selected in the drop-down. Here's the code to put in the Form JavaScript box:

    window.addEvent('domready', function() {
      $('hearabout').addEvent('change', function() {
        if ($('hearabout').value == 'Other' ) {
          $('other').disabled = false;
        } else {
          $('other').disabled = true;
        }
      });
      $('other').disabled = true;
    });

This is very similar to the code in the last recipe, except that it's been condensed a little more by merging the function directly into the addEvent(). When you view the form, you will see that the text box for "please add details" is grayed out and blocked until you select Other in the drop-down. Make sure that you don't make the "please add details" input required. It's an easy mistake to make, but it stops the form working correctly, as you would have to select Other in the drop-down to be able to submit it.

How it works...

Once again, this is a little JavaScript that checks for changes in one part of the form in order to alter the display of another part of the form.

There's more...

Hiding the whole input

It looks a little untidy to have the disabled box showing on the form when it is not required. Let's change the script a little to hide and unhide the input instead of disabling and enabling it. To make this work, we need a way of recognizing the input together with its label. We could deal with both separately, but let's make our lives simpler.
In the Form Editor, open the Form HTML box and look near the end for the "other" input block:

    <div class="form_item">
      <div class="form_element cf_textbox">
        <label class="cf_label" style="width: 150px;">please add details</label>
        <input class="cf_inputbox" maxlength="150" size="30" title="" id="other" name="other" type="text" />
      </div>
      <div class="cfclear">&nbsp;</div>
    </div>

That <div class="form_element cf_textbox"> looks like just what we need, so let's add an ID attribute to make it visible to the JavaScript:

    <div class="form_element cf_textbox" id="other_input">

Now we'll modify our script snippet to use this:

    window.addEvent('domready', function() {
      $('hearabout').addEvent('change', function() {
        if ($('hearabout').value == 'Other' ) {
          $('other_input').setStyle('display', 'block');
        } else {
          $('other_input').setStyle('display', 'none');
        }
      });
      // initialise the display
      if ($('hearabout').value == 'Other' ) {
        $('other_input').setStyle('display', 'block');
      } else {
        $('other_input').setStyle('display', 'none');
      }
    });

Apply or save the form and view it in your browser. Now the input is invisible (see the following screenshot, labeled 1) until you select Other from the drop-down (see the following screenshot, labeled 2). The disadvantage of this approach is that the form can appear to "jump around" as extra fields appear. You can overcome this with a little thought, for example by leaving an empty space.

See also

In some of the scripts here we are using shortcuts from the MooTools JavaScript framework. Version 1.1 of MooTools is installed with Joomla! 1.5 and is usually loaded by ChronoForms. You can find the documentation for MooTools v1.1 at http://docs111.mootools.net/. Version 1.1 is not the latest version of MooTools, and many of the more recent MooTools scripts will not run with the earlier version. Joomla! 1.6 is expected to use the latest release.


Database Considerations for PHP 5 CMS

Packt
17 Aug 2010
10 min read
(For more resources on PHP, see here.)

The problem

Building methods that:

- Handle common patterns of data manipulation securely and efficiently
- Help ease the database changes needed as data requirements evolve
- Provide powerful data objects at low cost

Discussion and considerations

Relational databases provide an effective and readily available means to store data. Once established, they normally behave consistently and reliably, making them easier to use than file systems. And clearly a database can do much more than a simple file system!

Efficiency can quickly become an issue, both in how often requests are made to a database and in how long queries take. One way to offset the cost of database queries is to use a cache at some stage in the processing. Whatever the framework does, a major factor will always be the care developers of extensions take over the design of table structures and software; the construction of SQL can also make a big difference. The examples included here have been assiduously optimized so far as the author is capable, although suggestions for further improvement are always welcome!

Web applications are typically much less mature than more traditional data processing systems. This stems from factors such as the speed of development and deployment. Also, techniques that are effective for programs that run for a relatively long time do not make sense for the brief processing applied to a single website request. For example, although PHP allows persistent database connections, thereby reducing the cost of making a fresh connection for each request, it is generally considered unwise to use this option, because it is liable to create large numbers of dormant processes and slow down database operations excessively. Likewise, prepared statements have advantages for performance and possibly security, but are more laborious to implement, so their advantages are diluted in a situation where a statement cannot be used more than once.

Perhaps even more than performance, security is an issue for web development, and there are well-known routes for attacking databases. They need to be carefully blocked. The primary goal of a framework is to make further development easy. Writing web software frequently involves the same patterns of database access, and a framework can help a lot by implementing methods at a higher level than the basic PHP database access functions.

In an ideal world, an object-oriented system is developed entirely on the basis of OO principles. But if no attention is paid to how the objects will be stored, problems arise. An object database has obvious appeal but, for a variety of reasons, such databases are not widely used. Web applications have to be pragmatic, and so the aim pursued here is the creation of database designs that occasionally ignore strict relational principles, and objects that are sometimes simpler than idealized designs might suggest. The benefit of making these compromises is that it becomes practical to achieve a useful correspondence between database rows and PHP objects. It is possible that PHP Data Objects (PDO) will become very important in this area, but it is a relatively new development. Use of PDO is likely to pick up gradually as it becomes more common in typical web hosting, and as developers come to understand what it can offer. For the time being, the safest approach seems to be for the framework to provide classes on which effective data objects can be built. A great deal can be achieved using this technique.
Database dependency

Lest this section create too much disappointment, let me say at the outset that this article does not provide any help with achieving database independence. The best that can be done here is to explain why not, and what can be done to limit dependency.

Nowadays, the most popular kind of database employs the relational model. All relational database systems implement the same theoretical principles, and even use more or less the same structured query language. People use products from different vendors for an immense variety of reasons, some better than others. For web development, MySQL is very widely available, although PostgreSQL is another highly regarded database system that is available without cost. There are a number of well-known proprietary systems, and existing databases often contain valuable information, which motivates attempts to link them into CMS implementations. In this situation, there are frequent requests for web software to be database independent. There are, sadly, practical obstacles to achieving this.

It is conceptually simple to provide the mechanics of access to a variety of different database systems, although the work involved is laborious, and the result can be cumbersome, too. But the biggest problem is that SQL statements are inclined to vary across different systems. It is easy in theory to assert that only the common core of SQL that works on all database systems should be used. The serious obstacle here is that very few developers know what comprises the common core. ANSI SQL might be thought to provide a system-neutral language, but not all of ANSI SQL is implemented by every system. So the fact is that developers become expert in one particular database system, or at best a handful.

Skilled developers are conscious of the standardization issue and, where there is a choice, will prefer to write according to standards. For example, it is better to write:

    SELECT username, userid, count(userid) AS number
    FROM aliro_session AS s
    INNER JOIN aliro_session_data AS d ON s.session_id = d.session_id
    WHERE isadmin = 0
    GROUP BY userid

rather than:

    SELECT username, userid, count(userid) AS number
    FROM aliro_session AS s, aliro_session_data AS d
    WHERE s.session_id = d.session_id AND isadmin = 0
    GROUP BY userid

This is because the first form makes the nature of the query clearer, and it is also less vulnerable to detailed syntax variations across database systems.

The use of extensions that are only available in some database systems is a major problem for query standardization. Again, it is easy while theorizing to deplore the use of non-standard extensions; in practice, some of them are so tempting that few developers resist them. An older MySQL extension was the REPLACE command, which would either insert or update data depending on whether a matching key was already present in the database. This is now discouraged on the grounds that it achieved its result by deleting any matching data before doing an insertion, which can have adverse effects on linked foreign keys. But the newer INSERT ... ON DUPLICATE KEY construction provides a very neat, efficient way to handle the case where data needs to go into the database allowing for what is already there. It is more efficient in every way than trying to read before choosing between INSERT and UPDATE, and it also avoids the need for a transaction.
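As a hedged illustration of the INSERT ... ON DUPLICATE KEY construction (the table and column names here are invented for the example, not taken from Aliro):

    -- insert a counter row, or increment it if the key already exists
    INSERT INTO page_hits (page_id, hits)
    VALUES (42, 1)
    ON DUPLICATE KEY UPDATE hits = hits + 1;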
Similarly, there is no standard way to obtain a slice of a result set, for example starting with the eleventh item and comprising the next ten items. Yet this is exactly the operation needed to efficiently populate the second page of a list of items shown ten per page. The MySQL LIMIT extension, which accepts a starting offset as well as a count (for example, LIMIT 10, 10), is ideal for this purpose.

Because of these practical issues, independence of database systems remains a desirable goal that is rarely fully achieved. The most practical policy seems to be to avoid dependencies wherever this is possible at reasonable cost.

The role of the database

We have already noted that a database can be thought of as uncontrolled global data, assuming the database connection is generally available. So there should be policies on database access to prevent this from becoming a liability. One policy adopted by Aliro is to use two distinct databases. The "core" database is reserved for tables that are needed by the basic framework of the CMS. Other tables, including those created by extensions to the CMS framework, use the "general" database.

Although it is difficult to enforce restrictions, one policy that is immediately attractive is that the core database should never be accessed by extensions. How data is stored is an implementation matter for the various classes that make up the framework, and a selection of public methods should make up the public interface. Confining access to the public methods that constitute the framework's API leaves open the possibility of developing the internal mechanisms with little or no change to the API. If the framework does not provide the information needed by extensions, then its API needs further development; the solution should not be direct access to the core database. Much the same applies to the general database, except that it may contain tables that are intended to be part of an API. By and large, extensions should restrict their database operations to their own tables, and provide object methods to implement interfaces across extensions. This is especially so for write operations, but should usually apply to all database operations.

Level of database abstraction

There have been some clues earlier in this article, but it is worth squarely addressing the question of how far the CMS database classes should go in insulating other classes from the database. All of the discussion here is based on the idea that currently the best available style of development is object oriented. But we have already decided that using a true object database is not usually a practical option for web development. The next option to consider is building a layer that provides an object-relational transformation, so that outside of the database classes nobody needs to deal with purely relational concepts or with SQL. An example of a framework that does this is Propel, which can be found at http://propel.phpdb.org/trac/. While developments of this kind are interesting and attractive in principle, I am not convinced that they provide an acceptable level of performance and flexibility for current CMS developments. There can be severe overheads on object-relational operations, and manual intervention is likely to be necessary if high performance is a goal. For that reason, it seems that for some while yet, CMS developments will be based on more direct use of a relational database.
Another complicating factor is the limitation of PHP in respect of static methods, which are obliged to operate within the environment of the class in which they are declared, irrespective of the class that was invoked in the call. This constraint is lifted in PHP 5.3, but at the time of writing, reliance on PHP 5.3 would be premature, as it has not yet found its way into most stable software distributions. With more flexibility in the use of static methods and properties, it would be possible to create a better framework of database-related properties. Given what is currently practical, and given experience of what is actually useful in the development of applications to run within a CMS framework, the realistic goals are as follows:

- To create a database object that connects, possibly through a choice of different connectors, to a particular database, and provides the ability to run SQL queries
- To enable the creation of objects that correspond to database rows and have the ability to load themselves with data or to store themselves in the database

Some operations, such as the update of a single row, are best achieved through the use of a database row object. Others, such as deletion, are often applied to a number of rows chosen from a list by the user, and are best effected through a SQL query.

You can obtain powerful code for achieving the automatic creation of HTML by downloading the full Aliro project. Unfortunately, experience in use has been disappointing: often, so much customization of the automated code is required that the gains are nullified, and the automation becomes just an overhead. This topic is therefore given little emphasis.

URL Shorteners – Designing the TinyURL Clone with Ruby

Packt
16 Aug 2010
12 min read
(For more resources on Ruby, see here.) We start off with an easy application, a simple yet very useful Internet application: URL shorteners. We will take a quick tour of URL shorteners before jumping into the design of a simple URL shortener, followed by an in-depth discussion of how we clone our own URL shortener, Tinyclone.

All about URL shorteners

Internet applications don't always need to be full of features or cover all aspects of your Internet life to be successful. Sometimes it's OK to be simple and just focus on providing a single feature. It doesn't even need to be earth-shatteringly important; it should be just useful enough for its target users. The archetypal and probably most extreme example of this is the URL shortening application, or URL shortener. This service offers a very simple but surprisingly useful feature: it provides a shorter URL that represents a normally longer URL. When a user goes to the short URL, he will be redirected to the original URL. For this simple feature, the top three most popular URL shortening services (TinyURL, bit.ly, and is.gd) collectively had about 11 million unique visitors, 110 million page views, and a reach of about one percent of the Internet in June 2009. In 2008, the most popular URL shortener at that time, TinyURL, was made one of Time Magazine's Top 50 Best Websites. The idea of shortening long and unwieldy URLs into shorter, more manageable ones has been around for some time. One of the earlier attempts to make it a public service is Make A Shorter Link (MASL), which appeared around July 2001. MASL did just that, though the usefulness was debatable, as the domain name was long and the shortened URL could potentially be longer than the original. However, the pioneering site that popularized this concept (and subsequently bought over MASL and a few other similar sites) is TinyURL. TinyURL was launched in January 2002 by Kevin Gilbertson to help him link directly to newsgroup postings, which frequently had long URLs. It rapidly became one of the most popular URL shorteners around. In 2008, an estimated 100 similar services came into existence in various forms.

URLs, or Uniform Resource Locators, are resource identifiers that specify where identified resources are available and how they can be retrieved. A popular term for a URL is a Web address. Every URL is made up of the following:

<resource type>://<username>:<password>@<domain>:<port>/<file path name>?<query string>#<anchor>

Not all parts of the URL are required by a browser: if the resource type is missing, it is normally assumed to be http, and if the port is missing, it is normally assumed to be 80 (for http). The username, password, query string, and anchor components are optional.

Initially, TinyURL and similar types of URL shorteners focused on simply providing a short representative URL to their users. Naturally, the competitive breadth for shortening URLs was rather, well, short. Many chose TinyURL over MASL because TinyURL had a shorter and easier-to-remember domain name (http://tinyurl.com over http://makeashorterlink.com). Subsequent competition over this space intensified and extended to providing various other features, including custom short URLs (TinyURL, bit.ly), analysis of click-through statistics (bit.ly), advertisements (Adjix, Linkbee), preview pages (TinyURL, is.gd), and so on. The explosive growth of Twitter (from June 2008 to June 2009, Twitter grew 1,164%) opened a new chapter for URL shorteners.
Twitter chose a limit of 140 characters for each tweet to accommodate the 160 characters in an SMS message (Twitter was invented as a service for people to use SMS to tell small groups what they are doing). With Twitter's popularity skyrocketing came the need for users to shorten URLs to fit into the 140-character limit. Originally Twitter used TinyURL as its default URL shortener, and this triggered a steep climb in the usage of TinyURL during the early days of Twitter. However, in May 2009, bit.ly replaced TinyURL as Twitter's default URL shortener, and the impact was immediate. For the first time in that period, TinyURL recorded a drop in the number of users in May 2009, dropping from 6.1 million to 5.3 million unique users, while bit.ly jumped from 1.8 million to 2.9 million almost overnight. That's not the end of the story though. In April 2010, during Twitter's Chirp conference, Twitter announced its own URL shortener (twt.tl). As of writing, it is still unclear how the market share will pan out, but it is clear that URL shorteners have good value and everyone is jumping into this market. In December 2009, Google came up with its own two URL shorteners: goo.gl and youtu.be. Amazon.com (amzn.to), Facebook (fb.me), and Wordpress (wp.me) all have their own URL shorteners as well. Next, let's do a quick review of why URL shorteners are so popular and why they attract criticism as well. Here's a quick summary of the benefits:

- Create short and easy-to-remember URLs
- Allow passing of links in character-limited services such as Twitter
- Create vanity URLs for marketing purposes
- Can verbally pass URLs

The most obvious benefit of having a shortened URL is that it's, well, short. A typical example of a URL gone bad is a link to a location in Google Maps:

http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=singapore+flyer&vps=1&jsv=169c&sll=1.352083,103.819836&sspn=0.68645,1.382904&g=singapore&ie=UTF8&latlng=8354962237652576151&ei=Shh3SsSRDpb4vAPsxLS3BQ&cd=1&usq=Singapore+Flyer

Such URLs are meant to be clicked on, as it is virtually impossible to pass them around verbally. It might be justifiable if the URL is cut and pasted into documents, but sometimes certain applications will truncate parts of the URL while processing it. This makes a long URL difficult to click on and even produces erroneous links. In fact, this was the main motivation for creating most of the earlier URL shorteners: older email clients tend to truncate URLs when they are more than 80 characters. Short links are of course crucial in character-limited message-passing systems like Twitter, Plurk, and SMS; passing long URLs is impossible without URL shorteners. Short URLs are very useful in cases of vanity URLs where, for example, the Google Maps link above could be shortened to http://tinyurl.com/singapore-flyer. Such vanity URLs are useful when passing from one person to another, or even when used in mass marketing. Sticking to the maps theme in our examples, if you want to give a Google Maps link to your restaurant and put it up in catalogs and brochures, you will not want to give the long URL; instead you would want a nice, descriptive, and short URL. Short URLs are also useful for accessibility. For example, reading out the Google Maps link above is almost impossible, but reading out the TinyURL link (vanity or otherwise) is much easier in comparison. Many popular URL shorteners also provide some form of statistics and analytics on the usage of the links.
This feature allows you to track your short URLs to see how many clicks they received and what kind of patterns can be derived from the clicks. Although the metrics are usually not advanced, they do provide basic usefulness. On the other hand, URL shorteners have their fair share of criticisms as well. Here is a summary of the bad side of URL shorteners:

- They provide opportunities to spammers, because they hide original URLs
- They can be unreliable if you depend on them for redirection
- They make undesirable or vulgar short URLs possible

URL shorteners have security issues. When a URL shortener creates a short URL, it effectively hides the original link, and this provides an opportunity for spammers or other abusers to redirect users to their sites. One relatively mild form of such attack is 'rickrolling'. Rickrolling uses a classic bait-and-switch trick to redirect users to the Rick Astley music video Never Gonna Give You Up. For example, you might feel that the URL http://tinyurl.com/singapore-flyer goes to Google Maps, but when you click on it, you might be rickrolled and redirected to that Rick Astley music video instead. Also, because most short URLs are not customized, it is quite difficult to tell whether a link is genuine just from the URL. Many prominent websites and applications have such concerns, including MySpace, Flickr, and even Microsoft Live Messenger, and have at one time or another banned or restricted the usage of TinyURL because of this problem. To combat spammers and fraud, URL shortening services have come up with the idea of link previews, which allows users to preview a short URL before it redirects them to the long URL. For example, TinyURL will show the user the long URL on a preview page and require the user to explicitly go to the long URL. Another problem is performance and reliability. When you access a website, your browser goes to a few DNS servers to resolve the address, and the URL shortener adds another layer of indirection. While DNS servers have redundancy and failsafe measures, there is no such assurance from URL shorteners. If the traffic to a particular link becomes too high, will the shortening service provider be able to add more servers to improve performance, or even prevent a meltdown altogether? The problem, of course, lies in over-dependency on the shortening service. Finally, a negative side effect of random or even customized short URLs is that undesirable, vulgar, or embarrassing short URLs can be created. Early on, TinyURL's short URLs were predictable, and this was exploited: embarrassing short URLs were made to redirect to the White House websites of then U.S. Vice President Dick Cheney and Second Lady Lynne Cheney. We have just covered significant ground on URL shorteners. If you are a programmer, you might be wondering, "Why do I need to know such information? I am really interested in the programming bits; the others are just fluff to me." Background information on the application we want to clone is very important. It tells us why the application exists in the first place and gives us an idea of its main features (what makes it popular). It also tells us what problems it faces, so that we are aware of them while programming, or can even avoid them altogether. This is important when we come to the design of the application. Finally, it gives us a better appreciation of the application and of the motivations and issues faced by the product and technical people behind the application we wish to clone.
Main features

Next, let's list the features of a URL shortener. The intention in this section is to distill the basic features of the application, features that define the service. Features listed here will be features that make the application what it is. However, as much as possible, we also want to explore some additional features that extend the application and are provided by many of its competitors. Most importantly, the features here are mostly features of the most popular and definitive web application in the category; in this article, this will be TinyURL. These are the main features of a URL shortener:

- Users can create a short URL that represents a long URL
- Users who visit the short URL will be redirected to the long URL
- Users can preview a short URL to enable them to see what the long URL is
- Users can provide a custom URL to represent the long URL
- Undesirable words are not allowed in the short URL
- Users are able to view various statistics involving the short URL, including the number of clicks and where the clicks come from (optional, not in TinyURL)

URL shorteners are simple web applications, and the one that we will design and build will also be simple.

Designing the clone

Cloning TinyURL is relatively simple, but there is some thought behind the design of the application. We will be building a clone of TinyURL called Tinyclone, which will be hosted at the domain http://tinyclone.saush.com.

Creating a short URL for each long URL

The domain of the short URL is fixed; what's left is the file path name. We need to represent the long URL with a unique file path name (a key), one for each long URL. This means we need to persist the relationship between the key and the URL. One way we could associate the long URL with a unique key is to hash the long URL and use the resulting hash as the unique key. However, the resulting hash might be long, and hashing functions can be slow. The faster and easier way is to use a relational database's auto-incremented row ID as the unique key. The database will help ensure the uniqueness of the ID. However, the running row ID number is base 10: representing a million URLs would already require 7 characters, and representing 1 billion would take 10 characters. In order to keep the number of characters smaller, we need a larger base numbering system. In this clone we will use base 36, which consists of the 26 letters of the alphabet (case-insensitive) and the 10 digits. Using this system, we will need only 4 characters to represent 1 million URLs:

1,000,000 base 36 = lfls

And 1 billion URLs can be represented in just 6 characters:

1,000,000,000 base 36 = gjdgxs
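As a quick check of that arithmetic (shown here in PHP purely for illustration; the clone itself is built in Ruby, and any language with base conversion will do):

<?php
// Convert auto-increment row IDs to base-36 keys and back
echo base_convert(1000000, 10, 36), "\n";     // prints "lfls"
echo base_convert(1000000000, 10, 36), "\n";  // prints "gjdgxs"
echo intval('lfls', 36), "\n";                // prints 1000000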

Q Replication Components in IBM Replication Server

Packt
16 Aug 2010
8 min read
The individual stages for the different layers are shown in the following diagram:

The DB2 database layer

The first layer is the DB2 database layer, which involves the following tasks:

- For unidirectional replication, and all replication scenarios that use unidirectional replication as the base, we need to enable the source database for archive logging (but not the target database).
- For multi-directional replication, all the source and target databases need to be enabled for archive logging.
- We need to identify which tables we want to replicate. One of the steps is to set the DATA CAPTURE CHANGES flag for each source table, which will be done automatically when the Q subscription is created. This setting of the flag will affect the minimum point-in-time recovery value for the table space containing the table, which should be carefully noted if table space recoveries are performed.

Before moving on to the WebSphere MQ layer, let's quickly look at the compatibility requirements for the database name, the table name, and the column names. We will also discuss whether or not we need unique indexes on the source and target tables.

Database/table/column name compatibility

In Q replication, the source and target database names and table names do not have to match on all systems. The database name is specified when the control tables are created. The source and target table names are specified in the Q subscription definition. Now let's move on to whether or not we need unique indexes on the source and target tables. We do not need to be able to identify unique rows on the source table, but we do need to be able to do this on the target table. Therefore, the target table should have one of:

- A primary key
- A unique constraint
- A unique index

If none of these exist, then Q Apply will apply the updates using all columns. However, the source table must have the same constraints as the target table, so any constraints that exist at the target must also exist at the source, as shown in the following diagram:

The WebSphere MQ layer

This is the second layer we should install and test; if this layer does not work, then Q replication will not work! We can install either the WebSphere MQ Server code or the WebSphere MQ Client code. Throughout this book, we will be working with the WebSphere MQ Server code. If we are replicating between two servers, then we need to install WebSphere MQ Server on both servers. If we are installing WebSphere MQ Server on UNIX, then during the installation process a user ID and group called mqm are created. If we as DBAs want to issue MQ commands, then we need to get our user ID added to the mqm group. Assuming that WebSphere MQ Server has been successfully installed, we now need to create the Queue Managers and the queues that are needed for Q replication. This section also includes tests that we can perform to check that the MQ installation and setup is correct. The following diagram shows the MQ objects that need to be created for unidirectional replication:

The following figure shows the MQ objects that need to be created for bidirectional replication:

There is a mixture of Local Queues (QLOCAL/QL) and Remote Queues (QREMOTE/QR), in addition to Transmission Queues (XMITQ) and channels. Once we have successfully completed the installation and testing of WebSphere MQ, we can move on to the next layer: the Q replication layer.
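Before doing so, a quick sanity check of the MQ setup is worthwhile. The sample programs shipped with WebSphere MQ Server, amqsput and amqsget, can put and get a test message (this assumes the samples are installed; the queue and Queue Manager names below are illustrative, and the output is abbreviated):

$ amqsput TEST.QUEUE QMGR1
Sample AMQSPUT0 start
hello q replication

$ amqsget TEST.QUEUE QMGR1
Sample AMQSGET0 start
message <hello q replication>

amqsput reads lines from standard input, and a blank line ends the input; amqsget should then retrieve the same message. If the equivalent round trip works across the two servers through the remote queue definitions, the channels and transmission queues are healthy.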
The Q replication layer

This is the third and final layer, which comprises the following steps:

- Create the replication control tables on the source and target servers.
- Create the transport definitions. What we mean by this is that we somehow need to tell Q replication what the source and target table names are, what rows/columns we want to replicate, and which Queue Managers and queues to use.

Some of the terms covered in this section are: logical table, Replication Queue Map, Q subscription, and subscription group (SUBGROUP).

What is a logical table?

In Q replication, we have the concept of a logical table, which is the term used to refer to both the source and target tables in one statement. An example in a peer-to-peer three-way scenario is shown in the following diagram, where the logical table is made up of tables TABA, TABB, and TABC:

What is a Replication/Publication Queue Map?

The first part of the transport definitions mentioned earlier is a definition of a Queue Map, which identifies the WebSphere MQ queues on both servers that are used to communicate between the servers. In Q replication, the Queue Map is called a Replication Queue Map, and in Event Publishing the Queue Map is called a Publication Queue Map. Let's first look at Replication Queue Maps (RQMs). RQMs are used by Q Capture and Q Apply to communicate: Q Capture sends Q Apply rows to apply, and Q Apply sends administration messages back to Q Capture. Each RQM is made up of three queues: a queue on the local server called the Send Queue (SENDQ), and two queues on the remote server, a Receive Queue (RECVQ) and an Administration Queue (ADMINQ), as shown in the preceding figures. An RQM can only contain one each of SENDQ, RECVQ, and ADMINQ. The SENDQ is the queue that Q Capture uses to send source data and informational messages. The RECVQ is the queue that Q Apply reads for transactions to apply to the target table(s). The ADMINQ is the queue that Q Apply uses to send control messages back to Q Capture. So, using the queues in the first "Queues" figure, the Replication Queue Map definition would be:

- Send Queue (SENDQ): CAPA.TO.APPB.SENDQ.REMOTE on the source
- Receive Queue (RECVQ): CAPA.TO.APPB.RECVQ on the target
- Administration Queue (ADMINQ): CAPA.ADMINQ.REMOTE on the target

Now let's look at Publication Queue Maps (PQMs). PQMs are used in Event Publishing and are similar to RQMs, in that they define the WebSphere MQ queues needed to transmit messages between two servers. The big difference is that, because Event Publishing has no Q Apply component, the definition of a PQM is made up of only a Send Queue.

What is a Q subscription?

The second part of the transport definitions is a definition called a Q subscription, which defines a single source/target combination and which Replication Queue Map to use for this combination. We set up one Q subscription for each source/target combination. Each Q subscription needs a Replication Queue Map, so we need to make sure we have one defined before trying to create a Q subscription. Note that if we are using the Replication Center, then we can choose to create a Q subscription even though an RQM does not yet exist; the wizard will walk you through creating the RQM at the point at which it is needed.
The structure of a Q subscription is made up of a source and a target section, and we have to specify:

- The Replication Queue Map
- The source and target table
- The type of target table
- The type of conflict detection and action to be used
- The type of initial load, if any, that should be performed

If we define a Q subscription for unidirectional replication, then we can choose the name of the Q subscription; for any other type of replication we cannot. Q replication does not have the concept of a subscription set as there is in SQL Replication, where the subscription set holds all the tables that are related using referential integrity. In Q replication, we have to ensure that all the tables that are related through referential integrity use the same Replication Queue Map, which will enable Q Apply to apply the changes to the target tables in the correct sequence. In the following diagram, Q subscription 1 uses RQM1, Q subscription 2 also uses RQM1, and Q subscription 3 uses RQM3:

What is a subscription group?

A subscription group is the name for a collection of Q subscriptions that are involved in multi-directional replication, and is set using the SET SUBGROUP command.

Q subscription activation

In unidirectional, bidirectional, and peer-to-peer two-way replication, when Q Capture and Q Apply start, the Q subscription can be activated automatically (if that option was specified). For peer-to-peer three-way replication and higher, when Q Capture and Q Apply are started, only a subset of the Q subscriptions of the subscription group starts automatically, so we need to manually start the remaining Q subscriptions.

Easy guide to understand WCF in Visual Studio 2008 SP1 and Visual Studio 2010 Express

Packt
12 Aug 2010
4 min read
(For more resources on Microsoft, see here.)

Creating your first WCF application in Visual Studio 2008

You start creating a WCF project by creating a new project from File | New | Project.... This opens the New Project window. You can see that there are four different templates available. We will be using the WCF Service Library template. Change the default name, provide a name for the project (herein JayWcf01), and click OK. The project JayWcf01 gets created with the folder structure shown in the next image: If you were to expand the References node in the above, you would notice that System.ServiceModel is already referenced. If it is not, for some reason, you can bring it in by using the Add Reference... window, which is displayed when you right-click the project in the Solution Explorer. IService1.vb is a service interface file, as shown in the next listing. This defines the service contract and the operations expected of the service. If you change the interface name "IService1" here, you must also update the reference to "IService1" in App.config.

<ServiceContract()> _
Public Interface IService1

    <OperationContract()> _
    Function GetData(ByVal value As Integer) As String

    <OperationContract()> _
    Function GetDataUsingDataContract(ByVal composite As CompositeType) As CompositeType

    ' TODO: Add your service operations here

End Interface

' Use a data contract as illustrated in the sample below to add composite types to service operations
<DataContract()> _
Public Class CompositeType

    Private boolValueField As Boolean
    Private stringValueField As String

    <DataMember()> _
    Public Property BoolValue() As Boolean
        Get
            Return Me.boolValueField
        End Get
        Set(ByVal value As Boolean)
            Me.boolValueField = value
        End Set
    End Property

    <DataMember()> _
    Public Property StringValue() As String
        Get
            Return Me.stringValueField
        End Get
        Set(ByVal value As String)
            Me.stringValueField = value
        End Set
    End Property

End Class

The Service Contract is a contract that will be agreed between the Client and the Server. Both the Client and the Server should be working with the same service contract; the one shown above is on the server. Inside the service, data is handled as simple types (e.g. GetData) or complex types (e.g. GetDataUsingDataContract). Outside the service, however, these are handled as XML Schema Definitions. WCF data contracts provide a mapping between the data defined in the code and the XML Schema defined by the W3C, the standards organization. The service performed when the terms of the contract are properly adhered to is in the listing of the Service1.vb file shown here:

' NOTE: If you change the class name "Service1" here, you must also update the reference to "Service1" in App.config.
Public Class Service1
    Implements IService1

    Public Function GetData(ByVal value As Integer) As _
        String Implements IService1.GetData
        Return String.Format("You entered: {0}", value)
    End Function

    Public Function GetDataUsingDataContract(ByVal composite As CompositeType) _
        As CompositeType Implements IService1.GetDataUsingDataContract
        If composite.BoolValue Then
            composite.StringValue = (composite.StringValue & "Suffix")
        End If
        Return composite
    End Function

End Class

Service1 defines the two methods of the service by way of Functions. GetData accepts a number and returns a string; for example, if the Client enters a value of 50, the Server response will be "You entered: 50". The function GetDataUsingDataContract takes a composite of a Boolean and a string, and returns it with 'Suffix' appended to the string when the Boolean is True.
JayWcf01 is a complete program in itself, with a default example contract, IService1, and a defined service, Service1. It is good practice to provide your own names for the objects; notwithstanding that, the default names are accepted in this demo. In what follows, we test this program as-is, and then slightly modify the contract and test it again. The testing in the next section will invoke a built-in client, and later on we will publish the service to localhost, which is an IIS 7 web server.

How to test this program

The program has a valid pair of contract and service, and we should be able to test this service. Windows Communication Foundation allows Visual Studio 2008 (and also Visual Studio 2010 Express) to launch a host to test the service with a client. Build the program and, after it succeeds, hit F5. The WcfSvcHost is spawned and stays in the taskbar, as shown. You can click WcfSvcHost to display the WCF Service Host window popping up, as shown. The host gets started, as shown here. The service is hosted on the development server. This is immediately followed by the WCF Test Client user interface popping up, as shown. In this harness you can test the service.
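If you prefer code to the test harness, a tiny console client can exercise the same contract through a ChannelFactory. This is only a sketch: it assumes the client project references the service library (so IService1 is visible), and the endpoint address below is a placeholder for whatever address your WCF Service Host window actually reports:

Imports System.ServiceModel

Module TestClient
    Sub Main()
        ' Placeholder address: copy the real one from the WCF Service Host window
        Dim address As New EndpointAddress("http://localhost:8731/Design_Time_Addresses/JayWcf01/Service1/")
        Dim factory As New ChannelFactory(Of IService1)(New WSHttpBinding(), address)
        Dim proxy As IService1 = factory.CreateChannel()
        Console.WriteLine(proxy.GetData(50)) ' Expect: "You entered: 50"
        factory.Close()
    End Sub
End Module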

Creating a Recent Comments Widget in Agile

Packt
10 Aug 2010
7 min read
(For more resources on Agile, see here.)

Introducing CWidget

Lucky for us, Yii is ready-made to help us achieve this architecture. Yii provides a component class, called CWidget, which is intended for exactly this purpose. A Yii widget is an instance of this class (or a child class), and is a presentational component typically embedded in a view file to display self-contained, reusable user interface features. We are going to use a Yii widget to build a recent comments portlet and display it on the main project details page, so we can see comment activity across all issues related to the project. To demonstrate the ease of reuse, we'll take it one step further and also display a list of project-specific comments on the project details page. To begin creating our widget, we are going to first add a new public method to our Comment AR model class to return the most recently added comments. As expected, we will begin by writing a test. But before we write the test method, let's update our comment fixture data so that we have a couple of comments to use throughout our testing. Create a new file called tbl_comment.php within the protected/tests/fixtures folder. Open that file and add the following content:

<?php
return array(
    'comment1'=>array(
        'content' => 'Test comment 1 on issue bug number 1',
        'issue_id' => 1,
        'create_time' => '',
        'create_user_id' => 1,
        'update_time' => '',
        'update_user_id' => '',
    ),
    'comment2'=>array(
        'content' => 'Test comment 2 on issue bug number 1',
        'issue_id' => 1,
        'create_time' => '',
        'create_user_id' => 1,
        'update_time' => '',
        'update_user_id' => '',
    ),
);

Now we have consistent, predictable, and repeatable comment data to work with. Create a new unit test file, protected/tests/unit/CommentTest.php, and add the following content:

<?php
class CommentTest extends CDbTestCase
{
    public $fixtures=array(
        'comments'=>'Comment',
    );

    public function testRecentComments()
    {
        $recentComments=Comment::findRecentComments();
        $this->assertTrue(is_array($recentComments));
    }
}

This test will of course fail, as we have not yet added the Comment::findRecentComments() method to the Comment model class. So, let's add that now. We'll go ahead and add the full method we need, rather than adding just enough to get the test to pass. But if you are following along, feel free to move at your own TDD pace. Open Comment.php and add the following public static method:

public static function findRecentComments($limit=10, $projectId=null)
{
    if($projectId != null)
    {
        return self::model()->with(array(
            'issue'=>array('condition'=>'project_id='.$projectId)))->findAll(array(
                'order'=>'t.create_time DESC',
                'limit'=>$limit,
            ));
    }
    else
    {
        //get all comments across all projects
        return self::model()->with('issue')->findAll(array(
            'order'=>'t.create_time DESC',
            'limit'=>$limit,
        ));
    }
}

Our new method takes two optional parameters: one to limit the number of returned comments, the other to specify a specific project ID to which all of the comments should belong. The second parameter will allow us to use our new widget to display all comments for a project on the project details page. So, if the input project ID is specified, the method restricts the returned results to only those comments associated with the project; otherwise, all comments across all projects are returned.

More on relational AR queries in Yii

The above two relational AR queries are a little new to us. We have not been using many of these options in our previous queries.
Previously, we have been using the simplest approach to executing relational queries:

1. Load the AR instance.
2. Access the relational properties defined in the relations() method.

For example, if we wanted to query for all of the issues associated with, say, project ID #1, we would execute the following two lines of code:

// retrieve the project whose ID is 1
$project=Project::model()->findByPk(1);
// retrieve the project's issues: a relational query is actually being performed behind the scenes here
$issues=$project->issues;

This familiar approach uses what is referred to as Lazy Loading. When we first create the project instance, the query does not return all of the associated issues. It only retrieves the associated issues upon an initial, explicit request for them, that is, when $project->issues is executed. This is referred to as lazy because it waits to load the issues. This approach is convenient, and it can also be very efficient, especially in those cases where the associated issues may not be required. However, in other circumstances, this approach can be somewhat inefficient. For example, if we wanted to retrieve the issue information across N projects, then using this lazy approach would involve executing N join queries. Depending on how large N is, this could be very inefficient. In these situations, we have another option. We can use what is called Eager Loading. The Eager Loading approach retrieves the related AR instances at the same time as the main AR instances are requested. This is accomplished by using the with() method in concert with either the find() or findAll() methods of the AR query. Sticking with our project example, we could use Eager Loading to retrieve all issues for all projects by executing the following single line of code:

//retrieve all project AR instances along with their associated issue AR instances
$projects = Project::model()->with('issues')->findAll();

Now, in this case, every project AR instance in the $projects array already has its associated issues property populated with an array of issue AR instances. This result has been achieved by using just a single join query. We are using this approach in both of the relational queries executed in our findRecentComments() method. The one we are using to restrict the comments to a specific project is slightly more complex. As you can see, we are specifying a query condition on the eagerly loaded issue property for the comments. Let's look at the following line:

Comment::model()->with(array('issue'=>array('condition'=>'project_id='.$projectId)))->findAll();

This query specifies a single join between the tbl_comment and the tbl_issue tables. Sticking with project ID #1 for this example, the previous relational AR query would basically execute something similar to the following SQL statement:

SELECT tbl_comment.*, tbl_issue.* FROM tbl_comment LEFT OUTER JOIN tbl_issue ON (tbl_comment.issue_id=tbl_issue.id) WHERE (tbl_issue.project_id=1)

The added array we specify in the findAll() method simply sets an order by clause and a limit clause on the executed SQL statement. One last thing to note about the two queries we are using is how the column names that are common to both tables are disambiguated. Obviously, when the two tables being joined have columns with the same name, we have to make a distinction between the two in our query. In our case, both tables have the create_time column defined. We are trying to order by this column in the tbl_comment table and not the one defined in the tbl_issue table.
In a relational AR query in Yii, the alias name for the primary table is fixed as t, while the alias name for a relational table, by default, is the same as the corresponding relation name. So, in our two queries, we specify t.create_time to indicate that we want to use the primary table's column. If we instead wanted to order by the issue's create_time column, we would alter the second query, for example, as such:

return Comment::model()->with('issue')->findAll(array(
    'order'=>'issue.create_time DESC',
    'limit'=>$limit,
));
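With findRecentComments() in place, the widget class itself can stay very thin. Here is a minimal sketch of what the CWidget subclass could look like; the class name, property names, and view file are our own choices for illustration and may differ from the final version:

<?php
// protected/components/RecentComments.php
class RecentComments extends CWidget
{
    public $displayLimit = 5;   // how many comments to show
    public $projectId = null;   // restrict to one project, or null for all

    private $_comments;

    public function init()
    {
        $this->_comments = Comment::findRecentComments(
            $this->displayLimit, $this->projectId);
    }

    public function run()
    {
        // renders protected/components/views/recentComments.php
        $this->render('recentComments', array('comments' => $this->_comments));
    }
}

Embedding it in any view is then a one-liner, for example $this->widget('RecentComments', array('projectId' => $model->id)); which is exactly the reuse the portlet architecture is after.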

Oracle Enterprise Manager Key Concepts and Subsystems

Packt
10 Aug 2010
7 min read
(For more resources on Oracle, see here.)

Target

The term 'target' refers to an entity that is managed via Enterprise Manager Grid Control. The target is the most important entity in Enterprise Manager Grid Control; all other processes and subsystems revolve around the target subsystem. For each target there is a model of the target that is saved in the Enterprise Manager Repository. In this article, we will use the terms target and target model interchangeably. The major building blocks of the target subsystem are:

- Target definition: All targets are organized into different categories, just like the actual entities that they represent; for example, there is a WebLogic Server target, an Oracle Database target, and so on. These categories are called target types. For each target type there is a definition in XML format that is available with the agent as well as with the repository. This definition includes:
  - Target attributes: There are some attributes that are common across all target types, and there are some attributes specific to a particular target type. An example of a common attribute is the target name, which uniquely identifies a managed entity. An example of a target-type-specific attribute is the name of the WebLogic Domain for a WebLogic Server target. Some of the attributes provide connection details for connecting to the monitored entity, such as the WebLogic Domain host and port. Some other attributes contain authentication information to authenticate and connect to the monitored entity.
  - Target associations: The target type definition includes the associations between related targets; for example, an OC4J target will have its association defined with the corresponding Oracle Application Server.
  - Target metrics: This includes all the metrics that need to be collected for a given target and the source for those metrics. We'll cover this in greater detail in the Metrics subsystem.

Every target that is managed through the EM belongs to one, and only one, target type category. For any new entity that needs to be managed by the Enterprise Manager, an instance of the appropriate target type is created and persisted in the repository. Out of the box, Enterprise Manager provides definitions for the most common target types, such as Host, Oracle Database, Oracle WebLogic Server, Siebel suite, SQL Server, SAP, the .NET platform, IBM WebSphere application server, JBoss application server, MQSeries, and so on. For a complete list of out-of-the-box targets, please refer to the Oracle website. Now that we have a good idea about the target definition, it's time we get to know more about the target lifecycle.

Target lifecycle

As the target is very central to the Enterprise Manager, it's very important that we understand each stage in the target lifecycle. Please note that not all the stages of the lifecycle may be needed for each target. However, to proceed further we need to understand each step in the target lifecycle. Enterprise Manager automates many of these stages, so in a real-life scenario many of these steps may be transparent to the user. For example, the Discovery and Configuration for monitoring stages are completely automated for the Oracle Application Server.
Discovery of a target

Discovery is the first step in the target lifecycle. Discovery is a process that finds the entities that need to be managed, builds the required target model for those entities, and persists the model in the management repository. For example, a discovery process executed on a Linux server learns that there are OC4J containers on that server; it builds target models for the OC4Js and the Linux server, and it persists the target models in the repository. The agent has various discovery scripts, and those scripts are used to identify the various target types. Besides discovery, these scripts build a model for the discovered target and fill in all of the attributes for that target. We learnt about target attributes in the previous section. Some discovery scripts are executed automatically as part of the agent installation, and therefore no user input is needed for discovery. For example, a discovery script for the Oracle Application Server is automatically triggered when an agent is installed. On the other hand, there are some discovery scripts where the user needs to provide some input parameters. An example of this is the WebLogic server, where the user needs to provide the port number of the WebLogic Administration Server and credentials to authenticate and connect to it. The Enterprise Manager console provides an interface for such discovery. Discovery of targets can happen in two modes: local mode and remote mode. In local mode, the agent is running locally on the same host as the target. In remote discovery mode, the agent can be running on a different host. All targets can be discovered in local mode, and some targets can also be discovered in remote mode. For example, discovery of WebLogic servers can happen in local as well as remote mode. One important point to note is that the agent that discovered the target does the monitoring of that target. For example, if a WebLogic Server target is discovered through a remote agent, it gets monitored through that same remote agent.

Configuration for monitoring

After discovery, the target needs to be configured for monitoring. The user needs to provide the parameters that the agent uses to connect to the target and collect metrics. These parameters include monitoring credentials and host and port information, using which the agent can connect to the target to fetch the metrics. The Enterprise Manager uses these parameters to connect, authenticate, and collect metrics from the targets. For example, to monitor an Oracle database, the end user needs to provide the user ID and password that can be used for authentication when collecting performance metrics using the SNMP protocol. The Enterprise Manager Console provides an interface for configuring these parameters. For some targets, such as the Application Server, this step is not needed, as all the metrics can be fetched anonymously. For some other targets, such as Oracle BPEL Process Manager, this step is needed only for detailed metrics; basic metrics are available without any monitoring configuration, but for advanced metrics, monitoring credentials need to be provided by the end user. In this case, the monitoring credentials are the user ID and password used to authenticate when connecting to BPEL Process Manager for collecting performance metrics.

Updates to a target

Over a period of time, some target properties, attributes, and associations with other targets change, and the EM target model that represents the target should be updated to reflect the changes.
It is very important that end users see the correct model from Enterprise Manager, to ensure that all targets are monitored correctly. For example, in a given WebLogic Cluster, if a new WebLogic Server is added and an existing WebLogic Server is removed, Enterprise Manager's target model needs to reflect that. Or, if the credentials to connect to the WebLogic Admin Server are changed, the target model should be updated with the new credentials. The Enterprise Manager console provides a UI to update such properties. If the target model is not updated, there is a risk that some entity may not be monitored; for example, if a new WebLogic server is added but the target model of the domain is not updated, the new WebLogic server will not be monitored.

Stopping monitoring of a target

Each IT resource has some maintenance window or planned downtime. During such time, it's desirable to stop monitoring a target and collecting metrics for that resource. This can be achieved by putting the target into a blackout state. In a blackout state, agents do not collect monitoring data for a target, and they do not generate alerts. After the maintenance activity is over, the blackout can be cleared from the target and routine monitoring can start again. The Enterprise Manager Console provides an interface for creating and removing the blackout state for one or more targets.
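For scripted environments, blackouts can also be driven from the EM command-line interface. The emcli verbs create_blackout and stop_blackout exist for this purpose, but treat the parameter syntax below as indicative only; the target name is made up, and the exact flags should be verified against the emcli documentation for your Grid Control version:

$ emcli create_blackout -name="monthly_patching" -reason="planned maintenance" -add_targets="myhost:host" -schedule="duration:0:30"
$ emcli stop_blackout -name="monthly_patching"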

Adding User Comments in Agile

Packt
10 Aug 2010
5 min read
(For more resources on Agile, see here.)

Iteration planning

The goal of this iteration is to implement feature functionality in the TrackStar application to allow users to leave and read comments on issues. When a user is viewing the details of any project issue, they should be able to read all comments previously added, as well as create a new comment on the issue. We also want to add a small fragment of content, or portlet, to the project-listing page that displays a list of recent comments left on all of the issues. This will be a nice way to provide a window into recent user activity and allow easy access to the latest issues that have active conversations. The following is a list of high-level tasks that we will need to complete in order to achieve these goals:

- Design and create a new database table to support comments
- Create the Yii AR class associated with our new comments table
- Add a form directly to the issue details page to allow users to submit comments
- Display a list of all comments associated with an issue directly on the issue details page

Creating the model

As always, we should run our existing test suite at the start of our iteration to ensure all of our previously written tests are still passing as expected. By this time, you should be familiar with how to do that, so we will leave it to the reader to ensure that all the unit tests are passing before proceeding. We first need to create a new table to house our comments. Following is the basic DDL definition for the table that we will be using:

CREATE TABLE tbl_comment
(
    `id` INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
    `content` TEXT NOT NULL,
    `issue_id` INTEGER,
    `create_time` DATETIME,
    `create_user_id` INTEGER,
    `update_time` DATETIME,
    `update_user_id` INTEGER
)

As each comment belongs to a specific issue, identified by the issue_id, and is written by a specific user, indicated by the create_user_id identifier, we also need to define the following foreign key relationships:

ALTER TABLE `tbl_comment` ADD CONSTRAINT `FK_comment_issue` FOREIGN KEY (`issue_id`) REFERENCES `tbl_issue` (`id`);
ALTER TABLE `tbl_comment` ADD CONSTRAINT `FK_comment_author` FOREIGN KEY (`create_user_id`) REFERENCES `tbl_user` (`id`);

If you are following along, please ensure this table is created in both the trackstar_dev and trackstar_test databases. Once a database table is in place, creating the associated AR class is a snap. We simply use the Gii code creation tool's Model Generator command and create an AR class called Comment. Since we have already created the model class for issues, we will need to explicitly add the relations for comments to the Issue model class. We will also add a relationship as a statistical query to easily retrieve the number of comments associated with a given issue (just as we did in the Project AR class for issues). Alter the Issue::relations() method as such:

public function relations()
{
    return array(
        'requester' => array(self::BELONGS_TO, 'User', 'requester_id'),
        'owner' => array(self::BELONGS_TO, 'User', 'owner_id'),
        'project' => array(self::BELONGS_TO, 'Project', 'project_id'),
        'comments' => array(self::HAS_MANY, 'Comment', 'issue_id'),
        'commentCount' => array(self::STAT, 'Comment', 'issue_id'),
    );
}

Also, we need to change our newly created Comment AR class to extend our custom TrackStarActiveRecord base class, so that it benefits from the logic we placed in the beforeValidate() method.
Simply alter the beginning of the class definition as such:

<?php
/**
 * This is the model class for table "tbl_comment".
 */
class Comment extends TrackStarActiveRecord
{

We'll make one last small change to the definitions in the Comment::relations() method. The relational attributes were named for us when the class was created. Let's change the one named createUser to author, as this related user does represent the author of the comment. This is just a semantic change, but it will help to make our code easier to read and understand. Change the method as such:

/**
 * @return array relational rules.
 */
public function relations()
{
    // NOTE: you may need to adjust the relation name and the related
    // class name for the relations automatically generated below.
    return array(
        'author' => array(self::BELONGS_TO, 'User', 'create_user_id'),
        'issue' => array(self::BELONGS_TO, 'Issue', 'issue_id'),
    );
}

Creating the Comment CRUD

Once we have an AR class in place, creating the CRUD scaffolding for managing the related entity is equally easy. Again, use the Gii code generation tool's Crud Generator command with the AR class name, Comment, as the argument. Although we will not immediately implement full CRUD operations for our comments, it is nice to have the scaffolding for the other operations in place. As long as we are logged in, we should now be able to view the autogenerated comment submission form via the following URL: http://localhost/trackstar/index.php?r=comment/create
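For reference, the beforeValidate() logic mentioned above lives in the TrackStarActiveRecord base class created earlier in the book. A sketch of the idea, reconstructed rather than quoted, looks like this:

<?php
// A reconstruction of the idea behind TrackStarActiveRecord: stamp the
// audit columns before validation so every AR subclass gets them for free.
abstract class TrackStarActiveRecord extends CActiveRecord
{
    protected function beforeValidate()
    {
        if ($this->isNewRecord) {
            // set the create and update stamps and user IDs on insert
            $this->create_time = $this->update_time = new CDbExpression('NOW()');
            $this->create_user_id = $this->update_user_id = Yii::app()->user->id;
        } else {
            // an existing record only needs the update stamp refreshed
            $this->update_time = new CDbExpression('NOW()');
            $this->update_user_id = Yii::app()->user->id;
        }
        return parent::beforeValidate();
    }
}

Because Comment now extends this class, the empty create_time and create_user_id values in our fixtures get filled in consistently at save time.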

More Things you can do with Oracle Content Server workflows

Packt
09 Aug 2010
5 min read
(For more resources on Oracle, see here.)

The top three things

As we've just seen, the most common things you can do are these:

- Get content approved: This is the most obvious use of the workflow we've just seen.
- Get people notified: Remember, when we were adding workflow steps there was a number of required approvers on the Exit Conditions tab in the Add New Step dialog. If we set that to zero, we accomplish one important thing: approvers will get notified, but no action is required of them. It's a great way to "subscribe" a select group of people to an event of your choice.
- Perform custom actions: And if that's not enough, you can easily add custom scripts to any step of a workflow. You can change metadata, release items, and send them to other workflows. You can even invoke your custom Java code.

And here's another really powerful thing you can do with custom workflow actions: you can integrate with other systems and move from the local workflow to process orchestration. You can use a Content Server workflow to trigger external processes. UCM 10gR3 has an Oracle BPEL integration built in. This means that a UCM workflow can be initiated by (or can itself initiate) a BPEL workflow that spans many systems, not just the UCM. This makes ERP systems such as Siebel, PeopleSoft, SAP, and Oracle E-Business Suite easily accessible to the UCM, and content inside the UCM can be easily made available to these systems. So let's look at the jumps and scripting.

Jumps and scripting

Here's how to add scripting to a workflow:

1. In Workflow Admin, select a step of the workflow we've just created.
2. Click on the Edit button on the right. The Edit Step dialog comes up.
3. Go to the Events tab (as shown in the following screenshot):

There are three events that you can add custom handlers for:

- Entry: This event triggers when an item arrives at the step.
- Update: This happens when an item or its metadata is updated. It's also initiated every hour by a timer event, Workflow Update Cycle. Use it for sending reminders to approvers or escalating the item to an alternative person after your approval period has expired.
- Exit: This event is triggered when an item has been approved and is about to exit the step. If you have defined Additional Exit Conditions on the Exit Conditions tab, then those will be satisfied before this event fires.

The following diagram illustrates the sequence of states and corresponding events that are fired when a content item arrives at a workflow step:

Great! But how do we actually add the jumps and custom scripts to a workflow step?

How to add a jump to a workflow step

Let's add an exception where content submitted by sysadmin will bypass our Manager Approval workflow. We will use a jump: a construct that causes an item to skip the normal workflow sequence and follow an alternative path. Here's how to do it:

1. Add a jump to the Entry event of our very first step. On the Events tab of the Edit Step dialog, click on the Edit button, the one next to the Entry event. The Edit Script dialog displays (as shown in the following screenshot):
2. Click on the Add button. The Add Jump dialog comes up (as shown in the following screenshot):
3. Let's call the jump Sysadmin WF bypass. You don't need to change anything else at this point. Click on OK to get back to the Edit Script dialog.
4. In the Field drop-down box, pick Author.
5. Click on the Select… button next to the Value box. Pick sysadmin (if you have trouble locating sysadmin in the list of users, make sure that the filter checkbox is unchecked).
6. Click the Add button below the Value field. Make sure that your clause appears in the Script Clauses box below.
7. In the Target Step dropdown, pick Next Step. Once you have done so, the value will change to its script equivalent, @wfCurrentStep(1). If you have more than one step in the workflow, change 1 to the number of steps you have. This will make sure that you jump past the last step and exit the workflow. Here's how the completed dialog will look (as shown in the following screenshot):
8. Click on OK to close. You're now back on the Events tab of the Edit Step dialog. Notice the few lines of script added to the box next to the Entry event (as shown in the following screenshot):
9. OK the dialog.

It's time to test your changes. Check in a new document. Make sure you set the Author field to sysadmin. Set your Security Group to accounting, and Account to accounting/payable/current. If you don't, the item will not enter our workflow in the first place (as shown in the following screenshot). Complete your check-in and follow the link to go to the Content Info page. Check the status of the item: it should be set to Released. That's right: the item went right out of the workflow. Check in a new document again, but use some other author. Notice how your item will enter the workflow and stay there. As you've seen, the dialog we used for creating a jump is simply a code generator. It created the few lines of script needed to add the handler for the Entry event. Click on the Edit button next to that code and pick Edit Current to study it. You can find all the script function definitions in the iDoc Reference Guide. Perfect! And we're still not done. What if you have a few common steps that you'd like to reuse in a bunch of workflows? Would you just have to manually recreate them? Nope. There are several solutions that allow you to reuse parts of a workflow. The one I find to be most useful is sub-workflows.
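One last aside on the generated script: if you do open it with Edit Current, expect plain Idoc Script along these lines. This is a reconstruction from the dialog's inputs, not a verbatim copy, and the exact code Workflow Admin emits may differ slightly between versions:

<$if dDocAuthor like "sysadmin"$>
    <$wfSet("wfJumpName", "Sysadmin WF bypass")$>
    <$wfSet("wfJumpTargetStep", wfCurrentStep(1))$>
<$endif$>

Here dDocAuthor is the content item's author field, and the wfSet() calls record the jump name and its target step, which is why changing the Target Step dropdown rewrote the value to @wfCurrentStep(1).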