
How-To Tutorials - CMS & E-Commerce


Distributed transaction using WCF

Packt
17 Jun 2010
12 min read
(Read more interesting articles on WCF 4.0 here.)

Creating the DistNorthwind solution

In this article, we will create a new solution based on the LINQNorthwind solution. We will copy all of the source code from the LINQNorthwind directory to a new directory and then customize it to suit our needs. The steps here are very similar to the steps in the previous chapter, when we created the LINQNorthwind solution. Please refer to the previous chapter for diagrams.

Follow these steps to create the new solution:

1. Create a new directory named DistNorthwind under the existing C:\SOAwithWCFandLINQ\Projects directory.
2. Copy all of the files under the C:\SOAwithWCFandLINQ\Projects\LINQNorthwind directory to the C:\SOAwithWCFandLINQ\Projects\DistNorthwind directory.
3. Remove the folder LINQNorthwindClient. We will create a new client for this solution.
4. Change all the folder names under the new folder, DistNorthwind, from LINQNorthwindxxx to DistNorthwindxxx.
5. Rename the solution files from LINQNorthwind.sln to DistNorthwind.sln and from LINQNorthwind.suo to DistNorthwind.suo.

Now we have the file structure ready for the new solution, but all the file contents and the solution structure still refer to the old solution. Next we need to change them to work for the new solution. We will first change all the related WCF service files. Once we have the service up and running, we will create a new client to test this new service.

6. Start Visual Studio 2010 and open the solution C:\SOAwithWCFandLINQ\Projects\DistNorthwind\DistNorthwind.sln. Click on the OK button to close the warning dialog that says some projects were not loaded correctly.
7. From Solution Explorer, remove all five projects (they should all be unavailable).
8. Right-click on the solution item and select Add | Existing Projects… to add these four projects to the solution.
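The copy-and-rename steps above are mechanical, so if you recreate the solution often they can be scripted. Below is a minimal Python sketch (not part of the original article; the paths and prefixes are examples you would adapt) that copies a solution directory and renames every folder and file carrying the old prefix:

```python
import os
import shutil

def clone_solution(src, dst, old_prefix="LINQNorthwind", new_prefix="DistNorthwind"):
    """Copy a solution directory, then rename prefixed folders and files."""
    shutil.copytree(src, dst)
    # Walk bottom-up so items are renamed before their parent directories.
    for root, dirs, files in os.walk(dst, topdown=False):
        for name in dirs + files:
            if name.startswith(old_prefix):
                new_name = new_prefix + name[len(old_prefix):]
                os.rename(os.path.join(root, name), os.path.join(root, new_name))
```

For example, `clone_solution(r"C:\SOAwithWCFandLINQ\Projects\LINQNorthwind", r"C:\SOAwithWCFandLINQ\Projects\DistNorthwind")` would perform steps 1, 2, 4, and 5 in one go; removing the old client folder and fixing file contents would still be manual.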
Note that these are the projects under the DistNorthwind folder, not under the LINQNorthwind folder: LINQNorthwindEntities.csproj, LINQNorthwindDAL.csproj, LINQNorthwindLogic.csproj, and LINQNorthwindService.csproj.

9. In Solution Explorer, change all four projects' names from LINQNorthwindxxx to DistNorthwindxxx.
10. In Solution Explorer, right-click on each project and select Properties (or select the menu Project | DistNorthwindxxx Properties), then change the Assembly name from LINQNorthwindxxx to DistNorthwindxxx and the Default namespace from MyWCFServices.LINQNorthwindxxx to MyWCFServices.DistNorthwindxxx.
11. Open the following files and change the word LINQNorthwind to DistNorthwind wherever it occurs: ProductEntity.cs, ProductDAO.cs, ProductLogic.cs, IProductService.cs, and ProductService.cs.
12. Open the file app.config in the DistNorthwindService project and change the word LINQNorthwind to DistNorthwind in this file.

The screenshot below shows the final structure of the new solution, DistNorthwind.

Now we have finished modifying the service projects. If you build the solution now, you should see no errors. You can set the service project as the startup project and run the program.

Hosting the WCF service in IIS

The WCF service is currently hosted within WCF Service Host. We had to start the WCF Service Host before we ran our test client, and not only do you have to start the WCF Service Host, you also have to start the WCF Test Client and leave it open. This is not ideal. In addition, we will add another service later in this article to test distributed transaction support with two databases, and it is not easy to host two services with one WCF Service Host. So, in this article, we will first decouple our WCF service from Visual Studio and host it in IIS.

You can follow these steps to host this WCF service in IIS:

1. In Windows Explorer, go to the directory C:\SOAwithWCFandLINQ\Projects\DistNorthwind\DistNorthwindService.
2. Within this folder, create a new text file, ProductService.svc, containing the following single line of code:

<%@ServiceHost Service="MyWCFServices.DistNorthwindService.ProductService"%>

3. Again within this folder, copy the file App.config to Web.config and remove the following lines from the new Web.config file:

<host>
  <baseAddresses>
    <add baseAddress="http://localhost:8080/Design_Time_Addresses/MyWCFServices/DistNorthwindService/ProductService/" />
  </baseAddresses>
</host>

4. Now open IIS Manager, add a new application, DistNorthwindService, and set its physical path to C:\SOAwithWCFandLINQ\Projects\DistNorthwind\DistNorthwindService. If you choose to use the default application pool, DefaultAppPool, make sure it is a .NET 4.0 application pool. If you are using Windows XP, you can create a new virtual directory, DistNorthwindService, set its local path to the above directory, and make sure its ASP.NET version is 4.0.
5. From Visual Studio, in Solution Explorer, right-click on the project item DistNorthwindService, select Properties, click on the Build Events tab, and enter the following code in the Post-build event command line box:

copy .\*.* ..\

With this post-build event command line, whenever DistNorthwindService is rebuilt, the service binary files will be copied to the C:\SOAwithWCFandLINQ\Projects\DistNorthwind\DistNorthwindService\bin directory so that the service hosted in IIS will always be up to date.

6. From Visual Studio, in Solution Explorer, right-click on the project item DistNorthwindService and select Rebuild.

Now you have finished setting up the service to be hosted in IIS.
Open Internet Explorer, go to the following address, and you should see the ProductService description in the browser: http://localhost/DistNorthwindService/ProductService.svc

Testing the transaction behavior of the WCF service

Before explaining how to enhance this WCF service to support distributed transactions, we will first confirm that the existing WCF service doesn't support them. In this article, we will test the following scenarios:

1. Create a WPF client that calls the service twice in one method. The first service call should succeed and the second should fail. Verify that the update made in the first service call has been committed to the database, which means that the WCF service does not support distributed transactions.
2. Wrap the two service calls in one TransactionScope and redo the test. Verify that the update made in the first service call has still been committed to the database, which means the WCF service does not support distributed transactions even when both service calls are within one transaction scope.
3. Add support for a second database to the WCF service. Modify the client to update both databases in one method. The first update should succeed and the second should fail. Verify that the first update has been committed to the database, which means the WCF service does not support distributed transactions across multiple databases.

Creating a client to call the WCF service sequentially

The first scenario to test is this: within one method of the client application, two service calls are made and one of them fails. We then verify whether the update made in the successful service call has been committed to the database. If it has been, the two service calls are not within a single atomic transaction, which indicates that the WCF service doesn't support distributed transactions.
You can follow these steps to create a WPF client for this test case:

1. In Solution Explorer, right-click on the solution item and select Add | New Project… from the context menu.
2. Select Visual C# | WPF Application as the template.
3. Enter DistNorthwindWPF as the Name.
4. Click on the OK button to create the new client project.

Now the new test client should have been created and added to the solution. Let's customize this client so that we can call ProductService twice within one method and test the distributed transaction support of this WCF service.

On the WPF MainWindow designer surface, add the following controls (you can double-click on the MainWindow.xaml item to open this window; make sure you are in design mode, not XAML mode):

- A label with Content Product ID
- Two textboxes named txtProductID1 and txtProductID2
- A button named btnGetProduct with Content Get Product Details
- A separator to separate the above controls from those below
- Two labels with Content Product1 Details and Product2 Details
- Two textboxes named txtProduct1Details and txtProduct2Details, with the following properties: AcceptsReturn checked, Background Beige, HorizontalScrollbarVisibility Auto, VerticalScrollbarVisibility Auto, IsReadOnly checked
- Another separator
- A label with Content New Price
- Two textboxes named txtNewPrice1 and txtNewPrice2
- A button named btnUpdatePrice with Content Update Price
- Another separator
- Two labels with Content Update1 Results and Update2 Results
- Two textboxes named txtUpdate1Results and txtUpdate2Results, with the same properties as txtProduct1Details and txtProduct2Details

Your MainWindow design surface should look like the following screenshot.

In Solution Explorer, right-click on the DistNorthwindWPF project item, select Add Service Reference…, and add a service reference of the product service to the project. The namespace of this service reference should be ProductServiceProxy, and the URL of the product service should be: http://localhost/DistNorthwindService/ProductService.svc

On the MainWindow.xaml designer surface, double-click on the Get Product Details button to create an event handler for this button.

In the MainWindow.xaml.cs file, add the following using statement:

using DistNorthwindWPF.ProductServiceProxy;

Again in the MainWindow.xaml.cs file, add the following two class members:

Product product1, product2;

Now add the following method to the MainWindow.xaml.cs file:

private string GetProduct(TextBox txtProductID, ref Product product)
{
    string result = "";
    try
    {
        int productID = Int32.Parse(txtProductID.Text.ToString());
        ProductServiceClient client = new ProductServiceClient();
        product = client.GetProduct(productID);
        StringBuilder sb = new StringBuilder();
        sb.Append("ProductID:" + product.ProductID.ToString() + "\n");
        sb.Append("ProductName:" + product.ProductName + "\n");
        sb.Append("UnitPrice:" + product.UnitPrice.ToString() + "\n");
        sb.Append("RowVersion:");
        foreach (var x in product.RowVersion.AsEnumerable())
        {
            sb.Append(x.ToString());
            sb.Append(" ");
        }
        result = sb.ToString();
    }
    catch (Exception ex)
    {
        result = "Exception: " + ex.Message.ToString();
    }
    return result;
}

This method calls the product service to retrieve a product from the database, formats the product details into a string, and returns that string to be displayed on the screen. The product object is also returned so that later on we can reuse it to update the price of the product.
Inside the event handler of the Get Product Details button, add the following two lines of code to get and display the product details:

txtProduct1Details.Text = GetProduct(txtProductID1, ref product1);
txtProduct2Details.Text = GetProduct(txtProductID2, ref product2);

Now we have finished adding code to retrieve products from the database through the Product WCF service. Set DistNorthwindWPF as the startup project, press Ctrl + F5 to start the WPF test client, enter 30 and 31 as the product IDs, and then click on the Get Product Details button. You should get a window like the following image.

To update the prices of these two products, add the code to the project as follows. On the MainWindow.xaml design surface, double-click on the Update Price button to add an event handler for this button, then add the following method to the MainWindow.xaml.cs file:

private string UpdatePrice(
    TextBox txtNewPrice, ref Product product, ref bool updateResult)
{
    string result = "";
    try
    {
        product.UnitPrice = Decimal.Parse(txtNewPrice.Text.ToString());
        ProductServiceClient client = new ProductServiceClient();
        updateResult = client.UpdateProduct(ref product);
        StringBuilder sb = new StringBuilder();
        if (updateResult == true)
        {
            sb.Append("Price updated to ");
            sb.Append(txtNewPrice.Text.ToString());
            sb.Append("\n");
            sb.Append("Update result:");
            sb.Append(updateResult.ToString());
            sb.Append("\n");
            sb.Append("New RowVersion:");
        }
        else
        {
            sb.Append("Price not updated to ");
            sb.Append(txtNewPrice.Text.ToString());
            sb.Append("\n");
            sb.Append("Update result:");
            sb.Append(updateResult.ToString());
            sb.Append("\n");
            sb.Append("Old RowVersion:");
        }
        foreach (var x in product.RowVersion.AsEnumerable())
        {
            sb.Append(x.ToString());
            sb.Append(" ");
        }
        result = sb.ToString();
    }
    catch (Exception ex)
    {
        result = "Exception: " + ex.Message;
    }
    return result;
}

This method calls the product service to update the price of a product in the database.
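The RowVersion returned with each update suggests the service relies on optimistic concurrency: an update only succeeds if the caller still holds the current row version. As an illustration of that pattern only (a Python/sqlite3 sketch with made-up table and column names, not the article's C#/WCF code), the version check can be folded into the UPDATE's WHERE clause:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, price REAL, rowversion INTEGER)")
conn.execute("INSERT INTO products VALUES (30, 25.89, 1)")
conn.commit()

def update_product(conn, product_id, price, rowversion):
    """Update succeeds only if the caller holds the current row version."""
    cur = conn.execute(
        "UPDATE products SET price = ?, rowversion = rowversion + 1 "
        "WHERE id = ? AND rowversion = ?",
        (price, product_id, rowversion))
    conn.commit()
    return cur.rowcount == 1   # False means the row version was stale

ok = update_product(conn, 30, 26.89, 1)      # version matched; row is now version 2
stale = update_product(conn, 30, 27.89, 1)   # stale version; nothing changes
print(ok, stale)   # True False
```

This is why the client keeps the returned product object: without the fresh RowVersion, a second update of the same product would be rejected as stale.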
The update result will be formatted and returned so that we can display it. The updated product object, with the new RowVersion, will also be returned so that later on we can update the price of the same product again and again.

Inside the event handler of the Update Price button, add the following code to update the product prices:

if (product1 == null)
{
    txtUpdate1Results.Text = "Get product details first";
}
else if (product2 == null)
{
    txtUpdate2Results.Text = "Get product details first";
}
else
{
    bool update1Result = false, update2Result = false;
    txtUpdate1Results.Text = UpdatePrice(
        txtNewPrice1, ref product1, ref update1Result);
    txtUpdate2Results.Text = UpdatePrice(
        txtNewPrice2, ref product2, ref update2Result);
}

Testing the sequential calls to the WCF service

Let's run the program now to test the distributed transaction support of the WCF service. We will first update two products with two valid prices to make sure our code works for normal use cases. Then we will update one product with a valid price and the other with an invalid price. We will verify that the update with the valid price has been committed to the database, regardless of the failure of the other update. Follow these steps for this test:

1. Press Ctrl + F5 to start the program.
2. Enter 30 and 31 as product IDs in the top two textboxes and click on the Get Product Details button to retrieve the two products. Note that the prices for these two products are 25.89 and 12.5, respectively.
3. Enter 26.89 and 13.5 as new prices in the middle two textboxes and click on the Update Price button to update these two products. The update results are True for both updates, as shown in the following screenshot.
4. Now enter 27.89 and -14.5 as new prices in the middle two textboxes and click on the Update Price button again. This time the update result for product 30 is still True, but for the second update the result is False.
Finally, click on the Get Product Details button again to refresh the product prices so that we can verify the update results. We know that the second service call should fail, so the second update should not be committed to the database. From the test result we know this is true (the second product's price didn't change). However, from the test result we also know that the first update, made in the first service call, has been committed to the database (the first product's price has changed). This means that the first call to the service is not rolled back even when a subsequent service call fails. Each service call is therefore in a separate, standalone transaction. In other words, the two sequential service calls are not within one distributed transaction.
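The outcome of this test (the first update surviving while the second fails) is exactly what you get whenever each call commits in its own transaction. This small, self-contained Python/sqlite3 sketch (illustrative only; it does not involve WCF, and the schema is made up) contrasts two standalone transactions with one atomic transaction:

```python
import sqlite3

def setup():
    # A CHECK constraint plays the role of the failing second update.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, price REAL CHECK (price > 0))")
    conn.executemany("INSERT INTO products VALUES (?, ?)", [(30, 25.89), (31, 12.5)])
    conn.commit()
    return conn

def update_price(conn, product_id, price):
    conn.execute("UPDATE products SET price = ? WHERE id = ?", (price, product_id))

# Scenario 1: each update committed in its own transaction.
conn = setup()
update_price(conn, 30, 27.89)
conn.commit()
try:
    update_price(conn, 31, -14.5)   # violates the CHECK constraint
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()
# The first update survives even though the second failed.
assert conn.execute("SELECT price FROM products WHERE id = 30").fetchone()[0] == 27.89

# Scenario 2: both updates inside one atomic transaction.
conn = setup()
try:
    update_price(conn, 30, 27.89)
    update_price(conn, 31, -14.5)   # fails, so nothing is committed
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()
# Rolling back undoes the first update as well.
assert conn.execute("SELECT price FROM products WHERE id = 30").fetchone()[0] == 25.89
```

Scenario 2 is the behavior we want from the WCF service once distributed transaction support is added: a failure in any call rolls back the whole unit of work.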

Checkbox Persistence in Tabular Forms (Reports)

Packt
17 Jun 2010
7 min read
(For more resources on Oracle, see here.)

One of the problems we face with Tabular Forms is that pagination doesn't submit the current view of the Tabular Form (Report) page, and if we are using Partial Page Refresh (PPR), it doesn't even reload the entire page. As such, Session State is not saved before we move to the next/previous view. Without saving Session State, all the changes we might have made to the current form view will be lost upon using pagination.

This problematic behavior is most notable when we are using a checkboxes column in our Tabular Form (Report). We can mark specific checkboxes in the current Tabular Form (Report) view, but if we paginate to another view and then return, the marked checkboxes will be cleared (no Session State, no history to rely on). In some cases, it can be very useful to save the marked checkboxes while paginating through the Tabular Form (Report).

Joel Kallman, from the APEX development team, blogged about this issue (http://joelkallman.blogspot.com/2008/03/preserving-checked-checkboxes-in-report.html) and offered a simple solution, which uses AJAX and APEX collections. Using APEX collections means that the marked checkboxes will be preserved for the duration of a specific user's current APEX session. If that's what you need, Joel's solution is very good, as it utilizes built-in APEX resources in an optimal way.

However, sometimes the current APEX session is not persistent enough. In one of my applications I needed more lasting persistence, usable across APEX users and sessions. So I took Joel's idea and modified it a bit. Instead of using APEX collections, I decided to save the checked checkboxes into a database table. A database table, of course, can support unlimited persistence across users.

Report on CUSTOMERS

We are going to use a simple report on the CUSTOMERS table, where the first column is a checkboxes column.
The following is a screenshot of the report region.

We are going to use AJAX to preserve the status of the checkboxes in the following scenarios:

- Using the checkbox in the header of the first column to check or clear all the checkboxes in the first column of the current report view
- Individually checking or clearing a checkbox in a row

The first column (the checkboxes column) represents the CUST_ID column of the CUSTOMERS table, and we are going to implement persistence by saving the values of this column, for all the checked rows, in a table called CUSTOMERS_VIP. This table includes only one column:

CREATE TABLE "CUSTOMERS_VIP"
( "CUST_ID" NUMBER(7,0) NOT NULL ENABLE,
  CONSTRAINT "CUSTOMERS_VIP_PK" PRIMARY KEY ("CUST_ID") ENABLE
)

Bear in mind: in this particular example we are talking about persistence across APEX users and sessions. If, however, you need to maintain user-level persistence, as happens natively when using APEX collections, you can add a second column to the table to hold the APP_USER of the user. In this case, you'll need to amend the appropriate WHERE clauses and INSERT statements to include and reflect the second column.

The report SQL query

The following is the SQL code used for the report:

SELECT apex_item.checkbox(10, l.cust_id, 'onclick=updateCB(this);', r.cust_id) as cust_id,
       l.cust_name,
       l.cust_address1,
       l.cust_address2,
       l.cust_city,
       l.cust_zip_code,
       (select r1.sname from states r1 where l.cust_state = r1.code) state,
       (select r2.cname from countries r2 where l.cust_country = r2.code) country
FROM customers l, customers_vip r
WHERE r.cust_id (+) = l.cust_id
ORDER BY cust_name

The parts of this SELECT statement we are most interested in are the APEX_ITEM.CHECKBOX call and the outer join to CUSTOMERS_VIP. The APEX_ITEM.CHECKBOX function creates a checkboxes column in the report. Its third parameter, p_attributes, allows us to define HTML attributes within the checkbox <input> tag. We are using this parameter to attach an onclick event to every checkbox in the column.
The event fires a JavaScript function, updateCB(this), which takes the current checkbox object as a parameter and initiates an AJAX process.

The fourth parameter of the APEX_ITEM.CHECKBOX function, p_checked_values, allows us to determine the initial status of the checkbox. If the value of this parameter is equal to the value of the checkbox (determined by the second parameter, p_value), the checkbox will be checked. This parameter is the heart of the solution. Its value is taken from the CUSTOMERS_VIP table, using an outer join with the value of the checkbox. The outcome is that every time the CUSTOMERS_VIP table contains a CUST_ID value equal to the current checkbox value, that checkbox will be checked.

The report headers

In the Report Attributes tab we can set the report headers using the Custom option. We are going to use this option to set friendlier report headers, but mostly to define the first column header: a checkbox that allows us to toggle the status of all the column checkboxes. The full HTML code we are using for the header of the first column is:

<input type="checkbox" id="CB" onclick="toggleAll(this,10);" title="Mark/Clear All">

We are actually creating a checkbox with an ID of CB and an onclick event that fires the JavaScript function toggleAll(this,10). The first parameter of this function is a reference to the checkbox object, and the second one is the first parameter, p_idx, of the APEX_ITEM.CHECKBOX function we are using to create the checkbox column.

The AJAX client-side JavaScript functions

So far, we have mentioned two JavaScript functions that initiate an AJAX call. The first, updateCB(), initiates an AJAX call that updates the CUSTOMERS_VIP table according to the status of a single (row) checkbox. The second, toggleAll(), initiates an AJAX call that updates the CUSTOMERS_VIP table according to the status of the entire checkboxes column. Let's review these functions.
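The outer-join trick described above is easy to demonstrate outside APEX. In this Python/sqlite3 sketch (illustrative only; the data is made up, and SQLite's LEFT OUTER JOIN stands in for Oracle's (+) syntax), a row's checkbox renders checked exactly when its CUST_ID appears in CUSTOMERS_VIP:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (cust_id INTEGER PRIMARY KEY, cust_name TEXT);
CREATE TABLE customers_vip (cust_id INTEGER PRIMARY KEY);
INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex'), (3, 'Initech');
INSERT INTO customers_vip VALUES (2);
""")

# r.cust_id is non-NULL only for rows present in CUSTOMERS_VIP; that is the
# value APEX_ITEM.CHECKBOX compares against p_value to decide the initial state.
rows = conn.execute("""
    SELECT l.cust_id, l.cust_name, r.cust_id AS checked_value
    FROM customers l LEFT OUTER JOIN customers_vip r ON r.cust_id = l.cust_id
    ORDER BY l.cust_name
""").fetchall()

checked = {cust_id: checked_value == cust_id for cust_id, name, checked_value in rows}
print(checked)   # {1: False, 2: True, 3: False}
```

Only customer 2 has a row in CUSTOMERS_VIP, so only its checkbox comes up checked after any pagination or page reload.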
The updateCB() JavaScript function

The following is the code of this function:

function updateCB(pItem){
  var get = new htmldb_Get(null, $v('pFlowId'),
    'APPLICATION_PROCESS=update_CB', $v('pFlowStepId'));
  get.addParam('x01', pItem.value);
  get.addParam('x02', pItem.checked);
  get.GetAsync(function(){return;});
  get = null;
}

The function accepts, as a parameter, a reference to an object (this) that points to the checkbox we just clicked. We use this reference to set the temporary item x01 to the value of the checkbox and x02 to its status (checked/unchecked). As we are using the AJAX-related temporary items, we use the addParam() method to do so. These items will be available to us in the on-demand PL/SQL process update_CB, which implements the server-side logic of this AJAX call. We stated this process in the third parameter of the htmldb_Get constructor function: 'APPLICATION_PROCESS=update_CB'.

In this example, we are using the name get for the variable referencing the new instance of the htmldb_Get object. The use of this name is very common in many AJAX examples, especially on the OTN APEX forum and its related examples.

As we'll see when we review the server-side logic of this AJAX call, all it does is update (insert or delete) the content of the CUSTOMERS_VIP table. As such, it doesn't have an immediate effect on the client side, and we don't need to wait for its result. This is a classic case for an asynchronous AJAX call, which we make by using the GetAsync() method. In this specific case, as the client side doesn't need to process any server response, we can use an empty function as the GetAsync() parameter.
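On the server side, the on-demand process only has to insert or delete one row per call, driven by x01 (the checkbox value) and x02 (its state). The article implements this as a PL/SQL process; the following Python sketch (hypothetical function name, same one-column table) mirrors that logic for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers_vip (cust_id INTEGER PRIMARY KEY)")

def update_cb(conn, cust_id, checked):
    """Mirror the update_CB process: insert when checked, delete when cleared."""
    if checked:
        # INSERT OR IGNORE keeps a duplicate click harmless.
        conn.execute("INSERT OR IGNORE INTO customers_vip VALUES (?)", (cust_id,))
    else:
        conn.execute("DELETE FROM customers_vip WHERE cust_id = ?", (cust_id,))
    conn.commit()

update_cb(conn, 7, True)    # user checks the box for customer 7
update_cb(conn, 7, True)    # a repeated click changes nothing
update_cb(conn, 7, False)   # unchecking removes the row
print(conn.execute("SELECT COUNT(*) FROM customers_vip").fetchone()[0])   # 0
```

Because the table is the single source of truth, the next render of the report query (with its outer join) reflects the checkbox state for any user, in any session.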

Improving Plone 3 Product Performance

Packt
11 Jun 2010
7 min read
(For more resources on Plone, see here.)

Introduction

A CMS like Plone provides:

- A means of adding, editing, and managing content
- A database to store content
- A mechanism to serve content in HTML or other formats

Fortunately, it also supplies the tools to do all these things in an incredibly easy and powerful way. For example, content producers can create a new article without worrying how it will look or what other information will surround the main information. To do this, Plone must compose a single HTML output file (if we are talking from a web browser viewpoint) by joining and rendering several sources of data according to the place, importance, and target they are meant for.

As it is built upon the Zope application server, all these jobs are easy for Plone. However, they have a tremendous impact as far as work and performance go. If enough care is not taken, a whole website could be stuck due to a couple of user requests.

In this article, we'll look at various performance improvements and how to measure these enhancements. We are not going to make a comprehensive review of all the options to tweak or set up a Zope-based web application, like configuring a proxy cache or a load balancer. There are lots of places, maybe too many, where you can find information about these topics. We invite you to read these articles and tutorials and subscribe to or visit the Zope and Plone mailing lists:

- http://projects.zestsoftware.nl/guidelines/guidelines/caching/caching1_background.html
- http://plone.org/documentation/tutorial/buildout/a-deployment-configuration/
- http://plone.org/documentation/tutorial/optimizing-plone

Installing CacheFu with a policy product

When a user requests HTML pages from a website, many things can be expressed about the downloaded files by setting special headers in the HTTP response.
If managed cautiously, the server can save lots of time, and consequently work, by telling the browser how to store and reuse many of the resources it has got. CacheFu is the Plone add-on product that streamlines HTTP header handling in order to obtain the required performance.

We could add a couple of lines to the buildout.cfg file to download and install CacheFu. Then we could add some code to our end-user content type products (pox.video and Products.poxContentTypes) to configure CacheFu properly to deliver them in an efficient way. However, if we did so, we would be forcing these products to automatically install CacheFu, even if we were testing them in a development environment. To prevent this, we are going to create a policy product and add some code to install and configure CacheFu. A policy product is a regular package that takes care of general customizations to meet customer requirements. For information on how to create a policy product, see Creating a policy product.

Getting ready

To achieve this we'll use pox.policy, the policy product created in Creating a policy product.

How to do it...

1. Automatically fetch dependencies of the policy product. Open setup.py in the root pox.policy folder and modify the install_requires variable of the setup call:

setup(name='pox.policy',
      ...
      install_requires=['setuptools',
                        # -*- Extra requirements: -*-
                        'Products.CacheSetup',
                        ],

2. Install dependencies during policy product installation. In the profiles/default folder, modify the metadata.xml file:

<?xml version="1.0"?>
<metadata>
  <version>1</version>
  <dependencies>
    <dependency>profile-Products.CacheSetup:default</dependency>
  </dependencies>
</metadata>

You could also add here all the other products you plan to install as dependencies, instead of adding them individually in the buildout.cfg file.

3. Configure products during the policy product installation.
Our policy product already has a <genericsetup:importStep /> directive in its main component configuration file (configure.zcml). This import step tells GenericSetup to process a method in the setuphandlers module (we could have several steps, each of them with a matching method). Modify the setupVarious method to do what we want, that is, to apply some settings to CacheFu:

from zope.app.component.hooks import getSite
from Products.CMFCore.utils import getToolByName
from config import *

def setupVarious(context):
    if context.readDataFile('pox.policy_various.txt') is None:
        return
    portal = getSite()

    # perform custom operations
    # Get portal_cache_settings (from CacheFu) and
    # update the plone-content-types rule
    pcs = getToolByName(portal, 'portal_cache_settings')
    rules = pcs.getRules()
    rule = getattr(rules, 'plone-content-types')
    rule.setContentTypes(list(rule.getContentTypes()) + CACHED_CONTENT)

The above code has been shortened for clarity's sake. Check the accompanying code bundle for the full version.

4. Add or update a config.py file in your package with all configuration options:

# Content types that should be cached in the plone-content-types
# rule of CacheFu
CACHED_CONTENT = ['XNewsItem', 'Video',]

5. Build your instance again and launch it:

./bin/buildout
./bin/instance fg

After installing the pox.policy product (it's automatically installed during buildout, as explained in Creating a policy product) we should see our content types, Video and XNewsItem, listed within the cached content types. The next screenshot corresponds to the following URL: http://localhost:8080/plone/portal_cache_settings/with-caching-proxy/rules/plone-content-types. The with-caching-proxy part of the URL matches the Cache Policy field, and the plone-content-types part matches the Short Name field.

As we added Python code, we must test it.
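The key line in setupVarious appends our types to whatever the rule already caches. Since an import step can run more than once, it is worth guarding against duplicate entries; the sketch below (plain Python with a stand-in rule object, not the real CacheFu API) shows one way to make the update idempotent:

```python
CACHED_CONTENT = ['XNewsItem', 'Video']

class FakeRule:
    """Stand-in for CacheFu's plone-content-types rule, for illustration only."""
    def __init__(self, content_types):
        self._content_types = tuple(content_types)
    def getContentTypes(self):
        return self._content_types
    def setContentTypes(self, content_types):
        self._content_types = tuple(content_types)

def add_cached_types(rule, new_types):
    # Append only the types the rule does not already cache, preserving order.
    current = list(rule.getContentTypes())
    rule.setContentTypes(current + [t for t in new_types if t not in current])

rule = FakeRule(['Document', 'Video'])
add_cached_types(rule, CACHED_CONTENT)
add_cached_types(rule, CACHED_CONTENT)   # running the step twice adds nothing new
print(rule.getContentTypes())   # ('Document', 'Video', 'XNewsItem')
```

Without the membership check, rerunning the import step would keep appending 'Video' and 'XNewsItem' to the rule's list each time.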
Create this doctest in the README.txt file in the pox.policy package folder:

Check that our content types are properly configured

>>> pcs = getToolByName(self.portal, 'portal_cache_settings')
>>> rules = pcs.getRules()
>>> rule = getattr(rules, 'plone-content-types')
>>> 'Video' in rule.getContentTypes()
True
>>> 'XNewsItem' in rule.getContentTypes()
True

Modify the tests module by replacing the ptc.setupPloneSite() line with these ones:

# We first tell Zope there's a CacheSetup product available
ztc.installProduct('CacheSetup')

# And then we install the pox.policy product in Plone.
# This should take care of installing CacheSetup in Plone also
ptc.setupPloneSite(products=['pox.policy'])

And then uncomment the ZopeDocFileSuite:

# Integration tests that use PloneTestCase
ztc.ZopeDocFileSuite(
    'README.txt', package='pox.policy',
    test_class=TestCase),

Run this test suite with the following command:

./bin/instance test -s pox.policy

How it works...

In the preceding steps, we created a specific procedure to install and configure other products (CacheFu in our case). This will help us in the final production environment startup, as well as in the installation of any other development environments we may need (when a new member joins the development team, for instance).

In Step 1 of the How to do it... section, we modified setup.py to download and install a dependency package during the installation process, which is done on instance buildout. Getting dependencies in this way is possible when products are delivered in egg format, thanks to Python egg repositories and distribution services. If you need to get an old-style product, you'll have to add it to the [productdistros] part in buildout.cfg.

Products.CacheSetup is the package name for CacheFu and contains these dependencies: CMFSquidTool, PageCacheManager, and PolicyHTTPCacheManager.

There's more...

For more information about CacheFu, visit the project home page at http://plone.org/products/cachefu.
You can also check for its latest version and release notes at the Python Package Index (PyPI, a.k.a. The Cheese Shop): http://pypi.python.org/pypi/Products.CacheSetup. The first link that we recommended in the Introduction is a great help in understanding how CacheFu works: http://projects.zestsoftware.nl/guidelines/guidelines/caching/caching1_background.html.

See also

- Creating a policy product
- Installing and configuring an egg repository
Find and Install Add-Ons that Expand Plone Functionality

Packt
10 Jun 2010
11 min read
(For more resources on Plone, see here.)

Background

It seems like every application platform uses a different name for its add-ons: modules, components, libraries, packages, extensions, plug-ins, and more. Add-on packages for the Zope web application server are generally called Products. A Zope product is a bundle of Zope or Plone functionality contained in one or more Python modules. Like Plone, add-on products are distributed as source code, so you may always read and examine them. Plone itself is actually a set of tightly connected Zope products and Python modules.

Plone add-on products may be divided into three major categories:

Skins or themes that change Plone's look and feel or add visual elements like portlets. These are typically the simplest of Plone products.
Products that add new content types with specialized functionality. Some are simple extensions of built-in types; others have custom workflows and behaviours.
Products that add to or change the behaviour of Plone itself.

Where to Find Products

Plone.org's Products section at http://plone.org/products is the place to look for Plone products. At the time of this writing, Plone.org contains listings for 765 products and 1,901 product releases. The Plone Products section is itself built with a Plone product, the Plone Software Center (often called the PSC), which adds content types for projects, software releases, project roadmaps, issue trackers, and project documentation.

Using the Plone Product Pages

Visiting the Plone product pages for the first time may be a bewildering experience due to the number of available products. However, by specifying a product category and target Plone version, you will quickly narrow the product selection to the point where it's worth reading descriptions and following the links to product pages.
Product pages typically contain product descriptions, software releases, and a list of available documentation, issue-tracker, version-control repository, and contact resources. Each release has release notes, a change log, and a list of Plone versions with which the release has been tested. If the release has a product package available, it will be available here for download.

Some releases do not have associated software packages. This may be because the release is still in a planning stage and the listing is mainly meant to document the product's development roadmap, or because development is still at an early stage and the software is only available from a version-control repository.

The release notes commonly include a list of dependencies, and you should make special note of that along with the compatible Plone versions. Many products require the installation of other, supporting products. Some require that your server or test workstation have particular system libraries or utilities.

Product pages may also have links to a variety of additional resources: product-specific documentation, other release pages, an issue tracker, a roadmap for future development, a contact form for the project, and a version-control repository.

Playing it Safe with Add-On Products

Plone 3 is probably one of the most rigorously tested open-source software packages in existence. While no software is defect-free, Plone's core development team is on the leading edge of software development methodologies and works under a strong testing culture that requires them to prove their components work correctly before they ever become part of Plone.

Plone's library of add-on products is a very different story. Add-on products are contributed by a diverse community of developers. Some add-on products follow the same development and maintenance methodologies as Plone itself; others are haphazard experiments.
To complicate matters, today's haphazard experiment may be (if it succeeds) next year's rigorously developed and reliable product. (Much of the Plone core codebase began as add-on products.) And this year's reliable standby may lose the devotion of its developers and not be upgraded to work with the next version of Plone.

If you're new to the world of open-source software, this may seem dismaying. Don't be discouraged. It is not hard to evaluate the status of a product, and the Plone community is happy to help. Be encouraged by the evidence of continual, exciting innovation. Most importantly, stop thinking of yourself as a consumer. Take an interest in the community process that produces good products. Test some early releases and file bug reports and feature requests. Participate in, or help document, test, and fund the development of the products that are most important to you.

Product Choice Strategy

Trying out new Plone add-on products is great fun, but incorporating them into production websites requires planning and judgement if you're going to have good long-run results.

New versions of Plone pose a particular challenge. Major new releases of Plone don't just add features: with every major version of Plone, the application programming interface (API) and presentation templates change. This is not done arbitrarily, and there is usually a good deal of warning before a major change, but it means that add-on products often need to be updated before they will work with a major new version of Plone. It is worth pointing out that major versions are released roughly every 18 months, and that minor version upgrades generally do not pose compatibility problems for the vast majority of add-on products.

This means that when a new version of Plone appears on the scene, you won't be able to migrate your Plone site to use it until compatible product versions are available for all the add-on products in use on the site.
If you're using mainstream, well-supported products, this may happen very quickly. Many products are upgraded to work with new Plone versions during the beta and release-candidate stages of Plone development. Some products take longer, and some may not make the jump at all. The products least likely to be updated are often ones made obsolete by new functionality.

This creates a somewhat ironic situation when a new version of Plone arrives: the quickest adopters are often those with the least history with the platform. The slowest adopters are sometimes the sites that are most heavily invested in the new features. Consider, as a prime example, Plone.org: a very active, very large community site that must be conservatively managed and stick with proven versions of add-on products. Plone.org often does not migrate to a new Plone version until many months after release.

Is this a problem? Not really, unless you need both the newest features of the newest Plone version and the functionality of a more slowly developed add-on product. If that's the case, prepare to make an investment of time or money in supporting product development, and possibly in writing some custom migration scripts.

If you want to be more conservative, try the following strategy:

Enjoy testing many products and keeping up with new developments by trying them out on a test server.
Learn the built-in Plone functionality well, and use it in preference to add-on products whenever possible.
Make sure you have a good understanding of the maturity level and degree of developer support for add-on products.
Incorporate the smallest number of add-on products reasonably possible into your production sites.
Don't be just a consumer: when you commit to a product, help support it by filing bug reports and feature requests, contributing translations, documentation, or code, and answering questions about it on the Plone mailing lists or the #plone IRC channel.
Evaluating a Product

Judging the maturity of a Plone product is generally easy. Start with the product's project page on Plone.org. The product page may offer you a "Current release" and one or more "Experimental releases". Anything marked as a current release should be stable on its tested Plone versions. If you need a release that works with an earlier version of Plone than the ones supported by the current release, follow the "List all releases..." link.

Releases in the "Experimental" list will be marked as "alpha", "beta", or "Release Candidate". These terms are well defined in practice:

Alpha releases are truly experimental, and are usually posted in order to get early feedback. Interfaces and implementations are likely still in flux. Download an alpha release only for testing in an experimental environment, and only to preview new features and give feedback to developers. Do not plan on keeping any content you develop using an alpha release, as there may be no upgrade path to later releases.
With a beta release, feature sets and programming interfaces should be stable or changing only incrementally. It's reasonable to start testing the integration of the product with the platform and with other products. There will typically be an upgrade path to future releases. Bug reports will be welcome and will help develop the product.
Release candidates have a fixed feature set and no known major issues. Templates and messages should be complete, so that translators may work on language files with some confidence that their work won't be lost. If you encounter a bug in a release-candidate product, please file an issue report immediately.

Products may be re-released repeatedly in any release state. For alpha, beta, and RC releases, each additional release changes the release count, but not the version number. So "PloneFormGen 1.2 (Beta release 6)" is the sixth beta release of version 1.2 of PloneFormGen.
Once a product reaches current release status, new maintenance releases increment the version number by 0.0.1. "PloneFormGen 1.1.3" is thus the third maintenance release of version 1.1 of that product.

Don't make too much of version numbers or release counts. Release status is a better indicator of maturity. If your site is mission-critical, don't use beta releases on it. However, if you test carefully before deploying, you may find that some products late in their beta development are ready for live use on sites where an error or glitch wouldn't be intolerable.

Testing a Product

Conscientious Plone site administrators maintain an offline mirror of their production sites on a secondary server (or even a desktop computer) that they may use for testing purposes. Always test a new product on a test server. Before deploying, test it on a server that has precisely the combination of products in use on your production server. Ideally, test with a copy of the database of your live server. Check the functionality of not only the new product, but also the products you're already using. The latter is particularly important if you're using products that alter the base functionality of Plone or Zope.

Looking to the Future

Evaluating product maturity and testing the product will help you judge its current status, but what about the future? What are the signs of a product that's likely to be well maintained and available for future versions of Plone? There are no guarantees, but here are some signs that experienced Plone integrators look for:

Developing in public. This is open-source software. Look to see if the product is being developed with a public roadmap for the future, and with a public version-control repository. Plone.org provides product authors with great tools for indicating release plans, and makes a Subversion (SVN) version-control repository available to all product authors. Look to see if they're using these facilities.
Issue tracker status. Every released product should have a public issue (bug) tracker. Look for it. Look to see if it's being maintained, and if issues are actively responded to. No issue tracker, or lots of old, uncategorized issues, are bad signs.
Support for multiple Plone versions. If a product has been around a while, look to see if versions are available for at least a couple of Plone releases. This might be the previous and current releases, or the current and next releases.
Internationalization. Excellent products attract translations.
Good development methodologies. This is the hardest criterion for a non-developer to judge, but a forthcoming version of the Plone Software Center will ask developers to rate themselves on compliance with a set of community standards. My guess is that product developers will be pretty honest about these ratings.

Several of these criteria have something in common: they allow the Plone community to participate in product maintenance and development. The best projects belong to the community, and not to any single author.

One of the best ways to get a quick read on the quality of an add-on product is to hop on the #plone IRC channel and ask. Chances are you'll run into someone who can share their experiences and offer insight. You may even run into the product author him/herself!
Microsoft Dynamics NAV 2009: Using the journals and entries in a custom application

Packt
10 Jun 2010
9 min read
(For more resources on Microsoft Dynamics NAV, see here.)

Designing a journal

Now it is time to start on the product part of the Squash Application. In this part we will no longer reverse engineer in detail. Instead, we will learn how to search the standard functionality and reuse parts of it in our own software. For this part we will look at resources in Microsoft Dynamics NAV. Resources are similar in use to products (Items) but far less complex, which makes them easier to explore and learn from.

Squash Court master data

Our company has 12 courts that we want to register in Microsoft Dynamics NAV. This master data is comparable to resources, so we'll go ahead and copy this functionality. Resources are not attached to umbrella data like the vendor/squash player tables. We need the number series again, so we'll add a new number series to our squash setup table. The Squash Court table should look like this after creation:

Chapter objects

This chapter requires some objects. A description of how to import these objects can be found in the Appendix. After the import process is completed, make sure that your current database is the default database for the RoleTailored client and run Page 123456701, Squash Setup. From this page, select the Action Initialise Squash Application. This will execute the C/AL code in the InitSquashApp function of this page, which will prepare demo data for us to play with. The objects were prepared and tested in a Microsoft Dynamics NAV 2009 SP1 W1 database.

Reservations

When running a squash court business, we want to be able to keep track of reservations. Looking at standard Dynamics NAV functionality, it might be a good idea to create a Squash Player Journal. The journal can create entries for reservations that can then be invoiced. A journal needs a set of supporting objects. The journal is prepared in the objects delivered with this article. Creating a new journal from scratch is a lot of work and can easily lead to mistakes.
It is easier and safer to copy an existing journal structure from the standard application that is similar to the journal we need for our design. In our example we have copied the Resource Journals. You can export these objects to text format, and then rename and renumber them for easy reuse. The squash journal objects are renumbered and renamed from the resource journal objects.

All journals have the same structure. The template, batch, and register tables are almost always the same, whereas the journal line and ledger entry tables contain function-specific fields. Let's have a look at them one by one.

Journal Template

The Journal Template has several fields, as shown in the following screenshot:

Let's discuss these fields in more detail:

Name: This is the unique name. It is possible to define as many templates as required, but usually one template per Form ID and one for recurring entries will do. If you want journals with different source codes, you need more templates.
Description: A readable and understandable description of its purpose.
Test Report ID: All templates have a test report that allows the user to check for posting errors.
Form ID: For some journals, more UI objects are required. For example, the General Journals have a special form for bank and cash.
Posting Report ID: This report is printed when a user selects Post and Print.
Force Posting Report: Use this option when a posting report is mandatory.
Source Code: Here you can enter an audit trail code for all the postings done via this journal.
Reason Code: This functionality is similar to source codes.
Recurring: Whenever you post lines from a recurring journal, new lines are automatically created with a posting date defined in the recurring date formula.
No. Series: When you use this feature, the Document No. in the journal line is automatically populated with a new number from this number series.
Posting No. Series: Use this feature for recurring journals.
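The recurring date formula mentioned above drives how new lines are dated after posting. The idea can be illustrated outside C/AL; this Python sketch is a deliberate simplification (the function name is mine, and real NAV date formulas also support months, quarters, years, and compound expressions):

```python
# Simplified sketch of a NAV-style recurring date formula.
# Only 'D' (days) and 'W' (weeks) are handled; this illustrates the
# idea, not the actual C/AL DateFormula implementation.
from datetime import date, timedelta

def next_posting_date(posting_date: date, formula: str) -> date:
    count, unit = int(formula[:-1]), formula[-1].upper()
    if unit == 'D':
        return posting_date + timedelta(days=count)
    if unit == 'W':
        return posting_date + timedelta(weeks=count)
    raise ValueError('unsupported date formula unit: ' + unit)

# A weekly recurring journal posted on 10 June 2010 gets new lines
# dated one week later:
new_date = next_posting_date(date(2010, 6, 10), '1W')
```

In NAV the formula lives in the Recurring Frequency field of each journal line, and the new posting date is applied when the recurring lines are posted.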
Journal Batch

The Journal Batch has various fields, as shown in the following screenshot:

Let's discuss these fields in more detail:

Journal Template Name: The name of the journal template this batch refers to.
Name: Each batch should have a unique code.
Description: A readable description explaining this batch.
Reason Code: When populated, this reason code overrules the Reason Code from the journal template.
No. Series: When populated, this number series overrules the No. Series from the journal template.
Posting No. Series: When populated, this posting number series overrules the Posting No. Series from the journal template.

Register

The Register table has various fields, as shown in the following screenshot:

Let's discuss these fields in more detail:

No.: This field is automatically and incrementally populated for each transaction with this journal. There are no gaps between the numbers.
From Entry No.: A reference to the first ledger entry created with this transaction.
To Entry No.: A reference to the last ledger entry created with this transaction.
Creation Date: Always populated with the real date when the transaction was posted.
User ID: The ID of the end user who posted the transaction.

The Journal

The journal line has a number of mandatory fields that are required for all journals, and some fields that are required for its designed functionality. In our case the journal should create a reservation which can then be invoiced. This requires some information to be populated in the lines.

Reservation

The reservation process is a logistical process that requires us to know the number of the squash court, and the date and time of the reservation. We also need to know how long the players want to play. To check the reservation, it might also be useful to store the number of the squash player.

Invoicing

For the invoicing part, we need to know the price we need to invoice. It might also be useful to store the cost, to see our profit.
For the system to figure out the proper G/L account for the turnover, we also need to define a General Product Posting Group.

Journal Template Name: This is a reference to the current journal template.
Line No.: Each journal has a virtually unlimited number of lines; this number is automatically incremented by 10000, allowing lines to be created in between.
Entry Type: Reservation or Invoice.
Document No.: This number can be given to the squash player as a reservation number. When the entry type is Invoice, it is the invoice number.
Posting Date: The posting date is usually the reservation date, but when the entry type is Invoice it might be the date of the invoice, which might differ from the posting date in the general ledger.
Squash Player No.: A reference to the squash player who has made the reservation.
Squash Court No.: A reference to the squash court.
Description: This is automatically updated with the number of the squash court and the reservation date and times, but can be changed by the user.
Reservation Date: The actual date of the reservation.
From Time: The starting time of the reservation. We allow only whole or half hours.
To Time: The ending time of the reservation. We allow only whole and half hours. This is automatically populated when people enter a quantity.
Quantity: The number of hours of playing time. We only allow units of 0.5 to be entered here. This is automatically calculated when the times are populated.
Unit Cost: The cost to run a squash court for one hour.
Total Cost: The cost for this reservation.
Unit Price: The invoice price for this reservation per hour. This depends on whether or not the squash player is a member.
Total Price: The total invoice price for this reservation.
Shortcut Dimension Code 1 & 2: A reference to the dimensions used for this transaction.
Applies-to Entry No.: When a reservation is invoiced, this is the reference to the squash entry number of the reservation.
Source Code: Inherited from the journal batch or template, and used when posting the transaction.
Chargeable: When this option is not used, there will be no invoice for the reservation.
Journal Batch Name: A reference to the journal batch that is used for this transaction.
Reason Code: Inherited from the journal batch or template, and used when posting the transaction.
Recurring Method: When the journal is a recurring journal, you can use this field to determine whether the amount field is blanked after posting the lines.
Recurring Frequency: This field determines the new posting date after the recurring lines are posted.
Gen. Bus. Posting Group: The combination of the general business and general product posting groups determines the G/L account for turnover when we invoice the reservation. The Gen. Bus. Posting Group is inherited from the bill-to customer.
Gen. Prod. Posting Group: This will be inherited from the squash player.
External Document No.: When a squash player wants us to note a reference number, we can store it here.
Posting No. Series: When the journal template has a posting number series, it is populated here to be used when posting.
Bill-to Customer No.: This determines who is paying for the reservation. We will inherit this from the squash player.

So now we have a place to enter reservations, but we have some work to do before we can start doing so. Some fields were determined to be inherited or calculated:

The time fields need calculation logic to prevent people from entering wrong values.
The Unit Price should be calculated.
The Unit Cost, posting groups, and Bill-to Customer No. need to be inherited.

As a final cherry on top, we will look at implementing dimensions.
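The calculated time fields above follow simple rules: start times fall on whole or half hours, Quantity moves in steps of 0.5 hours, and To Time is From Time plus Quantity. Here is a hypothetical sketch of that validation logic in plain Python (the article's actual objects implement it in C/AL; the function names are mine):

```python
# Illustrative only: the whole/half-hour and 0.5-unit rules for the
# squash journal line, plus the To Time calculation.
from datetime import datetime, timedelta

def check_half_hour(moment: datetime) -> None:
    # Reservations may only start on whole or half hours.
    if moment.minute not in (0, 30) or moment.second or moment.microsecond:
        raise ValueError('time must be on a whole or half hour')

def calc_to_time(from_time: datetime, quantity: float) -> datetime:
    # Quantity is hours of playing time, in units of 0.5.
    if quantity <= 0 or (quantity * 2) % 1 != 0:
        raise ValueError('quantity must be a positive multiple of 0.5')
    check_half_hour(from_time)
    return from_time + timedelta(hours=quantity)

# A 1.5-hour reservation starting at 18:30 ends at 20:00:
to_time = calc_to_time(datetime(2010, 6, 10, 18, 30), 1.5)
```

The same derivation runs in both directions in the journal line: entering a quantity fills in To Time, and entering both times recalculates Quantity.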
Implementing a WCF Service in the Real World

Packt
09 Jun 2010
18 min read
WCF is the acronym for Windows Communication Foundation. It is Microsoft's latest technology that enables applications in a distributed environment to communicate with each other. In this article by Mike Liu, author of WCF 4.0 Multi-tier Services Development with LINQ to Entities, we will create and test a WCF service by following these steps:

Create the project using a WCF Service Library template
Create the project using a WCF Service Application template
Create the Service Operation Contracts
Create the Data Contracts
Add a Product Entity project
Add a business logic layer project
Call the business logic layer from the service interface layer
Test the service

In this article, we will learn how to separate the service interface layer from the business logic layer.

(Read more interesting articles on WCF 4.0 here.)

Why layer a service?

An important aspect of SOA design is that service boundaries should be explicit, which means hiding all the details of the implementation behind the service boundary. This includes not revealing or dictating what particular technology was used. Furthermore, inside the implementation of a service, the code responsible for data manipulation should be separated from the code responsible for business logic. So in the real world, it is always good practice to implement a WCF service in three or more layers: the service interface layer, the business logic layer, and the data access layer.

Service interface layer: This layer will include the service contracts and operation contracts that define the service interfaces exposed at the service boundary. Data contracts are also defined here to pass data in and out of the service. If any exception is expected to be thrown outside of the service, fault contracts will also be defined at this layer.
Business logic layer: This layer will apply the actual business logic to the service operations.
It will check the preconditions of each operation, perform business activities, and return any necessary results to the caller of the service.
Data access layer: This layer will take care of all of the tasks needed to access the underlying databases. It will use a specific data adapter to query and update the databases. This layer will handle connections to the databases, transaction processing, and concurrency control. Neither the service interface layer nor the business logic layer needs to worry about these things.

Layering provides separation of concerns and better factoring of code, which gives you better maintainability and the ability to split layers out into separate physical tiers for scalability. The data access code should be separated into its own layer, which focuses on performing translation services between the databases and the application domain. Services should be placed in a separate service layer that focuses on performing translation services between the service-oriented external world and the application domain.

The service interface layer will be compiled into a separate class assembly and hosted in a service host environment. The outside world will only know about and have access to this layer. Whenever a request is received by the service interface layer, it will be dispatched to the business logic layer, and the business logic layer will get the actual work done. If any database support is needed by the business logic layer, it will always go through the data access layer.

Creating a new solution and project using WCF templates

We need to create a new solution for this example and add a new WCF project to it. This time we will use the built-in Visual Studio WCF templates for the new project.

Using the C# WCF service library template

There are a few built-in WCF service templates within Visual Studio 2010; two of them are the WCF Service Library template and the WCF Service Application template.
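The three-layer dispatch path described in the previous section is independent of WCF itself. The following plain-Python sketch (all class and method names are hypothetical; the book's actual layers are C# assemblies, and the data access is faked with an in-memory dictionary) shows the flow: the service interface delegates to the business logic layer, which alone touches the data access layer:

```python
# Minimal sketch of the three-layer pattern: each layer only talks
# to the layer directly below it.

class ProductDataAccess:
    """Data access layer: owns all database interaction (faked here)."""
    _rows = {30: {'ProductID': 30, 'ProductName': 'Chai', 'UnitPrice': 18.0}}

    def read(self, product_id):
        return self._rows.get(product_id)

class ProductLogic:
    """Business logic layer: preconditions and business rules."""
    def __init__(self, data_access):
        self._data_access = data_access

    def get_product(self, product_id):
        if product_id <= 0:
            raise ValueError('product id must be positive')
        return self._data_access.read(product_id)

class ProductService:
    """Service interface layer: the only part exposed at the boundary."""
    def __init__(self, logic):
        self._logic = logic

    def get_product(self, product_id):
        # Dispatch the request; no business or data logic lives here.
        return self._logic.get_product(product_id)

service = ProductService(ProductLogic(ProductDataAccess()))
product = service.get_product(30)
```

Because only ProductService is visible to callers, the data adapter or the business rules can be swapped out without touching the service boundary, which is exactly the maintainability argument made above.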
In this article, we will use the service library template. Follow these steps to create the RealNorthwind solution and the project using the service library template:

Start Visual Studio 2010, select the menu option File | New | Project…, and you will see the New Project dialog box. From this point onwards, we will create a completely new solution and save it in a different location.
In the New Project window, specify Visual C# | WCF | WCF Service Library as the project template, RealNorthwindService as the (project) name, and RealNorthwind as the solution name. Make sure that the checkbox Create directory for solution is selected.
Click on the OK button, and the solution is created with a WCF project inside it. The project already has an IService1.cs file to define a service interface and a Service1.cs file to implement the service. It also has an app.config file, which we will cover shortly.

Using the C# WCF service application template

Instead of using the WCF Service Library template to create our new WCF project, we can use the WCF Service Application template. Because we have already created the solution, we will add a new project using the WCF Service Application template.

Right-click on the solution item in Solution Explorer, select the menu option Add | New Project… from the context menu, and you will see the Add New Project dialog box.
In the Add New Project window, specify Visual C# | WCF Service Application as the project template, RealNorthwindService2 as the (project) name, and leave the default location of C:\SOAwithWCFandLINQ\Projects\RealNorthwind unchanged.
Click on the OK button and the new project will be added to the solution. The project already has an IService1.cs file to define a service interface, and Service1.svc.cs to implement the service. It also has a Service1.svc file and a web.config file, which are used to host the new WCF service.
It also has the necessary references added to the project, such as System.ServiceModel. You can follow these steps to test this service:

Change this new project, RealNorthwindService2, to be the startup project (right-click on it in Solution Explorer and select Set as Startup Project). Then run it (Ctrl + F5 or F5). You will see that it now runs.
You will see that the ASP.NET Development Server has started, and a browser is open, listing all of the files under the RealNorthwindService2 project folder. Clicking on the Service1.svc file will open the metadata page of the WCF service in this project.

If you pressed F5 in the previous step to run this project, you might see a warning message box asking if you want to enable debugging for the WCF service. As we said earlier, you can choose to enable debugging, or just run in non-debugging mode.

You may also have noticed that the WCF Service Host is started together with the ASP.NET Development Server. This is actually another way of hosting a WCF service in Visual Studio 2010. It has been started at this point because, within the same solution, there is a WCF service project (RealNorthwindService) created using the WCF Service Library template.

So far we have used two different Visual Studio WCF templates to create two projects. The first project, using the C# WCF Service Library template, is the more sophisticated one, because this project is actually an application containing a WCF service, a hosting application (WcfSvcHost), and a WCF Test Client. This means that we don't need to write any other code to host it, and as soon as we have implemented a service, we can use the built-in WCF Test Client to invoke it. This makes WCF development very convenient. The second project, using the C# WCF Service Application template, is actually a website. This is the hosting application of the WCF service, so you don't have to create a separate hosting application for the WCF service.
As we have already covered both styles and you now have a solid understanding of them, we will not discuss them further. But keep in mind that you have this option, although in most cases it is better to keep the WCF service as clean as possible, without any hosting functionality attached to it.

To focus on the WCF service using the WCF Service Library template, we now need to remove the project RealNorthwindService2 from the solution. In Solution Explorer, right-click on the RealNorthwindService2 project item and select Remove from the context menu. You will then see a warning message box. Click on the OK button in this message box, and the RealNorthwindService2 project will be removed from the solution. Note that all the files of this project are still on your hard drive. You will need to delete them using Windows Explorer.

Creating the service interface layer

In this article, we will create the service interface layer contracts. Because two sample files have already been created for us, we will try to reuse them as much as possible. Then we will start customizing these two files to create the service contracts.

Creating the service interfaces

To create the service interfaces, we need to open the IService1.cs file and do the following:

Change its namespace from RealNorthwindService to MyWCFServices.RealNorthwindService.
Change the interface name from IService1 to IProductService. Don't be worried if you see the warning message before the interface definition line, as we will change the config file in one of the following steps.
Change the first operation contract definition from this line:
    string GetData(int value);
to this line:
    Product GetProduct(int id);
Change the second operation contract definition from this line:
    CompositeType GetDataUsingDataContract(CompositeType composite);
to this line:
    bool UpdateProduct(Product product);
Change the filename from IService1.cs to IProductService.cs.

With these changes, we have defined two service contracts.
The first one can be used to get the product details for a specific product ID, while the second one can be used to update a specific product. The Product type, which we used to define these service contracts, is still not defined. The content of the service interface for RealNorthwindService.ProductService should now look like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

namespace MyWCFServices.RealNorthwindService
{
    [ServiceContract]
    public interface IProductService
    {
        [OperationContract]
        Product GetProduct(int id);

        [OperationContract]
        bool UpdateProduct(Product product);

        // TODO: Add your service operations here
    }
}

This is not the whole content of the IProductService.cs file. The bottom part of this file should still contain the CompositeType class.
For example, a supplier may call the web service to update the price of a particular product or to mark a product for discontinuation. It is preferable to put data contracts in separate files within a separate assembly but, to simplify our example, we will put the data contract in the same file as the service contract. We will modify the file, IProductService.cs, as follows:

Change the DataContract name from CompositeType to Product.

Change the fields from the following lines:

bool boolValue = true;
string stringValue = "Hello ";

to these five lines:

int productID;
string productName;
string quantityPerUnit;
decimal unitPrice;
bool discontinued;

Delete the old boolValue and stringValue DataMember properties. Then, for each of the above fields, add a DataMember property. For example, for productID, we will have this DataMember property:

[DataMember]
public int ProductID
{
    get { return productID; }
    set { productID = value; }
}

A better way is to take advantage of the automatic property feature of C#, and add the following ProductID DataMember without defining the productID field:

[DataMember]
public int ProductID { get; set; }

To save some space, we will use the latter format. So, we need to delete all of those field definitions and add an automatic property for each field, with the first letter capitalized. The data contract part of the finished service contract file, IProductService.cs, should now look like this:

[DataContract]
public class Product
{
    [DataMember]
    public int ProductID { get; set; }
    [DataMember]
    public string ProductName { get; set; }
    [DataMember]
    public string QuantityPerUnit { get; set; }
    [DataMember]
    public decimal UnitPrice { get; set; }
    [DataMember]
    public bool Discontinued { get; set; }
}

Implementing the service contracts

To implement the two service operations that we defined, open the Service1.cs file and do the following:

Change its namespace from RealNorthwindService to MyWCFServices.RealNorthwindService.
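To see what this data contract actually produces on the wire, you can serialize a Product instance with DataContractSerializer outside of any service. This is a minimal sketch with made-up sample values; note that DataContractSerializer emits the data members in alphabetical order, not declaration order:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Text;

[DataContract]
public class Product
{
    [DataMember] public int ProductID { get; set; }
    [DataMember] public string ProductName { get; set; }
    [DataMember] public string QuantityPerUnit { get; set; }
    [DataMember] public decimal UnitPrice { get; set; }
    [DataMember] public bool Discontinued { get; set; }
}

public static class DataContractDemo
{
    // Serialize a Product the same way WCF serializes the message body.
    public static string Serialize(Product product)
    {
        var serializer = new DataContractSerializer(typeof(Product));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, product);
            return Encoding.UTF8.GetString(stream.ToArray());
        }
    }

    public static void Main()
    {
        // Hypothetical sample values
        var product = new Product { ProductID = 1, ProductName = "Chai", UnitPrice = 10m };
        Console.WriteLine(Serialize(product));
    }
}
```

Running this prints an XML fragment containing elements such as <ProductID>1</ProductID>, which is exactly what travels inside the SOAP body when a client calls the service.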
Change the class name from Service1 to ProductService. Make it inherit from the IProductService interface, instead of IService1. The class definition line should now be:

public class ProductService : IProductService

Delete the GetData and GetDataUsingDataContract methods.

Add the following method, to get a product:

public Product GetProduct(int id)
{
    // TODO: call business logic layer to retrieve product
    Product product = new Product();
    product.ProductID = id;
    product.ProductName = "fake product name from service layer";
    product.UnitPrice = (decimal)10.0;
    return product;
}

In this method, we created a fake product and returned it to the client. Later, we will remove the hard-coded product from this method and call the business logic to get the real product.

Add the following method to update a product:

public bool UpdateProduct(Product product)
{
    // TODO: call business logic layer to update product
    if (product.UnitPrice <= 0)
        return false;
    else
        return true;
}

In this method also, we don't update anything. Instead, we always return true if a valid price is passed in.

Change the filename from Service1.cs to ProductService.cs.
The content of the ProductService.cs file should now be like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

namespace MyWCFServices.RealNorthwindService
{
    public class ProductService : IProductService
    {
        public Product GetProduct(int id)
        {
            // TODO: call business logic layer to retrieve product
            Product product = new Product();
            product.ProductID = id;
            product.ProductName = "fake product name from service layer";
            product.UnitPrice = (decimal)10;
            return product;
        }

        public bool UpdateProduct(Product product)
        {
            // TODO: call business logic layer to update product
            if (product.UnitPrice <= 0)
                return false;
            else
                return true;
        }
    }
}

Modifying the app.config file

Because we have changed the service name, we have to make the appropriate changes to the configuration file. Note that when you renamed the service, if you used the refactor feature of Visual Studio, some of the following tasks may have already been done by Visual Studio. Follow these steps to change the configuration file:

Open the app.config file from Solution Explorer.
Change all instances of the string RealNorthwindService, except the one in baseAddress, to MyWCFServices.RealNorthwindService. This is for the namespace change.
Change the RealNorthwindService string in baseAddress to MyWCFServices/RealNorthwindService.
Change all instances of the string Service1 to ProductService. This is for the actual service name change.
Change the service address port from 8732 to 8080. This is to prepare for the client application, which we will create soon.

You can also change Design_Time_Addresses to whatever address you want, or delete the baseAddress part from the service. This can be used to test your service locally. We will leave it unchanged for our example.
The content of the app.config file should now look like this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.web>
    <compilation debug="true" />
  </system.web>
  <!-- When deploying the service library project, the content of the
       config file must be added to the host's app.config file.
       System.Configuration does not support config files for libraries. -->
  <system.serviceModel>
    <services>
      <service name="MyWCFServices.RealNorthwindService.ProductService">
        <endpoint address="" binding="wsHttpBinding"
                  contract="MyWCFServices.RealNorthwindService.IProductService">
          <identity>
            <dns value="localhost" />
          </identity>
        </endpoint>
        <endpoint address="mex" binding="mexHttpBinding"
                  contract="IMetadataExchange" />
        <host>
          <baseAddresses>
            <add baseAddress="http://localhost:8080/Design_Time_Addresses/MyWCFServices/RealNorthwindService/ProductService/" />
          </baseAddresses>
        </host>
      </service>
    </services>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <!-- To avoid disclosing metadata information, set the value
               below to false and remove the metadata endpoint above
               before deployment -->
          <serviceMetadata httpGetEnabled="True"/>
          <!-- To receive exception details in faults for debugging
               purposes, set the value below to true. Set to false before
               deployment to avoid disclosing exception information -->
          <serviceDebug includeExceptionDetailInFaults="False" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>

Testing the service using WCF Test Client

Because we are using the WCF Service Library template in this example, we are now ready to test this web service. As we pointed out when creating this project, this service will be hosted in the Visual Studio 2010 WCF Service Host environment. To start the service, press F5 or Ctrl + F5. WcfSvcHost will be started and WCF Test Client is also started. This is a Visual Studio 2010 built-in test client for WCF Service Library projects.
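As an aside, everything this app.config file sets up can also be expressed in code. The following self-hosting sketch is not part of this chapter's projects; it is an assumption-laden illustration (it presumes the ProductService and IProductService types from this chapter are referenced) of what the configuration file does behind the scenes:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Description;
using MyWCFServices.RealNorthwindService;

class SelfHost
{
    static void Main()
    {
        var baseAddress = new Uri(
            "http://localhost:8080/Design_Time_Addresses/" +
            "MyWCFServices/RealNorthwindService/ProductService/");

        using (var host = new ServiceHost(typeof(ProductService), baseAddress))
        {
            // Mirrors the <endpoint binding="wsHttpBinding"> element
            host.AddServiceEndpoint(typeof(IProductService), new WSHttpBinding(), "");

            // Mirrors <serviceMetadata httpGetEnabled="True"/>
            host.Description.Behaviors.Add(
                new ServiceMetadataBehavior { HttpGetEnabled = true });

            host.Open();
            Console.WriteLine("ProductService is running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```

Seeing the programmatic equivalent can make the purpose of each configuration element clearer, but for this example we will continue to rely on the config file and the WCF Service Host.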
In order to run the WCF Test Client you have to log into your machine as a local administrator. You also have to start Visual Studio as an administrator because we have changed the service port from 8732 to 8080 (port 8732 is pre-registered but 8080 is not). Again, if you get an Access is denied error, make sure you run Visual Studio as an administrator (under Windows XP you need to log on as an administrator). Now from this WCF Test Client we can double-click on an operation to test it.First, let us test the GetProduct operation. Now the message Invoking Service… will be displayed in the status bar as the client is trying to connect to the server. It may take a while for this initial connection to be made as several things need to be done in the background. Once the connection has been established, a channel will be created and the client will call the service to perform the requested operation. Once the operation has been completed on the server side, the response package will be sent back to the client, and the WCF Test Client will display this response in the bottom panel. If you started the test client in debugging mode (by pressing F5), you can set a breakpoint at a line inside the GetProduct method in the RealNorthwindService.cs file, and when the Invoke button is clicked, the breakpoint will be hit so that you can debug the service as we explained earlier. However, here you don't need to attach to the WCF Service Host. Note that the response is always the same, no matter what product ID you use to retrieve the product. Specifically, the product name is hard-coded, as shown in the diagram. Moreover, from the client response panel, we can see that several properties of the Product object have been assigned default values. Also, because the product ID is an integer value from the WCF Test Client, you can only enter an integer for it. 
If a non-integer value is entered, when you click on the Invoke button, you will get an error message box to warn you that you have entered a value with the wrong type. Now let's test the operation, UpdateProduct. The Request/Response packages are displayed in grids by default but you have the option of displaying them in XML format. Just select the XML tab at the bottom of the right-side panel, and you will see the XML-formatted Request/Response packages. From these XML strings, you can see that they are SOAP messages. Besides testing operations, you can also look at the configuration settings of the web service. Just double-click on Config File from the left-side panel and the configuration file will be displayed in the right-side panel. This will show you the bindings for the service, the addresses of the service, and the contract for the service. What you see here for the configuration file is not an exact image of the actual configuration file. It hides some information such as debugging mode and service behavior, and includes some additional information on reliable sessions and compression mode. If you are satisfied with the test results, just close the WCF Test Client, and you will go back to Visual Studio IDE. Note that as soon as you close the client, the WCF Service Host is stopped. This is different from hosting a service inside ASP.NET Development Server, where ASP.NET Development Server still stays active even after you close the client.
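Besides the WCF Test Client, you can also invoke the service from your own code. The following console-client sketch uses ChannelFactory<T>; it assumes the service is running at the base address configured earlier and that the assembly containing IProductService is referenced (the names follow this chapter's projects):

```csharp
using System;
using System.ServiceModel;
using MyWCFServices.RealNorthwindService;

class TestClient
{
    static void Main()
    {
        var factory = new ChannelFactory<IProductService>(
            new WSHttpBinding(),
            new EndpointAddress(
                "http://localhost:8080/Design_Time_Addresses/" +
                "MyWCFServices/RealNorthwindService/ProductService/"));

        IProductService proxy = factory.CreateChannel();

        Product product = proxy.GetProduct(23);
        Console.WriteLine("{0}: {1}", product.ProductID, product.ProductName);

        // The fake implementation returns true for any positive price
        bool updated = proxy.UpdateProduct(product);
        Console.WriteLine("Updated: {0}", updated);

        ((IClientChannel)proxy).Close();
        factory.Close();
    }
}
```

This does in code exactly what the WCF Test Client does through its UI: create a channel over wsHttpBinding, send the SOAP request, and read the response.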
Packt
04 Jun 2010
11 min read

Objects and Types in Documentum 6.5 Content Management Foundations- A Sequel

Content persistence

We have seen so far how metadata is persisted, but it is not obvious how content is persisted and associated with its metadata. All sysobjects (objects of type dm_sysobject and its subtypes) other than folders (objects of type dm_folder and its subtypes) can have associated content. We saw that a document can have content in the form of renditions as well as in primary format. How are these content files associated with a sysobject? In other words, how does Content Server know what metadata is associated with a content file? How does it know that one content file is a rendition of another one? Content Server manages content files using content objects, which (indirectly) point to the physical locations of content files and associate them with sysobjects.

Locating content files

Recall that Documentum repositories can store content in various types of storage systems including a file system, a Relational Database Management System (RDBMS), a content-addressed storage (CAS), or external storage devices. Content Server decides to store each file in a location based on the configuration and the presence of products like Content Storage Services. In general, users are not concerned about where the file is stored, since Content Server is able to retrieve the file from the location where it was stored. We will discuss the physical location of a content file without worrying about why Content Server chose to use that location.

Content object

Every content file in the repository has an associated content object, which stores information about the location of the file and identifies the sysobjects associated with it. These sysobjects are referred to as the parent objects of the content object.
A content object is an object of type dmr_content, whose key attributes are listed as follows:

parent_count: Number of parent objects.
parent_id: List of object IDs of the parent objects.
storage_id: Object ID of the store object representing the storage area holding the content.
data_ticket: A value used internally to retrieve the content. The value and its usage depend upon the type of storage used.
i_contents: When the content is stored in turbo storage, this property contains the actual content. If the content is larger than the size of this property (2000 characters for databases other than Sybase, 255 for Sybase), the content is stored in a dmi_subcontent object and this property is unused. If the content is stored in content-addressed storage, it contains the content address. If the content is stored in external storage, it contains the token used to retrieve the content.
rendition: Identifies whether it is a rendition and its related behavior: 0 means original content; 1 means a rendition generated by the server; 2 means a rendition generated by a client; 3 means a rendition not to be removed when its primary content is updated or removed.
format: Object ID of the format object representing the format of the content.
full_content_size: Content file size in bytes, except when the content is stored in external storage.

Object-content relationship

Content Server manages content objects while performing content-related operations. Content associated with a sysobject is categorized as primary content or a rendition. A rendition is a content file associated with a sysobject that is not its primary content. Content in the first content file added to a sysobject is called its primary content and its format is referred to as the primary format for the parent object. Any other content added to the parent object in the same format is also called primary content, though this is rarely done by users manually.
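The parent_id and rendition attributes can be inspected directly with a DQL query. As a sketch, the following query lists every content object, primary or rendition, attached to one document (the object ID shown is a placeholder — substitute a real r_object_id from your repository); the ANY keyword is needed because parent_id is a repeating attribute:

```sql
-- List all content objects attached to a given sysobject
SELECT r_object_id, rendition, format, full_content_size
FROM dmr_content
WHERE ANY parent_id = '0900000180001234'
```

A row with rendition = 0 is the primary content; rows with non-zero rendition values are renditions, as described above.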
This ability to add multiple primary content files is typically utilized programmatically by applications for their internal use. While a sysobject can have multiple primary content files, it is also possible for one content object to have multiple parent objects. This just means that a content file can be shared by multiple objects.

Putting it together

The details about content persistence can become confusing due to the number of objects involved and the relationships among various attributes. It becomes even more complicated when the full Content Server capabilities (such as multiple content files for one sysobject) are manifested. We will look at a simple scenario to visually grasp how content persistence works in common situations. Documentum provides multiple options for locating the content file: DFC provides the getPath() method and DQL provides the get_file_url administration method for this purpose. This section has been included to satisfy the reader's curiosity about content persistence and works through the information manually. This discussion can be treated as supplementary to technical fundamentals.

The sysobject is named paystub.jpg. The primary content file is in jpg format and the rendition is in pdf format, as shown in the following figure:

The following figure shows the objects involved in the content persistence for this document. The central object is of type dm_document. The figure also includes two content objects and one format object. Let's try to understand the relationships by asking specific questions.

How many content files, primary or renditions, are there for the document paystub.jpg? This question can be answered by looking for the corresponding content objects. We look for dmr_content objects that have the document's object ID in one of their parent_id values. The figure shows that there are two such content objects.

Which of these content objects represents the primary content and which one is a rendition?
This can be determined by looking at the rendition attribute. The content object on the left shows rendition=0, which indicates primary content. The content object on the right shows rendition=2, which indicates a rendition generated by a client (recall that we manually imported this rendition).

What is the primary format for this document? This is easy to answer by looking at the a_content_type attribute on the document itself. If we need to know the format for a content object, we can look for the dm_format object which has the same object ID as the value present in the format property of the content object. In the figure above, the format object for the primary content object is shown, which represents a JPEG image. Thus, the format determined for the primary content of the object is expected to match the value of the a_content_type property of the object. The format object for the rendition is not shown, but it would be PDF.

What is the exact physical location of the primary content file? As mentioned in the beginning of this section, there are DFC and DQL methods which can provide this information. For understanding content persistence, we will deduce this manually for a file store, which represents storage on a file system. For other types of storage, an exact location might not be evident, since we need to rely on the storage interface to access the content file. Deducing the exact file path requires the ability to convert a decimal number to a hexadecimal (hex) number; this can be done with pen and paper or using one of the free tools available on the Web. Also remember that negative numbers are represented with what is known as 2's-complement notation, and many of these tools either don't handle 2's complement or don't support enough digits for our purposes. There are two parts of the file path: the root path for the file store and the path of the file relative to this root path. In order to figure out the root path, we identify the file store first.
Find the dm_filestore object whose object ID is the same as the value in the storage_id property of the content object. Then find the dm_location object whose object name is the same as the root property on the file store object. The file_system_path property on this location object has the root path for the file store, which is C:\Documentum\data\localdev\content_storage_01 in the figure above.

In order to find the relative path of the content file, we look at data_ticket (data type integer) on the content object. Find the 8-digit hex representation for this number. Treat the hex number as a string and split the string with path separators (slashes, / or \, depending on the operating system) after every two characters. Suffix the right-most two characters with the file extension (.jpg), which can be inferred from the format associated with the content object. Prefix the path with an 8-digit hex representation of the repository ID. This gives us the relative path of the content file, which is 000000108009be.jpg in the figure above. Prefix this path with the file store root path identified earlier to get the full path of the content file.

Content persistence in Documentum appears to be complicated at first sight. There are a number of separate objects involved here, and that is somewhat similar to having several tables in a relational database when we normalize the schema. At a high level, this complexity in the content persistence model serves to provide scalability, flexibility by supporting multiple kinds of content stores, and ease of managing changes in such an environment.

Lightweight and shareable object types

So far we have primarily dealt with standard types. Lightweight and shareable object types work together to provide performance improvements, which are significant when a large number of lightweight objects share information.
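The data_ticket arithmetic above is easy to get wrong by hand because of the 2's-complement step, so the path derivation can be sketched in a few lines of code. The ticket value and repository ID below are hypothetical; real values come from the dmr_content object and the repository configuration:

```csharp
using System;
using System.Linq;

static class ContentPath
{
    // Derive the file-store-relative path of a content file from its
    // data_ticket, the numeric repository ID, and the file extension.
    public static string RelativePath(int dataTicket, uint repositoryId, string extension)
    {
        // data_ticket is a signed 32-bit value; reinterpret it as unsigned
        // (2's complement) and render it as an 8-digit hex string.
        string hex = unchecked((uint)dataTicket).ToString("x8");

        // Split the hex string into two-character path segments:
        // "800009c2" -> "80", "00", "09", "c2"
        var segments = Enumerable.Range(0, 4).Select(i => hex.Substring(i * 2, 2));

        // Prefix with the 8-digit hex repository ID and add the extension.
        return repositoryId.ToString("x8") + "/" + string.Join("/", segments) + "." + extension;
    }

    static void Main()
    {
        // Hypothetical: data_ticket = -2147481150 in a repository with ID 0x10
        Console.WriteLine(RelativePath(-2147481150, 0x10, "jpg"));
        // Prints: 00000010/80/00/09/c2.jpg
    }
}
```

Appending this relative path to the file store's file_system_path gives the full location of the content file, exactly as deduced manually above.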
The key performance benefits are in terms of savings in storage and in the time it takes to import a large number of documents that share metadata. These types are suitable for use in transactional and archival applications but are not recommended for traditional content management. The term transactional content (as in business transactions) was coined by Forrester Research to describe content typically originating from external parties, such as customers and partners, and driving transactional back-office business processes. Transactional Content Management (TCM) unifies process, content, and compliance to support solutions involving transactional content. Our example scenario of mortgage loan approval process management is a perfect example of TCM. It involves numerous types of documents, several external parties, and sub-processes implementing parts of the overall process. Lightweight and shareable types play a central role in the High Volume Server, which enhances the performance of Content Server for TCM.

A lightweight object type (also known as LwSO, for Lightweight SysObject) is a subtype of a shareable type. When a lightweight object is created, it references an object of its shareable supertype, called the parent object of the lightweight object. Conversely, the lightweight object is called the child object of the shareable object. Additional lightweight objects of the same type can share the same parent object. These lightweight objects share the information present in the common parent object rather than each carrying a copy of that information.

In order to make the best use of lightweight objects, we need to address a couple of questions. When should we use lightweight objects? Lightweight objects are useful when there are a large number of attribute values that are identical for a group of objects. This redundant information can be pushed into one parent object and shared by the lightweight objects.
What kind of information is suitable for sharing in the parent object? System-managed metadata, such as policies for security, retention, storage, and so on, is usually applied to a group of objects based on certain criteria. For example, all the documents in one loan application packet could use a single ACL and retention information, which could be placed into the shareable parent object. The specific information about each document would reside in a separate lightweight object.

Lightweight object persistence

Persistence for lightweight objects works much the same way as it does for objects of standard types, with one exception. A lightweight object is a subtype of a shareable type, and these types have their separate tables as usual. For a standard type, each object has separate records in all of these tables, with each record identified by the object ID of the object. However, when multiple lightweight objects share one parent object, there is only one object ID (of the parent object) in the tables of the shareable type. The lightweight objects need to refer to the object ID of the parent object, which is different from the object ID of any of the lightweight objects, in order to access the shared properties. This reference is made via an attribute named i_sharing_parent, as shown in the last figure.

Packt
04 Jun 2010
11 min read

Objects and Types in Documentum 6.5 Content Management Foundations

Objects

Documentum uses an object-oriented model to store information within the repository. Everything stored in the repository participates in this object model in some way. For example, a user, a document, and a folder are all represented as objects. An object stores data in its properties (also known as attributes) and has methods that can be used to interact with the object.

Properties

A content item stored in the repository has an associated object to store its metadata. Since metadata is stored in object properties, the terms metadata and properties are used interchangeably. For example, a document stored in the repository may have its title, subject, and keywords stored in the associated object. However, note that objects can exist in the repository without an associated content item. Such objects are sometimes referred to as contentless objects. For example, user objects and permission set objects do not have any associated content.

Each object property has a data type, which can be one of boolean, integer, string, double, time, or ID. A boolean value is true or false. An integer value is a whole number. A string value consists of text. A double value is a floating-point number. A time value represents a timestamp, including dates. An ID value represents an object ID that uniquely identifies an object in the repository. Object IDs are discussed in detail later in this article.

A property can be single-valued or repeating. Each single-valued property holds one value. For example, the object_name property of a document contains one value and it is of type string. This means that the document can only have one name. On the other hand, keywords is a repeating property and can have multiple string values. For example, a document may have object_name='LoanApp_1234567891.txt' and keywords='John Doe','application','1234567891'. The following figure shows a visual representation of this object. Typically, only properties are shown on the object while methods are shown when needed.
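Repeating properties are queried with the ANY keyword in DQL. As a sketch (DQL is introduced later in this article), the following query finds documents whose repeating keywords property contains one of the values from the example above:

```sql
-- Match any value of the repeating keywords property
SELECT r_object_id, object_name
FROM dm_document
WHERE ANY keywords = '1234567891'
```

A document matches as long as any one of its keywords values equals the given string, regardless of which position in the repeating list it occupies.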
Furthermore, only the properties relevant to the discussion are shown. Objects will be illustrated in this manner throughout the article series:

Methods

Methods are operations that can be performed on an object. An operation often alters some properties of the object. For example, the checkout method can be used to check out an object. Checking out an object sets the r_lock_owner property with the name of the user performing the checkout. Methods are usually invoked programmatically using Documentum Foundation Classes (DFC), though they can be invoked indirectly using the API. In general, Documentum Query Language (DQL) cannot be used to invoke arbitrary methods on objects. DQL is discussed later in this article.

Note that the term method may be used in two different contexts within Documentum. A method as a defined operation on an object type is usually invoked programmatically through DFC. There is also the concept of a method representing code that can be invoked via a job, workflow activity, or a lifecycle operation. This qualification will be made explicit when the context is not clear.

Working with objects

We used Webtop for performing various operations on documents, where the term document referred to an object with content. Some of these operations are not specific to content and apply to objects in general. For example, checkout and checkin can be performed on contentless objects as well. On the other hand, import, export, and renditions deal specifically with content. Talking specifically about operations on metadata, we can view, modify, and export object properties using Webtop.

Viewing and editing properties

Using Webtop, object properties can be viewed using the View | Properties menu item, the shortcut P, or the right-click context menu. The following screenshot shows the properties of the example object discussed earlier. Note that the same screen can be used to modify and save the properties as well.
Multiple objects can be selected before viewing properties. In this case, a special dialog shows the common properties for the selected objects, as shown in the following figure. Any changes made on this dialog are applied to all the selected objects. On the properties screen, single-valued properties can be edited directly, while repeating properties provide a separate screen for editing through Edit links. Some properties cannot be modified by users at any time. Other properties may not be editable because object security prevents it or if the object is immutable.

Object immutability

Certain operations on an object mark it as immutable, which means that object properties cannot be changed. An object is marked immutable by setting r_immutable_flag to true. Content Server prevents changes to the content and metadata of an immutable object, with the exception of a few special attributes that relate to the operations that are still allowed on immutable objects. For example, users can set a version label on the object, link the object to a folder, unlink it from a folder, delete it, change its lifecycle, and perform one of the lifecycle operations such as promote/demote/suspend/resume. The attributes affected by the allowed operations are allowed to be updated. An object is marked immutable in the following situations:

When an object is versioned or branched, it becomes an old version and is marked immutable.
An object can be frozen, which makes it immutable and imposes some other restrictions. Some virtual document operations can freeze the involved objects.
A retention policy can make the documents under its control immutable.

Certain operations, such as unfreezing a document, can reset the immutability flag, making the object changeable again.

Exporting properties

Metadata can be exported from repository lists, such as folder contents and search results.
Property values of the objects are exported and saved as a .csv (comma-separated values) file, which can be opened in Microsoft Excel or in a text editor. Metadata export can be performed using the Tools | Export to CSV menu item or the right-click context menu. Before exporting the properties, the user is able to choose the properties to export from the available ones.

Object types

Objects in a repository may represent different kinds of entities – one object may represent a workflow while another object may represent a document, for example. As a result, these objects may have different properties and methods. Every time Content Server creates an object, it needs to determine the properties and methods that the object is going to possess. This information comes from an object type (also referred to as type). The term attribute is synonymous with property and the two are used interchangeably. It is common to use the term attribute when talking about a property name and to use property when referring to its value. We will use a dot notation to indicate that an attribute belongs to an object or a type, for example, objectA.title or dm_sysobject.object_name. This notation is succinct and unambiguous and is consistent with many programming languages.

An object type is a template for creating objects. In other words, an object is an instance of its type. A Documentum repository contains many predefined types and allows the addition of new user-defined types (also known as custom types). The most commonly used predefined object type for storing documents in the repository is dm_document. We have already seen how folders are used to organize documents. Folders are stored as objects of type dm_folder. A cabinet is a special kind of folder that does not have a parent folder and is stored as an object of type dm_cabinet. Users are represented as objects of type dm_user and a group of users is represented as an object of type dm_group.
Workflows use a process definition object of type dm_process, while the definition of a lifecycle is stored in an object of type dm_policy. The following figure shows some of these types.

Just like everything else in the repository, a type is also represented as an object, which holds structural information about the type. This object is of type dm_type and stores information such as the name of the type, the name of its supertype, and details about the attributes of the type. The following figure shows an object of type dm_document and an object of type dm_type representing dm_document. It also indicates how the type hierarchy information is stored in the object of type dm_type.

The types present in the repository can be viewed using Documentum Administrator (DA). The following screenshot shows some attributes of the type dm_sysobject. This screen provides controls to scroll through the attributes when a large number of attributes are present. The Info tab provides information about the type other than its attributes.

While the obvious use of a type is to define the structure and behavior of one kind of object, there is another very important use of types: a type can be used to refer to all the objects of that type as a set. For example, queries restrict their scope by specifying a type, so that only objects of that type are considered for matches. In our example scenario, the loan officer may want to search for all loan applications assigned to her. This query will be straightforward if there is an object type for loan applications. Queries are introduced later in this article. As another example, audit events can be restricted to a particular object type, resulting in only the objects of that type being audited.

Type names and property names

Each object type has an internal type name, such as dm_document, which is used for uniquely identifying the type within queries and application code.
Each type also has a label, a user-friendly name often used by applications for displaying information to end users. For example, the type dm_document has the label Document. Conventionally, internal names of predefined types (defined by Documentum for Content Server or other client products) start with dm, as described here:

- dm_: (general) represents commonly used object types, such as dm_document, which is generally used for storing documents.
- dmr_: (read only) represents read-only object types, such as dmr_content, which stores information about a content file.
- dmi_: (internal) represents internal object types, such as dmi_workitem, which stores information about a task.
- dmc_: (client) represents object types supporting Documentum client applications. For example, dmc_calendar objects are used by Collaboration Services for holding calendar events.

Just like an object type, each property also has an internal name and a label. For example, the label for the property object_name is Name. There are some additional conventions for internal property names, which may begin with the following prefixes:

- r_: (read only) normally indicates that the property is controlled by the Content Server and cannot be modified by users or applications. For example, r_object_id represents the unique ID for the object. On the other hand, r_version_label is an interesting property: it is a repeating property and has at least one value supplied by the Content Server, while others may be supplied by users or applications.
- i_: (internal) is similar to r_ except that this property is used internally by the Content Server and is normally not seen by users and applications. i_chronicle_id binds all the versions together into a version tree and is managed by the Content Server.
- a_: (application) indicates that this property is intended to be used by applications and can be modified by applications and users. For example, the format of a document is stored in a_content_type.
This property helps Webtop launch an appropriate desktop application to open a document. The other three prefixes can also be considered to imply system or non-application attributes in general.

- _: (computed) indicates that the property is not stored in the repository and is computed by Content Server as needed. These properties are also normally read-only for applications. For example, each object has a property called _changed, which indicates whether it has been changed since it was last saved. Many of the computed properties are related to security, and most are used for caching information in user sessions.
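The prefix conventions above lend themselves to a small helper that labels an attribute by its prefix. This is an illustrative sketch only, not a Documentum API; the category strings are our own:

```python
# Conventional Documentum attribute-name prefixes, checked in order.
PREFIX_CATEGORIES = [
    ("r_", "read-only (Content Server controlled)"),
    ("i_", "internal"),
    ("a_", "application"),
    ("_",  "computed (not stored)"),
]

def classify_attribute(name):
    """Return the conventional category implied by an attribute's prefix."""
    for prefix, category in PREFIX_CATEGORIES:
        if name.startswith(prefix):
            return category
    # No special prefix: an ordinary stored, user-editable attribute
    return "object-type specific"

print(classify_attribute("r_object_id"))   # read-only (Content Server controlled)
print(classify_attribute("_changed"))      # computed (not stored)
print(classify_attribute("object_name"))   # object-type specific
```

Note that the bare underscore is checked last, so names such as r_object_id match their more specific prefix first.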


Customize backend Component in Joomla! 1.5

Packt
04 Jun 2010
13 min read
Itemized data

Most components handle and display itemized data. Itemized data is data having many instances; most commonly this reflects rows in a database table. When dealing with itemized data there are three areas of functionality that users generally expect:

- Pagination
- Ordering
- Filtering and searching

In this section we will discuss each of these areas of functionality and how to implement them in the backend of a component.

Pagination

To make large amounts of itemized data easier to understand, we can split the data across multiple pages. Joomla! provides us with the JPagination class to help us handle pagination in our extensions. There are four important attributes associated with the JPagination class:

- limitstart: the item with which we begin a page; for example, the first page always begins with item 0.
- limit: the maximum number of items to display on a page.
- total: the total number of items across all the pages.
- _viewall: the option to ignore pagination and display all items.

Before we dive into piles of code, let's take the time to examine the listFooter, the footer that is used at the bottom of paginated lists. The box to the far left sets the maximum number of items to display per page (limit). The remaining buttons are used to navigate between pages. The final text shows the current page out of the total number of pages.

The great thing about this footer is that we don't have to work very hard to create it: we can use a JPagination object to build it. This not only means that it is easy to implement, but also that pagination footers are consistent throughout Joomla!. JPagination is used extensively by components in the backend when displaying lists of items. In order to add pagination to our revues list we must make some modifications to our backend revues model.
Our current model consists of one private property, $_revues, and two methods: getRevues() and delete(). We need to add two additional private properties for pagination purposes. Let's place them immediately following the existing $_revues property:

/** @var array of revue objects */
var $_revues = null;

/** @var int total number of revues */
var $_total = null;

/** @var JPagination object */
var $_pagination = null;

Next we must add a class constructor, as we will need to retrieve and initialize the global pagination variables $limit and $limitstart. JModel objects store a state object in order to record the state of the model. It is common to use the state variables limit and limitstart to record the number of items per page and the starting item for the page. We set the state variables in the constructor:

/**
 * Constructor
 */
function __construct()
{
    global $mainframe, $option;

    parent::__construct();

    // Get the pagination request variables
    $limit      = $mainframe->getUserStateFromRequest(
        'global.list.limit', 'limit', $mainframe->getCfg('list_limit'));
    $limitstart = $mainframe->getUserStateFromRequest(
        $option.'limitstart', 'limitstart', 0);

    // Set the state pagination variables
    $this->setState('limit', $limit);
    $this->setState('limitstart', $limitstart);
}

Remember that $mainframe references the global JApplication object. We use the getUserStateFromRequest() method to get the limit and limitstart variables. We use the user state variable global.list.limit to determine the limit. This variable is used throughout Joomla! to determine the length of lists. For example, if we view the Article Manager and select a limit of five items per page, any other list we then view will also be limited to five items. If a value is set in the request value limit (part of the listFooter), we use that value. Otherwise we use the previous value, and if that is not set we use the default value defined in the application configuration.
The limitstart variable is retrieved from the user state value $option plus .limitstart. The $option value holds the component name, for example com_content. If we build a component that has multiple lists we should add an extra level to this, normally named after the entity. If a value is set in the request value limitstart (part of the listFooter) we use that value. Otherwise we use the previous value, and if that is not set we use the default value 0, which leads us to the first page.

The reason we retrieve these values in the constructor and not in another method is that, in addition to using them for the JPagination object, we will also need them when getting data from the database. In our existing component model we have a single method for retrieving data from the database, getRevues(). For reasons that will become apparent shortly, we need to create a private method that builds the query string and modify our getRevues() method to use it:

/**
 * Builds a query to get data from #__boxoffice_revues
 *
 * @return string SQL query
 */
function _buildQuery()
{
    $db     =& $this->getDBO();
    $rtable = $db->nameQuote('#__boxoffice_revues');
    $ctable = $db->nameQuote('#__categories');

    $query = ' SELECT r.*, cc.title AS cat_title'
           . ' FROM ' . $rtable . ' AS r'
           . ' LEFT JOIN ' . $ctable . ' AS cc ON cc.id = r.catid';

    return $query;
}

We now must modify our getRevues() method:

/**
 * Get a list of revues
 *
 * @access public
 * @return array of objects
 */
function getRevues()
{
    // Get the database connection
    $db =& $this->_db;

    if (empty($this->_revues)) {
        // Build the query and get the limits from the current state
        $query      = $this->_buildQuery();
        $limitstart = $this->getState('limitstart');
        $limit      = $this->getState('limit');

        $this->_revues = $this->_getList($query, $limitstart, $limit);
    }

    // Return the list of revues
    return $this->_revues;
}

We retrieve the object state variables limit and limitstart and pass them to the private JModel method _getList(). The _getList() method is used to get an array of objects from the database based on a query and, optionally, limitstart and limit. The last two parameters modify the query in such a way that we only return the desired results. For example, if we request the first page and display a maximum of five items per page, the following is appended to the query: LIMIT 0, 5.
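What _getList() does with those two extra parameters can be mimicked directly. The following is a simplified, language-neutral sketch of the idea, not the actual JModel implementation:

```python
def apply_limits(query, limitstart, limit):
    """Append a MySQL-style LIMIT clause the way a paginated list query does."""
    if limit <= 0:
        # No limit requested: return the query untouched (the "view all" case)
        return query
    return "%s LIMIT %d, %d" % (query, limitstart, limit)

def page_for(limitstart, limit):
    """Which 1-based page a given offset falls on."""
    return limitstart // limit + 1

q = apply_limits("SELECT r.* FROM revues AS r", 0, 5)
print(q)                # SELECT r.* FROM revues AS r LIMIT 0, 5
print(page_for(10, 5))  # 3
```

So a limitstart of 10 with a limit of 5 produces LIMIT 10, 5 and corresponds to page three of the list.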
The _getList() method is used to get an array of objects from the database based on a query and, optionally, limit and limitstart. The last two parameters will modify the first parameter, a query, in such a way that we only return the desired results. For example if we requested page 1 and were displaying a maximum of five items per page, the following would be appended to the query: LIMIT 0, 5. To handle pagination we need to add a method called getPagination() to our model. This method will handle items we are trying to paginate using a JPagination object. Here is our code for the getPagination() method: /** * Get a pagination object * * @access public * @return pagination object */function getPagination(){ if (empty($this->_pagination)) { // Import the pagination library jimport('joomla.html.pagination'); // Prepare the pagination values $total = $this->getTotal(); $limitstart = $this->getState('limitstart'); $limit = $this->getState('limit'); // Create the pagination object $this->_pagination = new JPagination($total, $limitstart, $limit); } return $this->_pagination;} There are three important aspects to this method. We use the private property $_pagination to cache the object, we use the getTotal() method to determine the total number of items, and we use the getState() method to determine the number of results to display. The getTotal() method is a method that we must define in order to use. We don't have to use this name or this mechanism to determine the total number of items. Here is one way of implementing the getTotal() method: /** * Get number of items * * @access public * @return integer */function getTotal(){ if (empty($this->_total)) { $query = $this->_buildQuery(); $this->_total = $this->_getListCount($query); } return $this->_total;} This method calls our model's private method _buildQuery() to build the query, the same query that we use to retrieve our list of revues. 
We then use the private JModel method _getListCount() to count the number of results that the query will return. We now have everything we need to add pagination to our revues list, except for actually adding it to our list page. We need to add a few lines of code to our revues/view.html.php file. We will need access to global user state variables, so we must add a reference to the global application object as the first line in our display() method:

global $mainframe;

Next we need to create and populate an array that will contain user state information. We will add this code immediately after the code that builds the toolbar:

// Prepare the list array
$lists = array();

// Get the user state
$filter_order     = $mainframe->getUserStateFromRequest(
    $option.'filter_order', 'filter_order', 'published');
$filter_order_Dir = $mainframe->getUserStateFromRequest(
    $option.'filter_order_Dir', 'filter_order_Dir', 'ASC');

// Build the list array for use in the layout
$lists['order']     = $filter_order;
$lists['order_Dir'] = $filter_order_Dir;

// Get revues and pagination from the model
$model  =& $this->getModel('revues');
$revues =& $model->getRevues();
$page   =& $model->getPagination();

// Assign references for the layout to use
$this->assignRef('lists',  $lists);
$this->assignRef('revues', $revues);
$this->assignRef('page',   $page);

After we create and populate the $lists array, we add a variable $page that receives a reference to a JPagination object by calling our model's getPagination() method. Finally, we assign references to the $lists and $page variables so that our layout can access them.

Within our layout file, default.php, we must make some minor changes toward the end of the existing code. Between the closing </tbody> tag and the </table> tag we must add the following:

<tfoot>
  <tr>
    <td colspan="10">
      <?php echo $this->page->getListFooter(); ?>
    </td>
  </tr>
</tfoot>

This creates the pagination footer using the JPagination method getListFooter().
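Behind getListFooter(), the pagination object derives its page figures from the three values we supplied to it. The arithmetic is simple; this is a generic sketch of that derivation, not the Joomla! class itself:

```python
def pagination_state(total, limitstart, limit):
    """Derive the figures a pagination footer displays from total/limitstart/limit."""
    if limit > 0:
        # Ceiling division: a partial last page still counts as a page
        pages_total = (total + limit - 1) // limit
        pages_current = limitstart // limit + 1
    else:
        # "View all": everything on a single page
        pages_total = 1
        pages_current = 1
    return {"pages_total": pages_total, "pages_current": pages_current}

print(pagination_state(12, 10, 5))  # {'pages_total': 3, 'pages_current': 3}
```

Twelve items at five per page yields three pages, and an offset of 10 lands on the last of them, which is exactly what the "page x of y" text in the footer reports.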
The final change we need to make is to add two hidden fields to the form. Under the existing hidden fields we add the following code:

<input type="hidden" name="filter_order"
       value="<?php echo $this->lists['order']; ?>" />
<input type="hidden" name="filter_order_Dir" value="" />

The most important thing to notice is that we leave the value of the filter_order_Dir field empty. This is because the listFooter deals with it for us. That is it! We have now added pagination to our page.

Ordering

Another enhancement we can add is the ability to sort or order our data by column, which we can accomplish easily using the JHTML grid.sort type. As an added bonus, we have already completed a significant amount of the necessary code when we added pagination. Most of the changes to revues/view.html.php that we made for pagination are reused for implementing column ordering; we don't have to make a single change there. We have also already added the two hidden fields, filter_order and filter_order_Dir, to our layout form, default.php. The first defines the column by which to order our data and the second defines the direction, ascending or descending.

Most of the column headings in our existing layout are composed of simple text wrapped in table heading tags (<th>Title</th>, for example). We need to replace the text with the output of the grid.sort function for those columns that we wish to be orderable.
Here is our new code:

<thead>
  <tr>
    <th width="20" nowrap="nowrap">
      <?php echo JHTML::_('grid.sort', JText::_('ID'), 'id',
        $this->lists['order_Dir'], $this->lists['order']); ?>
    </th>
    <th width="20" nowrap="nowrap">
      <input type="checkbox" name="toggle" value=""
        onclick="checkAll(<?php echo count($this->revues); ?>);" />
    </th>
    <th width="40%">
      <?php echo JHTML::_('grid.sort', JText::_('TITLE'), 'title',
        $this->lists['order_Dir'], $this->lists['order']); ?>
    </th>
    <th width="20%">
      <?php echo JHTML::_('grid.sort', JText::_('REVUER'), 'revuer',
        $this->lists['order_Dir'], $this->lists['order']); ?>
    </th>
    <th width="80" nowrap="nowrap">
      <?php echo JHTML::_('grid.sort', JText::_('REVUED'), 'revued',
        $this->lists['order_Dir'], $this->lists['order']); ?>
    </th>
    <th width="80" nowrap="nowrap" align="center">
      <?php echo JHTML::_('grid.sort', 'ORDER', 'ordering',
        $this->lists['order_Dir'], $this->lists['order']); ?>
    </th>
    <th width="10" nowrap="nowrap">
      <?php if ($ordering) echo JHTML::_('grid.order', $this->revues); ?>
    </th>
    <th width="50" nowrap="nowrap">
      <?php echo JText::_('HITS'); ?>
    </th>
    <th width="100" nowrap="nowrap" align="center">
      <?php echo JHTML::_('grid.sort', JText::_('CATEGORY'), 'category',
        $this->lists['order_Dir'], $this->lists['order']); ?>
    </th>
    <th width="60" nowrap="nowrap" align="center">
      <?php echo JHTML::_('grid.sort', JText::_('PUBLISHED'), 'published',
        $this->lists['order_Dir'], $this->lists['order']); ?>
    </th>
  </tr>
</thead>

Let's look at the last column, Published, and dissect the call to grid.sort. Following grid.sort we pass the column heading text, filtered through JText::_() with a key from our translation file; the remaining parameters are the sort field, the current order direction, and the current column by which the data is ordered. In order to use these headings to order our data we must make a few additional modifications to our JModel class. We created the _buildQuery() method earlier when we were adding pagination.
We now need to change that method to handle ordering:

/**
 * Builds a query to get data from #__boxoffice_revues
 *
 * @return string SQL query
 */
function _buildQuery()
{
    $db     =& $this->getDBO();
    $rtable = $db->nameQuote('#__boxoffice_revues');
    $ctable = $db->nameQuote('#__categories');

    $query = ' SELECT r.*, cc.title AS cat_title'
           . ' FROM ' . $rtable . ' AS r'
           . ' LEFT JOIN ' . $ctable . ' AS cc ON cc.id = r.catid'
           . $this->_buildQueryOrderBy();

    return $query;
}

Our method now calls a method named _buildQueryOrderBy() that builds the ORDER BY clause for the query:

/**
 * Build the ORDER BY part of a query
 *
 * @return string part of an SQL query
 */
function _buildQueryOrderBy()
{
    global $mainframe, $option;

    // Array of allowable order fields
    $orders = array('title', 'revuer', 'revued', 'category',
                    'published', 'ordering', 'id');

    // Get the order field and direction; the default order field
    // is 'ordering', the default direction is ascending
    $filter_order     = $mainframe->getUserStateFromRequest(
        $option.'filter_order', 'filter_order', 'ordering');
    $filter_order_Dir = strtoupper($mainframe->getUserStateFromRequest(
        $option.'filter_order_Dir', 'filter_order_Dir', 'ASC'));

    // Validate the order direction; it must be ASC or DESC
    if ($filter_order_Dir != 'ASC' && $filter_order_Dir != 'DESC') {
        $filter_order_Dir = 'ASC';
    }

    // If the order column is unknown, use the default
    if (!in_array($filter_order, $orders)) {
        $filter_order = 'ordering';
    }

    $orderby = ' ORDER BY ' . $filter_order . ' ' . $filter_order_Dir;
    if ($filter_order != 'ordering') {
        $orderby .= ', ordering';
    }

    // Return the ORDER BY clause
    return $orderby;
}

As in the view, we retrieve the order column name and direction using the application's getUserStateFromRequest() method. Since this data is going to be used to interact with the database, we perform some sanity checks to ensure that it is safe to use with the database. Now that we have done this, we can use the table headings to order the itemized data.
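The sanity checks amount to whitelisting the column and direction before interpolating them into SQL, so that request data can never inject arbitrary fragments into the ORDER BY clause. The same idea in a standalone sketch (the column names mirror the example; this is not the Joomla! code):

```python
ALLOWED_COLUMNS = {"title", "revuer", "revued", "category",
                   "published", "ordering", "id"}

def build_order_by(column, direction):
    """Validate user-supplied ordering values against a whitelist.

    Unknown values fall back to safe defaults, so only known-good
    identifiers are ever concatenated into the SQL string.
    """
    direction = direction.upper()
    if direction not in ("ASC", "DESC"):
        direction = "ASC"
    if column not in ALLOWED_COLUMNS:
        column = "ordering"
    clause = "ORDER BY %s %s" % (column, direction)
    if column != "ordering":
        # Secondary sort keeps a stable, predictable row order
        clause += ", ordering"
    return clause

print(build_order_by("title", "desc"))           # ORDER BY title DESC, ordering
print(build_order_by("1; DROP TABLE x", "ASC"))  # ORDER BY ordering ASC
```

Whitelisting identifiers is the standard defence here, since column names cannot be bound as query parameters the way values can.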
This is a screenshot of such a table: Notice that the current ordering is title descending, as denoted by the small arrow to the right of Title.


Red5: A video-on-demand Flash Server

Packt
04 Jun 2010
6 min read
Plone does not provide a responsive user experience out of the box. This is not because the system is slow, but because it simply does (too) much: it performs many security checks and workflow operations, handles content rules, does content validation, and so on. Still, there are some high-traffic sites running this popular Content Management System. How do they manage?

"All Plone integrators are caching experts." This saying is commonly heard and read in the Plone community, and it is true. If we want a fast and responsive system, we have to use caching and load-balancing applications to spread the load.

This article discusses a practical example. We will set up a protected video-on-demand solution with Plone and a Red5 server, and see how to integrate the two for an effective and secure video-streaming solution. The Red5 server is an open source Flash server. It is written in Java and is very extensible via plugins. There are plugins for transcoding, different kinds of streaming, and several other manipulations we might want to perform on video or audio content. What we want to investigate here is how to integrate video streams protected by Plone permissions.

Requirements for setting up a Red5 server

The requirement for running a Red5 Flash server is Java 6. We can check the Java version by running this:

$ java -version
java version "1.6.0_17"
Java(TM) SE Runtime Environment (build 1.6.0_17-b04-248-9M3125)
Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01-101, mixed mode)

The version needs to be at least 1.6. Earlier versions of the Red5 server run with 1.5, but the plugin for protecting the media files needs Java 6. If we do not already have Java 6, we can download it from the Sun home page; packages are available for Windows and Linux. Some Linux distributions ship different implementations of Java because of licensing issues.
You may check the corresponding documentation if this is the case for you. Mac OS X ships with its own bundled Java. To set the Java version to 1.6 on Mac OS X, we need to do the following:

$ cd /System/Library/Frameworks/JavaVM.framework/Versions
$ rm Current*
$ ln -s 1.6 Current
$ ln -s 1.6 CurrentJDK

After doing so, we should double-check the Java version with the command shown before. The Red5 server is available as a package for various operating systems. In the next section, we will see how to integrate a Red5 server into a Plone buildout.

A Red5 buildout

Red5 can be downloaded in several different ways. As it is open source, even the sources are available as a tarball from the product home page. For the buildout, we use the bundle of ready-compiled Java libraries. This bundle comes with everything needed to run a standalone Red5 server. Startup scripts are provided for Windows and Bash (usable with Linux and Mac OS X). Let's see how to configure our buildout.

The buildout needs the usual common elements for a Plone 3.3.3 installation. Apart from the application and the instance, the Red5-specific parts are also present: an fss storage part and a part for setting up the supervisor.

[buildout]
newest = false
parts =
    zope2
    instance
    fss
    red5
    red5-webapp
    red5-protectedVOD
    supervisor
extends =
    http://dist.plone.org/release/3.3.3/versions.cfg
versions = versions
find-links =
    http://dist.plone.org/release/3.3.3
    http://dist.plone.org/thirdparty
    http://pypi.python.org/simple/

There is nothing special in the zope2 application part:

[zope2]
recipe = plone.recipe.zope2install
fake-zope-eggs = true
url = ${versions:zope2-url}

On the Plone side we need, besides the fss eggs, a package called unimr.red5.protectedvod. This package with the rather complicated name creates rather complicated one-time URLs for the communication with Red5.
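The one-time URL idea can be illustrated with a generic signed-link sketch. To be clear, this is not the unimr.red5.protectedvod implementation; it only shows the general pattern of an expiring, HMAC-signed link that a streaming server can verify without asking Plone again. The secret value and URL layout below are invented for the example:

```python
import hashlib
import hmac
import time

SECRET = b"shared-between-plone-and-red5"  # illustrative value only

def make_stream_url(path, now=None, ttl=60):
    """Build a link that embeds an expiry time and a signature over path+expiry."""
    expires = int(now if now is not None else time.time()) + ttl
    msg = ("%s:%d" % (path, expires)).encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return "/protectedVOD/%s?expires=%d&sig=%s" % (path, expires, sig)

def is_valid(path, expires, sig, now):
    """The streaming side recomputes the signature and checks the clock."""
    msg = ("%s:%d" % (path, expires)).encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and now < expires

url = make_stream_url("trailer.flv", now=1000, ttl=60)
print(url.startswith("/protectedVOD/trailer.flv?expires=1060"))  # True
```

A tampered path or an expired clock both fail verification, so the link grants access only to the file it was issued for and only for a short window.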
[instance]
recipe = plone.recipe.zope2instance
zope2-location = ${zope2:location}
user = admin:admin
http-address = 8080
eggs =
    Plone
    unimr.red5.protectedvod
    iw.fss
zcml =
    unimr.red5.protectedvod
    iw.fss
    iw.fss-meta

First, we need to configure FileSystemStorage. FileSystemStorage is used for sharing the videos between Plone and Red5. The videos are uploaded via the Plone UI and placed on the filesystem. The storage strategy needs to be either site1 or site2; these two strategies store the binary data with its original filename and file extension. The extension is needed for the Red5 server to recognize the file.

[fss]
recipe = iw.recipe.fss
zope-instances =
    ${instance:location}
storages =
    global /site /site site2

The red5 part downloads and extracts the Red5 application. We have to keep in mind that everything is placed into the parts directory. This includes configurations, plugins, logs, and even content. We need to be extra careful about changing the recipe in the buildout if running in production mode. The content we share with Plone is symlinked, so this is not a problem. For the logs, we might move their location outside the parts directory and symlink them back.

[red5]
recipe = hexagonit.recipe.download
url = http://www.red5.org/downloads/0_8/red5-0.8.0.tar.gz

The next part adds our custom application, which handles the temporary links used for protection, to the Red5 application. The plugin is shipped together with the unimr.red5.protectedvod egg we use on the Plone side. It is easier to get it from the Subversion repository directly:

[red5-webapp]
recipe = infrae.subversion
urls = http://svn.plone.org/svn/collective/unimr.red5.protectedvod/trunk/unimr/red5/protectedvod/red5-webapp red5-webapp

The red5-protectedVOD part configures the protectedVOD plugin. Basically, the WAR archive we checked out in the previous step is extracted. If the location of the fss storage does not already exist, it is symlinked into the streams directory of the plugin.
The streams directory is the usual place for media files in Red5.

[red5-protectedVOD]
recipe = iw.recipe.cmd
on_install = true
on_update = false
cmds =
    mkdir -p ${red5:location}/webapps/protectedVOD
    cd ${red5:location}/webapps/protectedVOD
    jar xvf ${red5-webapp:location}/red5-webapp/protectedVOD_0.1-red5_0.8-java6.war
    cd streams
    if [ ! -L ${red5:location}/webapps/protectedVOD/streams/fss_storage_site ]; then ln -s ${buildout:directory}/var/fss_storage_site .; fi

The commands used above are Unix/Linux centric. Until Vista/Server 2008, Windows didn't understand symbolic links, which is why the whole idea of the recipe doesn't work there. The recipe might work with Windows Vista, Windows Server 2008, or Windows 7, but the commands would look different.

Finally, we add the Red5 server to our supervisor configuration. We need to set the RED5_HOME environment variable so that the startup script can find the necessary Red5 libraries.

[supervisor]
recipe = collective.recipe.supervisor
programs =
    30 instance ${instance:location}/bin/runzope ${instance:location} true
    40 red5 env [RED5_HOME=${red5:location} ${red5:location}/red5.sh] ${red5:location} true

After running the buildout, we can start the supervisor by issuing the following command:

bin/supervisord

The supervisor will take care of running all the subprocesses. To find out more about the supervisor, we may visit its website. To check that everything worked, we can request a status report by issuing this:

bin/supervisorctl status
instance    RUNNING    pid 2176, uptime 3:00:23
red5        RUNNING    pid 7563, uptime 0:51:25

Using Javascript effects with Joomla! 1.5

Packt
03 Jun 2010
7 min read
Using JavaScript effects

Joomla! includes mootools, a powerful compact JavaScript framework. Mootools enables us to do many things, but it is used extensively in Joomla! to create client-side effects. Some of these, such as the accordion, are accessible through Joomla! classes; others require special attention. In some instances it may be necessary to manually add the mootools library to the document. We can do this using the JHTML behavior.mootools type:

JHTML::_('behavior.mootools');

JPane

A pane is an XHTML area that holds more than one set of information. There are two different types of panes:

- Tabs: a typical tabbed area with tabs at the top that are used to select the different panes.
- Sliders: vertical selections of headings above panels that can be expanded and contracted, based on the mootools accordion.

We use the JPane class to implement panes. This example demonstrates a basic tabbed pane with two panels:

$pane =& JPane::getInstance('Tabs');
echo $pane->startPane('myPane');
{
    echo $pane->startPanel('Panel 1', 'panel1');
    echo "This is Panel 1";
    echo $pane->endPanel();

    echo $pane->startPanel('Panel 2', 'panel2');
    echo "This is Panel 2";
    echo $pane->endPanel();
}
echo $pane->endPane();

There are essentially two elements to a pane: the pane itself and the panels within it. We use the methods startPane() and endPane() to mark the start and end of the pane. When we use startPane() we must provide one string parameter, a unique identifier for the pane. Panels are always created inside a pane, using the methods startPanel() and endPanel(). We must provide the startPanel() method with two parameters: the name, which appears on the tab, and the panel ID.
The following is a screenshot of the pane created from the previous code. Had we wanted to create a slider pane instead of a tabbed pane, we would have supplied the parameter Sliders instead of Tabs to the getInstance() method. This is a screenshot of the same pane as a slider. Panes are used extensively in Joomla!; as a general rule, tabs are used for settings and sliders are used for parameters.

Tooltips

Tooltips are small boxes with useful information in them that appear in response to onmouseover events. They are used extensively in forms to provide more information about fields and their contents. Tooltips can be extremely helpful to users, providing small hints such as what value should be put into a field or what the purpose of a field is. They take a small amount of code to implement but add a lot of value for our users.

So how do we add a tooltip? We use JHTML to render tips easily. There are two types that we use:

- behavior.tooltip imports the necessary JavaScript to enable tooltips; it does not return anything, and we only ever need to call it once per page.
- tooltip renders a tooltip in relation to an image or a piece of text.

There are six parameters associated with tooltip, of which five are optional. We will explore the more common uses of these parameters. The most basic usage of tooltip returns a small information icon that displays a tooltip on mouseover, as this example demonstrates:

echo JHTML::_('tooltip', $tooltip);

The next parameter allows us to define a title that is displayed at the top of the tooltip:

echo JHTML::_('tooltip', $tooltip, $title);

The next parameter allows us to select an image from the includes/js/ThemeOffice directory.
This example uses the warning.png image:

echo JHTML::_('tooltip', $tooltip, $title, 'warning.png');

The next obvious leap is to use text instead of an image, and that is just what the next parameter allows us to do:

echo JHTML::_('tooltip', $tooltip, $title, null, $text);

There are some additional parameters that relate to using hypertext links. A full description of these is available in Appendix E, Joomla! HTML Library.

We can modify the appearance of tooltips using CSS. There are three style classes that we can use: .tool-tip, .tool-title, and .tool-text. The tooltip is encapsulated by the .tool-tip class, and the .tool-title and .tool-text styles relate to the title and the content. This code demonstrates how we can add some CSS to the document to override the default tooltip CSS:

// Prepare the CSS
$css = '
/* Tooltips */
.tool-tip {
    min-width: 100px;
    opacity: 0.8;
    filter: alpha(opacity=80);
    -moz-opacity: 0.8;
}
.tool-title {
    text-align: center;
}
.tool-text {
    font-style: italic;
}';

// Add the CSS to the document
$doc =& JFactory::getDocument();
$doc->addStyleDeclaration($css);

Let's add tooltips to our com_boxoffice/views/revue/tmpl/default.php layout file. The first step is to enable tooltips by adding behavior.tooltip to the beginning of the layout file:

<?php
// No direct access
defined('_JEXEC') or die('Restricted access');

// Enable tooltips
JHTML::_('behavior.tooltip');
?>

This should be placed at the beginning, as illustrated. It adds the mootools JavaScript class Tips to our document and adds the following JavaScript code to the document heading:

<script type="text/javascript">
window.addEvent('domready', function() {
    var JTooltips = new Tips($$('.hasTip'), { maxTitleChars: 50, fixed: false });
});
</script>

Next, we identify the elements for which we wish to enable a tooltip. There are two documented ways to implement a tooltip.
We will create both for the movie title to illustrate:

<tr>
    <td width="100" align="right" class="key">
        <span class="editlinktip hasTip" title="::<?php echo JText::_('TIP_001');?>">
            <label for="title">
                <?php echo JText::_('Movie Title'); ?>:
            </label>
        </span>
    </td>
    <td>
        <input class="inputbox" type="text" name="title" id="title" size="25" value="<?php echo $this->revue->title;?>" />
        <?php echo JHTML::_('tooltip', JText::_('TIP_001')); ?>
    </td>
</tr>

The first approach wraps the label with a <span> that has two CSS classes declared, editlinktip and hasTip, and a title attribute. The title attribute is a two-part string with the parts separated by double colons; the first part is the tooltip title and the second is the tooltip text. Both methods will produce similar results. There are a few differences that you should keep in mind. The first approach displays the tip when you hover over the spanned element (in this case the label field). The second approach will generate a small icon next to the input field; the tip will appear when you move your mouse over the icon. You can duplicate the results of the first approach using the tooltip method with the following code:

<?php
$label = '<label for="title">' . JText::_('Movie Title') . '</label>';
echo JHTML::_('tooltip', JText::_('TIP_001'), '', '', $label);
?>

Note that the tip text is passed through JText with a key from our translation file. Here are the entries for our tips:

# Tip Text
TIP_001=Enter the film title.
TIP_002=Choose the MPAA film rating.
TIP_003=Provide a brief impression of the film.
TIP_004=Enter the name of the reviewer.
TIP_005=Enter 1-5 asterisks (*) for overall quality of the film.
TIP_006=Enter the date of the review (mm/dd/yyyy).
TIP_007=Do you wish to publish this revue?

In the end, the method you choose to implement tooltips is largely a personal preference.
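One refinement worth knowing about: the generated script shown earlier hard-codes maxTitleChars: 50 and fixed: false. In Joomla! 1.5 the behavior.tooltip type accepts an optional CSS selector and an options array that are passed through to the MooTools Tips class. The selector name and option values below are illustrative assumptions, not required values, so treat this as a sketch:

```
// attach tooltips to elements of class hasNote instead of the default
// .hasTip, allowing longer titles and a fixed tooltip position
JHTML::_('behavior.tooltip', '.hasNote', array('maxTitleChars' => 80, 'fixed' => true));
```

Any element given the hasNote class and a two-part title attribute (title::text) would then display its tooltip using these settings.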
Packt
03 Jun 2010
16 min read

Improving components with Joomla! 1.5

Improving components

We are going to be working almost exclusively on the backend component in this article, but most of what we will be covering could easily be adapted for the frontend component if we wished to do so.

Component backend

When we build the backend of a component there are some very important things to consider. Most components will include at least two backend views or forms; one will display a list of items and another will provide a form for creating or editing a single item. There may be additional views depending on the component, but for now we will work with our com_boxoffice component, which consists of two views.

Toolbars

Although we have already built our component toolbars, we didn't spend much time discussing all the features and capabilities that are available to us, so let's start with a bit of a review and then add a few enhancements to our component. Our backend component has two toolbars. The first is displayed when we access our component from the Components | Box Office Revues menu: The second toolbar is displayed when we click on the New or Edit button, or click on a movie title link in the list that is displayed: Administration toolbars consist of a title and a set of buttons that provide built-in functionality; it requires only a minimum amount of effort to add significant functionality to our administration page. We add buttons to our toolbar in our view classes using the static JToolBarHelper class. In our administration/components/com_boxoffice/views folder we have two views, revues and revue.
In the revues/view.html.php file we generated the toolbar with the following code:

JToolBarHelper::title(JText::_('Box Office Revues'), 'generic.png');
JToolBarHelper::deleteList();
JToolBarHelper::editListX();
JToolBarHelper::addNewX();
JToolBarHelper::preferences('com_boxoffice', '200');
JToolBarHelper::help('help', true);

In our example we set the title of our menu bar to Box Office Revues, passing it through JText::_(), which will translate it if we have installed a language file. Next we add Delete, Edit, New, Preferences, and Help buttons. Note that whenever we use JToolBarHelper we must set the title before we add any buttons. There are many different buttons that we can add to the menu bar; if we cannot find a suitable button we can define our own. Most of the buttons behave as form buttons for the form adminForm, which we will discuss shortly. Some buttons require certain input fields to be included with the adminForm in order to function correctly. The following table lists the available buttons that we can add to the menu bar:

addNew: Adds an add new button to the menu bar.
addNewX: Adds an extended version of the add new button, calling hideMainMenu() before submitbutton().
apply: Adds an apply button to the menu bar.
archiveList: Adds an archive button to the menu bar.
assign: Adds an assign button to the menu bar.
back: Adds a back button to the menu bar.
cancel: Adds a cancel button to the menu bar.
custom: Adds a custom button to the menu bar.
customX: Adds an extended version of the custom button, calling hideMainMenu() before submitbutton().
deleteList: Adds a delete button to the menu bar.
deleteListX: Adds an extended version of the delete button, calling hideMainMenu() before submitbutton().
Divider: Adds a divider, a vertical line, to the menu bar.
editCss: Adds an edit CSS button to the menu bar.
editCssX: Adds an extended version of the edit CSS button, calling hideMainMenu() before submitbutton().
editHtml: Adds an edit HTML button to the menu bar.
editHtmlX: Adds an extended version of the edit HTML button, calling hideMainMenu() before submitbutton().
editList: Adds an edit button to the menu bar.
editListX: Adds an extended version of the edit button, calling hideMainMenu() before submitbutton().
help: Adds a Help button to the menu bar.
makeDefault: Adds a Default button to the menu bar.
media_manager: Adds a Media Manager button to the menu bar.
preferences: Adds a Preferences button to the menu bar.
preview: Adds a Preview button to the menu bar.
publish: Adds a Publish button to the menu bar.
publishList: Adds a Publish button to the menu bar.
save: Adds a Save button to the menu bar.
Spacer: Adds a sizable spacer to the menu bar.
title: Sets the title and the icon class of the menu bar.
trash: Adds a Trash button to the menu bar.
unarchiveList: Adds an Unarchive button to the menu bar.
unpublish: Adds an Unpublish button to the menu bar.
unpublishList: Adds an Unpublish button to the menu bar.

Submenu

Directly below the main menu bar is an area reserved for the submenu. There are two methods available to populate the submenu. The submenu is automatically populated with items defined in the component XML manifest file. We can also modify the submenu, adding or removing menu items, using the JSubMenuHelper class. We will begin by adding a submenu using the component XML manifest file. When we last updated our component XML manifest file we placed a menu item in the Administration section:

<menu>Box Office Revues</menu>

This placed a menu item under the Components menu. Our component utilizes a single table, #__boxoffice_revues, which stores specific information related to movie revues. One thing that might make our component more useful is to add the ability to categorize movies by genre (for example: action, romance, science fiction, and so on). Joomla!'s built-in #__categories table will make this easy to implement.
We will need to make a few changes in several places, so let's get started. The first change we need to make is to modify our #__boxoffice_revues table, adding a foreign key field that will point to a record in the #__categories table. We will add one field to our table immediately after the primary key field id:

`catid` int(11) NOT NULL default '0',

If you have installed phpMyAdmin you can easily add this new field without losing any existing data. Be sure to update the install.sql file for future component installs. Next we will add our submenu items to the component XML manifest file, immediately after the existing menu declaration:

<submenu>
    <menu link="option=com_boxoffice">Revues</menu>
    <menu link="option=com_categories&amp;section=com_boxoffice">Categories</menu>
</submenu>

Note that we use &amp; rather than an ampersand (&) character to avoid problems with XML parsing. Since we modified our #__boxoffice_revues table we must update our JTable subclass /tables/revue.php to match by adding the following lines immediately after the id field:

/** @var int */
var $catid = 0;

And finally, we need to modify our layout /views/revue/tmpl/default.php to allow us to select a category or genre for our movie (place this immediately after the </tr> tag of the first table row, the one that contains our movie title):

<tr>
    <td width="100" align="right" class="key">
        <label for="catid">
            <?php echo JText::_('Movie Genre'); ?>:
        </label>
    </td>
    <td>
        <?php echo JHTML::_('list.category', 'catid', 'com_boxoffice', $this->revue->catid); ?>
    </td>
</tr>

The call to JHTML::_() produces the HTML to display the selection drop-down list of component-specific categories. The static JHTML class is an integral part of the joomla.html library, which we will discuss in the next section. Creating submenu items through the component XML manifest file is not the only method at our disposal; we can modify the submenu using the static JSubMenuHelper class.
Please note however that these methods differ in a number of ways. Submenu items added using the manifest file will appear as submenu items under the Components menu item as well as in the submenu area of the menu bar. For example, the Components menu will appear as it does in the following screenshot: The submenu items will appear on the component list page as shown in the following image: And the submenu items will also appear on the Category Manager page: If we were to use the JSubMenuHelper class, the submenu items would only appear on our component submenu bar; they would not appear on Components | Box Office Revues or on the Category Manager submenu, which would eliminate the means of returning to our component menu. For these reasons it is generally better to create submenus that link to other components using the XML manifest file. There are, however, valid reasons for using JSubMenuHelper to create submenu items. If your component provides additional views of your data, adding submenu items using JSubMenuHelper would be the more appropriate method for doing so. This example adds two options to the submenu using JSubMenuHelper:

// get the current task
$task = JRequest::getCmd('task');
if ($task == 'item1' || $task == 'item2')
{
    // determine selected task
    $selected = ($task == 'item1');
    // prepare links
    $item1 = 'index.php?option=com_myextension&task=item1';
    $item2 = 'index.php?option=com_myextension&task=item2';
    // add sub menu items; the third argument flags the current item
    JSubMenuHelper::addEntry(JText::_('Item 1'), $item1, $selected);
    JSubMenuHelper::addEntry(JText::_('Item 2'), $item2, !$selected);
}

The addEntry() method adds a new item to the submenu. Items are added in order of appearance. The first parameter is the name, the second is the link location, and the third is true if the item is the current menu item. The next screenshot depicts the given example, in the component My Extension, when the selected task is Item1: There is one more thing that we can do with the submenu. We can remove it.
This is especially useful with views where an item becomes locked if a user navigates away without following the correct procedure. If we modify the hidemainmenu request value to 1, the submenu will not be displayed. We normally do this in methods in our controllers; a common method in which this would be done is edit(). This example demonstrates how:

JRequest::setVar('hidemainmenu', 1);

There is one other caveat when doing this; the main menu will be deactivated. This screenshot depicts the main menu across the top of the backend: This screenshot depicts the main menu across the top of the backend when hidemainmenu is enabled; you will notice that all of the menu items are grayed out:

The joomla.html library

The joomla.html library provides a comprehensive set of classes for use in rendering XHTML. An integral part of the library is the static JHTML class. Within this class is the class loader method JHTML::_(), which we will use to generate and render XHTML elements and JavaScript behaviors. We generate an XHTML element or JavaScript behavior using the following method:

echo JHTML::_('type', 'parameter_1', ..., 'parameter_N');

The JHTML class supports eight basic XHTML element types; there are eight supporting classes that provide support for more complex XHTML element types and JavaScript behaviors. While we will not be using every available element type or behavior, we will make good use of a significant number of them throughout this article; enough for you to make use of others as the need arises.
The basic element types are:

calendar: Generates a calendar control field and a clickable calendar image
date: Returns a formatted date string
iframe: Generates an XHTML <iframe></iframe> element
image: Generates an XHTML <img></img> element
link: Generates an XHTML <a></a> element
script: Generates an XHTML <script></script> element
style: Generates a <link rel="stylesheet" type="text/css" /> element
tooltip: Generates a popup tooltip using JavaScript

There are eight supporting classes that provide more complex elements and behaviors that we generally describe as grouped types. Grouped types are identified by a group name and a type name. The supporting classes and group names are:

JHTMLBehavior (behavior): Creates JavaScript client-side behaviors
JHTMLEmail (email): Provides email address cloaking
JHTMLForm (form): Generates a hidden token field
JHTMLGrid (grid): Creates HTML form grids
JHTMLImage (image): Enables a type of image overriding in templates
JHTMLList (list): Generates common selection lists
JHTMLMenu (menu): Generates menus
JHTMLSelect (select): Generates dropdown selection boxes

All group types are invoked using the JHTML::_('group.type', ...) syntax. The following section provides an overview of the available group types.

behavior

These types are special because they deal with JavaScript in order to create client-side behaviors. We'll use behavior.modal as an example. This behavior allows us to display an inline modal window that is populated from a specific URI. A modal window is a window that prevents a user from returning to the originating window until the modal window has been closed. A good example of this is the 'Pagebreak' button used in the article manager when editing an article. The behavior.modal type does not return anything; it prepares the necessary JavaScript. In fact, none of the behavior types return data; they are designed solely to import functionality into the document.
This example demonstrates how we can use the behavior.modal type to open a modal window that uses www.example.org as the source:

// prepare the JavaScript parameters
$params = array('size' => array('x' => 100, 'y' => 100));
// add the JavaScript
JHTML::_('behavior.modal', 'a.mymodal', $params);
// create the modal window link
echo '<a class="mymodal" title="example" href="http://www.example.org" rel="{handler: \'iframe\', size: {x: 400, y: 150}}">Example Modal Window</a>';

The a.mymodal parameter is used to identify the elements that we want to attach the modal window to. In this case, we want to use all <a> tags of class mymodal. This parameter is optional; the default selector is a.modal. We use $params to specify default settings for modal windows. This list details the keys that we can use in this array to define default values:

ajaxOptions
size
onOpen
onClose
onUpdate
onResize
onMove
onShow
onHide

The link that we create can only be seen as special because of the JavaScript in the rel attribute. This JavaScript array is used to determine the exact behavior of the modal window for this link. We must always specify handler; this is used to determine how to parse the input from the link. In most cases this will be iframe, but we can also use image, adopt, url, and string. The size parameter is optional; here it is used to override the default specified when we used the behavior.modal type to import the JavaScript.
The settings have three layers of inheritance:

The default settings defined in the modal.js file
The settings we define when using the behavior.modal type
The settings we define when creating the link

This is a screenshot of the resultant modal window when the link is used: Here are the behavior types:

calendar: Adds JavaScript to use the showCalendar() function
caption: Places the image title beneath an image
combobox: Adds JavaScript to add combo selection to text fields
formvalidation: Adds the generic JFormValidator JavaScript class to the document
keepalive: Adds JavaScript to maintain a user's session
modal: Adds JavaScript to implement modal windows
mootools: Adds the MooTools JavaScript library to the document head
switcher: Adds JavaScript to toggle between hidden and displayed elements
tooltip: Adds JavaScript required to enable tooltips
tree: Instantiates the MooTools JavaScript class MooTree
uploader: Adds a dynamic file uploading mechanism using JavaScript

email

There is only one e-mail type.

cloak: Adds JavaScript to encrypt e-mail addresses in the browser

form

There is only one form type.

token: Generates a hidden token field to reduce the risk of CSRF exploits

grid

The grid types are used for displaying a dataset's item elements in a table of a backend form. There are seven grid types, each of which handles a commonly defined database field such as access, published, ordering, or checked_out. The grid types are used within a form named adminForm that must include a hidden field named boxchecked with a default value of 0, and another named task that will be used to determine which task a controller will execute. To illustrate how the grid types are used, we will use grid.id and grid.published along with our component database table #__boxoffice_revues, which has a primary key field named id, a field named published, which we use to determine if an item should be displayed, and a field named name.
We can determine the published state of a record in our table by using grid.published. This example demonstrates how we might process each record in a view form layout and output data into a grid or table ($this->revues is an array of objects representing records from the table):

<?php
$i = 0;
foreach ($this->revues as $row) :
    $checkbox  = JHTML::_('grid.id', ++$i, $row->id);
    $published = JHTML::_('grid.published', $row, $i);
    ?>
    <tr class="row<?php echo $i % 2; ?>">
        <td><?php echo $checkbox; ?></td>
        <td><?php echo $row->name; ?></td>
        <td align="center"><?php echo $published; ?></td>
    </tr>
<?php endforeach; ?>

If $revues were to contain two objects named Item 1 and Item 2, of which only the first object is published, the resulting table would look like this: Not all of the grid types are used for data item elements. The grid.sort and grid.order types are used to render table column headings. The grid.state type is used to display an item state selection box: All, Published, Unpublished and, optionally, Archived and Trashed. The grid types include:

access: Generates an access group text link
checkedOut: Generates a selectable checkbox or a small padlock image
id: Generates a selectable checkbox
order: Outputs a clickable image for every orderable column
published: Outputs a clickable image that toggles between published and unpublished
sort: Outputs a sortable heading for a grid/table column
state: Outputs a drop-down selection box called filter_state

image

We use the image types to perform a form of image overriding by determining if a template image is present before using a system default image. We will use image.site to illustrate, using an image named edit.png:

echo JHTML::_('image.site', 'edit.png');

This will output an image tag for the image named edit.png. The image will be located in the currently selected template's /images folder. If edit.png is not found in the /images folder then the /images/M_images/edit.png file will be used.
We can change the default directories using the $directory and $param_directory parameters. There are two image types, image.administrator and image.site:

administrator: Loads an image from the backend template's image directory, or the default image
site: Loads an image from the frontend template's image directory, or the default image
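As a counterpart to the image.site example above, the backend variant is used the same way. This is a sketch assuming a Joomla! 1.5 backend context; the override behavior (template image first, then the system default) mirrors the description above:

```
// look for edit.png in the current backend template's images folder,
// falling back to the system default image if it is not found
echo JHTML::_('image.administrator', 'edit.png');
```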
Packt
02 Jun 2010
17 min read

Easily modifying a page with Joomla! 1.5

Application message queue

You may have noticed that when we raise a notice or a warning, a bar appears across the top of the page containing the notice or warning message. These messages are part of the application message queue. The application message queue is a message stack that is displayed the next time the application renders an HTML view. This means that we can queue messages in one request but not show them until a later request. There are three core message types: message, notice, and error. The next screenshot depicts how each of the different types of application message is rendered: We use the application enqueueMessage() method to add a message to the queue. This example demonstrates how we would add all of the messages shown in the previous screenshot to the message queue:

$mainframe->enqueueMessage('A message type message');
$mainframe->enqueueMessage('A notice type message', 'notice');
$mainframe->enqueueMessage('An error type message', 'error');

The first parameter is the message that we want to add and the second parameter is the message type; the default is message. It is uncommon to add notice or error messages this way; normally we will use JError::raiseNotice() and JError::raiseWarning() respectively. This means that we will, in most cases, use one parameter with the enqueueMessage() method. It is possible, however, to add messages of our own custom types. This is an example of how we would add a message of type bespoke:

$mainframe->enqueueMessage('A bespoke type message', 'bespoke');

Custom type messages will render in the same format as message type messages. Imagine we want to use the bespoke message type to render messages but not display them. This could be useful for debugging purposes.
This example demonstrates how we can add a CSS declaration to the document, using the methods described earlier, to modify the way in which the bespoke messages are displayed:

$css = '/* Bespoke Error Messages */
#system-message dt.bespoke {
    display: none;
}
dl#system-message dd.bespoke ul {
    color: #30A427;
    border-top: 3px solid #94CA8D;
    border-bottom: 3px solid #94CA8D;
    background: #C8DEC7 url(notice-bespoke.png) 4px 4px no-repeat;
}';
$doc =& JFactory::getDocument();
$doc->addStyleDeclaration($css);

Now when bespoke messages are rendered, they will appear like this:

Redirecting the browser

Redirection allows us to redirect the browser to a new location. Joomla! provides us with some easy ways in which to redirect the browser. Joomla! redirects are implemented using HTTP 301 redirect response codes. In the event that response headers have already been sent, JavaScript will be used to redirect the browser. The most common time to redirect a browser is after a form has been submitted. There are a number of reasons why we might want to do this, such as the following:

Redirecting after form submission prevents forms from being submitted multiple times when the browser is refreshed
We can redirect to different locations depending on the submitted data
Redirecting to another view reduces the amount of development required for each task in the controller

There are many scenarios where the use of a redirect is common. The following list identifies some of these:

Canceling editing an existing item
Copying items
Creating new items and updating existing items
Deleting items
Publishing or unpublishing items
Updating item ordering

Imagine a user submits a form that is used to create a new record in a database table. The first thing we need to do when we receive a request of this type is to validate the form contents. This next data flow diagram describes the logic that we could implement: The No route passes the invalid input to the session.
We do this so that when we redirect the user to the input form we can repopulate the form with the invalid input. If we do not do this, the user will have to complete the entire form again. We may choose to omit the Pass invalid input to user session process, as the core components do. It is normal to include JavaScript to validate forms before they are submitted, and since the majority of users will have JavaScript support enabled, this may be a good approach to use. Note that omitting this process is not the same as omitting form validation. We must never depend on JavaScript or other client-side mechanisms for data validation. A good approach is to initially develop forms without client-side validation while ensuring that we properly handle invalid data with server-side scripts. As a quick aside, a good way to validate form contents is to use a JTable subclass check() method. If we place failed input into the session, we might want to put it in its own namespace. This makes it easier to remove the data later and helps prevent naming conflicts. The next example demonstrates how we might add the field value of myField to the myForm session namespace:

// get the session
$session =& JFactory::getSession();
// get the raw value of myField
$myFieldValue = JRequest::getString('myField', '', 'POST', JREQUEST_ALLOWRAW);
// add the value to the session namespace myForm
$session->set('myField', $myFieldValue, 'myForm');

When we come to display the form we can retrieve the data from the session using the get() method. Once we have retrieved the data we must remember to remove the data from the session, otherwise it will be displayed every time we view the form (unless we use another flag as an indicator). We can remove data items from the myForm namespace using the clear() method:

// get the session
$session =& JFactory::getSession();
// remove myField
$session->clear('myField', 'myForm');

The final thing we do in the No route is to redirect the user back to the input form.
When we do this, we must add some messages to the application queue to explain to the user why the input has been rejected. The Yes route adds a new record to the database and then redirects the user to the newly created item. As with the No route, it is normal to queue a message that will say that the new item has been successfully saved, or something to that effect. There are essentially two ways to redirect. The first is to use the application redirect() method. It is unusual to use this mechanism unless we are developing a component without the use of the Joomla! MVC classes. This example demonstrates how we use the application method:

$mainframe->redirect('index.php?option=com_boxoffice');

This will redirect the user's browser to index.php?option=com_boxoffice. There are two additional optional parameters that we can provide when using this method. These are used to queue a message. This example redirects us, as per the previous example, and queues a notice type message that will be displayed after the redirect has successfully completed:

$mainframe->redirect('index.php?option=com_boxoffice', 'Some Message', 'notice');

The final parameter, the message type, defaults to message. The application redirect() method immediately queues the message, redirects the user's browser, and ends the application. The more common mechanism for implementing redirects is to use the JController setRedirect() method. We generally use this from within a controller method that handles a task, but because the method is public we can use it outside of the controller. This example, assuming we are within a JController subclass method, will set the controller redirect to index.php?option=com_boxoffice:

$this->setRedirect('index.php?option=com_boxoffice');

As with the application redirect() method, there are two additional optional parameters that we can provide when using this method. These are used to queue a message.
This example sets the controller redirect, as per the previous example, and queues a notice type message that will be displayed after the redirect has successfully completed:

$this->setRedirect('index.php?option=com_boxoffice', 'Some Message', 'notice');

Unlike the application redirect() method, this method does not immediately queue the optional message, redirect the user's browser, and end the application. To do this we must use the JController redirect() method. It is normal, in components that use redirects, to execute the controller redirect() method after the controller has executed a given task. This is normally done in the root component file, as this example demonstrates:

$controller = new BoxofficeController();
$controller->execute(JRequest::getCmd('task'));
$controller->redirect();

Component XML metadata files and menu parameters

When we create menu items, if a component has a selection of views and layouts, we can choose which view and which layout we want to use. We can create an XML metadata file for each view and layout. In these files we can describe the view or layout, and we can define extra parameters for the menu item specific to the specified layout. Our component frontend has a single view with two layouts: default.php and list.php. The next figure describes the folder structure we would expect to find in the views folder (for simplicity, only the files and folders that we are discussing are included in the figure): When an administrator creates a link to this view, the options displayed will not give any information beyond the names of the folders and files described above, as the next screenshot demonstrates: The first element of this list that we will customize is the view name, Revue. To do this we must create a file in the revue folder called metadata.xml.
This example customizes the name and description of the revue view:

<?xml version="1.0" encoding="utf-8"?>
<metadata>
    <view title="Movie Revues">
        <message>
            <![CDATA[Movie Revues]]>
        </message>
    </view>
</metadata>

Now if an administrator were to view the list of menu item types, Revue would be replaced with the text Movie Revues, as defined in the view tag title attribute. The description, defined in the message tag, is displayed when the mouse cursor is over the view name. The next task is to customize the definitions of the layouts, default.php and list.php. Layout XML metadata files are located in the tmpl folder and are named the same as the corresponding layout template file. For example, the XML metadata file for default.php would be named default.xml. So we need to add the files default.xml and list.xml to the tmpl folder. Within a layout XML metadata file, there are two main tags in which we are interested: layout and state. Here is an example of an XML metadata file, default.xml:

<?xml version="1.0" encoding="utf-8"?>
<metadata>
    <layout title="Individual Revue">
        <message>
            <![CDATA[Individual movie revue.]]>
        </message>
    </layout>
    <state>
        <name>Individual Revue</name>
        <description>Individual movie revue.</description>
    </state>
</metadata>

And here is the list.xml file:

<?xml version="1.0" encoding="utf-8"?>
<metadata>
    <layout title="Revue List">
        <message>
            <![CDATA[Summary list of revues.]]>
        </message>
    </layout>
    <state>
        <name>Revue List</name>
        <description>Summary list of revues.</description>
    </state>
</metadata>

At first glance it may seem odd that we appear to be duplicating information in the layout and state tags. However, the layout tag includes information that is displayed in the menu item type list (essentially an overview). The state tag includes information that is displayed during the creation of a menu item that uses the layout. There are occasions when a more detailed description is required when we come to define a menu item.
For example, we may want to warn the user that they must fill in a specific menu parameter. We will discuss menu parameters in a moment.

Assuming we created the default.xml and list.xml files as shown previously, our menu item type list would now appear as follows:

Now that we know how to modify the names and descriptions of views and layouts, we can investigate how to define custom menu parameters. There are many different types of parameter that we can define. Before you continue, you might want to familiarize yourself with this list of parameter types because we will be using them in the examples. A complete description of these parameters is available in Appendix B, Parameters (Core Elements):

category
editors
filelist
folderlist
helpsites
hidden
imagelist
languages
list
menu
menuitem
password
radio
section
spacer
sql
text
textarea
timezones

Menu parameters can be considered as being grouped into several categories:

System
Component
State
URL
Advanced

The System parameters are predefined by Joomla! (held in the administrator/components/com_menus/models/metadata/component.xml file). These parameters are used to encourage standardization of some common component parameters. System parameters are shown under the heading Parameters (System); we cannot prevent these parameters from being displayed.

The Component parameters are those parameters that are defined in the component's config.xml file. Note that changing these parameters when creating a new menu item only affects the menu item, not the entire component. In essence, this is a form of overriding. This form of overriding is not always desirable; it is possible to prevent the component parameters from being shown when creating or editing a menu item.
To do this we add the attribute menu to the root tag (config) of the component config.xml file and set the value of the attribute to hide:

<config menu="hide">
    ...
</config>

The remaining parameter groups—State, URL, and Advanced—are defined on a per layout basis in the layout XML metadata files inside the state tag. These are the groups in which we are most interested.

The State parameters are located in a tag called params. In this example, which builds on our list.xml file, we add two parameters: a text field named revue_heading and a radio option named show_heading:

<?xml version="1.0" encoding="utf-8"?>
<metadata>
    <layout title="Revue List">
        <message>
            <![CDATA[Summary list of revues.]]>
        </message>
    </layout>
    <state>
        <name>Revue List</name>
        <description>Summary list of revues.</description>
        <params>
            <param type="radio" name="show_heading" label="Show Heading"
                description="Display heading above revues." default="0">
                <option value="0">Hide</option>
                <option value="1">Show</option>
            </param>
            <param type="text" name="revue_heading" label="Revue Heading"
                description="Heading to display above the revues."
                default="Box Office Revues" />
        </params>
    </state>
</metadata>

When an administrator creates a new menu item for this layout, these two parameters will be displayed under the heading Parameters (Basic). The parameters are not presented under a State heading, because State and URL parameters are consolidated into one section. URL parameters always appear above State parameters.

We define URL parameters in much the same way, only this time we place them in a tag named url. The URL parameters are automatically appended to the URI; this means that we can access these parameters using JRequest. These parameters are of particular use when we are creating a layout that is used to display a single item that is retrieved using a unique ID. If we use these parameters to define an ID that is retrieved from a table, we should consider using the often overlooked sql parameter type.
The following example builds on the previous example, and adds the URL parameter id, which is extracted from the #__boxoffice_revues table:

<?xml version="1.0" encoding="utf-8"?>
<metadata>
    <layout title="Revue List">
        <message>
            <![CDATA[Summary list of revues.]]>
        </message>
    </layout>
    <state>
        <name>Revue List</name>
        <description>Summary list of revues.</description>
        <url>
            <param type="sql" name="id" label="Revue:"
                description="Revue to display"
                query="SELECT id AS value, title AS id FROM #__boxoffice_revues" />
        </url>
        <params>
            <param type="radio" name="show_heading" label="Show Heading"
                description="Display heading above revues." default="0">
                <option value="0">Hide</option>
                <option value="1">Show</option>
            </param>
            <param type="text" name="revue_heading" label="Revue Heading"
                description="Heading to display above the revues."
                default="Box Office Revues" />
        </params>
    </state>
</metadata>

The query might be slightly confusing if you are not familiar with the sql parameter type. The query must return two fields, value and id: value specifies the value of the parameter, and id specifies the identifier displayed in the drop-down box that is rendered for the parameter. When using the sql parameter type, remember, if applicable, to include a WHERE clause so that only published (or equivalent) items are displayed.

The Advanced parameters are specifically for defining parameters that are more complex than the State parameters. These parameters are defined in the advanced tag.
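As an aside before the advanced tag example, the WHERE-clause tip above can be illustrated with a filtered variant of the sql parameter. This is a hypothetical sketch: it assumes the #__boxoffice_revues table has a published column, which is not shown in the original text:

```xml
<!-- Hypothetical variant: only published revues appear in the drop-down -->
<param type="sql" name="id" label="Revue:"
    description="Revue to display"
    query="SELECT id AS value, title AS id FROM #__boxoffice_revues WHERE published = 1" />
```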
This example adds an advanced parameter called list_by_cat:

<?xml version="1.0" encoding="utf-8"?>
<metadata>
    <layout title="Revue List">
        <message>
            <![CDATA[Summary list of revues.]]>
        </message>
    </layout>
    <state>
        <name>Revue List</name>
        <description>Summary list of revues.</description>
        <url>
            <param type="sql" name="id" label="Revue:"
                description="Revue to display"
                query="SELECT id AS value, title AS id FROM #__boxoffice_revues" />
        </url>
        <params>
            <param type="radio" name="show_heading" label="Show Heading"
                description="Display heading above revues." default="0">
                <option value="0">Hide</option>
                <option value="1">Show</option>
            </param>
            <param type="text" name="revue_heading" label="Revue Heading"
                description="Heading to display above the revues."
                default="Box Office Revues" />
        </params>
        <advanced>
            <param type="radio" name="list_by_cat" label="List by Genre"
                description="List revues by genre." default="0">
                <option value="0">No</option>
                <option value="1">Yes</option>
            </param>
        </advanced>
    </state>
</metadata>

Advanced parameters will appear under the Parameters (Advanced) heading, and Component parameters under the Parameters (Component) heading. The resultant parameters area for this layout will appear as follows:

All name and description elements from the XML metadata files will be translated into the currently selected locale language.

When we save a menu item, all of the parameters, except URL parameters, are saved to the params field in the menu item record. This means that we can end up with naming conflicts between our parameters. We must ensure that we do not name any two parameters the same. This includes not using the predefined System parameter names. This list details the System parameter names:

page_title
show_page_title
pageclass_sfx
menu_image
secure

Once we have successfully created the necessary XML, we will be able to access the parameters from within our component using a JParameter object.
This is described in the next section.
Packt
01 Jun 2010
8 min read

AJAX Implementation in APEX

APEX introduced AJAX support in version 2.0 (the product was called HTML DB back then). The support includes a dedicated AJAX framework that allows us to use AJAX in our APEX applications, and it covers both the client and the server sides.

AJAX support on the client side

The APEX built-in JavaScript library includes a special JavaScript file with the implementation of the AJAX client-side components. In earlier versions this file was called htmldb_get.js, and in APEX 3.1 it was changed to apex_get_3_1.js. In version 3.1, APEX also started to implement a JavaScript namespace in the apex_ns_3_1.js file. Within that file there is a definition of an apex.ajax namespace. I'm not mentioning the names of these files just for the sake of it. As the AJAX framework is not officially documented within the APEX documentation, these files can be a very important and useful source of information.

By default, these files are automatically loaded into every application page as part of the #HEAD# substitution string in the Header section of the page template. This means that, by default, AJAX functionality is available to us on every page of our application, without taking any extra measures.

The htmldb_Get object

The APEX implementation of AJAX is based on the htmldb_Get object and, as we'll see, creating a new instance of htmldb_Get is always the first step in performing an AJAX request. The htmldb_Get constructor function has seven parameters:

function htmldb_Get(obj,flow,req,page,instance,proc,queryString)

1—obj

The first parameter is a String that can be set to null, the name of a page item (DOM element), or an element ID. Setting this parameter to null will cause the result of the AJAX request to be assigned to a JavaScript variable.
We should use this value every time we need to process the AJAX returned result, as in the cases where we return XML or JSON formatted data, or when we are relying on the returned result further along in our JavaScript code flow. The APEX built-in JavaScript library defines, in the apex_builder.js file (which is also loaded into every application page, just like apex_get_3_1.js), a JavaScript global variable called gReturn. You can use this variable and assign it the AJAX returned result.

Setting this parameter to the name (ID) of a page item will set the item value property with the result of the AJAX call. You should make sure that the result of the AJAX call matches the nature of the item value property. For example, if you are returning a text string into a text item, it will work just fine. However, if you are returning an HTML snippet of code into the same item, you'll most likely not get the result you wanted.

Setting this parameter to a DOM element ID, which is not an input item on the page, will set its innerHTML property to the result of the AJAX call. Injecting HTML code using the innerHTML property is a cross-browser issue. Moreover, we can't always set innerHTML along the DOM tree. To avoid potential problems, I strongly recommend that you use this option with <div> elements only.

2—flow

This parameter represents the application ID. If we are calling htmldb_Get() from an external JavaScript file, this parameter should be set to $v('pFlowId') or its equivalent in version 3.1 or before ($x('pFlowId').value or html_GetElement('pFlowId').value). This is also the default value, in case this parameter is left null. If we are calling htmldb_Get() as part of inline JavaScript code, we can use the Substitution String notation &APP_ID. (just to remind you that the trailing period is part of the syntax).
Less common, but if you are using the Oracle Web Toolkit to generate dynamic code (for dynamic content) that includes AJAX, you can also use the bind variable notation :APP_ID. (in this case, the period is just a punctuation mark).

3—req

This String parameter stands for the REQUEST value. Using the keyword APPLICATION_PROCESS with this parameter allows us to name an application level On Demand—PL/SQL Anonymous Block—process that will be fired as part of the AJAX server-side processing. For example: 'APPLICATION_PROCESS=demo_code'. This parameter is case sensitive and, as a String, should be enclosed in quotes. If, as part of the AJAX call, we are not invoking an on-demand process, this parameter should be set to null (which is its default value).

4—page

This parameter represents an application page ID. The APEX AJAX process allows us to invoke any application page, run it in the background on the server side, and then clip portions of the generated HTML code for this page into the AJAX calling page. In these cases, we should set this parameter to the page ID that we want to pull from. The default value of this parameter is 0 (this stands for page 0). However, this value can be problematic at times, especially when page 0 has not been defined in the application, or when there are inconsistencies between the Authorization scheme or the page Authentication (such as Public and Required Authentication) of page 0 and the AJAX calling page. These inconsistencies can fail the execution of the AJAX process. In cases where you are not pulling information from another page, the safe bet is to set this parameter to the page ID of the AJAX calling page, using $v('pFlowStepId') or its equivalent for versions earlier than 3.1. In the case of inline code, the &APP_PAGE_ID. Substitution String can also be used.
Using the calling page ID as the default value for this parameter can be considered good practice even for upcoming APEX versions, where implementation of page-level on-demand processes will probably be introduced. I hope you remember that, as of version 3.2, we can only define on-demand processes at the application level.

5—instance

This parameter represents the APEX session ID and should almost always be left null (personally, I have never encountered the need to set it otherwise). In this case, it will be populated with the result of $v('pInstance') or its equivalents in earlier versions.

6—proc

This String parameter allows us to invoke a stored or packaged procedure on the database as part of the AJAX process. The common behavior of the APEX AJAX framework is to use an application level On Demand PL/SQL Anonymous Block process as the logic of the AJAX server-side component. In this case, the on-demand process is named through the third parameter—req—using the keyword APPLICATION_PROCESS, and this parameter—proc—should be left null. The parameter will be populated with its default value of 'wwv_flow.show' (the single quotes are part of the syntax, as this is a String parameter).

However, the APEX AJAX framework also allows us to invoke an external (to APEX) stored (or packaged) procedure as the logic of the AJAX server side. In this case, we can utilize already existing logic in the database. Moreover, we can benefit from the "regular" advantages of stored procedures, such as pre-compiled code for better performance, or the option to use wrapped PL/SQL packages, which can protect our business logic better (the APEX on-demand PL/SQL process can be accessed on the database level as clear text).

The parameter should be formatted as a URL and can be in the form of a relative URL. In this case, the system will complete the relative URL into a full path URL based on the current window.location.href property.
As with all stored or packaged procedures that we wish to use in our APEX application, the user (and, in the case of using a DAD, the APEX public user) should have the proper privileges on the stored procedure. In case the stored procedure, or the packaged procedure, doesn't have a public synonym defined for it, the procedure name should be qualified with the owner schema. For example, with inline code we can use:

'#OWNER#.my_package.my_proc'

For external code, you should retrieve the owner and make it available on the page (for example, assign it to a JavaScript global variable) or define a public synonym for the owner schema and package.

7—queryString

This parameter allows us to add parameters to the stored (packaged) procedure that we named in the previous parameter—proc. As we are ultimately dealing with constructing a URL that will be POSTed to the server, this parameter should take the form of POST parameters in a query string—pairs of name=value, delimited by ampersands (&).

Let's assume that my_proc has two parameters: p_arg1 and p_arg2. In this case, the queryString parameter should be set similar to the following:

'p_arg1=Hello&p_arg2=World'

As we are talking about components of a URL, the values should be escaped so that they form a legal URL. You can use the APEX built-in JavaScript function htmldb_Get_escape() to do that.

If you are using the req parameter to invoke an APEX on-demand process with your AJAX call, the proc and queryString parameters should be left null. In this case, you can close the htmldb_Get() syntax right after the page parameter. If, on the other hand, you are invoking a stored (packaged) procedure, the req parameter should be set to null.
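To make the name=value escaping concrete, here is a small standalone sketch. It uses the standard encodeURIComponent() in place of APEX's htmldb_Get_escape(), since the latter only exists inside an APEX page, and the helper name buildQueryString is purely illustrative:

```javascript
// Illustrative helper: build a queryString value such as
// 'p_arg1=Hello&p_arg2=World' from a plain object of parameters.
// encodeURIComponent stands in for APEX's htmldb_Get_escape here.
function buildQueryString(params) {
  return Object.keys(params)
    .map(function (name) {
      return name + '=' + encodeURIComponent(params[name]);
    })
    .join('&');
}

// Example: escaping a value that contains characters illegal in a URL
var qs = buildQueryString({ p_arg1: 'Hello', p_arg2: 'World & more' });
// qs is "p_arg1=Hello&p_arg2=World%20%26%20more"
```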

Packt
28 May 2010
11 min read

Blogs and Forums using Plone 3

Blogs and forums have much to offer in a school setting. They help faculty and students communicate despite large class sizes. They engage students in conversations with each other. And they provide an easy way for instructors and staff members to build their personal reputations—and, thereby, the reputation of your institution. In this article, we consider how best to build blogs and forums in Plone. Along the way, we cite education-domain examples and point out tips for keeping your site stable and your users smiling.

Plone's blogging potential

Though Plone wasn't conceived as a blogging platform, its role as a full-fledged content management system gives it all the functionality of a blog and more. With a few well-placed tweaks, it can present an interface that puts users of other blogging packages right at home while letting you easily maintain ties between your blogs and the rest of your site.

Generally speaking, blog entries are:

Prominently labeled by date and organized in reverse chronological order
Tagged by subject
Followed by reader comments
Syndicated using RSS or other protocols

Plone provides all of these, with varying degrees of polish, out of the box:

News items make good blog entries, and the built-in News portlet lists the most recent few, in reverse chronological order and with publication dates prominently shown. A more comprehensive, paginated list can easily be made using collections.
Categories are a basic implementation of tags.
Plone's built-in commenting can work on any content type, News Items included.
Every collection has its own RSS feed.

Add-on products: free as in puppies

In addition to Plone's built-in tools, this article will explore the capabilities of several third-party add-ons. Open-source software is often called "free as in beer" or "free as in freedom". As is typical of Plone add-ons, the products we will consider are both. However, they are also "free as in puppies". Who can resist puppies?
They are heart-meltingly cute and loads of fun, but it's easy to forget, when their wet little noses are in your face, that they come with responsibility. Likewise, add-ons are free to install and use, but they also bring hidden costs:

Products can hold you back. If you depend on one that doesn't support a new version of Plone, you'll face a choice between the product and the Plone upgrade. This situation is most likely at major version boundaries: for example, upgrading from Plone 3.x to Plone 4. Minor upgrades, as from Plone 3.2 to 3.3, should be fairly uneventful. (This was not always true with Plone 2.x, but release numbering has since gotten a dose of sanity.)

One place products often fall short is uninstallation. It takes care to craft a quality uninstallation routine; low-quality or prerelease products sometimes fail to uninstall cleanly, leaving bits of themselves scattered throughout your site. They can even prevent your site from displaying any pages at all (often due to leaving remnants in portal_actions), and you may have to repair things by hand through the ZMI or, failing that, through an afternoon of fun with the Python debugger. The moral: even trying a product can be a risk. Test installation and uninstallation on a copy of your site before committing to one, and back up your Data.fs file before installing or uninstalling on production servers.

Pace of work varies widely. Reporting a bug against an actively developed product might get you a new release within the week. Hitting a bug in an abandoned one could leave you fixing it yourself or paying someone else to. (Fortunately, there are scads of Plone consultants for hire in the #plone IRC channel and on the plone-users mailing list.)

In addition to the above, products that add new content types (like blog entries, for instance) bring a risk of lock-in proportional to the amount of content you create with them.
If a product is abandoned by its maintainer, or you decide to stop using it for some other reason, you will need to migrate its content into some other type, either by writing custom scripts or by copying and pasting.

These considerations are major drivers of this article's recommendations. For each of the top three Plone blogging strategies, we'll outline its capabilities, tick off its pros and cons, and estimate how high-maintenance a puppy it will be. Remember, even though puppies can be some work, a well-chosen and well-trained one becomes a best friend for life.

News Items: blogging for the hurried or risk-averse

Using news items as blog entries is, in true Extreme Programming style, "the simplest thing that could possibly work". Nonetheless, it's a surprisingly flexible practice and will disappoint only if you need features like pings, trackbacks, and remote editor integration. Here is an example front page of a Plone blog built using only news items, collections, and the built-in portlets:

Structure of a news-item blog

A blog in Plone can be as simple as a folder full of News Items, further organized into subfolders if necessary. Add a collection showing the most recent News Items to the top-level folder, and set it as its default page. As illustrated below, use an Item Type criterion for the collection to pull in the News Items, and use a Location criterion to exclude those created outside the blog folder:

To provide pagination—recommended once the length of listings starts to noticeably impact download or render time—use the Limit Search Results option on the collection. One inconsistency is that only the Summary and Tabular Views on collections support pagination; Standard View (which shows the same information) does not. This means that Summary View, which sports a Read more link and is a bit more familiar to most blog users, is typically a good choice.

Go easy on the pagination

More items displayed per page is better.
User tests on prototypes of gap.com's online store have suggested that, at least when selling shirts, more get sold when all are on one big page. Perhaps it's because users are faced with a louder mental "Continue or leave?" when they reach the end of a page. Regardless, it's something to consider when setting page size using a collection's Number of Items setting; you may want to try several different numbers and see how it affects the frequency with which your listing pages show up as "exit pages" in a web analytics package like AWStats. As a starting point, 50 is a sane choice, assuming your listings show only the title and description of each entry (as the built-in views do). The ideal number will be a trade-off between tempting visitors to leave with page breaks and keeping load and render times tolerable.

Finally, make sure to sort the entries by publication date. Set this up on the front-page collection's Criteria tab by selecting Effective Date and reversing the display order:

As with all solutions in this article, a blog built on raw News Items can easily handle either single- or multi-author scenarios; just assign rights appropriately on the Sharing tab of the blog folder.

News Item pros and cons

Unadorned News Items are a great way to get started fast and confer practically zero upgrade risk, since they are maintained as part of Plone itself. However, be aware of these pointy edges you might bang into when using them as blog entries:

With the built-in views, logged-out users can't see the authors or the publication dates of entries. Even logged-in users see only the modification dates unless they go digging through the workflow history.

Categories applied to a News Item appear on its page, but clicking them takes you to a search for all items (both blog-related and otherwise) having that category. This could be a bug or a feature, depending on your situation.
However, the ordering of the search results is unpredictable, and that is definitely unhelpful.

The great thing about plain News Items is that there's a forward migration path. QuillsEnabled, which we'll explore later, can be layered atop an existing news-item-based blog with no migrations necessary and removed again if you decide to go back. Thus, a good strategy may be to start simple, with plain news items, and go after more features (and risk) as the need presents itself.

Scrawl: a blog with a view

One step up from plain News Items is Scrawl, a minimalist blog product that adds only two things:

A custom Blog Entry type, which is actually just a copy of News Item.
A purpose-built Blog view that can be applied to folders or collections, which are otherwise used just as with raw News Items.

Here are both additions in action:

Scrawl's Blog Entry isn't quite a verbatim copy of News Item; Scrawl makes a few tweaks:

Commenting is turned on for new Blog Entries, without which authors would have to enable it manually each time. The chances of that happening are slim, since it's buried on the Edit → Settings tab, and users seldom stray from the default tab when editing.

Blog Entry's default view is a slightly modified version of News Item's: it shows the author's name and the posting date even to unauthenticated users—and in a friendly "Posted by Fred Finster" format. It also adds a Permalink link, lest you forfeit crosslinks from users who know no other way of finding an entry's address.

Calm your ringing phone by cloning types

Using a custom content type for blog entries—even if it's just a copy of an existing one—has considerable advantages. For one, you can match contributors' vocabulary: assuming contributors think of part of your site as a blog (which they probably will if the word "blog" appears anywhere onscreen), they won't find it obvious to add "news items" there. Adding a "blog entry," on the other hand, lines up naturally with their expectations.
This little trick, combined with judicious use of the Add new… → Restrictions… feature to pare down their options, will save hours of your time in training and support calls.

A second advantage of a custom type is that it shows separately in Plone's advanced search. Visitors, like contributors, will identify better with the "blog entry" nomenclature. Plus, sometimes it's just plain handy to limit searches to only blogs.

This type-cloning technique isn't limited to blog entries; you can clone and rename any content type: just visit portal_types in the ZMI, copy and paste a type, rename it, and edit its Title and Description fields. One commonly cloned type is File. Many contributors, even experts in noncomputer domains, aren't familiar with the word file. Cloning it to create PDF File, Word Document, and so on can go a long way toward making them comfortable using Plone.

Pros and cons of Scrawl

Scrawl's biggest risk is lock-in: since it uses its own Blog Entry content type to store your entries, uninstalling it leaves them inaccessible. However, because the Blog Entry type is really just the News Item type, a migration script is easy to write:

# Turn all Blog Entries in a Plone site into News Items.
#
# Run by adding a "Script (Python)" in the ZMI (it doesn't matter where)
# and pasting this in.
from Products.CMFCore.utils import getToolByName

portal_catalog = getToolByName(context, 'portal_catalog')
for brain in portal_catalog(portal_type='Blog Entry'):
    # Get the actual blog entry from the catalog entry.
    blog_entry = brain.getObject()
    blog_entry.portal_type = 'News Item'
    # Update the catalog so searches see the new info:
    blog_entry.reindexObject()
If you have news items that shouldn't be converted to blog entries, your catalog query will have to be more specific, perhaps adding a path keyword argument, as in portal_catalog(portal_type='News Item', path='/my-plonesite/ blog-folder'). Aside from that, Scrawl is pretty risk-free. Its simplicity makes it unlikely to accumulate showstopping bugs or to break in future versions of Plone, and, if it does, you can always migrate back to news items or, if you have some programming skill, maintain it yourself—it's only 1,000 lines of code.