How-To Tutorials - Programming


Setting Up a Development Environment

Packt
13 Jan 2012
14 min read
Selecting your virtual environment

Prisoners serving life sentences in Canada have what is known as a faint hope clause, which gives them a glimmer of a chance of parole after 15 years. Those waiting for Microsoft to provide a version of Virtual PC that can run Virtual Hard Drives (VHDs) hosting 64-bit operating systems (such as Windows Server 2008 R2), however, have no such hope of ever seeing that piece of software. But miracles do happen, and I hope that the release of a 64-bit capable Virtual PC renders this section of the article obsolete. If this has in fact happened, go with it and proceed to the following section.

Getting ready

Head into your computer's BIOS settings and enable the virtualization setting. The exact setting you are looking for varies widely, so please consult your manufacturer's documentation. This setting seems to be universally defaulted to off, so you will almost certainly need to perform this step.

How to do it...

Since you are still reading, however, it is safe to say that a miracle has not yet happened. Your first task is to select a suitable virtualization technology that can support a 64-bit guest operating system. The recipe here is to consider the choices in this order, with the outcome being your selected virtual environment:

Microsoft virtualization: Hyper-V certainly has the ability to create and run VHDs with 64-bit operating systems. It's free—that is, you can install the Hyper-V role—but it requires the base operating system to be Windows Server 2008 R2, and it can be brutal to get that running properly on something like a laptop, primarily because of driver issues. If your laptop is running Windows 7, look at creating a dual boot with boot to VHD, where the other boot option/partition is Windows Server 2008 R2. The main disadvantage is coming up with a (preferably licensed) installation of Windows Server 2008 R2 as the main computer operating system (or as a dual boot option). Or perhaps your company runs Hyper-V on their server farm and would be willing to host your development environment for you? Either way, if you have managed to get access to a Hyper-V server, you are good to go!

VMware Workstation: Go to http://www.vmware.com and download my absolute favorite virtualization technology—VMware Workstation. It is fully featured, powerful, and runs on Windows 7. I have used it for years and love it. You must of course pay for a license, but please believe me, it is a worthwhile investment; you can sign up for a 30-day trial to explore the benefits. Note that you only need one copy of VMware Workstation to create a virtual machine. Once you have created it, you can run it anywhere using the freely available VMware Player.

Oracle VirtualBox: Go to http://www.virtualbox.org/ and download this free software that runs on Windows 7 and can create and host 64-bit guest operating systems. The reason this is at the bottom of the list is that I personally do not have experience using it. However, I have colleagues who have used it and have had no problems with it. Give it a try and see if it works as well as the paid VMware option.

With your selected virtualization technology in hand, head to the next section to install and configure Windows Server 2008 R2, which is the base operating system required for an installation of SharePoint Server 2010.
Installing and configuring Windows Server 2008 R2

SharePoint 2010 requires the Windows Server 2008 R2 operating system in order to run. In this recipe, we will configure the components of Windows Server 2008 R2 necessary to get ready to install SQL Server 2008 and SharePoint 2010.

Getting ready

Download Windows Server 2008 R2 from your MSDN subscription, or type windows server 2008 R2 trial download into your favorite search engine to download the 180-day trial from the Microsoft site. This article does not cover installing the base operating system itself; the specific instructions will depend on your virtualization software. Generally, the operating system will be provided as an ISO image (the file extension will be .iso). An ISO file is a disk image, and all virtualization software that I am aware of will let you mount (attach) an ISO image to the virtual machine as a CD drive. This means that when you elect to create a new virtual machine, you will normally be prompted for the ISO image, and the installation of the operating system should proceed in a familiar and relatively automated fashion. So for this recipe, ready means that you have your virtualization software up and running, the Windows Server 2008 R2 base operating system is installed, and you are able to log in as the Administrator (effectively logging in for the first time).

How to do it...

Log in as the Administrator. You will be prompted to change the password the first time—I suggest choosing a very commonly used Microsoft password—Password1. Feel free to select a password of your choice instead, but use it consistently throughout. The Initial Configuration Tasks screen will come up automatically. On this screen:

Activate Windows using your 180-day trial key or your MSDN key. Select Provide computer name and domain. Change the computer name to a simpler one of your choice; in my case, I named the machine OPENHIGHWAY. Leave the Member of option as Workgroup. The computer will require a reboot. In the Update this server section, choose Download and install updates. Click on the Change settings link, select the option Never check for updates, and click OK. Click the Check for updates link. The important updates will be selected; click on Install Updates. Now is a good time for a coffee break! You will need to reboot the server when the updates complete. In the Customize this server section, click on Add Features. Select the Desktop Experience and Windows PowerShell Integrated Scripting Environment options. Choose Add Required Features when prompted to do so, and reboot the server when prompted. If the Initial Configuration Tasks screen appears now, or in the future, you may select the checkbox for Do not show this window at logon.

We will continue configuration from the Server Manager, which should be displayed on your screen. If not, launch the Server Manager using the icon on the taskbar:

OPTIONAL: Click on Configure Remote Desktop if you prefer accessing your virtual machine using Remote Desktop (RDP) instead of the virtual machine's console software. In the Security Information section, click Go to Windows Firewall. Click on the Windows Firewall Properties link. From the dialog, go to each of the tabs—Domain Profile, Private Profile, and Public Profile—set the Firewall State to Off on each tab, and click OK.
Click on the Server Manager node, and from the main screen, click on the Configure IE ESC link. Set both options to Off and click OK. From the Server Manager, expand the Configuration node, then expand the Local Users and Groups node, and click on the Users folder. Right-click on the Administrator account and select Properties. Select the option Password never expires and click OK. From the Server Manager, click the Roles node and click the Add Roles link. From the introductory screen, click Next, select the checkbox for Active Directory Domain Services, click Next, click Next again, and then click Install. After completion, click the Close this wizard and launch the Active Directory Domain Services Installation Wizard (dcpromo.exe) link. Now, carry out the following steps:

From the welcome screen of the wizard that pops up, select the checkbox Use advanced mode installation, click Next, and click Next again on the Operating System Compatibility screen. Select the option Create a new domain in a new forest and click Next. Choose your domain (FQDN)! This is completely internal to your development server and does not have to be real; for article purposes, I am using theopenhighway.net. Then click Next. From the Set Forest Functional Level drop-down, choose Windows Server 2008 R2 and click Next. Click Next on the Additional Domain Controller Options screen. Select Yes on the static IP assignment screen. Click Yes on the DNS delegation warning screen. Click Next on the Location for Database, Log Files, and SYSVOL screen. On the Directory Services Restore Mode Administrator Password screen, enter the same password that you used for the Administrator account (in my case, Password1) and click Next. Click Next on the Summary screen. Select Reboot on completion, or otherwise reboot the server after the installation completes.

You will now configure a user account that will run the application pools for the SharePoint web applications in IIS. From the Server Manager, expand the Roles node. Keep expanding Active Directory Domain Services until you see the Users folder, then click on the Users folder. Now carry out the following:

Right-click on the Users folder and select New | User. Enter SP_AppPool in the full name field and also in the user logon field, and click Next. Enter the password as Password1 (or the same as you had selected for the Administrator account). Deselect the option User must change password at next logon and select the option Password never expires. Click Next and then click Finish.

A loopback check is a security feature, introduced in Windows Server 2003 SP1, that mitigates reflection attacks. With it enabled, you will likely encounter connection issues with your local websites, so it is universally recommended that you disable the loopback check on a development server. This is done from the registry editor (or from the command line, as shown in the sketch after these steps): Click the Start menu button, choose Run…, enter Regedit, and click OK to bring up the registry editor. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa. Right-click the Lsa node and select New | DWORD (32-bit) Value. In place of New Value #1, type DisableLoopbackCheck. Right-click DisableLoopbackCheck, select Modify, change the value to 1, and click OK.
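If you would rather script these tweaks than click through the UI, both the firewall change from earlier in this recipe and this registry edit can be done from an elevated PowerShell prompt. The following is a minimal sketch (the netsh syntax and registry path are the standard ones on Windows Server 2008 R2):

    # A sketch of the two tweaks above, run from an elevated PowerShell prompt.
    # Turn the Windows Firewall off for all three profiles (development only!)
    netsh advfirewall set allprofiles state off

    # Create the DisableLoopbackCheck value described in the steps above
    New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" `
        -Name "DisableLoopbackCheck" -PropertyType DWord -Value 1 -Force | Out-Null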
Congratulations! You have successfully configured Windows Server 2008 R2.

There's more...

The Windows Shutdown Event Tracker is simply annoying on a development machine. To turn this feature off, click the Start button, select Run…, enter gpedit.msc, and click OK. Scroll down, right-click on Display Shutdown Event Tracker, and select Edit. Select the Disabled option and click OK.

Installing and configuring SQL Server 2008 R2

SharePoint 2010 requires Microsoft SQL Server as a fundamental component of the overall SharePoint architecture. The content that you plan to manage in SharePoint, including web content and documents, literally is stored within and served from SQL Server databases. The SharePoint 2010 architecture itself relies on information stored in SQL Server databases, such as configuration and the many service applications. In this recipe, we will install and configure the components of SQL Server 2008 R2 necessary to install SharePoint 2010.

Getting ready

I do not recommend SQL Server Express for your development environment, although it is a possible, free, and valid choice for an installation of SharePoint 2010. In my personal experience, I have valued the full power and flexibility of the full version of SQL Server, as well as not having to live with the constraints and limitations of SQL Express. Besides, there is another little reason too! The Enterprise edition of SQL Server is either readily available with your MSDN subscription or downloadable as a trial from the Microsoft site. Download SQL Server 2008 R2 Enterprise from your MSDN subscription, or type sql server 2008 R2 enterprise trial download into your favorite search engine to download the 180-day trial from the Microsoft site. If you have MSDN software, you will be provided with an ISO image that you can attach to the virtual machine. If you download SQL Server from the Microsoft site as a trial, extract the software (it is a self-extracting EXE) on your local machine, and then share the folder with your virtual machine. Finally, run the Setup.exe file.

How to do it...

Here is your recipe for installing SQL Server 2008 R2 Enterprise. Carry out the following steps:

You will be presented with the SQL Server Installation Center; on the left side of the screen, select Installation. For the choices presented on the Installation screen, select New installation or add features to an existing installation. The Setup Support Rules will run to identify any possible problems that might occur when installing SQL Server; all rules should pass. Click OK to continue. You will be presented with the SQL Server 2008 R2 Setup screen. On the first screen, you can select an evaluation or use your product key (from, for example, MSDN), and then click Next. Accept the terms in the license, but do not check the Send feature usage data to Microsoft checkbox, and click Next. On the Setup Support Files screen, click Install. All tests will pass except for a warning that you can safely ignore (the one noting that we are installing on a domain controller); click Next. On the Setup Role screen, select SQL Server Feature Installation and click Next.
On the Feature Selection screen, carry out the following tasks: in Instance Features, select Database Engine Services (with both SQL Server Replication and Full-Text Search), Analysis Services, and Reporting Services; in Shared Features, select Business Intelligence Development Studio, Management Tools - Basic (with Management Tools - Complete), and Microsoft Sync Framework. Finally, click Next. On the Installation Rules screen, click Next. On the Instance Configuration screen, click Next. On the Disk Space Requirements screen, click Next. On the Server Configuration screen: set the Startup Type for SQL Server Agent to Automatic, click on the Use the same account for all SQL Server services button, select the account NT AUTHORITY\SYSTEM, and click OK. Finally, click Next. On the Database Engine Configuration screen: on the Account Provisioning tab, click the Add Current User button under Specify SQL Server administrators. Finally, click Next. On the Analysis Services Configuration screen: on the Account Provisioning tab, click the Add Current User button under Specify which users have administrative permissions for Analysis Services. Finally, click Next. On the Reporting Services Configuration screen, select the option Install, but do not configure the report server, and click Next. On the Error Reporting screen, click Next. On the Installation Configuration Rules screen, click Next. On the Ready to Install screen, click Install. Your patience will be rewarded with the Complete screen! Finally, click Close. You can now close the SQL Server Installation Center.

Configure SQL Server security for the SP_AppPool account: Click Start | All Programs | SQL Server 2008 R2 | SQL Server Management Studio. On Connect to Server, type a period (.) in the Server name field and click Connect. Expand the Security node. Right-click Logins and select New Login. Use the Search function and enter SP_AppPool in the Enter the object name to select box. Click the Check Names button and then click OK. In my case, the properly formatted THEOPENHIGHWAY\SP_AppPool appears in the Login name textbox. On the Server Roles tab, ensure that the dbcreator and securityadmin roles are selected (in addition to the already selected public role). Finally, click OK.

Congratulations! You have successfully installed and configured SQL Server 2008 R2 Enterprise.
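If you prefer scripting to Management Studio, the last step can also be done with SQL Server Management Objects (SMO), which are installed with SQL Server 2008 R2. The following is a minimal sketch, assuming the THEOPENHIGHWAY domain and account names used in this recipe:

    # Sketch: create the SP_AppPool login and grant its server roles with SMO.
    # Domain and account names are the ones used in this recipe; adjust to taste.
    [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")

    $server = New-Object Microsoft.SqlServer.Management.Smo.Server(".")
    $login  = New-Object Microsoft.SqlServer.Management.Smo.Login($server, "THEOPENHIGHWAY\SP_AppPool")
    $login.LoginType = [Microsoft.SqlServer.Management.Smo.LoginType]::WindowsUser
    $login.Create()

    # dbcreator and securityadmin are the roles required by the SharePoint installer
    $server.Roles["dbcreator"].AddMember("THEOPENHIGHWAY\SP_AppPool")
    $server.Roles["securityadmin"].AddMember("THEOPENHIGHWAY\SP_AppPool")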


Gadgets in JIRA

Packt
29 Dec 2011
14 min read
(For more resources on JIRA, see here.)

Writing JIRA 4 gadgets

Gadgets are a big leap in JIRA's reporting features! The fact that JIRA is now an OpenSocial container lets users add useful gadgets (both JIRA's own and third-party ones) to its dashboard. At the same time, gadgets written for JIRA can be added to other containers like iGoogle, Gmail, and so on! In this recipe, we will have a look at writing a very simple gadget, one that says 'Hello from JTricks'. Keeping the content simple lets us concentrate more on writing the gadget!

Before we start writing the gadget, it is worth understanding the key components of a JIRA gadget:

The gadget XML, which is the most important part of a JIRA gadget. It holds the specification of the gadget and includes: the gadget characteristics (title, description, author's name, and so on); a screenshot and a thumbnail image (note that the screenshot is not used within Atlassian containers such as JIRA or Confluence; we can optionally add it if we want the gadget to be used in other OpenSocial containers); the required features that the gadget container must provide for the gadget; the user preferences, which will be configured by the gadget users; and the gadget content, created using HTML and JavaScript.

A screenshot and thumbnail image, used during preview and while selecting the gadget from the container.

An i18n property file, used for internationalization in the gadget.

Optional CSS and JavaScript files, used to render the display in the Content section of the gadget.

We will see each of them in this recipe.

Getting ready

Create a skeleton plugin using the Atlassian Plugin SDK.

How to do it...

The following are the steps to write our first gadget, one that shows the greetings from JTricks! Modify the plugin descriptor with the gadget module and the resources required for our gadget:

Add the gadget module in the plugin descriptor:

<gadget key="hello-gadget" name="Hello Gadget" location="hello-gadget.xml">
    <description>Hello Gadget!</description>
</gadget>

As you can see, this has a unique key and points to the location of the gadget XML! You can have as many gadget definitions as you want in your atlassian-plugin.xml file, but in our example, we stick with the preceding one.

Include the thumbnail and screenshot images as downloadable resources in the plugin descriptor. More can be learned at http://confluence.atlassian.com/display/JIRADEV/Downloadable+Plugin+Resources. In our example, the resources are added to the plugin descriptor as:

<resource type="download" name="screenshot.png" location="/images/screenshot.png"/>
<resource type="download" name="thumbnail.png" location="/images/thumbnail.png"/>

The location is relative to the src/main/resources folder in the plugin. As mentioned before, the screenshot is optional.
Add the i18n properties file that will be used in the gadget, also as a downloadable resource:

<resource type="download" name="i18n/messages.xml" location="i18n/messages.xml">
    <param name="content-type" value="text/xml; charset=UTF-8"/>
</resource>

The atlassian-plugin.xml will now look like this:

<atlassian-plugin key="com.jtricks.gadgets" name="Gadgets Plugin" plugins-version="2">
    <plugin-info>
        <description>Gadgets Example</description>
        <version>2.0</version>
        <vendor name="JTricks" url="http://www.j-tricks.com/" />
    </plugin-info>
    <gadget key="hello-gadget" name="Hello Gadget" location="hello-gadget.xml">
        <description>Hello Gadget!</description>
    </gadget>
    <resource type="download" name="screenshot.png" location="/images/screenshot.png"/>
    <resource type="download" name="thumbnail.png" location="/images/thumbnail.png"/>
    <resource type="download" name="i18n/messages.xml" location="i18n/messages.xml">
        <param name="content-type" value="text/xml; charset=UTF-8"/>
    </resource>
</atlassian-plugin>

Add the screenshot and thumbnail images under the src/main/resources/images folder. The thumbnail image should be 120 x 60 pixels.

Add the i18n properties file under the src/main/resources/i18n folder. The file name we defined is messages.xml. This file is an XML file wrapped within the messagebundle tag. Each property in the file is entered as an XML tag, as shown next:

<msg name="gadget.title">Hello Gadget</msg>

The msg tag has a name attribute, which is the property, and the corresponding value is enclosed in the msg tag. We use three properties in our example, and the entire file looks like the following:

<messagebundle>
    <msg name="gadget.title">Hello Gadget</msg>
    <msg name="gadget.title.url">http://www.j-tricks.com</msg>
    <msg name="gadget.description">Example Gadget from J-Tricks</msg>
</messagebundle>

Write the gadget XML. The gadget XML has a Module element at the root. It has mainly three elements underneath—ModulePrefs, UserPref, and Content—and we will write each of them in this example. The entire set of attributes and elements, and other details of the gadget specification, can be read at http://confluence.atlassian.com/display/GADGETDEV/Creating+your+Gadget+XML+Specification.

Write the ModulePrefs element. This element holds the information about the gadget. It also has two child elements—Require and Optional—that are used to define the required or optional features for the gadget. The following is how the ModulePrefs element looks in our example after it is populated with all the attributes:

<ModulePrefs title="__MSG_gadget.title__" title_url="__MSG_gadget.title.url__"
    description="__MSG_gadget.description__" author="Jobin Kuruvilla"
    author_email="[email protected]"
    screenshot='#staticResourceUrl("com.jtricks.gadgets:hello-gadget", "screenshot.png")'
    thumbnail='#staticResourceUrl("com.jtricks.gadgets:hello-gadget", "thumbnail.png")'
    height="150">
</ModulePrefs>

As you can see, it holds information like the title, title URL (to which the gadget title will link), description, author name and email, height of the gadget, and URLs to the screenshot and thumbnail images. Anything that starts with __MSG_ and ends with __ is a property that is referred from the i18n properties file. The height of the gadget is optional and is 200 by default. The images are referenced using #staticResourceUrl, where the first argument is the fully qualified gadget module key, which is of the form ${atlassian-plugin-key}:${module-key}. In our example, the plugin key is com.jtricks.gadgets
and the module key is hello-gadget.

Add the optional gadget directory feature inside ModulePrefs. This is currently supported only in JIRA:

<Optional feature="gadget-directory">
    <Param name="categories">
        Other
    </Param>
</Optional>

In the example, we add the category as Other! Other values supported for the category are: JIRA, Confluence, FishEye, Crucible, Crowd, Clover, Bamboo, Admin, Charts, and External Content. You can add the gadget to more than one category by adding the categories within the Param element, each on a new line.

Include required features, if there are any, under the XML tag Require. A full list of supported features can be found at http://confluence.atlassian.com/display/GADGETDEV/Including+Features+into+your+Gadget.

Add the Locale element to point to the i18n properties file:

<Locale messages="__ATLASSIAN_BASE_URL__/download/resources/com.jtricks.gadgets/i18n/messages.xml"/>

Here the property __ATLASSIAN_BASE_URL__ will be automatically substituted with JIRA's configured base URL when the gadget is rendered. The path to the property file here is __ATLASSIAN_BASE_URL__/download/resources/com.jtricks.gadgets, where com.jtricks.gadgets is the Atlassian plugin key. The path to the XML file, /i18n/messages.xml, is what was defined in the resource module earlier.

Add user preferences, if required, using the UserPref element. We omit this in our example, as the 'Hello Gadget' doesn't take any inputs from the user.

Add the Content for the gadget. This is where the gadget is rendered using HTML and JavaScript. In our example, we just need to provide the static text 'Hello From JTricks', which is fairly easy. The entire content is wrapped within <![CDATA[ and ]]>, so that it won't be treated as XML tags. The following is how it looks in our example:

<Content type="html" view="profile">
    <![CDATA[
        Hello From JTricks
    ]]>
</Content>

Our gadget's XML is now ready and looks like the following block of code:

<?xml version="1.0" encoding="UTF-8" ?>
<Module>
    <ModulePrefs title="__MSG_gadget.title__" title_url="__MSG_gadget.title.url__"
        description="__MSG_gadget.description__" author="Jobin Kuruvilla"
        author_email="[email protected]"
        screenshot='#staticResourceUrl("com.jtricks.gadgets:hello-gadget", "screenshot.png")'
        thumbnail='#staticResourceUrl("com.jtricks.gadgets:hello-gadget", "thumbnail.png")'
        height="150">
        <Optional feature="gadget-directory">
            <Param name="categories">
                Other
            </Param>
        </Optional>
        <Locale messages="__ATLASSIAN_BASE_URL__/download/resources/com.jtricks.gadgets/i18n/messages.xml"/>
    </ModulePrefs>
    <Content type="html" view="profile">
        <![CDATA[
            Hello From JTricks
        ]]>
    </Content>
</Module>

Package the plugin, deploy it, and test it.
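The skeleton creation, packaging, and deployment steps are normally driven from the command line with the Atlassian Plugin SDK. A sketch of the typical workflow, assuming the SDK's atlas-* scripts are on your PATH:

    atlas-create-jira-plugin   # generates the skeleton plugin (the "Getting ready" step)
    atlas-package              # builds the plugin JAR under the target/ directory
    atlas-run                  # starts a local JIRA instance with the plugin installed for testing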
How it works...

Once the plugin is deployed, we need to add the gadget to the JIRA dashboard. On the Add Gadget screen, note that the thumbnail is the one we packaged in the plugin, and that the gadget appears in the Other section. Once added, it appears in the Dashboards section.

There's more...

We can modify the look and feel of the gadget by adding more HTML or gadget preferences! For example, <font color="red">Hello From JTricks</font> will make the text appear in red. We can adjust the size of the gadget using the dynamic-height feature. We should add the following under the ModulePrefs element:

<Require feature="dynamic-height"/>

We should then invoke gadgets.window.adjustHeight(); whenever the content is reloaded. For example, we can do it in a window onload event, as shown next:

<script type="text/javascript" charset="utf-8">
    function resize() {
        gadgets.window.adjustHeight();
    }
    window.onload = resize;
</script>

The gadget XML file, in this case, will look like this:

<?xml version="1.0" encoding="UTF-8" ?>
<Module>
    <ModulePrefs title="__MSG_gadget.title__" title_url="__MSG_gadget.title.url__"
        description="__MSG_gadget.description__" author="Jobin Kuruvilla"
        author_email="[email protected]"
        screenshot='#staticResourceUrl("com.jtricks.gadgets:hello-gadget", "screenshot.png")'
        thumbnail='#staticResourceUrl("com.jtricks.gadgets:hello-gadget", "thumbnail.png")'
        height="150">
        <Optional feature="gadget-directory">
            <Param name="categories">
                Other
            </Param>
        </Optional>
        <Require feature="dynamic-height"/>
        <Locale messages="__ATLASSIAN_BASE_URL__/download/resources/com.jtricks.gadgets/i18n/messages.xml"/>
    </ModulePrefs>
    <Content type="html" view="profile">
        <![CDATA[
            <script type="text/javascript" charset="utf-8">
                function resize() {
                    gadgets.window.adjustHeight();
                }
                window.onload = resize;
            </script>
            Hello From JTricks
        ]]>
    </Content>
</Module>

The gadget's height is now adjusted to just fit the text!

Invoking REST services from gadgets

In the previous recipe, we saw how to write a gadget with static content. In this recipe, we will have a look at creating a gadget with dynamic content, that is, data coming from the JIRA server. JIRA uses REST services to communicate between the gadgets and the server. In this recipe, we will use an existing REST service.

Getting ready

Create the Hello Gadget, as described in the previous recipe.

How to do it...

Let us consider a simple modification to the existing Hello Gadget to understand the basics of invoking REST services from gadgets. We will greet the current user by retrieving the user details from the server, instead of displaying the static text Hello From JTricks. JIRA ships with some inbuilt REST methods, one of which retrieves the details of the current user. The method can be reached at the URL /rest/gadget/1.0/currentUser. We will use this method to retrieve the current user's full name and then display it in the gadget greeting. If the user's name is Jobin Kuruvilla, the gadget will display the message Hello, Jobin Kuruvilla.

As we are only changing the content of the gadget, the only modification required is in the gadget XML, which is hello-gadget.xml in our example. Only the Content element needs to be modified; it will now invoke the REST service and render the content. The following are the steps:

Include the common Atlassian gadget resources:

#requireResource("com.atlassian.jira.gadgets:common")
#includeResources()

#requireResource will bring the JIRA gadget JavaScript framework into the gadget's context. #includeResources will write out the HTML tags for the resource in place. Check out http://confluence.atlassian.com/display/GADGETDEV/Using+Web+Resources+in+your+Gadget for more details.

Construct a gadget object as follows:

var gadget = AJS.Gadget

The gadget object has four top-level options:

baseUrl: An option to pass the base URL. It is a mandatory option; we use __ATLASSIAN_BASE_URL__ here, which will be rendered as JIRA's base URL.
useOauth: An optional parameter, used to configure the type of authentication, which must be a URL. /rest/gadget/1.0/currentUser is commonly used.
config: Another optional parameter, used only if there are any configuration options for the gadget.
view: Used to define the gadget's view.

In our example, we don't use authentication or any configuration options. We will just go with the baseUrl and view options. The following is how the gadget is created using JavaScript:

<script type="text/javascript">
    (function () {
        var gadget = AJS.Gadget({
            baseUrl: "__ATLASSIAN_BASE_URL__",
            view: {
                ................
            }
        });
    })();
</script>

Populate the gadget view. The view object has the following properties:

enableReload: Optional. Used to reload the gadget at regular intervals.
onResizeReload: Optional. Used to reload the gadget when the browser is resized.
onResizeAdjustHeight: Optional, and used along with the dynamic-height feature. This will adjust the gadget height when the browser is resized.
template: Creates the actual view.
args: An array of objects, or a function that returns an array of objects. It has two attributes: key, used to access the data from within the template, and ajaxOptions, a set of request options used to connect to the server and retrieve data.

In our example, we will use the template and args properties to render the view. First, let us look at args, because we use the data retrieved there in the template. args will look like the following:

args: [{
    key: "user",
    ajaxOptions: function() {
        return {
            url: "/rest/gadget/1.0/currentUser"
        };
    }
}]

As you can see, we invoke the /rest/gadget/1.0/currentUser method and use the key user to refer to the retrieved data while rendering the view. ajaxOptions uses the jQuery Ajax options, details of which can be found at http://api.jquery.com/jQuery.ajax#options. The key user will now hold the user details from the REST method, as follows:

{"username":"jobinkk","fullName":"Jobin Kuruvilla","email":"[email protected]"}

The template function will now use this args object (defined earlier) and its key, user, to render the view as follows:

template: function(args) {
    var gadget = this;
    var userDetails = AJS.$("<h1/>").text("Hello, " + args.user["fullName"]);
    gadget.getView().html(userDetails);
}

Here, args.user["fullName"] will retrieve the user's fullName from the REST output. The username or e-mail can be retrieved in a similar fashion. AJS.$ will construct the view as <h1>Hello, Jobin Kuruvilla</h1>, where Jobin Kuruvilla is the fullName retrieved. The entire Content section will look as shown in the following lines of code:

<Content type="html" view="profile">
    <![CDATA[
        #requireResource("com.atlassian.jira.gadgets:common")
        #includeResources()
        <script type="text/javascript">
            (function () {
                var gadget = AJS.Gadget({
                    baseUrl: "__ATLASSIAN_BASE_URL__",
                    view: {
                        template: function(args) {
                            var gadget = this;
                            var userDetails = AJS.$("<h1/>").text("Hello, " + args.user["fullName"]);
                            gadget.getView().html(userDetails);
                        },
                        args: [{
                            key: "user",
                            ajaxOptions: function() {
                                return {
                                    url: "/rest/gadget/1.0/currentUser"
                                };
                            }
                        }]
                    }
                });
            })();
        </script>
    ]]>
</Content>

Package the gadget and deploy it.

How it works...

After the modification to the gadget XML, the gadget now greets the current user by name, for example, Hello, Jobin Kuruvilla.


Drools Integration Modules: Spring Framework and Apache Camel

Packt
28 Dec 2011
14 min read
Setting up Drools using Spring Framework

In this recipe, you will see how to configure the Drools business rules engine using the Spring Framework, using the integration module specially created to configure the Drools beans with XML.

How to do it...

Carry out the following steps in order to configure a Drools project using the Spring Framework integration:

Add the following dependency to your Maven project by adding this XML code snippet in the pom.xml file:

<dependency>
    <groupId>org.drools</groupId>
    <artifactId>drools-spring</artifactId>
    <version>5.2.0.Final</version>
</dependency>

Once the drools-spring module and the Spring Framework dependencies are added into your project, it's time to write the rules that are going to be included in the knowledge base:

package drools.cookbook.chapter07

import drools.cookbook.chapter07.model.Server
import drools.cookbook.chapter07.model.Virtualization

rule "check minimum server configuration"
    dialect "mvel"
    when
        $server : Server(processors < 2 || memory <= 1024 || diskSpace <= 250)
    then
        System.out.println("Server \"" + $server.name + "\" was rejected because it doesn't meet the minimum configuration.");
        retract($server);
end

rule "check available server for a new virtualization"
    dialect "mvel"
    when
        $virtualization : Virtualization($virtMemory : memory, $virtDiskSpace : diskSpace)
        $server : Server($memory : memory, $diskSpace : diskSpace, virtualizations != null)
        Number((intValue + $virtMemory) < $memory) from accumulate(Virtualization($vmemory : memory) from $server.virtualizations, sum($vmemory))
        Number((intValue + $virtDiskSpace) < $diskSpace) from accumulate(Virtualization($vdiskSpace : diskSpace) from $server.virtualizations, sum($vdiskSpace))
    then
        $server.addVirtualization($virtualization);
        retract($virtualization);
end

Then a Spring application context XML file has to be created to configure the Drools beans, with the following code:

<?xml version="1.0" encoding="UTF-8"?>
<beans xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
        http://drools.org/schema/drools-spring org/drools/container/spring/drools-spring-1.2.0.xsd
        http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">
    <drools:grid-node id="node1" />
    <drools:resource id="resource1" type="DRL" source="classpath:drools/cookbook/chapter07/rules.drl" />
    <drools:kbase id="kbase1" node="node1">
        <drools:resources>
            <drools:resource ref="resource1" />
        </drools:resources>
    </drools:kbase>
    <drools:ksession id="ksession1" type="stateful" kbase="kbase1" node="node1" />
    <drools:ksession id="ksession2" type="stateless" kbase="kbase1" node="node1" />
</beans>

After these three steps, you are ready to load the XML file using the Spring Framework API and obtain the instantiated beans to interact with the knowledge sessions:

ClassPathXmlApplicationContext applicationContext = new ClassPathXmlApplicationContext("applicationContext.xml");
applicationContext.start();
StatefulKnowledgeSession ksession1 = (StatefulKnowledgeSession) applicationContext.getBean("ksession1");
Server debianServer = new Server("debian-1", 4, 2048, 250, 0);
ksession1.insert(debianServer);
ksession1.fireAllRules();
applicationContext.stop();

How it works...

In order to use the Spring Framework integration in your project, first you have to add the drools-spring module to it. In a Maven project, you can do it by adding the following code snippet to your pom.xml file:

<dependency>
    <groupId>org.drools</groupId>
    <artifactId>drools-spring</artifactId>
    <version>5.2.0.Final</version>
</dependency>

This dependency will transitively include the required Spring Framework libraries in the Maven dependencies. Currently, the integration is done using the 2.5.6 version, but it should work with newer versions as well. Now, we are going to skip the rule authoring step, because it's a very common task and you really should know how to do it at this point, and move forward to the beans configuration. As you know, the Spring Framework configuration is done through an XML file where the beans are defined and injected into each other, and to make the Drools declaration easy, the integration module provides a schema and custom parsers. Before starting the bean configuration, the schema must be added into the XML namespace declaration; otherwise, the Spring XML bean definition reader is not going to recognize the Drools tags, and some exceptions will be thrown.

In the following code lines, you can see the namespace declarations that are needed before you start writing the bean definitions:

<?xml version="1.0" encoding="UTF-8"?>
<beans xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
        http://drools.org/schema/drools-spring org/drools/container/spring/drools-spring-1.2.0.xsd
        http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">
    <!-- define your beans here -->
</beans>

After this, the Drools beans can be added inside the XML configuration file using the friendly <drools: /> tags:

<?xml version="1.0" encoding="UTF-8"?>
<beans xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
        http://drools.org/schema/drools-spring org/drools/container/spring/drools-spring-1.2.0.xsd
        http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">
    <drools:grid-node id="node1" />
    <drools:resource id="resource1" type="DRL" source="classpath:drools/cookbook/chapter07/rules.drl" />
    <drools:kbase id="kbase1" node="node1">
        <drools:resources>
            <drools:resource ref="resource1" />
        </drools:resources>
    </drools:kbase>
    <drools:ksession id="ksession1" type="stateful" kbase="kbase1" node="node1" />
</beans>

As you can see, there is only one stateful knowledge session bean, configured using the tag with the ksession1 ID. This ksession1 bean is injected with a knowledge base and a grid node so that the Drools Spring bean factories, which are provided by the integration module, can instantiate it. Once the Drools beans are configured, it's time to instantiate them using the Spring Framework API, as you usually do:

public static void main(String[] args) {
    ClassPathXmlApplicationContext applicationContext = new ClassPathXmlApplicationContext("applicationContext.xml");
    applicationContext.start();
    StatefulKnowledgeSession ksession1 = (StatefulKnowledgeSession) applicationContext.getBean("ksession1");
    Server debianServer = new Server("debian-1", 4, 2048, 250, 0);
    ksession1.insert(debianServer);
    ksession1.fireAllRules();
    applicationContext.stop();
}

In the Java main method, a ClassPathXmlApplicationContext object instance is used to load the bean definitions, and once they are successfully instantiated, they are available to be obtained using the getBean(beanId) method. At this point, the Drools beans are instantiated, and you can start interacting with them as usual by just obtaining their references. As you saw in this recipe, the Spring Framework integration provided by Drools is pretty straightforward and allows the creation of a complete integration, thanks to its custom tags and simple configuration.

See also

For more information about the Drools bean definitions, read the Spring Integration section in the official documentation, available at http://www.jboss.org/drools/documentation.

Configuring JPA to persist our knowledge with Spring Framework

How to do it...

Carry out the following steps in order to configure Drools JPA persistence using the Spring integration module:

In your Maven project, add the dependencies needed to use JPA persistence with the Spring Framework to the pom.xml file.

Implement the java.io.Serializable interface in the objects of your domain model that will be persisted.

Create a persistence.xml file inside the resources/META-INF folder to configure the persistence unit. In this recipe, we will use an embedded H2 database for testing purposes, but you can configure it for any relational database engine:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0"
        xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd
        http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_1_0.xsd">
    <persistence-unit name="drools.cookbook.spring.jpa" transaction-type="RESOURCE_LOCAL">
        <provider>org.hibernate.ejb.HibernatePersistence</provider>
        <class>org.drools.persistence.info.SessionInfo</class>
        <properties>
            <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect" />
            <property name="hibernate.max_fetch_depth" value="3" />
            <property name="hibernate.hbm2ddl.auto" value="create" />
            <property name="hibernate.show_sql" value="false" />
        </properties>
    </persistence-unit>
</persistence>

Now, we have to create an XML file named applicationContext.xml in the resources folder, in which we are going to define the beans needed to configure the JPA persistence and the Drools beans:

<?xml version="1.0" encoding="UTF-8"?>
<beans xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
        http://drools.org/schema/drools-spring org/drools/container/spring/drools-spring-1.2.0.xsd
        http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">
    <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
        <property name="driverClassName" value="org.h2.Driver" />
        <property name="url" value="jdbc:h2:tcp://localhost/Drools" />
        <property name="username" value="sa" />
        <property name="password" value="" />
    </bean>
    <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        <property name="dataSource" ref="dataSource" />
        <property name="persistenceUnitName" value="drools.cookbook.spring.jpa" />
    </bean>
    <bean id="txManager" class="org.springframework.orm.jpa.JpaTransactionManager">
        <property name="entityManagerFactory" ref="entityManagerFactory" />
    </bean>
    <drools:grid-node id="node1" />
    <drools:kstore id="kstore1" />
    <drools:resource id="resource1" type="DRL" source="classpath:drools/cookbook/chapter07/rules.drl" />
    <drools:kbase id="kbase1" node="node1">
        <drools:resources>
            <drools:resource ref="resource1" />
        </drools:resources>
    </drools:kbase>
    <drools:ksession id="ksession1" type="stateful" kbase="kbase1" node="node1">
        <drools:configuration>
            <drools:jpa-persistence>
                <drools:transaction-manager ref="txManager" />
                <drools:entity-manager-factory ref="entityManagerFactory" />
            </drools:jpa-persistence>
        </drools:configuration>
    </drools:ksession>
</beans>

Finally, we have to write the following code in a new Java class file, or in an existing one, in order to interact with the stateful knowledge session and persist its state into the H2 database without further actions:

public void startApplicationContext() {
    ClassPathXmlApplicationContext applicationContext = new ClassPathXmlApplicationContext("/applicationContext.xml");
    applicationContext.start();
    StatefulKnowledgeSession ksession1 = (StatefulKnowledgeSession) applicationContext.getBean("ksession1");
    int sessionId = ksession1.getId();
    Server debianServer = new Server("debianServer", 4, 2048, 1222, 0);
    ksession1.insert(debianServer);
    ksession1.fireAllRules();
    ksession1.dispose();
    Environment env = KnowledgeBaseFactory.newEnvironment();
    env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, applicationContext.getBean("entityManagerFactory"));
    env.set(EnvironmentName.TRANSACTION_MANAGER, applicationContext.getBean("txManager"));
    Virtualization virtualization = new Virtualization("dev", "debian", 512, 30);
    KnowledgeStoreService kstore = (KnowledgeStoreService) applicationContext.getBean("kstore1");
    KnowledgeBase kbase1 = (KnowledgeBase) applicationContext.getBean("kbase1");
    ksession1 = kstore.loadStatefulKnowledgeSession(sessionId, kbase1, null, env);
    ksession1.insert(virtualization);
    ksession1.fireAllRules();
    applicationContext.stop();
}

How it works...

Before we start declaring the beans that are needed to persist the knowledge using JPA, we have to add some dependencies to our project configuration, especially the ones used by the Spring Framework. These dependencies were already mentioned in the first step of this recipe, so we can safely continue with the remaining steps.
Once the dependencies are added into the project, we have to implement the java.io.Serializable interface in the classes of our domain model that will be persisted. After this, we have to create a persistence unit configuration using the default persistence.xml file, located in the resources/META-INF directory of our project. This persistence unit is named drools.cookbook.spring.jpa and uses the Hibernate JPA implementation. Also, it is configured to use an H2 Java database, but in your real environment, you should supply the appropriate configuration. Next, you can see the persistence unit example, with the SessionInfo entity that will be used to store the session data, ready to be used with Drools:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0"
        xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd
        http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_1_0.xsd">
    <persistence-unit name="drools.cookbook.spring.jpa" transaction-type="RESOURCE_LOCAL">
        <provider>org.hibernate.ejb.HibernatePersistence</provider>
        <class>org.drools.persistence.info.SessionInfo</class>
        <properties>
            <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect" />
            <property name="hibernate.max_fetch_depth" value="3" />
            <property name="hibernate.hbm2ddl.auto" value="create" />
            <property name="hibernate.show_sql" value="false" />
        </properties>
    </persistence-unit>
</persistence>

Now, we are ready to declare the beans needed to enable the JPA persistence in an XML file, where the most important section is the declaration of the Spring DriverManagerDataSource and LocalContainerEntityManagerFactoryBean beans, which are very descriptive and can be configured with the parameters of your database engine. Another key declaration is the KnowledgeStoreService bean, defined with the drools:kstore tag, which will be used later to load the persisted knowledge session:

<?xml version="1.0" encoding="UTF-8"?>
<beans xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
        http://drools.org/schema/drools-spring org/drools/container/spring/drools-spring-1.2.0.xsd
        http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">
    <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
        <property name="driverClassName" value="org.h2.Driver" />
        <property name="url" value="jdbc:h2:tcp://localhost/Drools" />
        <property name="username" value="sa" />
        <property name="password" value="" />
    </bean>
    <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        <property name="dataSource" ref="dataSource" />
        <property name="persistenceUnitName" value="drools.cookbook.spring.jpa" />
    </bean>
    <bean id="txManager" class="org.springframework.orm.jpa.JpaTransactionManager">
        <property name="entityManagerFactory" ref="entityManagerFactory" />
    </bean>
    <drools:grid-node id="node1" />
    <drools:kstore id="kstore1" />
    <drools:resource id="resource1" type="DRL" source="classpath:drools/cookbook/chapter07/rules.drl" />
    <drools:kbase id="kbase1" node="node1">
        <drools:resources>
            <drools:resource ref="resource1" />
        </drools:resources>
    </drools:kbase>
    <drools:ksession id="ksession1" type="stateful" kbase="kbase1" node="node1">
        <drools:configuration>
            <drools:jpa-persistence>
                <drools:transaction-manager ref="txManager" />
                <drools:entity-manager-factory ref="entityManagerFactory" />
            </drools:jpa-persistence>
        </drools:configuration>
    </drools:ksession>
</beans>

After the bean definitions, we can start writing the Java code needed to initialize the Spring Framework application context and interact with the defined beans. After loading the application context using a ClassPathXmlApplicationContext object, we have to obtain the stateful knowledge session to insert facts into the working memory, and also obtain the ID of the knowledge session so that we can recover it later:

ClassPathXmlApplicationContext applicationContext = new ClassPathXmlApplicationContext("/applicationContext.xml");
applicationContext.start();
StatefulKnowledgeSession ksession1 = (StatefulKnowledgeSession) applicationContext.getBean("ksession1");
int sessionId = ksession1.getId();
Server debianServer = new Server("debianServer", 4, 2048, 1222, 0);
ksession1.insert(debianServer);
ksession1.fireAllRules();
ksession1.dispose();

Once we are done interacting with the knowledge session—inserting facts, firing the rules, and so on—it can be disposed. It can be restored later using the KnowledgeStoreService bean, but before trying to load the persisted knowledge session, we have to create a new org.drools.runtime.Environment object to set the EntityManagerFactory and TransactionManager used in the persistence process. The org.drools.runtime.Environment object can be created as follows:

Environment env = KnowledgeBaseFactory.newEnvironment();
env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, applicationContext.getBean("entityManagerFactory"));
env.set(EnvironmentName.TRANSACTION_MANAGER, applicationContext.getBean("txManager"));
Virtualization virtualization = new Virtualization("dev", "debian", 512, 30);

Finally, with the Environment object created, we can obtain the KnowledgeStoreService bean together with the KnowledgeBase bean, and use the StatefulKnowledgeSession ID to load the stored state and start interacting with it as we usually do:

KnowledgeStoreService kstore = (KnowledgeStoreService) applicationContext.getBean("kstore1");
KnowledgeBase kbase1 = (KnowledgeBase) applicationContext.getBean("kbase1");
ksession1 = kstore.loadStatefulKnowledgeSession(sessionId, kbase1, null, env);
ksession1.insert(virtualization);
ksession1.fireAllRules();
applicationContext.stop();

As you saw in this recipe, the knowledge session persistence is totally transparent to the user and automatic, without any extra steps to save the state. By following these steps, you can easily integrate JPA persistence using Hibernate, or any other vendor's JPA implementation, in order to save the current state of the knowledge session using the Spring Framework integration.


Google Apps: Surfing the Web

Packt
13 Dec 2011
8 min read
Browsing and using websites

Back in the day, when I first started writing, there were two ways to research—own a lot of books (I did, and still do), and/or spend a lot of time at the library (I did, don't have to now). This all changed in the early 90s when Sir Tim Berners-Lee invented the World Wide Web (WWW). The web used the Internet, which was already there, although Al Gore did not invent the Internet. Although earlier experiments took place, Vinton Cerf's development of the basic transmission protocols—the work that makes all we enjoy today possible—earns him the title "Father of the Internet".

All of us reading this book use the Internet and the web almost every day. App Inventor itself relies extensively on the web; you have to use the web in designing apps and downloading the Blocks Editor to power them up. Adding web browsing to our apps:

Gives us access to literally trillions of web pages (no one knows how many; Google said it was over a trillion in 2008, and growth has been tremendous since then).

Lets us leverage the limited resources and storage capacity of the user's phone many times over, as we use the millions of dollars invested in server farms (vast collections of interconnected computers) and the thousands of web pages on commercial sites (which they want us to use, even beg us to use, because it promotes their products or services).

Makes it possible to write powerful and useful apps with really little effort on our part.

Take, for example, what I referred to earlier in this book as a link app. This is an app that, when called by the user, simply uses ActivityStarter to open a browser and load a web page. Let's whip one up.

Time for action – building an eBay link app

Yes, eBay loves us—especially if we buy or sell items on this extensive online auction site. And they want you to do it not just at home but when you're out with only your smartphone. To encourage use from your phone, eBay has spent a gazillion dollars (that's a lot) in providing a mobile site (http://m.ebay.com) that makes the entire eBay site (hundreds of thousands, probably millions of pages) available and useful to you.

Back to that word, leverage—the ancient Greek mathematician Archimedes said about levers, "Give me a place to stand on and I will move the Earth." Our app's leverage won't move the Earth, but we can sure bid on just about everything on it.

Okay, the design of our eBay app is very simple, since the screen will be there for only a second or so (I'll explain that in a moment). We need just a label and two non-visual components: ActivityStarter and Clock. In the Properties column for the ActivityStarter, put android.intent.action.VIEW in the Action field (again, this is how your app calls the phone's web browser, and the way it's entered is case-sensitive).

Now, the reason for the LOADING...\nInternet connection required text in the label (\n being a line break) is that it serves as a basic notifier and error trap. First, if the Internet connection is slow, it lets the user know something is happening. Second, if there is no 3G or Wi-Fi connection, it tells the user why nothing will happen until they get a connection. Simple additions like this (basically thinking of the user's experience in using our apps) define the difference between amateur and professional apps. Always try to anticipate, because if we leave any way possible at all to screw it up, some user will find it. And who gets blamed? Right.
You, me, or whatever idiot wrote that defective app. And they will zing you on Android Market.

Okay, now to our buddy: the Blocks Editor. We need only one small block group. Everything is in the Clock1.Timer block, which connects to the eBay mobile site and then, job done, gracefully exits. Inside the clock timer frame, we put four blocks, which accomplish the following logic (transcribed in text form after this list):

In ActivityStarter1.DataUri goes the web address of the eBay mobile website home: http://m.ebay.com.

The ActivityStarter1.StartActivity block calls the phone's web browser and passes the address to it.

Clock1.TimerAlwaysFires set to false tells AI, "Stop, don't do anything else."

Finally, the close application block (which we should always be nice and include somewhere in our apps) removes the app from the phone or other Android device's memory, releasing those always scarce resources.
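App Inventor blocks are visual, so the following is only a rough text sketch of the logic just described, not real syntax:

    when Clock1.Timer
        set ActivityStarter1.DataUri to "http://m.ebay.com"
        call ActivityStarter1.StartActivity
        set Clock1.TimerAlwaysFires to false
        close application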
What just happened?

That's it—a complete application giving access to a few million great bargains on eBay. This is how it will look when the eBay mobile site home page opens on the user's phone:

Pretty neat, huh? And there are lots of other major sites out there that offer mobile sites, easily converted into link apps such as this eBay example. But what if you want to make the app more complex, with a lot of choices? One where, after the user finishes browsing a site, they come back to your app and choose yet another site to visit? No problem. Here's an example of that. I recently published an app called AVLnews (AVL is the airport code for Asheville, North Carolina, where I live). This app lets the user drill down into local news for all 19 counties in the Western North Carolina mountains. Here's what the front page of the app looks like:

Even with several pages of choices, this type of app is truly easy to create. You need only a few buttons, labels, and (as ever) horizontal and vertical arrangements for formatting. And, of course, a single ActivityStarter for calling the phone's web browser.

Now, here's the cool part! No matter how long and how many pages a user reads on the site a button sends them to, when they hit the hardware Back button on their phone, they return to your app. The preceding is the only way the Back button works for an App Inventor app! If the user is within an AI app and hits the Back button, it immediately kills your app (except when a listpicker is displayed, in which case it returns you to the screen that invoked it). This is doubleplusungood, so to speak. So, always supply navigation buttons for changing pages, and maybe a note somewhere not to use the hardware button.

Okay, back to my AVLnews app as an example. The first button I built was for the Citizen-Times, our major newspaper in the area. They have a mobile site such as the one we saw earlier for eBay. It looks nice and is easy to read on a Droid or other smartphone, like the following:

And it's equally easy to program—this is all you need:

I then moved on to other news sources for the area. The small weekly newspapers in the more isolated, outlying mountain counties have no expensive mobile sites. The websites they do have are pretty conventional. But when I linked to them, the pages would come up with tiny, unreadable type. I had to pan around, use the double-tap fit-to-page trick to expand the page, turn the phone on its side, and more—and I still had trouble reading the content.

Folks, you do not do things like that to your users. Not if you want them to think you are one cool app inventor, as I know you to be by now. So, here's the trick to making almost all web pages readable on a phone: Google Mobilizer! It's a free Google service at http://www.google.com/gwt/n. Use it to construct links that return a nicely formatted and readable page, as shown in the next screenshot. People will thank you for it and think you expended many hours of genius-level programming effort to achieve it. And, remember, it's not polite to disagree with people, eh?

Often, the search terms and mobilizing of your link are long and go on for some time (the following screenshot shows only a part of one). However, that's where your real genius comes into play: building a search term that gets the articles fitting whatever topic is on the button.

Summing up: using the power of the web lets us build powerful apps that use few resources on the phone yet return thousands upon thousands of pages. Enjoy. Now, we will look at yet another Google service: Fusion Tables. And we will see how our App Inventor apps can take advantage of these online data sources.

Microsoft SharePoint: Creating Various Content Types

Packt
02 Dec 2011
7 min read
(For more resources on Microsoft SharePoint, see here.)

SharePoint content types make it simpler for site managers to standardize what content, and what associated metadata, gets uploaded to lists and libraries on the site. In this article, we'll look at how you can create various content types and assign them to be used in site containers. As a subset of more complex content types, a document set allows your users to store related items in libraries as a set of documents sharing common metadata. This approach lets your users run business processes on a batch of items in the document set as well as on the whole set; we'll look at how you can define a document set to be used on your site. Finally, since users mostly interact with your SharePoint site through pages and views, the ability to modify SharePoint pages to accommodate business user requirements is an important part of site management. We'll look at how you can create and modify pages and the content related to them, and at how you can provision simple out-of-the-box web parts to your SharePoint publishing pages and configure their properties.

Creating basic and complex content types

SharePoint lists and libraries can store a variety of content on the site. SharePoint also has a user interface to customize what information you can collect from users to be attached as item metadata. In the scenario where the entire intranet, or a department site within your organization, requires a standard set of metadata to be collected with list and library items, content types are the easiest way to implement the requirement. With content types, you can define the type of business content your users will be interacting with, add metadata fields and any applicable validation, and then attach the newly created content type to the library or list of your choice, so that newly uploaded or modified content conforms to the rules you defined on the site.

Getting ready

Considering you have already set up your virtual development environment, we'll get right into authoring our script. It's assumed you are familiar with how to interact with SharePoint lists and libraries using PowerShell. In this recipe, we'll be using PowerGUI to author the script, which means you will be required to be logged in with an administrator's role on the target Virtual Machine.

How to do it...

Let's take a look at how we can provision site content types using PowerShell:

1. Click Start | All Programs | PowerGUI | PowerGUI Script Editor.
2. In the main script editing window of PowerGUI, add the following script:

# Defining script variables
$SiteUrl = "http://intranet.contoso.com"
$ListName = "Shared Documents"

# Loading Microsoft.SharePoint.PowerShell
$snapin = Get-PSSnapin | Where-Object {$_.Name -eq 'Microsoft.SharePoint.Powershell'}
if ($snapin -eq $null) {
    Write-Host "Loading SharePoint Powershell Snapin"
    Add-PSSnapin "Microsoft.SharePoint.Powershell"
}

$SPSite = Get-SPSite | Where-Object {$_.Url -eq $SiteUrl}
if ($SPSite -ne $null) {
    Write-Host "Connecting to the site" $SiteUrl ", list" $ListName
    $RootWeb = $SPSite.RootWeb
    $SPList = $RootWeb.Lists[$ListName]

    Write-Host "Creating new content type from base type"
    $DocumentContentType = $RootWeb.AvailableContentTypes["Document"]
    $ContentType = New-Object Microsoft.SharePoint.SPContentType -ArgumentList @($DocumentContentType, $RootWeb.ContentTypes, "Org Document")

    Write-Host "Adding content type to site"
    $ct = $RootWeb.ContentTypes.Add($ContentType)

    Write-Host "Creating new fields"
    $OrgDocumentContentType = $RootWeb.ContentTypes[$ContentType.Id]
    $OrgFields = $RootWeb.Fields
    $choices = New-Object System.Collections.Specialized.StringCollection
    $choices.Add("East")
    $choices.Add("West")
    $OrgDivision = $OrgFields.Add("Division", [Microsoft.SharePoint.SPFieldType]::Choice, $false, $false, $choices)
    $OrgBranch = $OrgFields.Add("Branch", [Microsoft.SharePoint.SPFieldType]::Text, $false)

    Write-Host "Adding fields to content type"
    $OrgDivisionObject = $OrgFields.GetField($OrgDivision)
    $OrgBranchObject = $OrgFields.GetField($OrgBranch)
    $OrgDocumentContentType.FieldLinks.Add($OrgDivisionObject)
    $OrgDocumentContentType.FieldLinks.Add($OrgBranchObject)
    $OrgDocumentContentType.Update()

    Write-Host "Associating content type to list" $ListName
    $association = $SPList.ContentTypes.Add($OrgDocumentContentType)
    $SPList.ContentTypesEnabled = $true
    $SPList.Update()

    Write-Host "Content type provisioning complete"
}
$SPSite.Dispose()

3. Click File | Save to save the script to your development machine's desktop. Set the filename of the script to CreateAssociateContentType.ps1.
4. Open the PowerShell console window and call CreateAssociateContentType.ps1 using the following command:

PS C:\Users\Administrator\Desktop> .\CreateAssociateContentType.ps1

As a result, your PowerShell script will create a site structure as shown in the following screenshot:

5. Now, from your browser, let's switch to our SharePoint intranet: http://intranet.contoso.com.
6. From the home page's Quick launch, click the Shared Documents link.
7. On the ribbon, click the Library tab and select Settings | Library Settings. Take note of the newly associated content type added to the Content Types area of the library settings, as shown in the following screenshot:
8. Navigate back to the Shared Documents library from the Quick launch menu on your site and select any of the existing documents in the library. From the ribbon's Documents tab, click Manage | Edit Properties. Take note of how the item now has the Content Type option available, where you can pick the newly provisioned Org Document content type.
9. Pick the Org Document content type and take note of the associated metadata showing up for the new content type, as shown in the following screenshot:

How it works...

First, we defined the script variables.
In this recipe, the variables include the URL of the site where the content type is provisioned, http://intranet.contoso.com, and the document library to which the content type is associated: $ListName = "Shared Documents". Once the PowerShell snap-in has been loaded, we get hold of the instance of the current site and its root web. Since we want our content type to inherit from a parent rather than being defined from scratch, we first get hold of the existing parent content type, using the following command:

$DocumentContentType = $RootWeb.AvailableContentTypes["Document"]

Next, we created an instance of a new content type inheriting from our parent content type and provisioned it to the root site using the following command:

$ContentType = New-Object Microsoft.SharePoint.SPContentType -ArgumentList @($DocumentContentType, $RootWeb.ContentTypes, "Org Document")

Here, the new object takes the following parameters: the content type representing the parent, the web to which the new content type will be provisioned, and the display name for the content type. Once our content type object has been created, we add it to the list of existing content types on the site:

$ct = $RootWeb.ContentTypes.Add($ContentType)

Since most content types are distinguished by the fields they use, we will add some business-specific fields to our content type. First, we get hold of the collection of all of the available fields on the site:

$OrgFields = $RootWeb.Fields

Next, we create a string collection to hold the values for the choice field we are going to add to our content type:

$choices = New-Object System.Collections.Specialized.StringCollection

The field with the list of choices is called Division, representing a company division. We provision the field to the site using the following command:

$OrgDivision = $OrgFields.Add("Division", [Microsoft.SharePoint.SPFieldType]::Choice, $false, $false, $choices)

In the preceding command, the first parameter is the name of the field, followed by the type of the field, which in our case is a choice field. We then specify whether the field will be a required field, followed by a parameter indicating whether the field name will be truncated to eight characters. The last parameter specifies the list of choices for the choice field. The other field we add, representing a company branch, is simpler since it's a text field. We define the text field using the following command:

$OrgBranch = $OrgFields.Add("Branch", [Microsoft.SharePoint.SPFieldType]::Text, $false)

We add both fields to the content type using the following commands:

$OrgDocumentContentType.FieldLinks.Add($OrgDivisionObject)
$OrgDocumentContentType.FieldLinks.Add($OrgBranchObject)

The last part is to associate the newly created content type to a library, in our case Shared Documents. We use the following command to associate the content type to the library:

$association = $SPList.ContentTypes.Add($OrgDocumentContentType)

To ensure that content types on the list are enabled, we set the ContentTypesEnabled property of the list to $true.
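Once the script has run, it is worth confirming from the console that the association really took. The following is a minimal verification sketch—not part of the original recipe—that reuses the same site and list names and relies only on the standard SharePoint 2010 cmdlets and object model members used above:

# List the content types now attached to the library
$SPSite = Get-SPSite "http://intranet.contoso.com"
$SPList = $SPSite.RootWeb.Lists["Shared Documents"]
$SPList.ContentTypes | ForEach-Object { Write-Host $_.Name }

# Always dispose of SPSite objects obtained this way
$SPSite.Dispose()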

JIRA: Programming Workflows

Packt
01 Dec 2011
20 min read
(For more resources on this topic, see here.)

Introduction

Workflows are one of the standout features that help users transform JIRA into a user-friendly system. They let users define a lifecycle for issues, depending on the issue type, the purpose for which they are using JIRA, and so on. As the Atlassian documentation at http://confluence.atlassian.com/display/JIRA/Configuring+Workflow puts it: A JIRA workflow is the set of steps and transitions an issue goes through during its lifecycle. Workflows typically represent business processes.

JIRA uses OpenSymphony's OSWorkflow, which is highly configurable and, more importantly, pluggable, to cater for varied requirements. JIRA uses three different plugin modules to add extra functionality to its workflows, which we will see in detail throughout this chapter. To make things easier, JIRA ships with a default workflow. We can't modify the default workflow, but we can copy it into a new workflow and amend that to suit our needs.

Before we get into the development aspects of a workflow, it makes sense to understand its various components. The two most important components of a JIRA workflow are the step and the transition. At any point in time, an issue will be in a step. Each step in the workflow is linked to a workflow status (http://confluence.atlassian.com/display/JIRA/Defining+%27Status%27+Field+Values), and it is this status that you will see on the issue at every stage. A transition, on the other hand, is a link between two steps. It allows the user to move an issue from one step to another (which essentially moves the issue from one status to another).

A few key points to remember about workflows:

- An issue can exist in only one step at any point in time.
- A status can be mapped to only one step in the workflow.
- A transition is always one-way, so if you need to go back to the previous step, you need a different transition.
- A transition can optionally specify a screen to be presented to the user, with the right fields on it.

OSWorkflow, and hence JIRA, gives us the option of adding various elements to a workflow transition, which can be summarized as follows:

- Conditions: A set of conditions that need to be satisfied before the user can actually see the workflow action (transition) on the issue.
- Validators: A set of validators that can be used to validate the user's input before moving to the destination step.
- Post functions: A set of actions that will be performed after the issue is successfully moved to the destination step.

These three elements give us the flexibility to handle the various use cases that arise when an issue is moved from one status to another. JIRA ships with a few built-in conditions, validators, and post functions. There are plugins out there that provide a wide variety of additional workflow elements, and if you still don't find the one you are looking for, JIRA lets us write them as plugins. We will see how to do that in the various recipes in this chapter. A lot more on JIRA workflows can be found in the JIRA documentation at http://confluence.atlassian.com/display/JIRA/Configuring+Workflow.

Writing a workflow condition

What are workflow conditions? They determine whether a workflow action is available or not.
Considering the importance of workflows in most installations, and the frequent need to restrict actions to a set of people or roles, or to gate them on some criterion (for example, a field must not be empty!), writing workflow conditions is inevitable. Workflow conditions are created with the help of the workflow-condition module. The following are the key attributes and elements supported; see http://confluence.atlassian.com/display/JIRADEV/Workflow+Plugin+Modules#WorkflowPluginModules-Conditions for more details.

Attributes:

- key: This should be unique within the plugin.
- class: Class to provide contexts for rendered velocity templates. Must implement the com.atlassian.jira.plugin.workflow.WorkflowPluginConditionFactory interface.
- i18n-name-key: The localization key for the human-readable name of the plugin module.
- name: Human-readable name of the workflow condition.

Elements:

- description: Description of the workflow condition.
- condition-class: Class to determine whether the user can see the workflow transition. Must implement com.opensymphony.workflow.Condition. It is recommended to extend the com.atlassian.jira.workflow.condition.AbstractJiraCondition class.
- resource type="velocity": Velocity templates for the workflow condition views.

Getting ready

As usual, create a skeleton plugin. Create an eclipse project using the skeleton plugin and we are good to go!

How to do it...

In this recipe, let's assume we are going to develop a workflow condition that limits a transition to users belonging to a specific project role. The following are the steps to write our condition:

Define the inputs needed to configure the workflow condition. We need to implement the WorkflowPluginFactory interface, which mainly exists to provide velocity parameters to the templates. It will be used to extract the input parameters that are used in defining the condition. To be clear, the inputs here are not the inputs supplied while performing the workflow action, but the inputs used in defining the condition.

The condition factory class, RoleConditionFactory in this case, extends AbstractWorkflowPluginFactory, which implements the WorkflowPluginFactory interface. There are three abstract methods that we should implement: getVelocityParamsForInput, getVelocityParamsForEdit, and getVelocityParamsForView. All of them, as the names suggest, are used for populating the velocity parameters for the different scenarios. In our example, we need to limit the workflow action to a certain project role, and so we need to select the project role while defining the condition. The three methods are implemented as follows:

private static final String ROLE = "role";
private static final String ROLES = "roles";
.......

@Override
protected void getVelocityParamsForEdit(Map<String, Object> velocityParams, AbstractDescriptor descriptor) {
    velocityParams.put(ROLE, getRole(descriptor));
    velocityParams.put(ROLES, getProjectRoles());
}

@Override
protected void getVelocityParamsForInput(Map<String, Object> velocityParams) {
    velocityParams.put(ROLES, getProjectRoles());
}

@Override
protected void getVelocityParamsForView(Map<String, Object> velocityParams, AbstractDescriptor descriptor) {
    velocityParams.put(ROLE, getRole(descriptor));
}

Let's look at the methods in detail:

getVelocityParamsForInput: This method defines the velocity parameters for the input scenario, that is, when the user initially configures the workflow.
In our example, we need to display all the project roles so that the user can select one to define the condition. The method getProjectRoles merely returns all the project roles, and the collection of roles is then put into the velocity parameters with the key ROLES.

getVelocityParamsForView: This method defines the velocity parameters for the view scenario, that is, how the user sees the condition after it is configured. In our example, we have defined a role, so we should display it to the user after retrieving it back from the workflow descriptor. As you may have noticed, the descriptor, which is an instance of AbstractDescriptor, is available as an argument to the method. All we need is to extract the role from the descriptor, which can be done as follows:

private ProjectRole getRole(AbstractDescriptor descriptor) {
    if (!(descriptor instanceof ConditionDescriptor)) {
        throw new IllegalArgumentException("Descriptor must be a ConditionDescriptor.");
    }
    ConditionDescriptor functionDescriptor = (ConditionDescriptor) descriptor;
    String role = (String) functionDescriptor.getArgs().get(ROLE);
    if (role != null && role.trim().length() > 0)
        return getProjectRole(role);
    else
        return null;
}

Just check whether the descriptor is a condition descriptor or not, and then extract the role as shown in the preceding snippet.

getVelocityParamsForEdit: This method defines the velocity parameters for the edit scenario, that is, when the user modifies the existing condition. Here we need both the options and the selected value, so we put both the project roles collection and the selected role into the velocity parameters.

The second step is to define the velocity templates for each of the three aforementioned scenarios: input, view, and edit. We can use the same template for input and edit, with a simple check to keep the previously selected role highlighted in the edit scenario. Let us look at the templates:

edit-roleCondition.vm: Displays all project roles and highlights the already-selected one in the edit mode. In the input mode, the same template is reused, but the selected role will be null and hence a null check is done:

<tr bgcolor="#ffffff">
    <td align="right" valign="top" bgcolor="#fffff0">
        <span class="label">Project Role:</span>
    </td>
    <td bgcolor="#ffffff" nowrap>
        <select name="role" id="role">
        #foreach ($field in $roles)
          <option value="${field.id}"
            #if ($role && (${field.id}==${role.id}))
                SELECTED
            #end
            >$field.name</option>
        #end
        </select>
        <br><font size="1">Select the role in which the user should be present!</font>
    </td>
</tr>

view-roleCondition.vm: Displays the selected role:

#if ($role)
  User should have ${role.name} Role!
#else
  Role Not Defined
#end

The third step is to write the actual condition. The condition class should extend the AbstractJiraCondition class. Here we need to implement the passesCondition method.
In our case, we retrieve the project from the issue, check whether the user has the appropriate project role, and return true if the user does:

public boolean passesCondition(Map transientVars, Map args, PropertySet ps) throws WorkflowException {
    Issue issue = getIssue(transientVars);
    User user = getCaller(transientVars, args);
    Project project = issue.getProjectObject();
    String role = (String) args.get(ROLE);
    Long roleId = new Long(role);
    return projectRoleManager.isUserInProjectRole(user, projectRoleManager.getProjectRole(roleId), project);
}

The issue on which the condition is checked can be retrieved using the getIssue method implemented in the AbstractJiraCondition class. Similarly, the user can be retrieved using the getCaller method. In the preceding method, projectRoleManager is injected in the constructor, as we have seen before.

We can see that the ROLE key is used to retrieve the project role ID from the args parameter in the passesCondition method. In order for the ROLE key to be available in the args map, we need to override the getDescriptorParams method in the condition factory class, RoleConditionFactory in this case. The getDescriptorParams method returns a map of sanitized parameters, which will be passed into workflow plugin instances from the values in an array form submitted by velocity, given a set of name:value parameters from the plugin configuration page (that is, the 'input-parameters' velocity template). In our case, the method is overridden as follows:

public Map<String, String> getDescriptorParams(Map<String, Object> conditionParams) {
    if (conditionParams != null && conditionParams.containsKey(ROLE)) {
        return EasyMap.build(ROLE, extractSingleParam(conditionParams, ROLE));
    }
    // Create a 'hard coded' parameter
    return EasyMap.build();
}

The method here builds a map of key:value pairs, where the key is ROLE and the value is the role entered in the input configuration page. The extractSingleParam method is implemented in the AbstractWorkflowPluginFactory class; the extractMultipleParams method can be used if there is more than one parameter to be extracted!

All that is left now is to populate the atlassian-plugin.xml file with the aforementioned components. We use the workflow-condition module, and it looks like the following block of code:

<workflow-condition key="role-condition" name="Role Based Condition" class="com.jtricks.RoleConditionFactory">
    <description>Role Based Workflow Condition</description>
    <condition-class>com.jtricks.RoleCondition</condition-class>
    <resource type="velocity" name="view" location="templates/com/jtricks/view-roleCondition.vm"/>
    <resource type="velocity" name="input-parameters" location="templates/com/jtricks/edit-roleCondition.vm"/>
    <resource type="velocity" name="edit-parameters" location="templates/com/jtricks/edit-roleCondition.vm"/>
</workflow-condition>

Package the plugin and deploy it!

How it works...

After the plugin is deployed, we need to modify the workflow to include the condition. The following screenshot shows how the condition looks when it is added initially. This, as you now know, is rendered using the input template:

After the condition is added (that is, after selecting the Developers role), the view is rendered using the view template and looks as shown in the following screenshot:
If you try to edit it, the screen will be rendered using the edit template, as shown in the following screenshot:

Note that the Developers role is already selected. After the workflow is configured, when a user goes to an issue, he/she will be presented with the transition only if he/she is a member of that project role in the project where the issue belongs. It is while viewing the issue that the passesCondition method in the condition class is executed.

Writing a workflow validator

Workflow validators are specific validators that check whether certain pre-defined constraints are satisfied while progressing through a workflow. The constraints are configured in the workflow, and the user will get an error if any of them are not satisfied. A typical example would be to check whether a particular field is present or not before the issue is moved to a different status. Workflow validators are created with the help of the workflow-validator module. The following are the key attributes and elements supported.

Attributes:

- key: This should be unique within the plugin.
- class: Class to provide contexts for rendered velocity templates. Must implement the com.atlassian.jira.plugin.workflow.WorkflowPluginValidatorFactory interface.
- i18n-name-key: The localization key for the human-readable name of the plugin module.
- name: Human-readable name of the workflow validator.

Elements:

- description: Description of the workflow validator.
- validator-class: Class which does the validation. Must implement com.opensymphony.workflow.Validator.
- resource type="velocity": Velocity templates for the workflow validator views.

See http://confluence.atlassian.com/display/JIRADEV/Workflow+Plugin+Modules#WorkflowPluginModules-Validators for more details.

Getting ready

As usual, create a skeleton plugin. Create an eclipse project using the skeleton plugin and we are good to go!

How to do it...

Let us consider writing a validator that checks whether a particular field has a value entered on the issue or not! We can do this using the following steps:

Define the inputs needed to configure the workflow validator. We need to implement the WorkflowPluginValidatorFactory interface, which mainly exists to provide velocity parameters to the templates. It will be used to extract the input parameters that are used in defining the validator. To be clear, the inputs here are not the inputs supplied while performing the workflow action, but the inputs used in defining the validator.

The validator factory class, FieldValidatorFactory in this case, extends the AbstractWorkflowPluginFactory class and implements the WorkflowPluginValidatorFactory interface. Just like conditions, there are three abstract methods that we should implement: getVelocityParamsForInput, getVelocityParamsForEdit, and getVelocityParamsForView. All of them, as the names suggest, are used for populating the velocity parameters in the different scenarios. In our example, we have a single input field, which is the name of a custom field.
The three methods are implemented as follows:

@Override
protected void getVelocityParamsForEdit(Map velocityParams, AbstractDescriptor descriptor) {
    velocityParams.put(FIELD_NAME, getFieldName(descriptor));
    velocityParams.put(FIELDS, getCFFields());
}

@Override
protected void getVelocityParamsForInput(Map velocityParams) {
    velocityParams.put(FIELDS, getCFFields());
}

@Override
protected void getVelocityParamsForView(Map velocityParams, AbstractDescriptor descriptor) {
    velocityParams.put(FIELD_NAME, getFieldName(descriptor));
}

You may have noticed that the methods look quite similar to the ones in a workflow condition, except for the business logic! Let us look at the methods in detail:

getVelocityParamsForInput: This method defines the velocity parameters for the input scenario, that is, when the user initially configures the workflow. In our example, we need to display all the custom fields so that the user can select one to use in the validator. The method getCFFields returns all the custom fields, and the collection of fields is then put into the velocity parameters with the key fields.

getVelocityParamsForView: This method defines the velocity parameters for the view scenario, that is, how the user sees the validator after it is configured. In our example, we have defined a field, so we should display it to the user after retrieving it back from the workflow descriptor. You may have noticed that the descriptor, which is an instance of AbstractDescriptor, is available as an argument to the method. All we need is to extract the field name from the descriptor, which can be done as follows:

private String getFieldName(AbstractDescriptor descriptor) {
    if (!(descriptor instanceof ValidatorDescriptor)) {
        throw new IllegalArgumentException("Descriptor must be a ValidatorDescriptor.");
    }
    ValidatorDescriptor validatorDescriptor = (ValidatorDescriptor) descriptor;
    String field = (String) validatorDescriptor.getArgs().get(FIELD_NAME);
    if (field != null && field.trim().length() > 0)
        return field;
    else
        return NOT_DEFINED;
}

Just check whether the descriptor is a validator descriptor or not, and then extract the field as shown in the preceding snippet.

getVelocityParamsForEdit: This method defines the velocity parameters for the edit scenario, that is, when the user modifies the existing validator. Here we need both the options and the selected value, so we put both the custom fields collection and the selected field name into the velocity parameters.

The second step is to define the velocity templates for each of the three aforementioned scenarios: input, view, and edit. We can use the same template for input and edit, with a simple check to keep the previously selected field highlighted in the edit scenario. Let us look at the templates:

edit-fieldValidator.vm: Displays all custom fields and highlights the already selected one in edit mode. In input mode, the field variable will be null, and so nothing is pre-selected:

<tr bgcolor="#ffffff">
  <td align="right" valign="top" bgcolor="#fffff0">
    <span class="label">Custom Fields :</span>
  </td>
  <td bgcolor="#ffffff" nowrap>
    <select name="field" id="field">
    #foreach ($cf in $fields)
      <option value="$cf.name"
        #if ($cf.name.equals($field)) SELECTED #end
      >$cf.name</option>
    #end
    </select>
    <br><font size="1">Select the Custom Field to be validated for NULL</font>
  </td>
</tr>

view-fieldValidator.vm: Displays the selected field:

#if ($field)
  Field '$field' is Required!
#end

The third step is to write the actual validator.
The validator class should implement the Validator interface. All we need here is to implement the validate method. In our example, we retrieve the custom field value from the issue and throw an InvalidInputException if the value is null (empty):

public void validate(Map transientVars, Map args, PropertySet ps) throws InvalidInputException, WorkflowException {
    Issue issue = (Issue) transientVars.get("issue");
    String field = (String) args.get(FIELD_NAME);
    CustomField customField = customFieldManager.getCustomFieldObjectByName(field);

    if (customField != null) {
        // Check if the custom field value is NULL
        if (issue.getCustomFieldValue(customField) == null) {
            throw new InvalidInputException("The field:" + field + " is required!");
        }
    }
}

The issue on which the validation is done can be retrieved from the transientVars map. customFieldManager is injected in the constructor as usual.

All that is left now is to populate the atlassian-plugin.xml file with these components. We use the workflow-validator module, and it looks like the following block of code:

<workflow-validator key="field-validator" name="Field Validator" class="com.jtricks.FieldValidatorFactory">
    <description>Field Not Empty Workflow Validator</description>
    <validator-class>com.jtricks.FieldValidator</validator-class>
    <resource type="velocity" name="view" location="templates/com/jtricks/view-fieldValidator.vm"/>
    <resource type="velocity" name="input-parameters" location="templates/com/jtricks/edit-fieldValidator.vm"/>
    <resource type="velocity" name="edit-parameters" location="templates/com/jtricks/edit-fieldValidator.vm"/>
</workflow-validator>

Package the plugin and deploy it! Note that we have stored the field name, rather than its ID, in the workflow, unlike what we did in the workflow condition. It would actually be safer to use the ID, because administrators can rename fields, and a rename would then require changes to the workflows.

How it works...

After the plugin is deployed, we need to modify the workflow to include the validator. The following screenshot shows how the validator looks when it is added initially. This, as you now know, is rendered using the input template:

After the validator is added (after selecting the Test Number field), it is rendered using the view template and looks as follows:

If you try to edit it, the screen will be rendered using the edit template, as shown in the following screenshot:

Note that the Test Number field is already selected. After the workflow is configured, when the user goes to an issue and tries to progress it, the validator will check whether the Test Number field has a value or not. It is at this point that the validate method in the FieldValidator class is executed. If the value is missing, you will see an error, as shown in the following screenshot:
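One detail both recipes gloss over is how projectRoleManager and customFieldManager actually reach the plugin classes: JIRA's plugin container supplies such components through constructor injection. The following is a minimal sketch for the validator—the class name comes from the recipe, but treat the rest as illustrative boilerplate rather than the exact original source:

public class FieldValidator implements Validator {
    private final CustomFieldManager customFieldManager;

    // JIRA instantiates the class and passes in the components declared here
    public FieldValidator(CustomFieldManager customFieldManager) {
        this.customFieldManager = customFieldManager;
    }

    // validate(Map, Map, PropertySet) as shown above
}

The condition class follows the same pattern, taking a ProjectRoleManager in its constructor.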

Introducing Sametime 8.5.2

Packt
23 Nov 2011
11 min read
(For more resources on IBM Sametime, see here.)

What's new in Sametime 8.5.2

IBM Sametime 8.5 and 8.5.2 introduce many new capabilities to the Sametime product suite. In addition to the numerous features already included with the Sametime 8.x family of clients, Sametime 8.5.2 has extended client usability and collaboration. Let us take a look at a few of those enhancements:

- Sametime Connect Client software is now supported on Microsoft Windows 7.0, Apple Macintosh 10.6, and Linux desktop operating systems including Red Hat Enterprise Desktop (RHED), Ubuntu, and SUSE Linux Enterprise Desktop (SLED).
- A lightweight browser-based client that requires no additional downloads is available for instant messaging for Apple iPhone and iPad users.
- A browser-based client is available for Sametime meetings.
- Sametime Mobile Client support has been added for Android devices (OS 2.0 and higher), Blackberry 5.0 and 6.0 devices, and Microsoft Mobile 6.5 devices.
- Rich text messaging is now available for chats with users connected through the Sametime Gateway.

If you deployed Sametime Standard in a previous release, or are interested in the online meeting conferencing features of Sametime 8.5.2, then you and your users will be happy to know that meeting attendees can now attend online meetings "instantly" without having to load any additional software in their browser. Meetings start quickly and are retained for future use.

Probably the most significant change for you as a Sametime administrator is the introduction of IBM WebSphere Application Server (WAS) as an application hosting platform for Sametime. In previous versions of Sametime, with the exception of the Sametime Advanced and Sametime Gateway features, the Sametime server was deployed on Lotus Domino servers. If you know how to install and manage a Lotus Domino server, then you will most likely be the same individual who manages a Sametime server, as the skill sets are similar. With the addition of WAS comes flexibility in server architecture: as an administrator, you have the ability to choose features and configure servers based on your organization's unique needs. The linkage between Domino and Sametime still exists through the Sametime Community Server. So not only can Sametime be sized appropriately for the needs of your organization, it can also run on multiple operating systems and servers as per your requirements. Some highlights include:

- With the release of Sametime 8.5.2, Lotus Domino 8.5.2 is now supported.
- A Sametime Proxy Server has been introduced as a component of the Sametime server architecture. The Sametime Proxy Server hosts the lightweight browser-based Sametime client. It runs on WAS and is different from the WAS Proxy Server.
- Media Manager Server is another new Sametime server component. This server manages conferences using the Session Initiation Protocol (SIP) to support point-to-point and multi-point calls, and integrates into the Sametime environment through your Community Server.
- Sametime 8.5.2 introduces support for standard audio and video codecs for improved integration in the Sametime client and the Sametime Meeting Center. This allows for interoperability with third-party conferencing systems.
- The Traversal Using Relay NAT (TURN) server is a Java program that runs in conjunction with the Media Manager Server and behaves as a reflector, routing audio and video traffic between clients on different networks.
This Network Address Translation (NAT) traversal technology (ICE) uses both the TURN and Session Traversal Utilities for NAT (STUN) protocols, and behaves similarly to the Sametime reflector service that was part of earlier versions of Sametime.
- Improved network performance and support for IPv6 networking.
- A new central administration console called the Sametime System Console (SSC) for managing Sametime server and configuration resources from a consolidated web interface.
- Sametime Bandwidth Manager is a new optional WAS-based Sametime server component that allows you to create rules and policies that determine the use of audio and video within Sametime. The Bandwidth Manager monitors Sametime traffic and uses your rules to dynamically select codecs and the quality of video streams as calls are initiated by users.

Whether you are new to Sametime or a long-time Sametime administrator, our aim is to guide you through the planning, installation, management, and troubleshooting steps so that you can successfully implement and support Sametime 8.5.2 in your environment.

Sametime 8.5.2 server architecture

As we have described briefly, the server architecture for Sametime 8.5.2 has changed significantly from previous versions. Prior to this version, Sametime was a single-server installation that ran as an add-in task under a Domino server. It provided both instant messaging and web conferencing features combined in a single server. Although there was a license model that only installed and enabled the instant messaging features (Sametime Entry), the installer was the same if you wanted to include web conferencing functionality as well.

The new architecture still includes a Domino-based component, but the Domino server is intended strictly for instant messaging and awareness. All other Sametime functionality has been re-engineered into separate server components running on top of the WAS platform. By moving all but the instant messaging and awareness services from Domino onto WebSphere, IBM has constructed an environment better suited to the needs of enterprise customers who have a high demand for services that require significant non-Domino resources such as audio, video, and web conferencing. Additionally, the new architecture of Sametime 8.5.2 is about enhancing the client experience, dramatically improving performance, and bringing the technology in line with modern audio, video, and browser standards. Let us begin by taking a look at the new server components and learning about their roles and functions.

Sametime System Console

Core to the entire Sametime multi-server architecture is the management interface, which runs as a WebSphere application: the Sametime System Console (SSC). The SSC plugs into the standard WAS 7.x menu as an additional option. It provides the configuration and management tools needed to work with all the other Sametime components, including the Domino-based instant messaging server. It also comes with a series of step-by-step guides, called Sametime Guided Activities, that walk you through the installation of each server component in the proper sequence. The SSC also has a Sametime Servers section that allows you to manage the Sametime servers. The SSC installs as an add-in to WAS and is accessed through a browser on its own dedicated port. It also uses a custom DB2 database named STSC for storage of its management information.
Sametime Community Server

Sametime Community Server is the instant messaging and presence awareness component of Sametime, which is installed as an add-in task for Domino. It must be installed on Domino version 8.5 or 8.5.1, but it can work with earlier versions of Sametime already installed in your environment. Keep in mind, however, that pre-8.5.x clients will not benefit from many of the new features provided by your Sametime 8.5.2 servers. If your requirement is solely for instant messaging, then this is the only component you will need installed alongside Domino itself.

The Sametime Community Server "standard" install also includes the original Domino-based Meeting Center. This browser-based component has not been updated in any way from pre-8.5.x versions and is there purely for backwards compatibility and to maintain any existing scheduled meetings. There is no integration or interaction between the Domino-based Meeting Center and the Sametime 8.5.2 Meeting Center(s). Other than being updated to run on top of a Domino 8.5 or 8.5.1 server, the actual Community Server component has changed very little and includes no significant new features from previous versions. Its browser administration interface and options remain the same. However, if you have deployed the SSC, the native Domino administration is over-ridden. Following is a chart of the Sametime Community Server infrastructure. Note the optional management of the server by the SSC.

Although the use of Domino as a directory is still supported, it is highly recommended you deploy Sametime using a Lightweight Directory Access Protocol (LDAP) directory. If you will be deploying other Sametime 8.5.2 components, then your deployment will usually require an LDAP directory to be used.

Sametime Meeting Server

The Sametime Meeting Server has been completely re-engineered to bring it up to the standards of modern web conferencing solutions. It is also better aligned with IBM's Sametime Unyte online service. The new Sametime Meeting Server (versus the Domino-based Meeting Center) runs as an application under WAS. In addition, as it requires a data store to hold meeting information, it utilizes a dedicated DB2 database for managing the content of each meeting room. The previous Sametime meeting client was entirely browser-based. To improve performance and functionality, 8.5.2 introduces a rich meeting center client that plugs into the Sametime Eclipse environment. A browser interface for meetings is still available, but it provides a reduced set of functions.

Sametime Proxy Server

The Sametime Proxy Server re-introduces a lightweight browser-based client for Sametime, which has not been available in versions shipped since 6.5. The new browser client is fully customizable; it is based on Ajax technology and themed using CSS, which allows it to launch quickly and be customized to match your organization's design. The Proxy Server installs as an application under WAS, although it has no data store of its own and does not require any database connectivity. In the configuration for the Proxy Server, you direct it to a specific Community Server to supply the Sametime services. The following diagram gives a brief overview:

The Proxy Server ships with a default client designed as a JavaServer Page, which can be modified using customizable style sheets. It gives a feature-rich Sametime experience including multi-way chats, browser-based meetings, and privacy settings.
Sametime Media Manager

The Sametime Media Manager takes on the role of providing audio and video services both for the Sametime clients (for peer-to-peer VoIP and video chats) and for web conferencing within the meeting rooms of the new meeting center. It is designed to provide services for multiple Meeting Servers and, through them, for instant meetings from the Sametime client. Installed on a WAS platform, it has no need for a data store and does not require any database connectivity. The Media Manager is designed to provide a multi-way audio and video conferencing experience using modern codecs; however, it does not support Sametime clients in versions prior to 8.5.2. It is the audio and video "glue" that connects all the other Sametime server elements in 8.5.2.

Sametime TURN Server

In its default configuration, the Media Manager creates a SIP connection from itself to the requesting client. However, where the client is not on the same network as the Media Manager, no SIP connection can be made directly. To address this issue, which affects users outside of your firewall as well as those on different internal networks, IBM has introduced the TURN Server with Sametime 8.5.2. The TURN Server uses both the TURN and STUN protocols to create a connection with the client. It routes audio and video traffic between itself and the Media Manager, allowing connections between clients across networks. The technology is sometimes referred to as a reflector, and pre-8.5 versions of Sametime came with a reflector service of their own. The TURN Server is a Java program that runs in a command window on any Windows or Linux server sharing the same subnet as the Media Manager. It doesn't require WAS or any data store, but runs on a separately installed IBM Java Virtual Machine (JVM).

Sametime Bandwidth Manager

The Sametime Bandwidth Manager is a new optional WAS-based component designed to help Sametime administrators manage the traffic generated by the Media Manager and its audio and video services. Within the Bandwidth Manager configuration, an administrator can create sites, links, and call-rate policies that define the service provided by the Media Manager. The Bandwidth Manager analyzes its rules when a new call is initiated and instructs the Media Manager on how to service that call. Among the extremely granular levels of customization available are options for sites to have link rules that constrain the traffic between them. You can also create specific policies that specify the services available to named users or groups during peak and off-peak periods. Depending upon network load, user identity, and call participation, the Bandwidth Manager can be configured to control the bandwidth used. It can do this by dropping the audio to a lower-bitrate codec, reducing the video frame rate, or even denying video completely, informing the user that they should retry at a later time.

NumPy: Commonly Used Functions

Packt
11 Nov 2011
10 min read
(For more resources on this topic, see here.)

File I/O

First, we will learn about file I/O with NumPy. Data is usually stored in files, and you would not get far if you were not able to read from and write to them.

Time for action – reading and writing files

As an example of file I/O, we will create an identity matrix and store its contents in a file. These are the steps:

1. Create an identity matrix: the identity matrix is a square matrix with ones on the diagonal and zeroes elsewhere. It can be created with the eye function; the only argument we need to give the eye function is the number of ones. So, for instance, we can ask for a 2-by-2 matrix. [The original listing and its output were lost in extraction; see the reconstruction after this section.]

2. Save the data with the savetxt function. We obviously need to specify the name of the file that we want to save the data in, and the array containing the data itself. [Listing lost in extraction.] A file called eye.txt should have been created; you can check for yourself whether its contents are as expected.

What just happened?

Reading and writing files is a necessary skill for data analysis. We wrote to a file with savetxt, and we made an identity matrix with the eye function.

CSV files

Files in the comma separated values (CSV) format are encountered quite frequently. Often, a CSV file is just a dump from a database; each field in the file corresponds to a database table column. As we all know, spreadsheet programs, such as Excel, can produce CSV files as well.

Time for action – loading from CSV files

How do we deal with CSV files? Luckily, the loadtxt function can conveniently read CSV files, split up the fields, and load the data into NumPy arrays. In the following example, we will load historical price data for Apple (the company, not the fruit). The data is in CSV format. The first column contains a symbol that identifies the stock; in our case, it is AAPL. Next is the date, in dd-mm-yyyy format. The third column is empty. Then, in order, we have the open, high, low, and close prices. Last, but not least, is the volume for the day. [The sample data line shown here was lost in extraction.]

The data is stored in the data.csv file. We set the delimiter to , (comma), since we are dealing with a comma separated value file. The usecols parameter is set through a tuple to get the seventh and eighth fields, which correspond to the close price and volume. unpack is set to True, which means that the data will be unpacked and assigned to the c and v variables that will hold the close price and volume, respectively.

What just happened?

CSV files are a special type of file that we have to deal with frequently. We read a CSV file containing stock quotes with the loadtxt function. We indicated to the loadtxt function that the delimiter of our file was a comma, we specified which columns we were interested in through the usecols argument, and we set the unpack parameter to True so that the data was unpacked for further use.
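The code listings in this article were originally images and did not survive extraction. The following is a minimal reconstruction of the file I/O and CSV steps from the surrounding description; the NumPy calls shown are standard, but the sample data line in the comment is illustrative rather than the original:

import numpy as np

# Identity matrix: 2-by-2, ones on the diagonal, zeroes elsewhere
i2 = np.eye(2)
print(i2)

# Save the array to a text file named eye.txt
np.savetxt("eye.txt", i2)

# data.csv lines look something like:
# AAPL,28-01-2011, ,344.17,344.40,333.53,336.10,21144800
# Load the close price (7th field) and volume (8th field)
c, v = np.loadtxt("data.csv", delimiter=",", usecols=(6, 7), unpack=True)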
Volume weighted average price

Volume weighted average price (VWAP) is a very important quantity. The higher the volume, the more significant a price move typically is. VWAP is calculated by using volume values as weights.

Time for action – calculating volume weighted average price

These are the actions that we will take: read the data into arrays, then calculate VWAP with the average function, using the volume values as weights. [Listing lost in extraction; see the reconstruction below.]

What just happened?

That wasn't very hard, was it? We just called the average function and set its weights parameter to use the v array for weights. By the way, NumPy also has a function to calculate the arithmetic mean.

The mean function

The mean function is quite friendly and not so mean. This function calculates the arithmetic mean of an array. [Listing lost in extraction.]

Time weighted average price

While we are at it, let's compute the time weighted average price (TWAP) too. It is just a variation on a theme, really. The idea is that recent price quotes are more important, so we should give recent prices higher weights. The easiest way is to create an array, with the arange function, of increasing values from zero to the number of elements in the close price array. This is not necessarily the correct way; in fact, most of the examples concerning stock price analysis in this book are only illustrative. [The TWAP listing and its output were lost in extraction.] The TWAP comes out even higher than the mean.

Pop quiz – computing the weighted average

Which function returns the weighted average of an array?

- weighted average
- waverage
- average
- avg

Have a go hero – calculating other averages

Try doing the same calculation using the open price. Calculate the mean for the volume and the other prices.
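A reconstruction of the averaging listings described above (standard NumPy throughout; the variable names follow the text):

import numpy as np

c, v = np.loadtxt("data.csv", delimiter=",", usecols=(6, 7), unpack=True)

# VWAP: close price averaged with volume as the weights
vwap = np.average(c, weights=v)
print("VWAP =", vwap)

# Plain arithmetic mean
print("mean =", np.mean(c))

# TWAP: increasing weights so that recent prices count for more
t = np.arange(len(c))
print("TWAP =", np.average(c, weights=t))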
Value range

Usually, we don't only want to know the average or arithmetic mean of a set of values, which are sort of in the middle; we also want the extremes, the full range—the highest and lowest values. The sample data that we are using here already has those values per day—the high and the low price. However, we need to know the highest value of the high price and the lowest value of the low price. After all, how else would we know how much our Apple stock could gain or lose?

Time for action – finding highest and lowest values

The min and max functions are the answer to our requirement. These are the steps:

1. Reading from a file: first, we will need to read our file again and store the values for the high and low prices in arrays. [Listing lost in extraction; see the reconstruction below.] The only thing that changed is the usecols parameter, since the high and low prices are situated in different columns.

2. Getting the range: apply the max function to the high price array and the min function to the low price array. [Listing and output lost in extraction.] Now it's trivial to get a midpoint, so that is left as an exercise for the reader.

3. Calculating the spread: NumPy allows us to compute the spread of an array with a function called ptp. The ptp function returns the difference between the maximum and minimum values of an array; in other words, it is equal to max(array) - min(array). [Listing and output lost in extraction.]

What just happened?

We defined a range of highest to lowest values for the price. The highest value was given by applying the max function to the high price array. Similarly, the lowest value was found by calling the min function on the low price array. We also calculated the peak-to-peak distance with the ptp function.
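A reconstruction of the range listings (the column indices follow the file layout described earlier, where high and low are the 5th and 6th fields):

import numpy as np

h, l = np.loadtxt("data.csv", delimiter=",", usecols=(4, 5), unpack=True)

print("highest =", np.max(h))
print("lowest =", np.min(l))
print("midpoint =", (np.max(h) + np.min(l)) / 2)  # the "exercise for the reader"

# Spread: ptp is equivalent to max(array) - min(array)
print("spread high price =", np.ptp(h))
print("spread low price =", np.ptp(l))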
Statistics

Stock traders are interested in the most probable close price. Common sense says that this should be close to some kind of an average. The arithmetic mean and weighted average are ways to find the center of a distribution of values. However, neither is robust; both are sensitive to outliers. For instance, if we had a close price value of a million dollars, it would have skewed the outcome of our calculations.

Time for action – doing simple statistics

One thing that we can do is use some kind of threshold to weed out outliers, but there is a better way. It is called the median, and it basically picks the middle value of a sorted set of values. For example, if we have the values 1, 2, 3, 4, and 5, the median would be 3, since it is in the middle. These are the steps to calculate the median:

Determine the median of the close price: Create a new Python script and call it simplestats.py. You already know how to load the data from a CSV file into an array, so copy that line of code and make sure that it only gets the close price. The code should appear like this by now:

code15

The function that will do the magic for us is called median. We will call it and print the result immediately. Add the following line of code:

code16

The program prints the following output:

code17

Since it is our first time using the median function, we would like to check whether this is correct. Not because we are paranoid or anything! Obviously, we could do it by just going through the file and finding the correct value, but that is no fun. Instead, we will just mimic the median algorithm by sorting the close price array and printing the middle value of the sorted array. The msort function does the first part for us. We will call the function, store the sorted array, and then print it:

code18

This prints the following output:

code19

Yup, it works! Let's now get the middle value of the sorted array:

code20

It gives us the following output:

code21

Hey, that's a different value than the one the median function gave us. How come? Upon further investigation, we find that the median function's return value doesn't even appear in our file. That's even stranger! Before filing bugs with the NumPy team, let's have a look at the documentation. This mystery is easy to solve. It turns out that our naive algorithm only works for arrays with odd lengths. For even-length arrays, the median is calculated from the average of the two array values in the middle. Therefore, type the following code:

code22

This prints the following output:

code23

Success! Another statistical measure that we are concerned with is variance. Variance tells us how much a variable varies. In our case, it also tells us how risky an investment is, since a stock price that varies too wildly is bound to get us into trouble.

Calculate the variance of the close price: With NumPy, this is just a one-liner. See the following code:

code24

This gives us the following output:

code25

Not that we don't trust NumPy or anything, but let's double-check using the definition of variance as found in the documentation. Mind you, this definition might be different from the one in your statistics book, but that is quite common in the field of statistics. Here, the variance is the mean of the squared deviations from the mean; in other words, the sum of squared deviations divided by the number of elements in the array. Some books tell us to divide by the number of elements in the array minus one instead.

code26

The output is as follows:

code27

Just as we expected!

What just happened?

Maybe you noticed something new: we suddenly called the mean function on the c array. Yes, this is legal, because the ndarray object has a mean method. This is there for your convenience. For now, just keep in mind that this is possible.
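To reproduce the statistics from this section, a short script along these lines will do; as before, data.csv and the close price column index are assumptions:

import numpy as np

c = np.loadtxt('data.csv', delimiter=',', usecols=(6,), unpack=True)

print('median =', np.median(c))

# Double-check the median by hand: sort the array, then average the two
# middle values (for odd-length arrays both indices point at the same value).
sorted_close = np.msort(c)
n = len(c)
print('middle =', (sorted_close[(n - 1) // 2] + sorted_close[n // 2]) / 2)

# Variance, and the same value computed from its definition.
print('variance =', np.var(c))
print('variance from definition =', np.mean((c - c.mean()) ** 2))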


Configuration in Salesforce CRM

Packt
04 Nov 2011
13 min read
We will look at the mechanisms for storing data in Salesforce and at the concepts of objects and fields. The features that allow these data to be grouped and arranged within the application are then considered by looking at Apps, Tabs, Page Layouts, and Record Types. Finally, we take a look at some of the features that allow views of data to be presented and customized by looking in detail at related lists and list views.

Relationship between the profile and the features that it controls

The following diagram describes the relationship that exists between the profile and the features that it controls:

The profile is used to:

Control access to the type of license specified for the user, and any login hours or IP address restrictions that are set.
Control access to objects and records using the role and sharing model. If the appropriate object-level permission is not set on the user's profile, then the user will be unable to gain access to the records of that object type in the application.

In this article, we will look at the configurable elements that are set in conjunction with the profile. These are used to control the structure and the user interface for the Salesforce CRM application.

Objects

Objects are a key element in Salesforce CRM, as they provide a structure for storing data and are incorporated in the interface, allowing users to interact with the data. Similar in nature to a database table, objects have properties such as:

Fields, which are similar in concept to a database column
Records, which are similar in concept to a database row
Relationships to other objects
Optional tabs, which are user interface components that display the object data

Standard objects

Salesforce provides standard objects in the application when you sign up; these include Account, Contact, Opportunity, and so on. These are the tables that contain the data records in any standard tab, such as Accounts, Contacts, or Opportunities. In addition to the standard objects, you can create custom objects and custom tabs.

Custom objects

Custom objects are the tables you create to store your data. You can create a custom object to store data specific to your organization. Once you have the custom objects and have created records for these objects, you can also create reports and dashboards based on the record data in your custom object.

Fields

Fields in Salesforce are similar in concept to a database column and store the data for the object records. An object record is analogous to a row in a database table.

Standard fields

Standard fields are predefined fields that are included as standard within the Salesforce CRM application. Standard fields cannot be deleted, but non-required standard fields can be removed from page layouts whenever necessary. With standard fields, you can customize visual elements that are associated with the field, such as field labels and field-level help, as well as certain data definitions, such as picklist values, the formatting of auto-number fields (which are used as unique identifiers for the records), and the setting of field history tracking. Some aspects, however, such as the field name, cannot be customized, and some standard fields (such as Opportunity Probability) do not allow the changing of the field label.

Custom fields

Custom fields are unique to your business needs and can not only be added and amended, but also deleted. Creating custom fields allows you to store the information that is necessary for your organization.
Both standard and custom fields can be customized to include custom help text to help users understand how to use the field.

Object relationships

Object relationships can be set on both standard and custom objects and are used to define how records in one object relate to records in another object. Accounts, for example, can have a one-to-many relationship with opportunities, and these relationships are presented in the application as related lists.

Apps

An app in Salesforce is a container for all the objects, tabs, processes, and services associated with a business function. There are standard and custom apps that are accessed using the App menu located at the top-right of the Salesforce page, as shown in the following screenshot:

When users select an app from the App menu, their screen changes to present the objects associated with that app. For example, when switching from an app that contains the Campaign tab to one that does not, the Campaign tab no longer appears. This feature applies to both standard and custom apps.

Standard apps

Salesforce provides standard apps such as Sales, Call Center, and Marketing.

Custom apps

A custom app can optionally include a custom logo. Both standard and custom apps consist of a name, a description, and an ordered list of tabs.

Tabs

A tab is a user-interface element which, when clicked, displays the record data on a page specific to that object.

Hiding and showing tabs

To customize your personal tab settings, follow the path Your Name | Setup | My Personal Settings | Change My Display | Customize My Tabs. Now, choose the tabs that will display in each of your apps by moving the tab name between the Available Tabs and the Selected Tabs sections, and click Save. The following shows the section of tabs for the Sales app:

To customize the tab settings of your users, follow the path Your Name | Setup | Administration Setup | Manage Users | Profiles. Now select a profile and click Edit. Scroll down to the tab settings section of the page, as shown in the following screenshot:

Standard tabs

Salesforce provides tabs for each of the standard objects that are provided in the application when you sign up. For example, there are standard tabs for Accounts, Contacts, Opportunities, and so on. Visibility of a tab depends on the tab display setting for the app.

Custom tabs

You can create three different types of custom tabs: Custom Object Tabs, Web Tabs, and Visualforce Tabs. Custom Object Tabs allow you to create, read, update, and delete the data records in your custom objects. Web Tabs display any web URL in a tab within your Salesforce application. Visualforce Tabs display custom user-interface pages created using Visualforce.

Creating custom tabs: The text displayed on the custom tab is set from the Plural Label of the custom object, which is entered when creating the custom object. If the tab text needs to be changed, this can be done by changing the Plural Label stored on the custom object. Salesforce.com recommends selecting the Append tab to users' existing personal customizations checkbox. This benefits your users, as they will automatically be presented with the new tab and can immediately access the corresponding functionality without having to first customize their personal settings themselves. It is recommended that you hide new tabs by setting appropriate permissions, so that the users in your organization cannot see any of your changes until you are ready to make them available.
You can create up to 25 custom tabs in Enterprise Edition and as many as you require in Unlimited Edition. To create custom tabs for a custom object, follow the path Your Name | Setup | App Setup | Create | Tabs. Now select the appropriate tab type and/or object from the available selections, as shown in the following screenshot:

Creating custom objects

Custom objects are database tables that allow you to store data specific to your organization in Salesforce.com. You can use custom objects to extend Salesforce functionality or to build new application functionality. You can create up to 200 custom objects in Enterprise Edition and 2,000 in Unlimited Edition. Once you have created a custom object, you can create a custom tab, custom related lists, reports, and dashboards for users to interact with the custom object data.

To create a custom object, follow the path Your Name | Setup | App Setup | Create | Objects. Now click New Custom Object, or click Edit to modify an existing custom object. The following screenshot shows the resulting screen:

On the Custom Object Definition Edit page, you can enter the following:

Label: This is the visible name that is displayed for the object within the Salesforce CRM user interface and shown on pages, views, and reports, for example.

Plural Label: This is the plural name specified for the object, which is used within the application in places such as reports and on tabs if you create a tab for the object.

Gender (language dependent): This field appears if your organization-wide default language expects gender. This is used for organizations where the default language setting is, for example, Spanish, French, Italian, or German, among many others. Your personal language preference setting does not affect whether the field appears or not. For example, if your organization's default language is English but your personal language is French, you will not be prompted for gender when creating a custom object.

Starts with a vowel sound: Use of this setting depends on your organization's default language and is a linguistic check that allows you to specify whether your label is to be preceded by "an" instead of "a", resulting, for example, in the object being referred to as "an Order" instead of "a Order".

Object Name: A unique name used to refer to the object. Here, the Object Name field must be unique and can only contain underscores and alphanumeric characters. It must also begin with a letter, not contain spaces, not contain two consecutive underscores, and not end with an underscore.

Description: An optional description of the object. A meaningful description will help to explain the purpose of your custom objects when you are viewing them in a list.

Context-Sensitive Help Setting: Defines what information is displayed when your users click the Help for this Page context-sensitive help link from the custom object record home (overview), edit, and detail pages, as well as list views and related lists. The Help & Training link at the top of any page is not affected by this setting; it always opens the Salesforce Help & Training window.

Record Name: This is the name that is used in areas such as page layouts, search results, key lists, and related lists, as shown next.

Data Type: The type of field for the record name. Here, the data type can be either Text or Auto Number. If the data type is set to Text, then when a record is created, users must enter a text value, which does not need to be unique.
If the data type is set to Auto Number, it becomes a read-only field whereby new records are automatically assigned a unique number:

Display Format: As in the preceding example, this option only appears when the Data Type is set to Auto Number. It allows you to specify the structure and appearance of the Auto Number field. For example, {YYYY}{MM}-{000} is a display format that produces a 4-digit year and 2-digit month prefix to a number with leading zeros padded to 3 digits. Example data output would include: 201203-001; 201203-066; 201203-999; 201203-1234. It is worth noting that although you can specify the number to be 3 digits, if more than 999 records are created the records will still be saved; the automatically incremented number simply becomes 1000, 1001, and so on.

Starting Number: As described, Auto Number fields in Salesforce CRM are automatically incremented for each new record. Here, you must enter the starting number for the incremental count (which does not have to be set to start from 1).

Allow Reports: This setting is required if you want to include the record data from the custom object in any report or dashboard analytics. Where the custom object is related to a standard object, the relationship can be either a lookup or a master-detail. Lookup relationships create a relationship between two records so that you can associate them with each other. A master-detail relationship creates a relationship between records where the master record controls certain behaviors of the detail record, such as record deletion and security. When the custom object has a master-detail relationship with a standard object, or is a lookup object on a standard object, a new report type will appear in the standard report category. The new report type allows the user to create reports that relate the standard object to the custom object, which is done by selecting the standard object for the report type category instead of the custom object.

Allow Activities: Allows users to include tasks and events related to the custom object records, which appear as a related list on the custom object page.

Track Field History: Enables the tracking of data field changes on the custom object records, such as who changed the value of a field and when it was changed. Field history tracking also stores the value of the field before and after the edit. This feature is useful for auditing and data quality measurement and is also available within the reporting tools.

Deployment Status: Indicates whether the custom object is now visible and available for use by other users. This is useful, as you can easily set the status to In Development until you are happy for users to start working with the new object.

Add Notes & Attachments: This setting allows your users to record notes and attach files to the custom object records. When this is specified, a related list with New Note and Attach File buttons automatically appears on the custom object record page, where your users can enter notes and attach documents. The Add Notes & Attachments option is only available when you create a new object.

Launch the New Custom Tab Wizard: Starts the custom tab wizard after you save the custom object. The New Custom Tab Wizard option is only available when you create a new object.
Creating custom object relationships

The following considerations should be observed when creating object relationships:

Create the object relationships as a first step, before starting to build the custom fields, page layouts, and any related lists.
The Related To entry cannot be modified after you have saved the object relationship.
Each custom object can have up to two master-detail relationships and up to 25 total relationships.
When planning to create a master-detail relationship on an object, be aware that it can only be created before the object contains record data.
Clicking Edit List Layout allows you to choose columns for the key views and lookups.
The Standard Name field is required on all custom object related lists and also on any page layouts.


Creating a holiday request InfoPath form and publishing it to a form library

Packt
02 Nov 2011
3 min read
InfoPath forms are capable of containing repeating data, optional fields, and presenting different views to different users. To access the full power of InfoPath, you will want to create a custom form and publish it to a SharePoint form library. In this recipe, we learn how to create a holiday request form and publish it to SharePoint.

Getting ready

This recipe works for:

SharePoint 2010 Enterprise Edition
Office 365 (SharePoint Online)

You will need a SharePoint site where you want to create an InfoPath form. This recipe creates a holiday request form for illustration. You will need the Design or Full Control permission level to run this recipe. You will need InfoPath Designer 2010 installed on your client machine.

How to do it...

1. Open Microsoft InfoPath Designer 2010 on your computer. The backstage view is displayed.
2. Double-click on the SharePoint Form Library button. A new blank form opens.
3. Change the form title to Holiday Request, and change the first subheading to Employee Details.
4. Place the cursor on the first Add control cell in the Employee Details table. Click on Text Box in the Controls section of the Home ribbon. A new textbox will be inserted into the form (field1) and a new field is added to the form's data source (Fields view).
5. Double-click on field1 in the Fields view to open the properties dialog. Rename the field to FirstName and click on the OK button.
6. Repeat steps 4 and 5 to add textboxes for LastName, EmailAddress, and Department.
7. Add labels to the form for each of the textboxes that you have added.
8. Highlight the last row in the Employee Details table and delete it.
9. Rename the next section in the form Holiday Details, and add labels for Start Date and End Date.
10. Expand the Controls section in the ribbon and select the Date Picker control. Add date controls for StartDate and EndDate to the form.
11. Click on the File tab to access the backstage view, then click on the Publish button. The Publish options are displayed.
12. Click on the SharePoint Server button. You will be prompted to first save your form template. Enter the filename holidayrequest.xsn and click on the Save button. The Publishing Wizard will open.
13. Enter the URL of the SharePoint site where you want to publish your form and click on the Next button.
14. Tick Enable this form to be filled out using a browser, select the Form Library option, and click on the Next button.
15. Select the Create a new form library option and click on the Next button. Name the new library Holiday Requests and click on the Next button.
16. To promote data items in the form to SharePoint list columns, click on the Add button. Select the FirstName column and click on the OK button. Repeat for the LastName, EmailAddress, Department, StartDate, and EndDate fields.
17. Click on the Publish button. The form is published to SharePoint.
18. Click on the Open this form in a browser link. The InfoPath form opens in the web browser. Enter some data to test the form and click on the Save button.
19. You will be prompted for a filename for your form. Enter request1 and click on Save. Your holiday request is saved to the form library. Click on the Close button.

Microsoft SharePoint: Recipes for Automating Business Processes

Packt
01 Nov 2011
5 min read
Creating an InfoPath form for a SharePoint list

You can replace the default SharePoint list forms with an InfoPath form on any SharePoint list. This gives you much more flexibility and control over how you edit and display the data.

Getting ready

This recipe works for:

SharePoint 2010 Enterprise Edition
Office 365 (SharePoint Online)

You will need a SharePoint list where you want to create an InfoPath form. You will need the Design or Full Control permission level to run this recipe. You will need InfoPath Designer 2010 installed on your client machine. This recipe uses a SharePoint 2010 Team Site with a contacts list added for illustration.

How to do it...

1. Open Internet Explorer and navigate to your SharePoint 2010 Team Site.
2. Select the Contacts list from the Quick Launch menu.
3. Select the List tab of the List Tools ribbon.
4. Select the Customize Form icon. InfoPath Designer 2010 will open, displaying an auto-generated InfoPath form for the contacts list.
5. Select the File tab in the ribbon to access the backstage view.
6. Select the Quick Publish button. The InfoPath form is now published and replaces the default form on the SharePoint list.
7. Click on the OK button in the Publish dialog displayed.

How it works...

When you create a SharePoint list, SharePoint automatically creates default edit, display, and new forms for you. While these forms are functional, they are somewhat limited in the presentation and customization options that they provide. If you have SharePoint Server Enterprise Edition installed, you can replace these forms with an InfoPath 2010 form. InfoPath forms offer you many more options for creating and controlling how you edit and display your list data. This recipe demonstrates the mechanics of replacing the forms; once you have done this, a whole range of new customization options is available to you. Every time you want to edit the form, just repeat this procedure.

One gotcha that you may run into is if you have added a taxonomy field (that is, one that shows a term set) to your list. Unfortunately, these fields are not supported in the current InfoPath release, and you will receive an error when you try to edit the list form. It's a big omission from the current version of SharePoint, and not one that there is an obvious workaround for.

There's more...

Having run this recipe, you may be left thinking "so what?" However, once you have created an InfoPath form for your list, you have all the power of the InfoPath form designer at your disposal. You can now remove columns and add graphics, text, and business rules to the form to fit your needs. Techniques for performing these customizations are described in the following sections.

Removing columns

It's quite common to have columns in a list that you don't want the user to fill in. When you have an InfoPath form attached to the list, simply open the form, delete the field that you don't want the users to edit, and republish. If you have used the table-editing tools in Microsoft Word, you will find the InfoPath experience very familiar. The following screenshot shows the Attachments row being deleted from the form:

It's important to realize that you are only removing columns from the form, not from the underlying list itself. Also, if you add a new column to your list after you have customized the form, SharePoint won't automatically add the new column to your form for you.
It will prompt you that there are new columns available the next time you open the InfoPath form designer. Click on Yes to update the fields list. You can then select the new fields that you require and drag-and-drop them into the form.

Adding images, explanatory text, and tooltips

Normal SharePoint list forms are a bit dull. Now that you have an InfoPath form, you can start to brighten things up. You can change colors and fonts, and add images, text, and tooltips. To add an image, use the Picture button on the Insert ribbon, then simply browse to the picture that you want to add and republish the form.

Adding text to your form can help guide your users when they are filling it in. You can add the text directly to the form, or you can add it to the control's ScreenTip. The ScreenTip text will only be shown when the user's mouse hovers over the control in the browser. This helps your users without cluttering up your form.

Adding rules to validate data

InfoPath forms allow you to add rules. Rules can be added to the form to validate data, apply custom formatting, or perform custom actions. You can use actions to set field values, switch views, submit form data, and so on. InfoPath form rules are the way to implement and enforce custom business logic in your forms, so I advise you to invest some time learning how to build them up and exploring what they can do. The following screenshot shows how to apply InfoPath's built-in e-mail validation rule to the e-mail textbox field. This rule uses a regular expression to ensure that the value entered is in a valid e-mail format, and shows a validation error if it is not.

InfoPath Designer versus InfoPath Filler

You may notice two Microsoft Office InfoPath programs on your computer: Microsoft InfoPath Designer 2010 and Microsoft InfoPath Filler 2010. When creating forms for use in SharePoint, InfoPath Designer 2010 is the application you need to use. InfoPath is a standalone forms technology, and InfoPath Filler 2010 exists to allow users to fill in InfoPath forms without the use of SharePoint. That isn't something we cover in this book, though it's useful to know that you can use it if you need to.


Data Access Using Spring Framework: HibernateTemplate

Packt
12 Oct 2011
4 min read
Even though an ORM framework such as Hibernate hides away most of the complexities related to heterogeneous database systems, the framework has its own infrastructure management requirements, including session and transaction management. Such requirements can be managed well using another framework that uses dependency injection to provide and manage the requirements of Hibernate. Spring Framework is one of the most commonly used frameworks for this purpose. It provides first-class integration with Hibernate through its HibernateTemplate, which is analogous to the JdbcTemplate that is used to integrate with the JDBC API.

In this discussion, the focus will be on using HibernateTemplate to integrate Hibernate with Spring Framework. The first section will introduce you to the HibernateTemplate and detail the requirements for using it. In the second section, I will detail the steps for using Hibernate with Spring Framework. In the final section, a real-world application will be developed using the steps detailed in the second section. This is the outline for this discussion.

HibernateTemplate

Spring Framework provides different approaches to integrate with Hibernate. However, the commonly used approach is HibernateTemplate. There are two main reasons:

Hiding away the session and transaction management details
Providing a template-based approach

The former takes care of infrastructure management, while the latter provides a consistent way to implement the data access layer.

Hiding away the session and transaction management details

The HibernateTemplate class hides away the complexity of managing sessions and transactions while accessing data using Hibernate. One only has to instantiate the HibernateTemplate by passing it an instance of SessionFactory. From there on, the session and transaction related details will be taken care of by Spring Framework. This helps by eliminating the need for infrastructure code that may become cluttered as the complexity of the application increases.

Providing a template-based approach

HibernateTemplate, like JdbcTemplate, provides a template-based approach to data access. Because of this, one needs to follow the approach dictated by the template, and thus consistency is maintained in the code when compared to the traditional way of data access. When you are using HibernateTemplate, you will be working with callbacks. Callbacks are the only mechanism in the templating approach to instruct the template to execute a particular task. The advantage of having a callback is that there is only one entry point into the data access layer, and this entry point is defined by the template, in this case HibernateTemplate, thus providing a consistent approach. In a nutshell, the template-based approach provides consistency and makes the code maintainable.

Now that we have discussed the advantages of using HibernateTemplate, let us proceed to the functionality provided by it. The API it provides can be categorized into the following:

Convenience/helper methods
Template methods

The former relate to the data retrieval and manipulation API of Hibernate, and the latter relate to the session and transaction management API.

Convenience/helper methods

Hibernate has an API that simplifies CRUD (Create, Retrieve, Update, Delete) operations. The helper methods of HibernateTemplate provide a wrapper around these so that the template methods can take care of session and transaction management. As a developer, you will directly make calls to the helper methods without explicitly opening and closing the session.
The helper methods include find(), saveOrUpdate(), delete(), and so on. You can get a complete list from the following link: http://static.springsource.org/spring/docs/2.5.6/api/org/springframework/orm/hibernate3/HibernateTemplate.html

Template methods

Template or callback methods are central to HibernateTemplate, as they streamline the way operations on data are performed. There are four main template methods:

execute
executeWithNativeSession
executeWithNewSession
executeFind

Of these, execute is the most commonly used. To use any of these methods, one just needs to create an instance of HibernateTemplate by passing an instance of SessionFactory to the constructor of HibernateTemplate. All forms of the execute method take an instance of a class implementing HibernateCallback and execute the data access logic contained in that instance.

That brings us to the end of this section. However, before moving on to the steps for using HibernateTemplate, it is important to keep in mind the versions of Spring Framework and Hibernate that you will require in the steps described in the next section. The version of Spring Framework is 2.x and that of Hibernate is 3.x. The important point to remember is that 2.x versions of Spring Framework support Hibernate 3.x versions only. So the libraries that you require for Hibernate 3.x will be needed to make the examples detailed in this discussion work. The same goes for the Spring 2.x dependencies.


Introducing Xcode Tools for iPhone Development

Packt
28 Sep 2011
9 min read
There is a lot of fun stuff to cover, so let's get started.

Development using the Xcode Tools

If you are running Mac OSX 10.5, chances are that your machine already has the Xcode tools installed; these are located within the /Developer/Applications folder. Apple also makes them freely available through the Apple Developer Connection at http://developer.apple.com/. The iPhone SDK includes a suite of development tools to assist you with the development of your iPhone and other iOS device applications. We describe these in the following table.

iPhone SDK Core Components

Xcode: This is the main Integrated Development Environment (IDE) that enables you to manage, edit, and debug your projects.
DashCode: This enables you to develop web-based iPhone and iPad applications, and Dashboard widgets.
iPhone Simulator: The iPhone Simulator is a Cocoa-based application, which provides a software simulator to simulate an iPhone or iPad on your Mac OSX.
Interface Builder: This is the visual editor used to design the user interfaces of your applications.
Instruments: These are the analysis tools, which help you optimize your applications and monitor for memory leaks in real-time.

The Xcode tools require an Intel-based Mac running Mac OS X version 10.6.4 or later in order to function correctly.

Inside Xcode, Cocoa, and Objective-C

Xcode 4 is a complete toolset for building Mac OSX (Cocoa-based) and iOS applications. The new single-windowed development interface has been redesigned to be a lot easier and even more helpful to use than in previous releases. It can now also identify both syntax and logical errors, and will even fix your code for you. It provides you with the tools to enable you to speed up your development process and therefore become more productive. It also takes care of the deployment of both your Mac OSX and iOS applications.

The Integrated Development Environment (IDE) allows you to do the following:

Create and manage projects, including specifying platforms, target requirements, dependencies, and build configurations.
Write code with syntax coloring and automatic indenting.
Navigate and search through the components of a project, including header files and documentation.
Build and run your project.
Debug your project locally, run it within the iOS Simulator, or debug it remotely within a graphical source-level debugger.

Xcode incorporates many new features and improvements apart from the redesigned user interface; it features a new and improved LLVM (Low Level Virtual Machine) compiler, which has been supercharged to run 3 times faster and 2.5 times more efficiently. This new compiler is the next-generation compiler technology designed for high-performance projects and completely supports C, Objective-C, and now C++. It is also incorporated into the Xcode IDE, compiles twice as quickly as GCC, and your applications will run faster. The following list includes the many improvements made to this release:

The interface has been completely redesigned and features a single-window integrated development interface.
Interface Builder has now been fully integrated within the Xcode development IDE.
Code Assistant opens in a second window that shows you the file that you are working on, and can automatically find and open the corresponding header file(s).
Fix-it checks the syntax of your code and validates symbol names as you type. It will highlight any errors that it finds and will even fix them for you.
The new Version Editor works with GIT (free, open source) version control software or Subversion. This will show you the file's entire SCM (software configuration management) history and will even compare any two versions of the file.
The new LLVM 2.0 compiler includes full support for C, Objective-C, and C++.
The LLDB debugger has now been improved to be even faster; it uses less memory than the GDB debugging engine.
The new Xcode 4 development IDE now lets you work on several interdependent projects within the same window. It automatically determines their dependencies so that it builds the projects in the right order.

Xcode allows you to customize an unlimited number of build and debugging tools, and executable packaging. It supports several source-code management tools, namely CVS (version control software, which is an important component of software configuration management (SCM)) and Subversion, which allow you to add files to a repository, commit changes, get updated versions, and compare versions using the Version Editor tool.

The iPhone Simulator

The iPhone Simulator is a very useful tool that enables you to test your applications without using your actual device, whether this is your iPhone or any other iOS device. You do not need to launch this application manually, as this is done when you build and run your application within the Xcode Integrated Development Environment (IDE). Xcode installs your application on the iPhone Simulator for you automatically.

The iPhone Simulator also has the capability of simulating different versions of the iPhone OS, and this can become extremely useful if your application needs to be installed on different iOS platforms, as well as for testing and debugging errors reported in your application when run under different versions of the iOS. While the iPhone Simulator acts as a good test bed for your applications, it is recommended to test your application on the actual device rather than relying on the iPhone Simulator for testing. The iPhone Simulator can be found at the following location: /Developer/Platforms/iPhoneSimulator.Platform/Developer/Applications.

Layers of the iOS Architecture

Apple describes the set of frameworks and technologies that are currently implemented within the iOS operating system as a series of layers. Each of these layers is made up of a variety of different frameworks that can be used and incorporated into your applications.

We shall now go into detail and explain each of the different layers of the iOS architecture; this will give you a better understanding of what is covered within each of the core layers.

The Core OS Layer

This is the bottom layer of the hierarchy and is responsible for the foundation of the operating system, which the other layers sit on top of. This important layer is in charge of managing memory (allocating and releasing memory once it has finished with it), taking care of file system tasks, handling networking, and other operating system tasks. It also interacts directly with the hardware. The Core OS Layer consists of the following components: OS X Kernel, Mach 3.0, BSD, Sockets, Security, Power Management, Keychain, Certificates, File System, and Bonjour.

The Core Services Layer

The Core Services layer provides an abstraction over the services provided in the Core OS layer. It provides fundamental access to the iPhone OS services.
The Core Services Layer consists of the following components: Collections, Address Book, Networking, File Access, SQLite, Core Location, Net Services, Threading, Preferences, and URL Utilities.

The Media Layer

The Media Layer provides the multimedia services that you can use within your iPhone and other iOS devices. It is made up of the following components: Core Audio, OpenGL, Audio Mixing, Audio Recording, Video Playback, image formats (JPG, PNG, and TIFF), PDF, Quartz, Core Animation, and OpenGL ES.

The Cocoa-Touch Layer

The Cocoa-Touch layer provides an abstraction layer to expose the various libraries for programming the iPhone and other iOS devices. You can probably understand why Cocoa-Touch is located at the top of the hierarchy, given its support for Multi-Touch capabilities. The Cocoa-Touch Layer is made up of the following components: Multi-Touch Events, Multi-Touch Controls, Accelerometer/Gyroscope, View Hierarchy, Localization/Geographical, Alerts, Web Views, People Picker, Image Picker, and Controllers.

Understanding Cocoa, the language of the Mac

Cocoa is defined as the development framework used for the development of most native Mac OSX applications. Good examples of Cocoa applications are Mail and TextEdit. This framework consists of a collection of shared object code libraries known as the Cocoa frameworks. It consists of a runtime system and a development environment. These frameworks provide you with a consistent and optimized set of prebuilt code modules that will speed up your development process.

Cocoa provides you with a rich layer of functionality, as well as a comprehensive object-oriented structure and APIs on which you can build your applications. Cocoa uses the Model-View-Controller (MVC) design pattern.

What are Design Patterns?

Design patterns represent specific solutions to problems that arise when developing software within a particular context. They can be either a description or a template for how to go about solving a problem in a variety of different situations.

What is the difference between Cocoa and Cocoa-Touch?

Cocoa-Touch is the programming framework that drives user interaction on iOS. It consists of and uses technology derived from the Cocoa framework, redesigned to handle multi-touch capabilities. The power of the iPhone and its user interface are available to developers through the Cocoa-Touch frameworks.

Cocoa-Touch is built upon the Model-View-Controller structure; it provides a solid, stable foundation for creating mind-blowing applications. Using the Interface Builder developer tool, developers will find it both very easy and fun to use the drag-and-drop method when designing their next great masterpiece application on iOS.

The Model-View-Controller

The Model-View-Controller (or MVC) comprises a logical way of dividing up the code that makes up the GUI (Graphical User Interface) of an application. Object-oriented applications, such as those written in Java and .NET, have adopted the MVC design pattern. The MVC model comprises three distinctive categories:

Model: This part defines your application's underlying data engine. It is responsible for maintaining the integrity of that data.
View: This part defines the user interface for your application and has no explicit knowledge of the origin of the data displayed in that interface. It is made up of windows, controls, and other elements that the user can see and interact with.
Controller: This part acts as a bridge between the model and the view and facilitates updates between them. It binds the model and view together, and its application logic decides how to handle the user's inputs.
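Although Cocoa and Cocoa-Touch applications are written in Objective-C, the MVC division itself is language-agnostic. The following toy sketch (written in Python purely for brevity; all names are illustrative) shows the three roles and how the controller mediates between the model and the view:

class Model:
    """Owns the application data and maintains its integrity."""
    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def items(self):
        return list(self._items)

class View:
    """Displays data; has no knowledge of where the data comes from."""
    def render(self, items):
        for item in items:
            print('-', item)

class Controller:
    """Bridges the model and the view and handles user input."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def user_added(self, item):
        self.model.add(item)                   # update the data
        self.view.render(self.model.items())   # refresh the display

controller = Controller(Model(), View())
controller.user_added('Buy milk')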

Spring Roo 1.1: Working with Roo-generated Web Applications

Packt
27 Sep 2011
3 min read
Adding static views to a Roo-generated web application

A static view in a Spring Web MVC application is a view for which you don't explicitly create a controller class. We saw earlier that a Spring Web MVC application scaffolded by Roo configures static views using the <mvc:view-controller> element of Spring's mvc schema. The static views don't have an explicit controller; behind the scenes, Spring's built-in ParameterizableViewController is used for rendering static views. Here, we will look at the web mvc install view command of Roo, which creates a static view.

Getting ready

Delete the contents of the ch04-recipe sub-directory inside the C:\roo-cookbook directory. Copy the ch04_web-app.roo script into the ch04-recipe directory.

Execute the ch04_web-app.roo script, which creates the flight-app Roo project, sets up Hibernate as the persistence provider, configures MySQL as the database for the application, creates Flight and FlightDescription JPA entities, and defines a many-to-one relationship between the Flight and FlightDescription entities. If you are using a different database than MySQL, or your connection settings are different than what is specified in the script, then modify the script accordingly.

Start the Roo shell from the C:\roo-cookbook\ch04-recipe directory.

Execute the controller all command to create controllers and views corresponding to the JPA entities in the flight-app project, as shown here:

.. roo> controller all --package ~.web

Execute the perform eclipse command to update the project's classpath settings, as shown here:

.. roo> perform eclipse

Now, import the flight-app project into your Eclipse IDE.

How to do it...

To add a static view to a Roo-generated web application, execute the web mvc install view command, as shown here:

.. roo> web mvc install view --path /static/views --viewName help --title Help

How it works...

The web mvc install view command accepts the following arguments:

path: Specifies the sub-folder inside the /WEB-INF/views/ folder in which the view is created.
viewName: The name of the view JSPX file.
title: Specifies the name of the menu option with which the static view is accessible.

As the output from the web mvc install view command suggests, the following actions are taken by Spring Roo in response to executing the command:

Creates the /static/views directory inside the /WEB-INF/views folder. Roo uses the value of the path argument to determine the directory to create.

Creates the help.jspx file inside the /WEB-INF/views/static/views directory. The value of the viewName argument is used as the name of the JSPX file.

Adds a property with the value Help to application.properties; that is, the value of the title argument is used as the value of the newly added property. The property is used by menu.jspx to show a Help menu option. The Help menu option allows access to the newly created help.jspx view.

Creates the /WEB-INF/views/static/views/views.xml tiles definitions XML file, containing a single tiles definition for showing the help.jspx view, as shown here:

<tiles-definitions>
  <definition extends="default" name="static/views/help">
    <put-attribute name="body" value="/WEB-INF/views/static/views/help.jspx"/>
  </definition>
</tiles-definitions>

Adds a <mvc:view-controller> element to webmvc-config.xml to allow accessing the help.jspx view without requiring you to write a controller, as shown here:

<mvc:view-controller path="/static/view/help"/>


Microsoft LightSwitch: Querying and Filtering Data

Packt
16 Sep 2011
5 min read
Querying in LightSwitch

The following figure is based on the one you may review at the link mentioned earlier, and schematically summarizes the architectural details:

Each entity set has a default All and a default Single query, as shown for the entity Category. All entity sets have a Save operation that saves the changes. As defined, the entity sets are queryable, and therefore query operations on these sets are allowed and supported. A query (query operation) requests an entity set (or sets) with optional filtering and sorting, as shown, for example, in a simple, filtered, and sorted query on the Category entity. Queries can be parameterized with one or more parameters, returning single or multiple results (result sets). In addition to the defaults (for example, Category*(SELECT All) and Category), additional filtering and sorting predicates can be defined. Although queries are based on LINQ, not all of the IQueryable LINQ operations are supported.

The query passes through the following steps (the pipeline) before the results are returned:

Pre-processing:
CanExecute—called to determine if this operation may be called or not
Executing—called before the query is processed
Pre-process query expression—builds up the final query expression

Execution:
LightSwitch passes the query expression to the data provider for execution

Post-processing:
Executed—after the query is processed but before returning the results
ExecuteFailed—if the query operation failed

Querying a single entity

We will start off creating a Visual Studio LightSwitch project, LSQueries6, using the Visual Basic template as shown (the same can be carried out with a C# template). We will attach this application to the SQL Server Express server's Northwind database and bring in the Products (table) entity. We will create a screen, EditableProductList, which brings up all the data in the Products entity as shown in the previous screenshot. The above screen was created using the Editable Grid Screen template, as shown next, with the source of data being the Products entity.

We see that the EditableProductList screen is displaying all columns, including those of discontinued items, and it is editable, as seen by the controls on the displayed screen. This is equivalent to the SQL query SELECT * FROM Products, as far as display is concerned.

Filtering and sorting the data

Often you do not need all the columns, but only a few columns of importance for your immediate needs. Besides being sufficient, this enormously reduces the cost of running a query. What do you do to achieve this? Of course, you filter the data by posing a query to the entity. Let us now say we want a product listing with ProductID and ProductName, excluding the discontinued items. We also need the list sorted. In SQL syntax, this reduces to:

SELECT [Product List].ProductID, [Product List].ProductName
FROM Products AS [Product List]
WHERE ((([Product List].Discontinued) = 0))
ORDER BY [Product List].ProductName;

This is a typical filtering of data followed by sorting of the filtered data.

Filtering the data

In LightSwitch, this filtering is carried out as shown in the following steps:

Click on the Query menu item in the LSQueries Designer as shown:

The Designer (short for Query Designer) pops up, and the following changes are made in the IDE: a default Query1 gets added to the Products entity on which it is based, as shown; the Query1 property window is displayed; and the Query Designer window is displayed.
Query1 can be renamed in its Properties window (this will be renamed as Product List). The query target is the Products table, and the return type is Product. As you can see, Microsoft has provided all the necessary basic querying in this designer. If the query has to be changed to something more complicated, the Edit Additional Query Code link can be clicked to access the ProductListDataService, as shown:

Well, this is not a SQL query but a LINQ query working in the IDE. We know that entities are not just for relational data, and this makes perfect sense because of the known advantages of LINQ for queries (review the following link: http://msdn.microsoft.com/en-us/library/bb425822.aspx). One of the main advantages is that you can write the query in VB or C#, and the DataContext, the main player, takes it to SQL and runs queries that SQL databases understand. It's more like a language translation for queries, with many more advantages than the one mentioned.

Hover over Add Filter to review what this will do, as shown:

This control will add a new filter condition. Note that Query1 has been renamed (right-click on Query1 and choose Rename) to ProductList. Click on the Add Filter button. The Filter area changes to display the following:

The first field in the entity will come up by default as the filtered field for the 'Where' clause. The GUI is helping to build up "Where CategoryID = ". However, as you can see from the composite screenshot (four screens were integrated to create this screenshot), built from using all the drop-down options, you can indeed filter any of the columns and choose any of the built-in criteria. Depending on the choice, you can also add parameter(s) with this UI. For the particular SQL query we started with, choose the drop-down as shown. Notice that LightSwitch was intelligent enough to get the right data type of value for the Boolean field Discontinued. You also have an icon (in red, to the left of Where) to click on should you desire to delete the query.

Add a Search Data Screen using the previous query as the source, by providing the following information to the screen designer (associating the ProductList query with the Screen Data). This screen, when displayed, shows all products that are not discontinued, as shown. The Discontinued column has been dragged to the position shown in the displayed screen.