How-To Tutorials - Programming

Setting Up CiviCRM

Packt · 19 Aug 2013 · 11 min read

Setting up a CiviCRM theme in Drupal

CiviCRM administration screens take up a lot of browser real estate. How CiviCRM looks is determined by the theme you are using in your CMS. Problems arise when you use your main website theme to display CiviCRM pages: all the customizations, blocks of information, and layouts suddenly get in the way when you want to administer CiviCRM. The trick is to use a different theme for CiviCRM.

How to do it…

This is very easy to accomplish, and just uses a configuration screen in Drupal:

1. Make sure you have the CiviCRM theme module enabled.
2. Navigate to admin/appearance in Drupal by clicking on the Appearance button. This page shows the themes that are currently installed within the CMS (in this case, Drupal). Make sure that any themes you wish to use are enabled.
3. At the foot of the screen, configure the CiviCRM administration theme.

How it works…

Drupal uses the page URL to check whether you are administering CiviCRM. If you are, the pages are displayed using the CiviCRM administration theme. It's a good idea to select a flexible-width theme with sidebars; Garland is a good example. The flexible width accommodates CiviCRM displays nicely.

Once the administration theme is selected, navigate to admin/structure/blocks. Here you will see various blocks provided by the CiviCRM module. You can now place these blocks within your administrative theme. Pay special attention to the visibility settings for these blocks, so that they only appear when using CiviCRM.

There's more…

In Drupal, there is an additional setting that controls which theme is used to display public CiviCRM pages, for example, event sign-up pages.

See also

You can explore hundreds of contributed Drupal themes at http://drupal.org/project/themes

Setting up cron using cPanel

Cron is a time-based scheduler that is used extensively throughout CiviCRM. For example, you might want to use CiviCRM to send out an e-mail newsletter at a particular time, or to send participants a reminder to attend an event. CiviCRM has settings to accomplish all these tasks, but these, in turn, rely on having a "master" cron job set up. Cron is set up on your web server, not within CiviCRM.

How to do it…

There are many different ways of setting up cron, depending on your site-hosting setup. In this example, we are using cPanel, a popular control panel that simplifies website administration.

1. Make a note of your CMS site administrator username and password.
2. Make a note of your CiviCRM site key, a long string of characters used to uniquely identify your CiviCRM installation. It is automatically generated when CiviCRM is installed, and is stored in the civicrm.settings.php file.
3. Using a text editor, open the CiviCRM settings file located at /sites/default/civicrm.settings.php. Around line 170, you will see the following entry:

    define( 'CIVICRM_SITE_KEY', '7409e83819379dc5646783f34f9753d9' );

Make a note of this key.
4. Log in to cPanel and use the cPanel File Manager to explore the folders and files that are stored there. You are going to create a file that contains all the necessary information for cron to work. You can choose to create the cron file anywhere you like; it makes sense to keep it in the home directory of your web server, that is, the first directory you get to once you start exploring. Create a file called CiviCron.php. The naming does not particularly matter, but it must be a PHP file.
5. Insert the following code:

    <?php
    // create a new cURL resource
    $ch = curl_init();
    // set URL and other appropriate options
    curl_setopt($ch, CURLOPT_URL, "http://myDrupalsite.com/sites/all/modules/civicrm/bin/cron.php?name=admin&pass=adminpassword&key=01504c43af550a317f3c6495c2442ab7");
    curl_setopt($ch, CURLOPT_HEADER, 0);
    // grab URL and pass it to the browser
    curl_exec($ch);
    curl_close($ch);
    ?>

Substitute http://myDrupalsite.com with your own domain, admin with your own CMS admin username, adminpassword with your own CMS admin password, and the key value with the site key from civicrm.settings.php.
6. Save this file and then navigate to cron in cPanel.
7. Select an appropriate cron interval from the Common Settings list. Choosing an appropriate cron interval may take some experimentation, depending on how your site is set up.
8. In the Command field, enter the following:

    php /home/site_account_name/public_html/CiviCron.php

The portion after php is the absolute path to the CiviCron.php file you created in step 4.
9. Click on Add New Cron Job.

How it works…

All cron does is execute the URL that is constructed in the cron file. The following piece of code does the work:

    curl_setopt($ch, CURLOPT_URL, "http://myDrupalsite.com/sites/all/modules/civicrm/bin/cron.php?name=admin&pass=adminpassword&key=01504c43af550a317f3c6495c2442ab7");

The URL contains the permission information (the username, the password, and the site key) needed to execute the cron.php file provided by the CiviCRM module. Getting cron to work is critical to getting CiviCRM working properly. If you get into difficulties with it, the best solution is to contact your hosting company and seek guidance.

To test that your cron job is actually working, set the cPanel cron screen to send you an e-mail each time the cron command is run. The e-mail will contain an error message if the cron fails. Failures are generally due to an incorrect path, or a permissions problem with the username, password, or site key.

Adding items to the CiviCRM navigation menu

As you begin to use CiviCRM, you will want to provide administrative shortcuts. You can do this by adding custom menu blocks within your CMS or by editing the navigation menu in CiviCRM.

How to do it…

CiviCRM has a fully customizable navigation menu. You can edit this menu to get one-click access to the features you use most.

1. Navigate to a page that you want to use as the link destination for a menu item. For example, you could navigate to Contacts | Manage Groups, and then select a suitable group.
2. Copy the page URL from the browser location bar. In this example, it would be as follows:

    civicrm/group/search?reset=1&force=1&context=smog&gid=2

3. Navigate to Administer | Customize Data and Screens | Navigation Menu. This displays the CiviCRM navigation menu in tree form. Click on the left arrow on each parent menu item to expand it. You can now explore all the child menu items.
4. Click on the Add Menu Item button at the top of this screen. This brings up the Add Menu Item edit screen.
5. Enter the name of the menu item in the Title field.
6. Enter the URL (that you copied) into the URL field.
7. Select a parent to make the menu item appear as the child of another menu item. If you don't select a parent, the item will appear on the main CiviCRM menu bar.
8. Select one or more permissions in the Permission field to control who can use the menu item.
These are CMS permissions, so make sure they are set correctly in your CMS for the menu item to behave properly.

How it works…

CiviCRM stores new menu items, and displays them according to where they are placed in the menu tree and what permissions a user has to use them.

See also

You can fully explore CiviCRM customization at http://book.civicrm.org/user/current/initial-set-up/customizing-the-user-interface/

Refreshing the dashboard

By default, CiviCRM sets the auto-refresh period for the home page dashboard to 1 hour. In a busy setting this is too long, and you constantly have to click on the Refresh Dashboard data button to bring the information on the dashboard up to date.

How to do it…

Changing the setting is simply a matter of visiting the CiviCRM administration pages:

1. Navigate to Administer | System Settings | Undelete, Logging and ReCAPTCHA.
2. Change the Dashboard cache timeout value from the default 1440 to a smaller figure.

Changing display preferences

By default, CiviCRM displays a lot of data on the contact summary screen. Sometimes this can lead to a cluttered display that is hard to use and slow to load.

How to do it…

CiviCRM components can add to the clutter on the screen. Here we disable unwanted components and then fine-tune the display of other elements in the contact summary screen:

1. Navigate to Administer | System Settings | Enable CiviCRM Components, and disable any unused CiviCRM components.
2. Navigate to Administer | Customize Data and Screens | Display Preferences.
3. Control which tabs are displayed in the detail screen (for each contact), using the checkboxes.
4. Control which sections you want to see when editing an individual contact, by checking the checkboxes in the Editing Contacts section. Drag the double-arrow icon to move the sections up and down the contact editing screen.

See also

You can fully explore the display preferences at http://book.civicrm.org/user/current/initial-set-up/customizing-the-user-interface/

Replacing words

This is useful for fine-tuning your website. For example, you could replace US spelling with UK spelling (thus avoiding installing the UK language translation), or you might want to change the wording on parts of a standard form without having to make a custom template.

How to do it…

The words, or sentences, that we want to replace are called strings. In CiviCRM, we can enter the strings we don't want and replace them with strings we do want.

1. Navigate to Administer | Customize Data and Screens | Word Replacement.
2. In this example, we are replacing the US spelling of "Organization" with the UK spelling, "Organisation".
3. Use the Exact Match checkbox to match words precisely; this excludes plurals of the word from being matched. All word replacements are case sensitive.

Setting up geocoding

Geocoding allows you to do location-based searching and to display maps of contacts.

How to do it…

You need to set a mapping provider, that is, a service that will provide you with the visual maps, and a geocoding provider, which will translate your contact addresses into latitude and longitude coordinates.

1. Navigate to Administer | Localization | Address Settings. In Address Display, make sure that the Street Address Parsing checkbox is ticked.
2. Navigate to Administer | System Settings | Mapping and Geocoding. Set Mapping Provider to Google or OpenStreetMap. Set Geocoding Provider to Google.
3. Navigate to Administer | System Settings | Scheduled Jobs.
The Geocode and Parse Addresses scheduled job should now be enabled. You can set how regularly you want CiviCRM to geocode your address data.

How it works…

The geocoding provider finds latitude and longitude coordinates for each contact address. The mapping provider uses this information to draw a local map, with a pointer for the contact. The Geocode and Parse Addresses job does the geocoding work each day, though you can change this in the settings.

There's more…

Google currently limits geocoding requests to 2,500 per 24 hours. If you exceed this limit, Google may not process your requests; it may even restrict access to its geocoding service should you continue to break the limit. This is a problem when you have thousands of addresses to process, for example, after a big import of address data. CiviCRM does not have a tool to place a daily limit on the number of contacts that are processed, but you can pass parameters to the Geocode and Parse Addresses scheduled job that provide a range of contact IDs to process. You would have to change this each day to work your way through all your contacts.

Navigate to Administer | System Settings | Scheduled Jobs, and edit the Geocode and Parse Addresses scheduled job. In the Command Parameters box, enter:

    start=1
    end=2500

Here, 1 would be the ID of your first contact. If you have access to your database tables, check the civicrm_contact table to see what the first contact ID actually is.

See also

Further details about geocoding in CiviCRM are available at http://wiki.civicrm.org/confluence/display/CRMDOC43/Mapping+and+Geocoding

Cache replication

Packt · 19 Aug 2013 · 5 min read

Ehcache replication using RMI

The Ehcache framework provides RMI (Remote Method Invocation) based cache replication across the cluster; it is the default implementation for replication. RMI-based replication works on the TCP protocol, and cached resources are transferred using Java's serialization and deserialization mechanism. RMI is a point-to-point protocol, so it generates a lot of network traffic between clustered nodes: each node connects to every other node in the cluster and sends cache replication messages. Liferay provides Ehcache replication configuration files in the bundle, and we can re-use them to set up Ehcache replication using RMI. Let's learn how to configure it for our cluster:

1. Stop both the Liferay Portal nodes if they are running.
2. Add the following properties to the portal-ext.properties file of both the Liferay Portal nodes:

    net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml
    net.sf.ehcache.configurationResourceName.peerProviderProperties=peerDiscovery=automatic,multicastGroupAddress=${multicast.group.address["hibernate"]},multicastGroupPort=${multicast.group.port["hibernate"]},timeToLive=1
    ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml
    ehcache.multi.vm.config.location.peerProviderProperties=peerDiscovery=automatic,multicastGroupAddress=${multicast.group.address["multi-vm"]},multicastGroupPort=${multicast.group.port["multi-vm"]},timeToLive=1
    multicast.group.address["hibernate"]=233.0.0.4
    multicast.group.port["hibernate"]=23304
    multicast.group.address["multi-vm"]=233.0.0.5
    multicast.group.port["multi-vm"]=23305

3. Now restart both the Liferay Portal nodes.

Liferay Portal uses two separate Ehcache configurations, one for the Hibernate cache and one for the Liferay service layer cache, and it ships with two different sets of configuration files for them. By default, it uses the non-replicated version of the cache files. Through the portal-ext.properties file, we can tell Liferay to use the replicated cache configuration files instead. In the preceding steps, we configured the replicated version of the cache files for both the Hibernate and the service layer caches using the net.sf.ehcache.configurationResourceName and ehcache.multi.vm.config.location properties. The replicated Ehcache configuration files internally use IP multicast to establish the RMI connections between the Liferay nodes; the remaining properties configure the IP multicast groups and ports used for establishing those connections.

Ehcache replication using JGroups

Another option for replicating Ehcache is JGroups, a powerful framework for multicast communication. The Ehcache framework supports replication using JGroups and, similar to the RMI-based Ehcache replication, Liferay supports JGroups-based replication too. Let's learn how to configure it:

1. Stop both the Liferay Portal nodes if they are running.
2. Add the following properties to the portal-ext.properties file of both the Liferay Portal nodes:

    ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml
    ehcache.multi.vm.config.location.peerProviderProperties=connect=UDP(mcast_addr=multicast.group.address["hibernate"];mcast_port=multicast.group.port["hibernate"];):PING:MERGE2:FD_SOCK:VERIFY_SUSPECT:pbcast.NAKACK:UNICAST:pbcast.STABLE:FRAG:pbcast.GMS
    ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
    ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
    net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml
    net.sf.ehcache.configurationResourceName.peerProviderProperties=peerDiscovery=connect=UDP(mcast_addr=multicast.group.address["multi-vm"];mcast_port=multicast.group.port["multi-vm"];):PING:MERGE2:FD_SOCK:VERIFY_SUSPECT:pbcast.NAKACK:UNICAST:pbcast.STABLE:FRAG:pbcast.GMS
    multicast.group.address["hibernate"]=233.0.0.4
    multicast.group.port["hibernate"]=23304
    multicast.group.address["multi-vm"]=233.0.0.5
    multicast.group.port["multi-vm"]=23305

3. Now restart both the nodes one by one to activate the preceding configuration.

This configuration is very similar to the RMI-based replication. Here, we used the UDP protocol to connect the Liferay Portal nodes; with this option, the nodes also discover each other using IP multicast.

Ehcache replication using Cluster Link

We have learned about the JGroups- and RMI-based Ehcache replication. The Liferay Enterprise version includes another powerful feature, Cluster Link, which provides its own Ehcache replication mechanism. Internally, this feature uses JGroups to replicate the cache across the network. Let's go through the steps to configure it:

1. Stop both the Liferay Portal nodes if they are running.
2. Deploy the ehcache-cluster-web enterprise plugin on both the Liferay Portal servers.
3. Edit portal-ext.properties on both the nodes:

    cluster.link.enabled=true
    ehcache.cluster.link.replication.enabled=true
    net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml
    ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml

4. Now restart both the Liferay Portal servers to activate this configuration.

Unlike the JGroups- or RMI-based Ehcache replication, this option centralizes all Ehcache changes in one place and then distributes the changes to all the nodes of the cluster, which reduces unnecessary network transfers. This option is only available in the Liferay Enterprise version; hence, the preceding steps apply only if you are using the Liferay Enterprise version.

Ehcache clustering best practices

We have talked about different options for configuring Ehcache replication. Let's look at the best practices related to it:

- If there are more than two nodes in the cluster, it is recommended to use either Cluster Link- or JGroups-based replication.
- If you are using the Liferay Enterprise edition, it is recommended to use Cluster Link for Ehcache replication.
- All three options discussed previously use IP multicast for establishing connections with the other nodes. The IP multicast technique uses a group IP and port to discover the other nodes in the same group. It is very important to ensure that the same IP and port are used by all the nodes of the same cluster.
- It is advisable to keep the group IP and port different for the development, testing, and staging environments, to make sure that nodes from other environments do not pair up with the production environment.
- Cluster Link provides up to 10 transport channels to transfer cached resources across the cluster. If the application is expected to have a huge cache with frequent cache changes, it is advisable to configure multiple transport channels using the cluster.link.channel.properties.transport configuration property.

Summary

We learned about various configuration options for implementing clustering.

Resources for Article:

Further resources on this subject:
- Liferay, its Installation and setup [Article]
- Setting up and Configuring a Liferay Portal [Article]
- Vaadin Portlets in Liferay User Interface Development [Article]

BizTalk: The ESB Management Portal

Packt · 19 Aug 2013 · 6 min read

Registering services in UDDI

Thanks to the ESB Toolkit, we can easily populate our organization's services registry in UDDI with the services that interact with the ESB, either because the ESB exposes them or because they can be consumed through it. Before we can register services in UDDI, we must first configure the registry settings.

Registry settings

The registry settings control how the UDDI registration functionality described in this section behaves:

- UDDI Server: This sets the URL of the UDDI server.
- Auto Publish: When enabled, any registry request is published automatically. If it's disabled, requests require administrative approval.
- Anonymous: This setting indicates whether to connect to the UDDI server anonymously or to use the UDDI Publisher Service account.
- Notification Enabled: This enables or disables the delivery of notifications when any registry activity occurs on the portal.
- SMTP Server: This is the address of the SMTP server that will send notification e-mail messages.
- Notification E-Mail: This is the e-mail address to which endpoint update notification e-mail messages are sent.
- E-Mail From Address: This is the address that will show up as the sender in notification messages.
- E-Mail Subject: This is the text to display in the subject line of notification e-mail messages.
- E-Mail Body: This is the text for the body of notification e-mail messages.
- Contact Name: This is the name of the UDDI administrator to notify of endpoint update requests.
- Contact E-Mail: This is the e-mail address of the UDDI administrator for notifications of endpoint update requests.

In the ESB Management Portal, the top menu contains an entry that takes us to the Registry functionality. On this view, we can directly register a service in UDDI. To do this, we first have to search for the endpoint that we want to publish. These can be endpoints of services that the ESB consumes through send ports, or endpoints of services that the ESB exposes through receive locations.

As an example, we will publish one of the services exposed by the ESB through the GlobalBank.ESB sample application that comes with the ESB Toolkit. First, we search the New Registry Entry page for the endpoints in the GlobalBank.ESB application. Once we get the results, we click on the Publish link of the DynamicResolutionReqResp_SOAP endpoint, which actually exposes the /ESB.NorthAmericanServices/CustomerOrder.asmx service. We are then presented with a screen where we can fill in further details about the service registry entry, such as the service provider under which we want to publish the service (we can even create a new service provider, which will be registered in UDDI as well).

After clicking on the Publish button at the bottom of the page, we are directed back to the New Registry Entry screen, where we can filter again and see that our new registry entry is in Pending status, as it needs to be approved by an administrator. We can access the Manage Pending Requests module through the corresponding submenu under the top-level Registry menu. There we can see whether any new registry entries are pending approval. Using the buttons to the left of each item, we can view the details of a request, edit it, and approve or delete it.
Once we approve the request, we receive a confirmation message on the portal telling us that it was approved. We can then go to the UDDI portal and look for the service provider that we just created, where we will see that our service got registered, with its corresponding properties. With these simple steps, we can easily build our own services registry in UDDI from the services our organization already has, so that the ESB, or any other system, can discover the services and know how to consume them.

Understanding the Audit Log

The Audit Log is a small reporting feature that provides information about the status of messages that have been resubmitted to the ESB through the resubmission module. We can access it through the Manage Audit Log menu. We are presented with a list of the messages that were resubmitted, whether each was resubmitted successfully or not, and even the actual message that was resubmitted, as the message could have been modified before being resubmitted.

Fault Settings

On the Fault Settings page we can specify:

- Audit Options: The types of events that we want to audit:
  - Audit Save: When a message associated with a fault is saved.
  - Audit Successful Resubmit: When a message is successfully resubmitted.
  - Audit Unsuccessful Resubmit: When the resubmission of a message fails.
- Alert Queue Options: Here we can enable or disable the queuing of the notifications generated when a fault message is published to the portal.
- Alert Email Options: Here we can enable and configure the service that sends e-mail notifications once fault messages are published to the portal. The three most important settings in this section are:
  - Email Server: The e-mail server that will actually be used to send the e-mails.
  - Email From Address: The address that will show up as the sender of the e-mails.
  - Email XSLT File Absolute Path: The XSLT transformation sheet that will be used to format the e-mails. The ESB Toolkit provides one, but we can customize it or create our own sheet according to our requirements.

Summary

In this article, we discussed the additional features of the ESB Management Portal. We learned about the registry settings, which are used for configuring the UDDI integration and setting up e-mail notifications. We also learned how to configure the fault settings and how to use the Audit Log feature.

Resources for Article:

Further resources on this subject:
- Microsoft BizTalk Server 2010 Patterns: Operating BizTalk [Article]
- Setting up a BizTalk Server Environment [Article]
- Communicating from Dynamics CRM to BizTalk Server [Article]

Building UI with XAML for Windows 8 Using C++

Packt · 16 Aug 2013 · 35 min read

XAML

C++ Store applications typically use eXtensible Application Markup Language (XAML) as the main language for creating the user interface. The first question that comes to mind when XAML is first mentioned is: why? What's wrong with C++, or any other existing programming language? XAML is an XML-based language that describes the what, not the how; it's declarative and neutral. Technically, a complete app can be written without any XAML; there's nothing XAML can do that C++ can't. Here are some reasons why XAML makes sense (or at least may make sense in a little bit):

- C++ is very verbose as opposed to XAML. XAML is usually shorter than the equivalent C++ code.
- Since XAML is neutral, design-oriented tools can read and manipulate it. Microsoft provides the Expression Blend tool just for this purpose.
- The declarative nature of XAML makes it easier (most of the time, after users get used to it) to build user interfaces, as these have a tree-like structure, just like XML.

XAML itself has nothing to do with the user interface as such. XAML is a way to create objects (usually an object tree) and set their properties. This works for any type that is "XAML friendly", meaning it should have the following:

- A default public constructor
- Settable public properties

The second point is not a strict requirement, but without properties the object is pretty dull.

XAML was originally created for Windows Presentation Foundation (WPF), the main rich client technology in .NET. It's now leveraged in other technologies, mostly in the .NET space, such as Silverlight and Windows Workflow Foundation (WF). The XAML level currently implemented in WinRT is roughly equivalent to Silverlight 3 XAML; in particular, it's not as powerful as WPF's XAML.

XAML basics

XAML has a few rules. Once we understand those rules, we can read and write any XAML. The most fundamental XAML rules are as follows:

- An XML element means object creation
- An XML attribute means setting a property (or an event handler)

With these two rules, the following markup means creating a Button object and setting its Content property to the string "Click me!":

    <Button Content="Click me!" />

The equivalent C++ code would be as follows:

    auto b = ref new Button;
    b->Content = "Click me!";

When creating a new Blank App project, a MainPage.xaml file is created along with the header and implementation files. Here's how that XAML file looks:

    <Page
        x:Class="BasicXaml.MainPage"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="using:BasicXaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        mc:Ignorable="d">
        <Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}">
        </Grid>
    </Page>

It's worth going over these lines in detail. In this example, the project name is BasicXaml. The root element is Page, and an x:Class attribute is set, indicating the class that inherits from Page, here named BasicXaml::MainPage. Note that the class name is the full name including the namespace, where the separator must be a period (not the C++ scope resolution operator ::). x:Class can only be placed on the root element.

What follows the root element is a bunch of XML namespace declarations. These give context to the elements used in the entire XAML of this page. The default XML namespace (without a name) indicates to the XAML parser that types such as Page, Button, and Grid can be written as they are, without any special prefix. This is the most common scenario, because most of the XAML in a page constitutes user interface elements.

The next XML namespace prefix is x, and it points to special instructions for the XAML parser.
We have just seen x:Class in action; we'll meet other such attributes later in this article.

Next up is a prefix named local, which points to the types declared in the BasicXaml namespace. This allows creating our own objects in XAML; the prefix for such types must be local, so that the XAML parser understands where to look for the type (of course, we can change that prefix to anything we like). For example, suppose we create a user control derived type named MyControl. To create a MyControl instance in XAML, we could use the following markup:

    <local:MyControl />

The d prefix is used for designer-related attributes, mostly useful with Expression Blend. The mc:Ignorable attribute states that the d prefix should be ignored by the XAML parser (because it relates to the way Blend works with the XAML).

The Grid element is hosted inside the Page, where "hosted" will become clear in a moment. Its Background property is set to {StaticResource ApplicationPageBackgroundThemeBrush}. This is a markup extension, discussed in a later section of this article.

XAML is unable to invoke methods directly; it can just set properties. This is understandable, as XAML needs to remain declarative in nature; it's not meant as a replacement for C++ or any other programming language.

Type converters

XML deals with strings. However, it's clear that many properties are not strings. Many can still be specified as strings and work correctly, thanks to the type converters employed by the XAML parser. Here's an example of a Rectangle element:

    <Rectangle Fill="Red" />

Presumably, the Fill property is not of a string type. In fact, it's a Brush. Red here really means ref new SolidColorBrush(Colors::Red). The XAML parser knows how to translate a string such as Red (and many others) to a Brush type (in this case, the more specific SolidColorBrush). Type converters are just one aspect of XAML that makes it more succinct than the equivalent C++ code.

Complex properties

As we've seen, setting properties is done via XML attributes. What about complex properties that cannot be expressed as a string and don't have type converters? In this case, an extended syntax (property element syntax) is used to set the property. Here's an example:

    <Rectangle Fill="Red">
        <Rectangle.RenderTransform>
            <RotateTransform Angle="45" />
        </Rectangle.RenderTransform>
    </Rectangle>

Setting the RenderTransform property cannot be done with a simple string; it must be an object derived from the Transform class (RotateTransform in this case). The preceding markup is equivalent to the following C++ code:

    auto r = ref new Rectangle;
    r->Fill = ref new SolidColorBrush(Colors::Red);
    auto rotate = ref new RotateTransform();
    rotate->Angle = 45;
    r->RenderTransform = rotate;

Dependency properties and attached properties

Most properties on various elements and controls are not normal, in the sense that they are not simple wrappers around private fields. It's important to realize that there is no difference in XAML between a dependency property and a regular property; the syntax is the same. In fact, there is no way to tell whether a certain property is a dependency property or not just by looking at its use in XAML. Dependency properties provide the following features:

- Change notifications when the property value changes
- Visual inheritance for certain properties (mostly the font-related properties)
- Multiple providers that may affect the final value (one wins out)
- Memory conservation (a value is not allocated unless changed)

Some WinRT features, such as data binding, styles, and animations, depend on that support.
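To make the dependency property machinery concrete, here is a minimal sketch of how such a property is typically registered in C++/CX. The MyControl type and its Angle property are hypothetical illustrations, not code from this article:

```cpp
// In the header: a hypothetical control exposing a dependency property.
public ref class MyControl sealed : public Windows::UI::Xaml::Controls::Control
{
public:
    // Ordinary-looking wrapper; XAML and code both go through GetValue/SetValue.
    property double Angle
    {
        double get() { return safe_cast<double>(GetValue(_angleProperty)); }
        void set(double value) { SetValue(_angleProperty, value); }
    }
private:
    static Windows::UI::Xaml::DependencyProperty^ _angleProperty;
};

// In the .cpp file: the registration call is what supplies the default value
// and the change-notification support listed above.
Windows::UI::Xaml::DependencyProperty^ MyControl::_angleProperty =
    Windows::UI::Xaml::DependencyProperty::Register(
        "Angle", double::typeid, MyControl::typeid,
        ref new Windows::UI::Xaml::PropertyMetadata(0.0));
```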
Another kind of dependency property is the attached property. An attached property is contextual: it's defined by one type, but can be used by any type that inherits from DependencyObject (as all elements and controls do). Since this kind of property is not defined by the object it's used on, it merits a special syntax in XAML. The following is an example of a Canvas panel that holds two elements:

    <Canvas>
        <Rectangle Fill="Red" Canvas.Left="120" Canvas.Top="40"
                   Width="100" Height="50" />
        <Ellipse Fill="Blue" Canvas.Left="30" Canvas.Top="90"
                 Width="80" Height="80" />
    </Canvas>

Canvas.Left and Canvas.Top are attached properties. They are defined by the Canvas class, but here they are attached to the Rectangle and Ellipse elements. Attached properties only have meaning in certain scenarios. In this case, they indicate the exact position of the elements within the canvas; the canvas is the one that looks for these properties in the layout phase. This means that if those same elements were placed in, say, a Grid, those properties would have no effect, because nothing is interested in them (there is no harm in having them, however). Attached properties can be thought of as dynamic properties that may or may not be set on objects.

Setting an attached property in code is a little verbose. Here's the equivalent C++ code for setting the Canvas.Left and Canvas.Top properties on an element named _myrect:

    Canvas::SetLeft(_myrect, 120);
    Canvas::SetTop(_myrect, 40);

Content properties

The relationship between a Page object and a Grid object is not obvious. The Grid seems to be inside the Page, but how does that translate to code? The Page/Grid markup can be summed up as follows (ignoring the detailed markup):

    <Page>
        <Grid Background="...">
        </Grid>
    </Page>

This is actually a shortcut for the following markup:

    <Page>
        <Page.Content>
            <Grid Background="...">
            </Grid>
        </Page.Content>
    </Page>

This means the Grid object is set as the Content property of the Page object; now the relationship is clear. The XAML parser considers certain properties (no more than one per type hierarchy) as the default, or content, property. It doesn't have to be named Content, but it is in the Page case. This attribute is specified in the control's metadata using the Windows::UI::Xaml::Markup::ContentPropertyAttribute class. Looking at the Visual Studio object browser for the Page class shows no such attribute, but Page inherits from UserControl, and navigating to UserControl, we can see the attribute set there.

Attributes are a way to extend the metadata for a type declaratively. They can be applied in C++/CX by writing an attribute type name in square brackets before the item the attribute applies to (which can be a class, interface, method, property, or other code element). An attribute class must derive from Platform::Metadata::Attribute to be considered as such by the compiler.
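As an illustration, here is a hedged sketch of marking a content property on one's own type. The SimplePanel control and its Children property are hypothetical, and the named-field attribute syntax shown is an assumption based on how WinRT attributes are typically applied:

```cpp
#include <collection.h>  // Platform::Collections::Vector

using namespace Windows::Foundation::Collections;
using namespace Windows::UI::Xaml;
using namespace Windows::UI::Xaml::Markup;

// Hypothetical control: the ContentProperty attribute (set through its Name
// field) tells the XAML parser which property child elements go into, so
// <local:SimplePanel><Button/></local:SimplePanel> would fill Children.
[ContentProperty(Name = "Children")]
public ref class SimplePanel sealed : Windows::UI::Xaml::Controls::UserControl
{
public:
    SimplePanel() : _children(ref new Platform::Collections::Vector<UIElement^>()) {}

    property IVector<UIElement^>^ Children
    {
        IVector<UIElement^>^ get() { return _children; }
    }
private:
    Platform::Collections::Vector<UIElement^>^ _children;
};
```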
Some of the common content properties in WinRT types are as follows:

- Content of ContentControl (and all derived types)
- Content of UserControl
- Children of Panel (the base class for all layout containers)
- Items of ItemsControl (the base class for collection-based controls)
- GradientStops of GradientBrush (the base class of LinearGradientBrush)

Collection properties

Some properties are collections (of type IVector<T> or IMap<K,V>, for instance). Such properties can be filled with objects, and the XAML parser will call the IVector<T>::Append or IMap<K,V>::Insert methods. Here's an example for a LinearGradientBrush:

    <Rectangle>
        <Rectangle.Fill>
            <LinearGradientBrush EndPoint="1,0">
                <GradientStop Offset="0" Color="Red" />
                <GradientStop Offset=".5" Color="Yellow" />
                <GradientStop Offset="1" Color="Blue" />
            </LinearGradientBrush>
        </Rectangle.Fill>
    </Rectangle>

Two rules are at work here. The first is the content property of LinearGradientBrush (GradientStops), which need not be specified. It's of the type GradientStopCollection, which implements IVector<GradientStop>, and is thus eligible for automatic appending. This is equivalent to the following code:

    auto r = ref new Rectangle;
    auto brush = ref new LinearGradientBrush;
    brush->EndPoint = Point(1.0, 0);
    auto stop = ref new GradientStop;
    stop->Offset = 0; stop->Color = Colors::Red;
    brush->GradientStops->Append(stop);
    stop = ref new GradientStop;
    stop->Offset = 0.5; stop->Color = Colors::Yellow;
    brush->GradientStops->Append(stop);
    stop = ref new GradientStop;
    stop->Offset = 1; stop->Color = Colors::Blue;
    brush->GradientStops->Append(stop);
    r->Fill = brush;

This is perhaps the first clear sign of XAML's syntax advantage over C++. In the case of IMap<K,V>, an attribute named x:Key must be set on each item to indicate the key passed to the IMap<K,V>::Insert method. We'll see an example of such a map later in this article, when we discuss resources.

Markup extensions

Markup extensions are special instructions to the XAML parser that provide ways of expressing things beyond object creation or setting properties. These instructions are still declarative in nature, but their code equivalent usually entails calling methods, which is not directly possible in XAML. Markup extensions are placed inside curly braces as property values. They may contain arguments and properties, as we'll see in later chapters. The only markup extension used by default in a blank page is {StaticResource}, which is discussed later in this article.

WPF and Silverlight 5 allow developers to create custom markup extensions by deriving classes from MarkupExtension; this capability is unavailable in the current WinRT implementation.

One simple example of a markup extension is {x:Null}. It is used in XAML whenever the value nullptr needs to be specified, as there's no better way to express that with a string. The following example makes a hole in the Rectangle element:

    <Rectangle Stroke="Red" StrokeThickness="10" Fill="{x:Null}" />

Naming elements

Objects created through XAML can be named using the x:Name XAML attribute.
Here's an example:

    <Rectangle x:Name="r1">
    ...
    </Rectangle>

The net result is a private member variable (field) created by the XAML compiler inside MainPage.g.h (if working on MainPage.xaml):

    private: ::Windows::UI::Xaml::Shapes::Rectangle^ r1;

The reference itself is set in the implementation of MainPage::InitializeComponent with the following code:

    // Get the Rectangle named 'r1'
    r1 = safe_cast<::Windows::UI::Xaml::Shapes::Rectangle^>(
        static_cast<Windows::UI::Xaml::IFrameworkElement^>(this)->FindName(L"r1"));

The mentioned file and method are discussed further in the section XAML compilation and execution. Regardless of how it works, r1 is now a reference to that particular rectangle.

Connecting events to handlers

Events can be connected to handlers using the same syntax as setting properties, but in this case the value must be a method in the code-behind class with the correct delegate signature. Visual Studio helps out by adding a method automatically if Tab is pressed twice after entering the event name (in the header and implementation files). The default name Visual Studio uses includes the element's name (x:Name) if it has one, or its type if it doesn't, followed by an underscore and the event name, optionally followed by an underscore and an index if duplication is detected. The default name is usually not desirable; a better approach that still has Visual Studio create the correct prototype is to write the handler name as we want it, then right-click on the handler name and select Navigate to Event Handler. This creates the handler (if it does not exist) and switches to the method implementation. Here's an example of a XAML event connection:

    <Button Content="Change" Click="OnChange" />

And the handler would be as follows (assuming the XAML is in MainPage.xaml):

    void MainPage::OnChange(Platform::Object^ sender,
        Windows::UI::Xaml::RoutedEventArgs^ e)
    {
    }

Visual Studio also writes the namespace name in front of the class name (deleted in the preceding code example); this can be removed safely, since a using namespace statement for the correct namespace exists at the top of the file. Similarly, the usage of Platform::Object instead of just Object (and likewise for RoutedEventArgs) is less readable; the namespace prefixes can be removed, as the namespaces are set up at the top of the file by default.

All events (by convention) use similar delegates. The first argument is always the sender of the event (in this case a Button), and the second parameter is the extra information regarding the event. RoutedEventArgs is the minimum type for events known as routed events. A detailed discussion of routed events is covered in the next article.
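Handlers can also be attached in code rather than in XAML. Here is a minimal sketch, assuming a Button member named changeButton (a hypothetical name) and the OnChange handler above:

```cpp
// Attach: the code equivalent of Click="OnChange" in XAML. The += operator
// returns a token that identifies this particular subscription.
Windows::Foundation::EventRegistrationToken token =
    changeButton->Click += ref new Windows::UI::Xaml::RoutedEventHandler(
        this, &MainPage::OnChange);

// Detach later, if needed, using the stored token.
changeButton->Click -= token;
```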
XAML rules summary

This is a summary of all the XAML rules:

- A XAML element means creating an instance.
- A XAML attribute sets a property or an event handler. For properties, a type converter may execute, depending on the property's type.
- Complex properties are set with the Type.Property element syntax.
- Attached properties are set with the Type.Property syntax, where Type is the declaring type of the attached property.
- ContentPropertyAttribute sets a content property that need not be specified.
- Properties that are collections cause the XAML parser to call Append or Insert, as appropriate, automatically.
- Markup extensions allow for special (predefined) instructions.

Introducing the Blend for Visual Studio 2012 tool

Visual Studio 2012 is installed with the Blend for Visual Studio 2012 tool. This tool is typically used by UI designers to create or manipulate the user interface for XAML-based applications. The initial release of Blend for Visual Studio 2012 only supported Windows 8 Store apps and Windows Phone 8 projects; support for WPF 4.5 and Silverlight was added in Update 2 for Visual Studio 2012. Blend can be used alongside Visual Studio 2012, as both understand the same file types (such as solution .sln files). It's not atypical to switch back and forth between the two tools, using each for its strengths.

Several windows comprise Blend, some of which are similar to their Visual Studio counterparts, namely Projects and Properties. Some of the new windows include:

- Assets: Holds the elements and controls available in WinRT (along with some other useful shortcuts)
- Objects and Timeline: Includes all objects in the visual tree, as well as animations
- Resources: Holds all resources (refer to the next section) within the application

Blend's design surface allows manipulating elements and controls, which is also possible in Visual Studio. Blend's layout and some special editing features make it easier for UI/graphic designers to work with, as it mimics other popular applications, such as Adobe Photoshop and Illustrator. Any changes made using the designer are immediately reflected in the XAML. Switching back to Visual Studio and accepting the reload option synchronizes the files; naturally, this works both ways. It's possible to work entirely from within Blend. Pressing F5 builds and launches the app in the usual way. Blend, however, is not Visual Studio: breakpoints and other debugging tasks are not supported. Blend is a non-trivial tool, well beyond the scope of this book; experimentation can go a long way, however.

XAML compilation and execution

The XAML compiler, which runs as part of the normal compilation process, places the XAML as an internal resource within the EXE or DLL. In the constructor of a XAML root element type (such as MainPage), a call is made to InitializeComponent. This method uses the static helper method Application::LoadComponent to load the XAML and parse it, creating objects, setting properties, and so on. Here's the implementation created by the compiler for InitializeComponent (in MainPage.g.hpp, with some code cleanup):

    void MainPage::InitializeComponent() {
        if (_contentLoaded)
            return;
        _contentLoaded = true;
        // Call LoadComponent on ms-appx:///MainPage.xaml
        Application::LoadComponent(this,
            ref new ::Windows::Foundation::Uri(L"ms-appx:///MainPage.xaml"),
            ComponentResourceLocation::Application);
    }

Connecting XAML, H, and CPP files to the build process

From a developer's perspective, working with a XAML file carries with it two other files, the H and the CPP. Let's examine them in a little more detail. Here's the default MainPage.xaml.h (comments and namespaces removed):

    #include "MainPage.g.h"

    namespace BasicXaml {
        public ref class MainPage sealed {
        public:
            MainPage();
        protected:
            virtual void OnNavigatedTo(NavigationEventArgs^ e) override;
        };
    }

The code shows a constructor and a virtual method override named OnNavigatedTo (unimportant for this discussion). One thing that seems to be missing is the InitializeComponent method declaration mentioned in the previous section.
Also missing is the inheritance from Page that was hinted at earlier. It turns out that the XAML compiler generates another header file, named MainPage.g.h (g stands for generated), based on the XAML itself (this is evident from the #include declaration at the top). That file contains the following (it can be opened easily by selecting Project | Show All Files, clicking the equivalent toolbar button, or right-clicking on the #include and selecting Open Document…):

    namespace BasicXaml {
        partial ref class MainPage : public Page,
            public IComponentConnector {
        public:
            void InitializeComponent();
            virtual void Connect(int connectionId, Object^ target);
        private:
            bool _contentLoaded;
        };
    }

Here we find the missing pieces: InitializeComponent, as well as the derivation from Page. How can there be more than one header file per class? A C++/CX feature called partial classes allows this. The MainPage class is marked as partial, meaning there are more parts to it. The last part should not be marked partial, and should include at least one of the other headers, so that a chain forms that eventually includes all the partial headers; all these headers must be part of the same compilation unit (a single CPP file). The MainPage.g.h file is generated before any compilation happens; it's generated on the fly while the XAML file is being edited. This is important, because named elements are declared in that file, providing IntelliSense for those instances.

During the compilation process, MainPage.cpp is finally compiled, producing an object file, MainPage.obj. It still has some unresolved functions, such as InitializeComponent. At this point, MainPage.obj (along with any other XAML object files) is used to generate the metadata (.winmd) file. To complete the build process, the compiler generates MainPage.g.hpp, which is actually an implementation file, created based on the information extracted from the metadata file (the InitializeComponent implementation is generated in this file). This generated file is included just once, in a file called XamlTypeInfo.g.cpp, which is also generated automatically based on the metadata file; that is enough for MainPage.g.hpp to finally be compiled, allowing linking to complete correctly.

Resources

The term "resources" is highly overloaded. In classic Win32 programming, resources refer to read-only chunks of data used by an application. Typical Win32 resources are strings, bitmaps, menus, toolbars, and dialogs, but custom resources can be created as well, making Win32 treat those as unknown chunks of binary data. WinRT defines binary, string, and logical resources. The following sections discuss binary and logical resources (string resources are useful for localization scenarios and are not discussed in this section).

Binary resources

Binary resources refer to chunks of data provided as part of the application's package. These typically include images, fonts, and any other static data needed for the application to function correctly. Binary resources can be added to a project by right-clicking on the project in Solution Explorer, selecting Add Existing Item, and then selecting a file that must be in the project's directory or in a subdirectory. Contrary to C# or VB projects, adding an existing item from another location does not copy the file into the project's directory. This inconsistency is a bit annoying for those familiar with C#/VB projects, and hopefully will be reconciled in a future Visual Studio version or service pack.
A typical Store app project already has some binary resources stored in the Assets project folder, namely images used by the application. Using folders is a good way to organize resources by type or usage. Right-clicking on the project node and selecting Add New Filter creates a logical folder, to which items may be dragged. Again, contrary to C#/VB projects, project folders are not created in the filesystem; it's recommended to actually create them in the filesystem for better organization.

An added binary resource is packaged as part of the application's package and is available in the executable's folder or a subfolder, keeping its relative location. Right-clicking on such a resource and selecting Properties brings up its property page. The Content attribute must be set to Yes for the resource to actually be available (the default). Item Type is typically recognized by Visual Studio automatically; in case it isn't, we can always set it to Text and do whatever we want with it in code. Don't set Item Type to Resource: this is unsupported in WinRT and will cause compile errors (that setting is really for WPF/Silverlight).

Binary resources can be accessed in XAML or in code, depending on the need. Here's an example of an Image element using an image named apple.png, stored in a subfolder named Images under the Assets folder:

    <Image Source="/Assets/Images/apple.png" />

Note the relative URI. The preceding markup works because of a type converter used for the Image::Source property (which is of the type ImageSource). That path is really a shortcut for the following, equivalent, URI:

    <Image Source="ms-appx:///Assets/Images/apple.png" />

Other properties may require a slightly different syntax, but all originate from the ms-appx scheme, indicating the root of the application's package. Binary resources stored in another component referenced by the application can be accessed with the following syntax:

    <Image Source="/ResourceLibrary/jellyfish.jpg" />

This markup assumes that a component DLL named ResourceLibrary.Dll is referenced by the application, and that a binary resource named jellyfish.jpg is present in its root folder.
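The code route goes through the same ms-appx scheme. Here is a minimal sketch, assuming an Image element named logo (a hypothetical x:Name) and the apple.png asset above:

```cpp
using namespace Windows::Foundation;
using namespace Windows::UI::Xaml::Media::Imaging;

// Build an ms-appx URI to the packaged asset and assign it as the image source.
auto uri = ref new Uri(L"ms-appx:///Assets/Images/apple.png");
logo->Source = ref new BitmapImage(uri);
```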
Logical resources

Binary resources are not new or unique to Store apps; they've been around practically forever. Logical resources, on the other hand, are a more recent addition. First created and used by WPF, and then by the various versions of Silverlight, they are used in WinRT as well. So, what are they?

Logical resources can be almost anything. They are objects, not binary chunks of data. They are stored in ResourceDictionary objects and can be easily accessed in XAML using the StaticResource markup extension. Here's an example of two elements that use an identical brush:

    <Ellipse Grid.Row="0" Grid.Column="1">
        <Ellipse.Fill>
            <LinearGradientBrush EndPoint="0,1">
                <GradientStop Offset="0" Color="Green" />
                <GradientStop Offset=".5" Color="Orange" />
                <GradientStop Offset="1" Color="DarkRed" />
            </LinearGradientBrush>
        </Ellipse.Fill>
    </Ellipse>
    <Rectangle Grid.Row="1" Grid.Column="1" StrokeThickness="20">
        <Rectangle.Stroke>
            <LinearGradientBrush EndPoint="0,1">
                <GradientStop Offset="0" Color="Green" />
                <GradientStop Offset=".5" Color="Orange" />
                <GradientStop Offset="1" Color="DarkRed" />
            </LinearGradientBrush>
        </Rectangle.Stroke>
    </Rectangle>

The problem should be self-evident: we're using the same brush twice. This is bad for two reasons:

- If we want to change the brush, we need to do it twice (because of the duplication). Naturally, this is more severe if the brush is used by more than two elements.
- Two different objects are created, although just one shared object is needed.

The LinearGradientBrush can be turned into a logical resource (or simply, a resource) and referenced by any element that requires it. To do that, the brush must be placed in a ResourceDictionary object. Fortunately, every element has a Resources property (of type ResourceDictionary) that we can use. This is typically done on the root XAML element (typically a Page), or (as we'll see in a moment) in the application's XAML (App.xaml):

    <Page.Resources>
        <LinearGradientBrush x:Key="brush1" EndPoint="0,1">
            <GradientStop Offset="0" Color="Green" />
            <GradientStop Offset=".5" Color="Orange" />
            <GradientStop Offset="1" Color="DarkRed" />
        </LinearGradientBrush>
    </Page.Resources>

Any logical resource must have a key, because it lives in a dictionary. That key is specified with the x:Key XAML directive. Once placed, a resource can be accessed from any element within that Page with the StaticResource markup extension, as follows:

    <Ellipse Fill="{StaticResource brush1}" />
    <Rectangle Stroke="{StaticResource brush1}" StrokeThickness="40" />

The StaticResource markup extension searches for a resource with the specified key, starting from the current element. If it's not found, the search continues with the resources of the parent element (say, a Grid). If found, the resource is selected (it is created the first time it's requested) and StaticResource is done. If not found, the parent's parent is searched, and so on. If the resource is not found at the top-level element (typically a Page, but it can be a UserControl or something else), the search continues in the application resources (App.xaml). If it's still not found, an exception is thrown.

Why is the markup extension called StaticResource? Is there a DynamicResource? DynamicResource exists in WPF only; it allows a resource to be replaced dynamically, with all those bound to it noticing the change. It is currently unsupported by WinRT.

There is no single call that is equivalent to StaticResource, although it's not difficult to create one if needed: the FrameworkElement::Resources property can be consulted at any required level, navigating to the parent element using the Parent property, as the following sketch shows.
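Here is a minimal sketch of such a helper, written against the search order described above. The FindResource function is hypothetical, not a WinRT API:

```cpp
using namespace Windows::UI::Xaml;

// Hypothetical helper: emulate the StaticResource search order in code.
Platform::Object^ FindResource(FrameworkElement^ start, Platform::String^ key)
{
    // Walk from the starting element up through its parents.
    for (auto element = start; element != nullptr;
         element = dynamic_cast<FrameworkElement^>(element->Parent))
    {
        if (element->Resources->HasKey(key))
            return element->Resources->Lookup(key);
    }
    // Not found on any element: fall back to the application resources.
    auto appResources = Application::Current->Resources;
    return appResources->HasKey(key) ? appResources->Lookup(key) : nullptr;
}
```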
Managing logical resources

Logical resources may be of various types, such as brushes, geometries, styles, templates, and more. Placing all those resources in a single file, such as App.xaml, hinders maintainability. A better approach would be to separate resources of different types (or based on some other criteria) into their own files. Still, they must be referenced somehow from within a common file, such as App.xaml, so that they are recognized.

A ResourceDictionary can incorporate other resource dictionaries using its MergedDictionaries property (a collection). This means a ResourceDictionary can reference as many resource dictionaries as desired and can have its own resources as well. The Source property of each merged dictionary must point to the location of the ResourceDictionary XAML. The default App.xaml created by Visual Studio contains the following (comments removed):

<Application.Resources>
    <ResourceDictionary>
        <ResourceDictionary.MergedDictionaries>
            <ResourceDictionary Source="Common/StandardStyles.xaml" />
        </ResourceDictionary.MergedDictionaries>
    </ResourceDictionary>
</Application.Resources>

Indeed, we find a file called StandardStyles.xaml in the Common folder, which hosts a bunch of logical resources, with ResourceDictionary as its root element. For this file to be considered when StaticResource is invoked, it must be referenced by another ResourceDictionary, from a Page or the application (the application is more common). The ResourceDictionary::MergedDictionaries property contains other ResourceDictionary objects, whose Source property must point to the required XAML to be included (that XAML must have ResourceDictionary as its root element). We can create our own ResourceDictionary XAML by using Visual Studio's Add New Item menu option and selecting Resource Dictionary:

Duplicate keys

No two objects can have the same key in the same ResourceDictionary instance. When the same key appears in several dictionaries in scope, StaticResource simply takes the first resource it finds with that key. What about merged dictionaries? Merging different resource dictionaries may cause an issue—two or more resources with the same key that originate from different merged dictionaries. This is not an error and does not throw an exception. Instead, the selected object is the one from the last resource dictionary added (which has a resource with that key). Furthermore, if a resource in the current resource dictionary has the same key as any of the resources in its merged dictionaries, it always wins out. Here's an example:

<ResourceDictionary>
    <SolidColorBrush Color="Blue" x:Key="brush1" />
    <ResourceDictionary.MergedDictionaries>
        <ResourceDictionary Source="Resources/Brushes2.xaml" />
        <ResourceDictionary Source="Resources/Brushes1.xaml" />
    </ResourceDictionary.MergedDictionaries>
</ResourceDictionary>

Given this markup, the resource named brush1 is a blue SolidColorBrush, because it appears in the ResourceDictionary itself, overriding any resources named brush1 in the merged dictionaries. If this blue brush did not exist, brush1 would be looked up in Brushes1.xaml first, as this is the last entry in the merged dictionaries collection. XAML containing a ResourceDictionary as its root can also be loaded dynamically from a string using the static XamlReader::Load method and then added as a merged dictionary, where appropriate.
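To make the last point concrete, here is a sketch of loading a resource dictionary from a string at runtime; the markup content and the key brush3 are illustrative only:

using namespace Windows::UI::Xaml;
using namespace Windows::UI::Xaml::Markup;

// Minimal sketch (C++/CX): parsing a ResourceDictionary from a string with
// XamlReader::Load and merging it into a page's resources at runtime.
void MergeDictionaryFromString(Controls::Page^ page)
{
    Platform::String^ xaml =
        L"<ResourceDictionary"
        L" xmlns='http://schemas.microsoft.com/winfx/2006/xaml/presentation'"
        L" xmlns:x='http://schemas.microsoft.com/winfx/2006/xaml'>"
        L"  <SolidColorBrush x:Key='brush3' Color='Teal' />"
        L"</ResourceDictionary>";

    // XamlReader::Load returns Object^; cast it to the expected root type
    auto dictionary = safe_cast<ResourceDictionary^>(XamlReader::Load(xaml));
    page->Resources->MergedDictionaries->Append(dictionary);
}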
Styles

Consistency in the user interface is an important trait; there are many facets of consistency, one of which is the consistent look and feel of controls. For example, all buttons should look roughly the same—similar colors, fonts, sizes, and so on. Styles provide a convenient way of grouping a set of properties under a single object, and then selectively (or automatically, as we'll see later) applying it to elements. Styles are always defined as resources (usually at the application level, but they can also be at the Page or UserControl level). Once defined, they can be applied to elements by setting the FrameworkElement::Style property. Here's a style defined as part of the Resources section of a Page:

<Page.Resources>
    <Style TargetType="Button" x:Key="style1">
        <Setter Property="FontSize" Value="40" />
        <Setter Property="Background">
            <Setter.Value>
                <LinearGradientBrush>
                    <GradientStop Offset="0" Color="Yellow" />
                    <GradientStop Offset="1" Color="Orange" />
                </LinearGradientBrush>
            </Setter.Value>
        </Setter>
        <Setter Property="Foreground" Value="DarkBlue" />
    </Style>
</Page.Resources>

The style has a key (style1) and must have a TargetType. This is the type the style may be applied to (and any derived types). The XAML parser has a type converter that converts TargetType to a TypeName object. The main ingredient in a Style is its Setters collection (which is also its ContentProperty). This collection accepts Setter objects, which need a Property and a Value. The property must be a dependency property (not usually a problem, as most element properties are dependency properties); these are provided as simple strings thanks to type converters used behind the scenes. The preceding markup sets up the properties FontSize, Background (with the complex property syntax, because of the LinearGradientBrush), and Foreground—all for Button controls. Once defined, the style can be applied to elements using the usual StaticResource markup extension, by setting the FrameworkElement::Style property, as in the following example:

<Button Content="Styled button" Style="{StaticResource style1}" />

Readers familiar with WPF may be wondering whether the TargetType property can be omitted so that a greater range of controls can be covered. This is unsupported in the current version of WinRT. Setting the style on an incompatible element type (such as a CheckBox control in this example) causes an exception to be thrown at page load time. If a CheckBox should also be able to use the same style, the TargetType can be changed to ButtonBase (which covers all button types). That said, prefer different styles for different concrete element types, even if a base type seems to cover several controls: it's very likely that some properties will later need to be tweaked for a particular type, making a shared style difficult to change. You can also use style inheritance (as described later) to shorten some of the markup.

What happens if an element with an applied style sets a property to a different value than the one from the Style? The local value wins out. This means that the following button has a font size of 30 and not 40:

<Button Content="Styled button" FontSize="30"
        Style="{StaticResource style1}" />
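As a quick illustration of the ButtonBase option mentioned above, here is a sketch (the key commonStyle is illustrative); since both Button and CheckBox derive from ButtonBase, both can reference the style:

<Page.Resources>
    <Style TargetType="ButtonBase" x:Key="commonStyle">
        <Setter Property="FontSize" Value="40" />
        <Setter Property="Foreground" Value="DarkBlue" />
    </Style>
</Page.Resources>
...
<Button Content="A button" Style="{StaticResource commonStyle}" />
<CheckBox Content="A checkbox" Style="{StaticResource commonStyle}" />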
Implicit (automatic) styles

The previous section showed how to create a style that has a name (x:Key) and how to apply it to elements. Sometimes, however, we would like a style to be applied automatically to all elements of a certain type, to give the application a consistent look. For example, we may want all buttons to have a certain font size or background, without the need for setting the Style property of each and every button. This also makes creating new buttons easier, as the developer/designer doesn't have to know which style to apply (if any; the implicit style in scope will be used automatically). To create a Style that is applied automatically, the x:Key attribute must be removed:

<Style TargetType="Button">
…
</Style>

The key still exists, as the style is still part of a ResourceDictionary (which implements IMap<Object, Object>), but it is automatically set to a TypeName object for the specified TargetType. Once such a style is defined, any Button element (in this example) that is within the scope of the ResourceDictionary holding the style will have that style applied automatically. The element can still override any property it wishes by setting a local value. Automatic styles are applied to the exact type only, not to derived types. This means that an automatic style for ButtonBase is useless, as it's an abstract class. An element may wish to revert to its default style and not have an implicit style applied automatically. This can be achieved by setting FrameworkElement::Style to nullptr (x:Null in XAML).

Style inheritance

Styles support the notion of inheritance, somewhat similar to the same concept in object orientation. This is done using the BasedOn property, which must point to another style to inherit from. The TargetType of the derived style must be the same as in the base style. An inherited style can add Setter objects for new properties to set, or it can provide a different value for a property that was set by the base style. Here's an example of a base style for a button:

<Style TargetType="Button" x:Key="buttonBaseStyle">
    <Setter Property="FontSize" Value="70" />
    <Setter Property="Margin" Value="4" />
    <Setter Property="Padding" Value="40,10" />
    <Setter Property="HorizontalAlignment" Value="Stretch" />
</Style>

The following markup creates three inherited styles:

<Style TargetType="Button" x:Key="numericStyle"
       BasedOn="{StaticResource buttonBaseStyle}">
    <Setter Property="Background" Value="Blue" />
    <Setter Property="Foreground" Value="White" />
</Style>
<Style TargetType="Button" x:Key="operatorStyle"
       BasedOn="{StaticResource buttonBaseStyle}">
    <Setter Property="Background" Value="Orange" />
    <Setter Property="Foreground" Value="Black" />
</Style>
<Style TargetType="Button" x:Key="specialStyle"
       BasedOn="{StaticResource buttonBaseStyle}">
    <Setter Property="Background" Value="Red" />
    <Setter Property="Foreground" Value="White" />
</Style>

These styles are part of a simple integer calculator application. The calculator looks like this when running: Most of the elements comprising the calculator are buttons.
Here is the numeric button markup:

<Button Style="{StaticResource numericStyle}" Grid.Row="1"
        Content="7" Click="OnNumericClick" />
<Button Style="{StaticResource numericStyle}" Grid.Row="1"
        Grid.Column="1" Content="8" Click="OnNumericClick" />
<Button Style="{StaticResource numericStyle}" Grid.Row="1"
        Grid.Column="2" Content="9" Click="OnNumericClick" />

The operator buttons simply use a different style:

<Button Style="{StaticResource operatorStyle}" Grid.Row="3"
        Grid.Column="3" Content="-" Click="OnOperatorClick" />
<Button Style="{StaticResource operatorStyle}" Grid.Row="4"
        Grid.Column="3" Content="+" Grid.ColumnSpan="2"
        Click="OnOperatorClick" />

The = button uses the same style as the operators, but changes its background by setting a local value:

<Button Style="{StaticResource operatorStyle}" Grid.Row="4"
        Grid.Column="1" Grid.ColumnSpan="2" Content="="
        Background="Green" Click="OnCalculate" />

The complete project is named StyledCalculator and can be found as part of the downloadable source for this article. Style inheritance may seem very useful, but it should be used with caution. It suffers from the same issues as object-oriented inheritance in a deep inheritance hierarchy—a change in a base style up in the hierarchy can affect a lot of styles in somewhat unpredictable ways, leading to a maintenance nightmare. Thus, a good rule of thumb is to use no more than two inheritance levels. Any more than that may cause things to get out of hand.

Store application styles

A Store app project created by Visual Studio has a default style file named StandardStyles.xaml in the Common folder. The file includes styles for all the common elements and controls, setting up a common look and feel that is recommended as a starting point. It's certainly possible to change these styles, or to inherit from them if needed. WinRT styles are similar in concept to the CSS used in web development to provide styling to HTML pages. The cascading part hints at the multilevel nature of CSS, much like the multilevel nature of WinRT styles (application, page, panel, specific element, and so on).

Summary

This article was all about XAML, the declarative language used to build user interfaces for Windows Store apps. XAML takes some getting used to, but its declarative nature and markup extensions cannot easily be matched by procedural code in C++ (or other languages). Designer-oriented tools, such as Expression Blend and even the Visual Studio designer, make it relatively easy to manipulate XAML without actually writing XAML, but as developers and designers working with other XAML-based technologies have already realized, it's sometimes necessary to write XAML by hand, making it an important skill to acquire.

Introduction to Reporting in Microsoft Dynamics CRM

Packt
14 Aug 2013
8 min read
(For more resources related to this topic, see here.)

CRM report types

Microsoft Dynamics CRM 2011 allows different types of reports; not only can SQL Reporting Services reports be used, but other custom reports, such as Crystal Reports, ASP.NET, or Silverlight reports, can also be integrated. Dynamics CRM can manage the following types of reports:

- RDL files, which are SQL Reporting Services reports
- External links to external applications such as Crystal Reports, ASP.NET, or Silverlight reports
- Native CRM dashboards with charts

The RDL files can be created in either of the following two ways:

- By using the Report Wizard
- By using Visual Studio

Dynamics CRM 2011 comes with 54 predefined reports out of the box; 25 of them are main reports and 29 are subreports. If for some reason you don't see any report as shown in the following screenshot, it means the Dynamics CRM 2011 Reporting Extensions were not installed. This is something that can only happen in on-premise environments; if you are working with CRM Online, you don't need to worry about any report-extension-deployment tasks.

CRM report settings

Reports in Dynamics CRM have the following settings or categories, which you can access by clicking on the Edit button of each report, as shown in the following screenshot: In the Report: Account Summary window you will see two tabs, General and Administration. The Administration tab shows the name of the owner of the report, when the report was created or updated and who did it, and whether it is viewable to the individual user or the entire organization. In the General tab, you will see the name and description of the report. If it is a subreport, the parent report is displayed. Lastly, in the Categorization section, you can see the following settings:

- Categories
- Related Record Types
- Display in
- Languages

We will study each of these settings in detail.

Categories

By default, there are four categories created out of the box in every CRM organization:

- Administrative Reports
- Marketing Reports
- Sales Reports
- Service Reports

You can change, add, or remove these categories by navigating to Settings | Administration | System Settings | Reporting, as shown in the following screenshot: These report categories let you filter reports by category using the predefined views available in the main Reports interface, as shown in the following screenshot: Notice that if you add a new category, you will also have to create the corresponding view, as it won't be created automatically.

Related Record Types

The Related Record Types option allows you to select which entities you want the report to be displayed under. The reports will be listed under the Run Report button on the Ribbon. There are two locations where the report will be listed on the entities you selected: the home page grid and the form. The home page grid is where you see all the records of an entity (depending on the view you selected), as shown in the following screenshot: Almost every entity in Dynamics CRM has a Run Report button. As you can see, there are some reports that can run on the selected records and others that only run on all records. The form is the second place where the Run Report button is located; it is visible when you open a record, and in that case the report only affects that record.

Display in

As we saw in the Related Record Types option, we can decide here where we want to show our report.
The options are:

- Forms for related record types
- Lists for related record types
- The Reports area

The first option makes the report available on the Run Report button on the form ribbon of an entity record, as we have seen earlier. The second option makes it appear on the Run Report button of the home page grid ribbon. The Reports area refers to the main reporting interface in the workspace.

Languages

This last option of the Categorization section allows us to specify the language of the report. You can select all the languages on the list if you want your single report to be available in any of these languages. This is helpful if different language packs are installed on the CRM Server and the organization has people from different countries who work in different languages. By default, all the reports are based on the local language. This option might not be visible on your installation if you don't have any other language installed on the system.

SQL Reporting Services versions

The first version of Reporting Services was released as a separate download for SQL Server 2000. It was in the SQL Server 2005 version that it was integrated into the SQL Server installation media and became an optional feature of the SQL Server setup. I remember that when I first installed SQL Reporting Services 2000, the setup was very complicated and required touching some XML files manually. The 2005 version included a very nice application called Reporting Services Configuration Manager to help with setup and deployment, and it has been improving with every version to make this task much easier. The 2000 and 2005 versions required Internet Information Services (IIS) to be installed on the server, to be used by the Report Manager and the report web services. However, the 2008 and 2012 versions come with their own HTTP server and don't make use of IIS.

There is an important difference between the versions of SQL Server and Visual Studio. Basically, SQL Server is always one Visual Studio version behind; currently there is no support for the Report Server project templates in Visual Studio 2012. The following table shows this discrepancy:

SQL Server   Visual Studio        CRM Server
2005         Visual Studio 2005   4.0
2008         Visual Studio 2005   4.0 and 2011
2008 R2      Visual Studio 2008   4.0 and 2011
2012         Visual Studio 2010   4.0 and 2011

Dynamics CRM 2011 was originally designed to work with Windows Server 2008 R2 and SQL Server 2008 R2. Installing Dynamics CRM 2011 on Windows Server 2012 with SQL Server 2012 is very challenging; Daniel Cai, a fellow Microsoft MVP in Dynamics CRM, has written up the necessary steps and workarounds in his article at http://danielcai.blogspot.com.ar/2012/05/install-crm-2011-onwindows-server-8.html. As we can see at http://support.microsoft.com/default.aspx?kbid=2791312, there is upcoming support for Windows 2012 with Update Rollup 13, which will be available on Windows Update. In this article, I have decided to use the latest Microsoft versions, Windows Server 2012 and SQL Server 2012, to take advantage of the latest features and improvements. I will mention in this book whenever a specific feature differs from the previous versions, as some implementations might still use the 2008 R2 versions. At the time of writing this book, CRM Online is using SQL Server 2012.
Some of the benefits of using SQL Server 2012 with Dynamics CRM 2011 are as follows:

- Support for the mobile client with SQL Server 2012 Service Pack 1
- Alerts directly from the Reporting Services control
- Better performance

There is also another version of SQL Reporting Services that uses the same concepts but is hosted in the Windows Azure cloud; however, this version can't be used with Dynamics CRM directly. Regardless of the edition, SQL Reporting Services has four main components:

- SQL Server databases
- Windows Service
- Report Manager website
- Report Server Web service

SQL Server databases

There are two databases used by SQL Reporting Services—ReportServer and ReportServerTempDB. All the reports and configurations are stored in the first database, and the second one is used to store temporary data and to improve the service performance by caching the user sessions. Notice that these database names are set by default; a database administrator (DBA) might change the names using the Reporting Services Configuration Manager.

Windows Service

The Windows Service is used to automatically generate scheduled reports, which can be scheduled through the Report Manager website or the CRM interface. You can see this service in the Windows Services tool under the name SQL Server Reporting Services (MSSQLSERVER), where MSSQLSERVER is the name of the SQL Server instance you are running.

Report Manager website

The Report Manager is the web user interface in which a user can see, create, and run reports, usually by going to a URL such as http://<servername>/Reports. From this interface, the administrator can also grant and assign permissions to the reports, as well as configure and run the reports directly.

Report Server Web service

The Report Server Web service is the web service endpoint that developers can use to integrate with other custom applications. Usually located at a URL such as http://<servername>/ReportServer, it lets a developer create another user interface that can do everything the Report Manager website can do, but with a custom interface or application such as a Windows or WPF app. This is the URL that Visual Studio and the Report Builder use to connect and interact with the reporting services to run and deploy reports. This web service is very useful if you want to automate some of the report-export features, such as the generation of a PDF document by executing a report. An example of one of the endpoints exposed can be found at http://<servername>/ReportServer/ReportService2010.asmx; there are other ASMX files for compatibility with previous versions, such as ReportService2006.asmx and ReportService2005.asmx.
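As a side note on automating exports: besides the SOAP endpoints, Reporting Services also supports plain URL access for rendering reports. The following is a sketch (the report path SalesReports/AccountSummary is purely illustrative) showing how a report could be exported to PDF directly through the Report Server URL:

http://<servername>/ReportServer?/SalesReports/AccountSummary&rs:Command=Render&rs:Format=PDF

Requesting such a URL with appropriate credentials returns the rendered PDF, which makes it easy to script exports without writing a full web service client.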

Using Test Fixtures in PHPUnit

Packt
14 Aug 2013
4 min read
(For more resources related to this topic, see here.)

How to do it...

Open tests/CardTest.php, add a new setUp() method, and use the $card property to hold the Card fixture:

<?php
class CardTest extends PHPUnit_Framework_TestCase
{
    private $card;

    public function setUp()
    {
        $this->card = new Card('4', 'spades');
    }

    public function testGetNumber()
    {
        $actualNumber = $this->card->getNumber();
        $this->assertEquals(4, $actualNumber, 'Number should be <4>');
    }

    public function testGetSuit()
    {
        $actualSuit = $this->card->getSuit();
        $this->assertEquals('spades', $actualSuit, 'Suit should be <spades>');
    }

    public function testIsInMatchingSet()
    {
        $matchingCard = new Card('4', 'hearts');
        $this->assertTrue($this->card->isInMatchingSet($matchingCard),
            '<4 of Spades> should match <4 of Hearts>');
    }

    public function testIsNotInMatchingSet()
    {
        $matchingCard = new Card('5', 'hearts');
        $this->assertFalse($this->card->isInMatchingSet($matchingCard),
            '<4 of Spades> should not match <5 of Hearts>');
    }
}

How it works...

You'll notice the biggest change in this version is the addition of the setUp() method. The setUp() method is run immediately before any test method in the test case. So when testGetNumber() is run, the PHPUnit framework will first execute setUp() on the same object. setUp() then initializes $this->card with a new Card object. $this->card is then used in the test to validate that the number is returned properly. Using setUp() in this way makes your tests much easier to maintain. If the signature of the Card class's constructor is changed, you will only have one place in this file to reflect that change, as opposed to four separate places. You will save even more time as you add more and more tests to a single test case class. It should also be noted that a new instance of CardTest is created each time a test method is executed. Only the code in this case is being shared; the objects that setUp() creates are not shared across tests. We will talk about how to share resources across tests shortly.

There is also a tearDown() method. It can be used to remove any resource you created inside your setUp() method. If you find yourself opening files or sockets, or setting up other resources, then you will need to use tearDown() to close those resources, delete file contents, or otherwise tear down your resources. This becomes very important to keep your test suite from consuming too many resources. There is nothing quite like running out of inodes when you are running a large test suite!

There's more...

As we mentioned a moment ago, PHPUnit has the facility to share resources across the execution of multiple tests. This is generally considered bad practice. One of the primary rules of creating tests is that tests should be independent of each other, so that you can isolate and locate the code causing test failures more easily. However, there are times when the physical resources required to create a fixture become large enough to outweigh the negatives of sharing the fixture across multiple tests. When such cases arise, PHPUnit provides two methods that you can override: setUpBeforeClass() and tearDownAfterClass(). These are expected to be static methods. setUpBeforeClass() will be called prior to any tests or setUp() calls being made on a given class. tearDownAfterClass() will be called once all tests have been run and the final tearDown() call has been made. If you override these methods to create new objects or resources, you need to make sure that you set these values on static members of the test case class.
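Before moving on, here is a minimal sketch of such a shared fixture; the Deck class and its load() method are hypothetical stand-ins for an expensive-to-create resource:

<?php
// Hypothetical example: Deck::load() stands in for an expensive fixture.
class DeckTest extends PHPUnit_Framework_TestCase
{
    private static $deck;

    public static function setUpBeforeClass()
    {
        // Runs once, before any test or setUp() call in this class
        self::$deck = Deck::load('standard-52');
    }

    public static function tearDownAfterClass()
    {
        // Runs once, after the last test; release the shared fixture so it
        // does not stay in memory for the rest of the suite
        self::$deck = null;
    }

    public function testDeckHasFiftyTwoCards()
    {
        $this->assertEquals(52, count(self::$deck->getCards()));
    }
}

Note how the fixture is stored in a static member, as described above.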
Also, even if you are dealing only with objects, tearDownAfterClass() is incredibly important to implement. If you do not implement it, then any object created in setUpBeforeClass() and saved to a static variable will remain in memory until all the tests in your test suite have run.

Summary

In this way, you can use test fixtures to reduce code duplication and the amount of code necessary to set up new tests.
Working with Bazaar in Centralized Mode

Packt
12 Aug 2013
37 min read
(For more resources related to this topic, see here.)

The centralized mode

In the centralized mode, multiple users have write access to one or more branches on a central server. In addition, this mode requires that all commit operations be applied to the central branches directly. This is in contrast with the default behavior of Bazaar, where all commits are local only, and thus private by default. In order to prevent multiple users from overwriting each other's changes, commits must be synchronized and performed in lock-step—if two collaborators try to commit at the same time, only the first commit will succeed. The second collaborator has to synchronize with the central server first, merging in the changes done by others, and then try to commit again. In short, a commit operation can only succeed if the server and the user are on the same revision right before the commit. First, we will learn about the core operations, advantages, and disadvantages of the centralized mode in a general context. In the next section, we will learn in detail how the centralized mode works in Bazaar.

Core operations

The core operations in centralized mode are checkout, update, and commit:

- Checkout: This operation creates a working tree by downloading the project's files from a central server. This is similar to the branch operation in Bazaar.
- Update: This operation updates the working tree to synchronize with the central server, downloading any changes committed to the server by others since the last update. This is similar to the pull operation in Bazaar.
- Commit: This operation records the pending changes in the working tree as a new revision on the central server. This is different from the commit operation in Bazaar's default decentralized mode, because in the centralized mode the commit must be performed on the central server.

Bazaar supports all these core operations, and it provides additional operations to switch between centralized and decentralized modes, such as bind, unbind, and the notion of local commits, which we will explain later.

The centralized workflow

Since the centralized mode requires that all the commits be performed on the central server, it naturally enforces a centralized workflow. After getting the project's files using the checkout operation, the workflow is essentially a cycle of update and commit operations:

1. Do a "checkout" to get the project's files.
2. Work on the files and make some changes.
3. Before committing, update the project to get the changes committed by others in the meantime.
4. Commit the changes and return to step 2.

Checkout from the central branch

Given the central repository with its branches, the first step for a collaborator is to get the latest version of the project. Typically, you only need to do this once in the lifetime of the project. Later on, you can use the update operation to get the changes that were committed by the other collaborators on the server: As a result of the checkout, collaborators have their own private copy of the project to work on.

Making changes

Collaborators make changes independently in their own working trees, possibly working on copies of the same files simultaneously. Their environments are independent of each other and of the server too. Their changes are local and typically private until they commit them to the repository:

Committing changes

Commit operations are atomic—they cannot be interrupted or performed simultaneously in parallel.
Therefore, collaborators can only commit new revisions one by one, not at the same time: if two collaborators try to commit at the same time, as in this example, only the first one will succeed. The second one will fail, because his copy of the project will be out of date compared to the server, where another revision has been added by the other collaborator. At this point, the second collaborator has to update his working tree to bring it to the latest revision, downloading the revision added by the user who managed to commit first.

Updating from the server

The update operation brings the working tree up-to-date by copying any revisions that have been added on the server since the last update or checkout. If there are uncommitted changes in the working tree, they will be merged on top of the incoming changes: After the update, the local branch will be on the same revision as the server, and now the user may commit the pending changes:

Handling conflicts during update

When there are pending changes in the working tree, the update operation will try to rebase those changes on top of the incoming revisions. That is, the working tree is first synchronized with the server to be on the same revision, and after that the pending changes are applied on top of the updated working tree. Similar to a merge operation, if the pending changes conflict with the incoming changes, the conflicts must be resolved manually. Since there is no systematic way to return to the same original pending state, the update operation can be dangerous in this situation. The more pending changes there are, and the more time has elapsed since the last update or checkout, the greater the risk of conflicts.

Advantages

The centralized mode has several useful properties that are worth considering.

Easy to understand

The concept of a central server, where all the changes are integrated and the work of all collaborators is kept synchronized, is simple and easy to understand. In projects using the centralized mode, the central server is an explicit and unambiguous reference point.

Easy to synchronize efforts

Since all the commits of the collaborators are performed on the central server in lock-step, the independent local working trees cannot diverge too far from each other; it's as if they are always at most one revision away from the central branch. In this way, the centralized mode helps the collaborators stay synchronized.

Widely used

The centralized mode has a long-standing history. It is widely used today in many projects, and it is often preferred in corporate environments.

Disadvantages

The centralized mode has several drawbacks that are important to keep in mind.

Single point of failure

Any central server is, by definition, a potential single point of failure. Since in the centralized mode all commits must go through the central server, if it crashes or becomes unavailable, it can slow down, hinder, or in the worst case completely block further collaboration.

Administrative overhead of access control

When multiple users have write access to a branch, it raises questions and issues about access control, server configuration, and maintenance:

- Who should have write access? An access control policy must be defined and maintained.
- How should write access for multiple users be implemented on the central branches? The central server must be configured appropriately to enforce the access control policy. Whenever a collaborator joins or leaves the project, the server configuration must be updated to accommodate changes in the team.
Whenever the access policy changes, the server configuration must be updated accordingly.

The update operation is not safe

The centralized mode heavily relies on an inherently unsafe operation—updating the working tree from the server while it has pending changes. Since the pending changes are, by definition, not recorded anywhere, there is no systematic way to return to the original state after performing the update operation.

Unrelated changes interleaved in the revision history

When collaborators work on different topics in parallel and continuously commit their changes, unrelated changes will be interleaved in the revision history. As a result, the revision history can become difficult to read, and if a feature needs to be rolled back later, the revisions that were part of the feature can be difficult to find.

Using Bazaar in centralized mode

Bazaar fully supports the core operations of the centralized mode by using so-called bound branches. The checkout and update operations are implemented using dedicated commands in the context of bound branches. The commit operation works differently when used with bound branches, in order to enforce the requirements of the centralized mode. In addition to the classic core operations of the centralized mode, Bazaar provides additional operations to easily turn the centralized mode on or off, which opens interesting new ways of combining centralized and decentralized elements in a workflow.

Bound branches

Bound branches are internally the same as regular branches; they differ only in a few configuration values—the bound flag is set to true, and bound_location is set to the URL of another branch. We will refer to the bound location as the master branch. In most respects, a bound branch behaves just like any regular branch. However, operations that add revisions to a bound branch behave differently—all the revisions are first added in the master branch, and only if that succeeds is the operation applied to the bound branch. For example, the commit operation succeeds only if it can be applied to the master branch. Similarly, the push and pull operations on a bound branch will attempt to push and pull the missing revisions in the master branch first. Since being bound to another branch is simply a matter of configuration, branches can be reconfigured at any time to be bound or unbound.

Creating a checkout

The checkout operation creates a bound branch with a working tree. This configuration is called a checkout in Bazaar. It is essentially the same as creating a regular branch and then binding it to the source branch it was created from. The term checkout is also used as a verb to indicate the act of creating a checkout from another branch.

Using the command line

Let's first create a shared repository to store our sample branches:

$ mkdir -p /sandbox
$ bzr init-repository /sandbox/central
Shared repository with trees (format: 2a)
Location:
  shared repository: /sandbox/central
$ cd /sandbox/central

You can check out from another branch by using the bzr checkout command and specifying the URL of the source branch. Optionally, you can specify the target directory where you want to create the new checkout.
For example:

$ bzr checkout http://bazaar.launchpad.net/~bzrbook/bzrbook-examples/hello-start trunk

You can confirm that the branch configuration is a checkout by using the bzr info command:

$ bzr info trunk
Repository checkout (format: 2a)
Location:
  repository checkout root: trunk
  checkout of branch: http://bazaar.launchpad.net/~bzrbook/bzrbook-examples/hello-start/
  shared repository: .

The first line of the output is the branch configuration, in this case a "Repository checkout", because we created the checkout inside a shared repository. Outside a shared repository, the configuration is called simply "Checkout". For example:

$ bzr checkout trunk /tmp/checkout-tmp
$ cd /tmp/checkout-tmp/
$ bzr info
Checkout (format: 2a)
Location:
  checkout root: .
  checkout of branch: /sandbox/central/trunk

In both cases, the checkout of branch line indicates the master branch that this one is bound to.

Using Bazaar Explorer

Performing a checkout using Bazaar Explorer can be a bit confusing, because the buttons and menu options labeled Checkout... use a special mode of the checkout operation called "lightweight checkouts". Lightweight checkouts are very different from branches. Use the Branch view to check out from a branch:

- From the toolbar, click on the large Start button and select Branch..., or from the menu, select Bazaar | Start | Initialize.
- In the From: textbox, enter the URL of the source branch.
- In the To: textbox, you can either type the path to the directory where you want to create the checkout, or click on the Browse button and navigate to it.
- Make sure to select the Bind new branch to parent location box, in order to make the new branch bound to the source branch.

After you click on OK, the Status box will show the bzr command that was executed and its output. For example:

Run command: bzr branch https://code.launchpad.net/~bzrbook/bzrbook-examples/hello-start /sandbox/central/trunk2 --bind --use-existing-dir
Branched 6 revisions.
New branch bound to https://code.launchpad.net/~bzrbook/bzrbook-examples/hello-start

Click on Close to return to the status view, which shows the content of the working tree exactly in the same way as in the case of regular branches. The Status view does not indicate whether the branch of the current working tree is bound or not. On the other hand, the repository view uses different icons to distinguish these configurations: bound branches are shown with a computer icon, and unbound branches are shown with a folder icon.

Updating a checkout

The purpose of the update operation is to bring a bound branch up-to-date with its master branch. If there are pending changes in the working tree, they will be reapplied after the branch is updated. If the incoming changes conflict with the pending changes in the working tree, the operation may result in conflicts. As collaborators work independently in parallel, it is very common and normal for a bound branch to be out of date due to the commits done by other collaborators. In such a state, the commit operation would fail, and the bound branch must be updated before retrying the commit. Similar to a pull operation, the update operation copies the missing revision data to the repository and updates the branch data to be the same as the master branch. If there are pending changes in the working tree at the time of performing the update, they are first set aside and reapplied at the end. During this step conflicts may happen, the same way as during a merge operation.
Using the command line

You can bring a bound branch up-to-date with its master branch by using the bzr update command. To demonstrate this, let's first create another checkout based upon an older revision:

$ cd /sandbox/central
$ bzr checkout trunk -rlast:3 last-3
$ cd last-3
$ bzr missing --line ../trunk
You are missing 2 revisions:
6: Janos Gyerik 2013-03-03 updated readme
5: Janos Gyerik 2013-03-03 added python and bash impl

That is, our new checkout is two revisions behind the trunk. Let's bring it up to date:

$ bzr update
+N  hello.py
+N  hello.sh
 M  README.md
All changes applied successfully.
Updated to revision 6 of branch /sandbox/central/trunk

The missing revisions are added to the branch, and the necessary changes are applied to the working tree, resulting in identical branches:

$ bzr missing ../trunk
Branches are up to date.

Using Bazaar Explorer

To bring a checkout up-to-date with its master, you can either click on the large Update button in the toolbar, or navigate to Bazaar | Collaborate | Update Working Tree... in the menu. The user interface does not take any parameters; the operation is applied immediately, and its result is shown similar to the command-line interface.

Visiting an older revision

An interesting alternative use of the update operation is to reset the working tree to a past state by specifying a revision using the -r or --revision options. For example:

$ cd /sandbox/central/trunk
$ bzr update -r3
-D  .bzrignore
 M  README.md
-D  hello.py
-D  hello.sh
All changes applied successfully.
Updated to revision 3 of branch http://bazaar.launchpad.net/~bzrbook/bzrbook-examples/hello-start

This may seem similar to using bzr revert, but in fact it is very different. The changes applied to the working tree will not be considered pending changes. Instead, the working tree is marked as out of date with its master, effectively preventing commit operations in this state:

$ bzr status
working tree is out of date, run 'bzr update'

Another difference from the revert command is that we cannot specify a subset of files; the update command is applied to the entire working tree. This operation works on unbound branches too. Since an unbound branch can be thought of as being its own master, the update command without a revision parameter simply restores it to its latest revision.

Committing a new revision

The commit operation works in the same way as it does with unbound branches; however, in keeping with the main principles of the centralized mode, Bazaar must ensure that the commit is performed in two branches—first in the master branch, followed by the bound branch. The commit succeeds only if the bound branch is at the same revision as its master branch. Otherwise, the operation fails, and the bound branch must first be synchronized with its master branch using the update operation. In Bazaar Explorer, the Commit view shows an additional explanation when committing in a bound branch, as a kind reminder that the operation will be performed on the master branch first, keeping the local and master branches in sync:

Practical tips when working in centralized mode

The centralized mode is simple and easy to work with in general, except for the update operation. The update operation can be problematic when there are too many pending changes in the working tree and the central branch has evolved too far since the last time the bound branch was synchronized.
Fortunately, a few simple practices can greatly reduce or mitigate the potential conflicts that may arise during update operations:

- Always perform an update before starting to work on something new. That is, make sure to start a new development based on the latest version of the central branch.
- Break down bigger changes into smaller steps and commit them little by little. Don't let too many pending changes accumulate locally; try to commit your work as soon as possible.
- In case of large-scale changes, and whenever it makes sense, use dedicated feature branches. You can work on feature branches locally or share them with others by pushing to the central server.

Working with bound branches

Bazaar provides additional operations using bound branches that go beyond the core principles of the centralized mode, such as:

- Unbinding from the master branch
- Binding to a branch
- Local commits

Essentially, these operations provide different ways to switch in and out of the centralized mode, which is extremely useful when a central branch becomes temporarily unavailable, or if you want to rearrange the branches in your workflow.

Unbinding from the master branch

Sometimes, you may want to commit changes even if the master branch is not accessible, for example, when the server hosting the master branch is experiencing network problems, or if you are in an environment with no network access, such as in a coffee shop or on a train. You can unbind from the master branch by using the bzr unbind command. To unbind a branch using Bazaar Explorer, you can either click on the large Work icon in the toolbar and select Unbind Branch, or use the menu Bazaar | Work | Unbind Branch. Internally, this operation simply sets the bound configuration value to false. Since the branch is no longer considered bound, subsequent commit operations will be performed only locally, and the branch will behave as any other regular branch. You can confirm that a branch was unbound from its master by using the bzr info command. For example:

$ cd /sandbox/central/
$ bzr checkout trunk mycheckout
$ cd mycheckout/
$ bzr info
Repository checkout (format: 2a)
Location:
  repository checkout root: .
  checkout of branch: /sandbox/central/trunk
  shared repository: /sandbox/central
$ bzr unbind
$ bzr info
Repository tree (format: 2a)
Location:
  shared repository: /sandbox/central
  repository branch: .

That is, the configuration has changed from Repository checkout to Repository tree, and the checkout of branch line disappeared from the output.

Binding to a branch

Sometimes, you may want to bind a regular independent branch to another branch, for example, to switch to using the centralized mode, or if you previously unbound from a branch and want to bind to it again. You can bind to a branch by using the bzr bind command and specifying the URL of the branch. To bind a branch using Bazaar Explorer, you can either click on the large Work icon in the toolbar and select Bind Branch..., or use the menu Bazaar | Work | Bind Branch.... If you have previously used unbind in this branch, then you can omit the URL parameter on the command line, and in Bazaar Explorer the previous location is selected by default. Internally, this operation simply updates the branch configuration—it sets or updates the value of bound_location and sets the value of bound to True. Since the branch is now considered bound, all commit operations will first be applied to the master branch, but the working tree is left unchanged at this point.
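Continuing the session above, re-binding the same branch is symmetric; a minimal sketch (the output mirrors the bzr info examples shown earlier, and is illustrative) might look as follows:

$ bzr bind /sandbox/central/trunk
$ bzr info
Repository checkout (format: 2a)
Location:
  repository checkout root: .
  checkout of branch: /sandbox/central/trunk
  shared repository: /sandbox/central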
Although you can bind any branch to any other branch, it only makes sense to bind to a related branch, typically a branch that is some revisions ahead of the current branch, so that a normal pull operation would bring the local branch up-to-date with its master branch. After binding to a branch, you should bring the local branch up-to-date with its master branch by using bzr update. Ideally, if the local branch is related to its new master and is just some revisions behind, then the update operation will simply bring it up-to-date by copying the revision data and the branch data of the master, leaving the working tree in a clean state, ready to work in the branch. However, if the two branches have diverged from each other, then the update operation will perform a merge—first the working tree is updated to match the latest revision in the master branch, and after that the revisions that do not exist in the master branch are merged in the same way as in a regular merge operation. This is an unusual use case, but nonetheless a valid operation. After all the changes are applied, you must sort out all conflicts, if any, and you may commit the merge. Since the branch is now a bound branch, the merge commit will first be applied in the master branch, and after that in the bound branch.

Using local commits

If you want to break out of the centralized mode only temporarily, an alternative to unbinding and rebinding later is using so-called local commits. When using local commits, you basically stay in centralized mode, but instead of trying to commit in the master branch, the commit operation is applied only in the local branch. This can be very useful when the master branch is temporarily unavailable but expected to be restored soon. You can perform a local commit by using the bzr commit command with the --local flag, or in Bazaar Explorer by selecting the Local commit box in the Commit view: You can continue to perform as many local commits as needed until the master branch becomes available again. As a result of local commits, the bound branch and the master branch go out of sync. If you try to perform a regular commit in such a state, Bazaar will raise an error and tell you to either continue committing locally, or perform an update and then commit:

$ bzr commit -m 'removed readme'
bzr: ERROR: Bound branch BzrBranch7(file:///sandbox/central/on-the-train/) is out of date with master branch BzrBranch7(file:///sandbox/central/trunk/).
To commit to master branch, run update and then commit.
You can also pass --local to commit to continue working disconnected.

It may seem strange at first that we have to do an update, even though in this case our local branch is clearly ahead of its master. However, the behavior is consistent with the rule: if a bound branch is not in sync with its master branch, you must always use the update operation to synchronize it. As usual, the update operation will first restore the working tree to the same state as the latest revision in the master branch. After that, it will perform a merge from the tip of the local branch, applying the changes in the revisions that were committed locally. Finally, it will apply the pending changes that existed at the moment the update operation started. As a result, the working tree will be in a pending merge state, as you can confirm by using the log and status commands. After sorting out all conflicts, if any, you may commit the merge. The local commits will appear as if they had been made on a branch and that branch had been merged.
This makes perfect sense, as indeed this is exactly what happened. If no new revisions were added to the master branch during your local commits, then a simple way to bring the master up-to-date is to do a bzr push operation instead of bzr update. This works because in this case the two branches have not diverged; the local branch is simply a few revisions ahead of its master. The push operation appends the missing revisions to the master branch, the two branches become synchronized again, and you can continue to work and commit normally.

Working with multiple branches

Branch operations work consistently, regardless of whether you use the centralized mode or not. Although the centralized mode permits multiple collaborators committing unrelated changes continuously in the central branch, it is better to work on new improvements in dedicated feature branches and merge them into the central branch only when they are ready. In this way, the revision history remains easy to read, and if a feature causes problems, then all the revisions involved in it can be reverted easily with one swift move. Even in a centralized workflow, you are free to use as many local private branches as needed. You can slice and dice your local branches, and when a feature is ready, you can merge them into the central branch, and all the intermediate revisions will be preserved in the history. Team members can work on a feature branch together by sharing the branch on the central server. One of the team members can start working on the feature, and at some point push the branch to the server so that others can check out from it and start contributing their work. After pushing the branch to the server, the original contributor can switch to the centralized mode using the bind command. When working on a bound branch, keep in mind that in addition to the commit operation, the push and pull operations too will (at least try to) impact its master branch.

Setting up a central server

In order to use Bazaar in the centralized mode, collaborators need to have write access to the branches on a central server. Here, we explain a few ways of configuring such servers.

Using an SSH server

An easy and secure way to provide write access to branches at a central location is by using an SSH server. In this setup, users authenticate via the SSH service running on the server, and their read and write access permissions to the branches are subject to regular filesystem permissions. There are several ways of accessing Bazaar branches over SSH:

- Users access the server with their own SSH account
- Users access the branches with a shared restricted SSH account
- Users access the server with their own SSH account over SFTP

Using the smart server over SSH

If Bazaar is installed on the server, remote clients can benefit from the built-in smart server when accessing branches by using the bzr+ssh:// protocol. In this mode, the bzr serve command is invoked on the server side to handle incoming Bazaar commands. This mode is called smart server, because remote clients receive assistance from the server, significantly speeding up Bazaar operations. In addition to Bazaar being installed on the server, the bzr command must be in a directory included in the user's PATH variable. Otherwise, the absolute path of bzr must be specified on the client side, either in the BZR_REMOTE_PATH environment variable or in Bazaar's user configuration.
For example, if bzr is installed in /usr/local/bin/bzr, then you can execute Bazaar commands on the remote location as follows:

$ export BZR_REMOTE_PATH=/usr/local/bin/bzr
$ bzr info bzr+ssh://jack@example.com/repos/projectx

Alternatively, the remote path can be specified in the locations.conf file in your Bazaar configuration directory as follows:

[bzr+ssh://example.com/repos/projectx]
bzr_remote_path = /usr/local/bin/bzr

See bzr help configuration for more details. Use the bzr version command to find the location of the Bazaar configuration directory.

Using individual SSH accounts

This is the easiest way to access Bazaar repositories on a remote computer. Users with shell access to a computer can access Bazaar branches by using the bzr+ssh:// protocol. For example:

$ bzr info bzr+ssh://jack@example.com/repos/projectx

The path component in the URL must be the absolute path of the branch on the server; in this example, the branch is in /repos/projectx. If the branch is in the user's home directory, then the home directory part can be replaced with ~; for example, instead of /home/jack/repos/projectx, you can use the simpler form ~/repos/projectx:

$ bzr info bzr+ssh://jack@example.com/~/repos/projectx

To refer to a Bazaar branch in another user's home directory, you can use the ~username shortcut. For example:

$ bzr log bzr+ssh://jack@example.com/~mike/repos/projectx

In order to let multiple users commit to the same branches, their user accounts must have write permission to the branch and repository files used by Bazaar. One way to do that is by adding the users to a dedicated group, and setting the ownership and access permissions appropriately. Let's call this group bzrgroup, and let's set up a shared repository at /srv/repos/projectx for members of the group, as follows (note that group write permission and the setgid bit are needed so that the group members can actually write to the repository files):

$ bzr init-repository /srv/repos/projectx --no-trees
Shared repository (format: 2a)
Location:
  shared repository: /srv/repos/projectx
$ chgrp -R bzrgroup /srv/repos/projectx
$ chmod -R g+ws /srv/repos/projectx

With this setup, the members of bzrgroup can create branches and commit to them. With appropriate permissions, other users can be restricted to strictly read-only access.

Using a shared restricted SSH account

Instead of creating individual SSH accounts for each collaborator, an interesting alternative is to use a shared SSH account with command restrictions. This setup requires that collaborators use SSH public key authentication when connecting to the server, and that the appropriate access permissions to the branches be configured in the ~/.ssh/authorized_keys file of the shared SSH account. Let's suppose that:

- There is a shared repository on the server in /srv/bzr/projectx
- You want to give Jack and Mike write access to the shared repository
- The shared repository is owned by the user bzruser

To make this work, add the following two lines to the ~/.ssh/authorized_keys file of bzruser:

command="bzr serve --inet --allow-writes --directory=/srv/bzr/projectx",no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding PUBKEY_OF_JACK
command="bzr serve --inet --allow-writes --directory=/srv/bzr/projectx",no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding PUBKEY_OF_MIKE

Replace PUBKEY_OF_JACK and PUBKEY_OF_MIKE with the SSH public keys of Jack and Mike, respectively.
For example, an SSH public key looks similar to the following:

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAo6a+TOzByRt9EVUjpMBs5kRft9SSPamI3cRlvaX4DuMbRqjtfkRTO4tik+MAWaFeIHyO5EsdFBGp+XVH9BMqehXdjAQga4Wa2oGX/w7bn+O+gdIoJE2wzMlGV2eXcaW2PKdDIqQpUn0n+xX68vjRaCiZmqGXWhVej3cVi9dtIwIQMrcIF4T+4wONic09UjPXZKbjL2GmkzsR6SMQJBomr4TUcRgyaR5ija9R8AzvsSdNeDKkVwf83lva3jruwEMute3aZFulM5JqvjFIFqooAlSjWjdniF8ZdweeN1c2Q2QH+eCl48hY2drUsdZ+oQH+xp8x6llkZiDWFE/RZLa3Glw== Joe

The command option restricts the login to running the bzr serve command. In this way, the users will not be able to do anything else on the server except run Bazaar commands. The --directory parameter further restricts Bazaar operations to the specified directory. To give only read-only access, simply drop the --allow-writes flag. The other options on the line after command make the SSH sessions as restricted as possible, as a good measure of security.

When accessing branches in this setup, the path component in the branch URL must be relative to the directory specified in the authorization line. For example, the trunk in /srv/bzr/projectx/trunk can be accessed as follows:

$ bzr info bzr+ssh://[email protected]/trunk

The drawback of this setup is that you can only have one configuration line per SSH key.

Using SFTP

If SFTP is enabled on the SSH server, you can access branches without installing Bazaar on the server by using the sftp:// URL prefix instead of bzr+ssh://. For example:

$ bzr info sftp://[email protected]/home/mike/repos/projectx

This type of access is called "dumb server" mode, because in this case Bazaar is not used on the server side, and thus it cannot provide assistance to the client. In this setup, operations will be much less efficient compared to using the smart server.

Using bzr serve directly

You can use the Bazaar smart server directly to listen for incoming connections and serve the branch data. Use the bzr serve command to start the smart server. By default, it listens on port 4155, and serves branch data from the current working directory in read-only mode. It has several command-line parameters and flags to change the default behavior. For example:

- --directory DIR: This specifies the base directory to serve the branch data from, instead of the current working directory
- --port PORT: This specifies the port number to listen on, instead of the default 4155 port
- --allow-writes: This allows write operations instead of strictly read-only access

Use the -h or --help flags to see the list of supported command-line parameters. Branches served in this way can be accessed by URLs in the following format:

bzr://host/[path]

Here, host is the hostname of the server, and path is the relative path from the base directory of the server process. For example, if the server is example.com, the smart server is running in the directory /srv/bzr/repo, and there is a Bazaar branch at the path /srv/bzr/repo/projectx/feature-123, then the branch can be accessed as follows:

$ bzr info bzr://example.com/projectx/feature-123

The advantage of this setup is that the smart server provides good performance. On the other hand, it completely lacks authentication.

Using bzr serve over inetd

On GNU/Linux and UNIX systems, you can configure inetd to start the bzr serve command automatically as needed, by adding a line to the inetd.conf file as follows:

4155 stream TCP nowait bzruser /usr/bin/bzr /usr/bin/bzr serve --inet --directory=/srv/bzr/repo

Here:

- 4155 is the port number where the Bazaar server should listen for incoming connections.
- bzruser is the user account the bzr serve process will run as.
- /usr/bin/bzr is the absolute path of the bzr command.
- /usr/bin/bzr serve --inet --directory=/srv/bzr/repo is the complete command to execute when starting the server. The --directory parameter is used to specify the base directory of the Bazaar branches.

Once configured, this setup works in exactly the same way as using bzr serve directly, with the same advantages and disadvantages.

Creating branches on the central server

Creating branches on a server works much in the same way as creating branches locally. Here, we emphasize some good practices for optimal performance.

Just as when working with local branches, it is a good idea to create a shared repository per project to host multiple Bazaar branches. Even if you don't intend to use multiple branches at first, you might want to do so later, and it is easier to have a shared repository right from the start than to migrate an existing branch later.

Another important point is to configure the shared repository to not create working trees by default. Working trees are unnecessary on the server, because collaborators work in their local checkouts, and Bazaar may give warnings during branch operations if the central branch contains a working tree. In order to avoid confusion, it is better to completely omit working trees on the server.

Creating a shared repository without working trees

Similar to working with local branches, using a shared repository on the server is a good way to save disk space. In addition, when pushing a new branch to the server that shares revisions with an existing branch, the shared revisions don't need to be copied, so the push operation will be faster.

When creating the shared repository, make sure to use the --no-trees flag, so that new branches will be created without trees by default. Although you will most probably create new branches using push operations, and most protocols don't support creating a working tree when used with push, it is nonetheless a good precaution to set up the shared repository in this way right from the start.

Reconfiguring a shared repository to not use working trees

You can use the bzr info command to check whether a shared repository is configured with or without working trees. For example:

$ bzr info bzr+ssh://[email protected]/tmp/repo/
Shared repository with trees (format: unnamed)
Location:
  shared repository: bzr+ssh://[email protected]/tmp/repo/

If the first line of the output says Shared repository with trees instead of simply Shared repository, then you should log in to the server and reconfigure it by using the bzr reconfigure command with the --with-no-trees flag. For example:

$ cd /tmp/repo
$ bzr reconfigure --with-no-trees
$ bzr info
Shared repository (format: 2a)
Location:
  shared repository: .

Removing an existing working tree

If you already have branches with a working tree on the central server, then it is a good idea to remove those trees. First, check the status of the working tree by using the bzr status command. If there are any pending changes, then commit or revert them. To remove the working tree, use the bzr reconfigure command with the --branch flag.

Creating branches on the server without a working tree

Although you can use the bzr init and bzr branch commands directly on the server in the same way as you would locally, doing so would defeat the purpose of the centralized setup, and invite mistakes such as creating working trees by accident.
A common way to create new branches on the server is by using a push operation from your local branch. For example:

$ bzr push bzr+ssh://[email protected]/tmp/repo/branch1
Created new branch.

After pushing a branch, if you would like to work on it in the centralized mode, then you can bind to the remote branch by using the :push location alias:

$ bzr bind :push

Practical use cases

The key feature of the centralized mode is that it automatically keeps bound branches synchronized with their master branch. This opens interesting possibilities that can be useful in many situations, regardless of the workflow or the size of a team. To give you some ideas, we briefly introduce a few example use cases.

Working on branches using multiple computers

If you use multiple computers to work on a project, for example, a desktop and a laptop, or computers at different locations, then you probably need a way to synchronize the work done at physically different locations. Although you can synchronize branches between two locations by using mirror operations such as bzr push and bzr pull, these are not automatic, and thus you may easily find yourself in a situation in which you cannot access some changes you made on another computer because, for example, you forgot to run bzr push before you switched off the machine.

Using the centralized mode can help here, because the synchronization between the two branches is automatic; it takes place at the time of each commit. You can start using the centralized mode by converting the branch you used to push to into a master branch, and binding your other branches to it.

Let's say you have two computers, computerA and computerB, both of which can access a branch at some location, branchX, and you work on the branch sometimes using computerA and at other times using computerB. (Whether branchX is hosted on computerA, computerB, or a third computer doesn't matter; the example still holds true.) You can keep your work environments synchronized by using the bzr push and bzr pull operations, by adopting the following workflow on both computers when working on branches you want to share:

1. Pull from branchX.
2. Work, make changes, and commit.
3. Push to branchX.

This can be tedious and error-prone; for example, if you forget to push your changes on one computer, then you might not be able to access those changes after switching to the other computer, as it may have been powered down, or be inaccessible directly over the network. Using the centralized mode simplifies the workflow to only two steps:

1. Update from branchX.
2. Work, make changes, and commit.

Not only is there one less step to do, but since in this case branchX is automatically updated at every commit, the possibility of forgetting to run bzr push is completely eliminated.

You can convert your existing setup to the centralized mode simply by binding to branchX on both computers, and then using the update command to synchronize. Assuming that both branches have no pending changes and both have been pushed to branchX as their last operation, you can convert them by using the following commands:

On computerA:

$ bzr pull
$ bzr bind :push

On computerB:

$ bzr bind :push
$ bzr update

After this, you can start using branchX in the centralized mode, as a cycle of bzr update and bzr commit operations.

Synchronizing backup branches

An easy way to back up a branch is by pushing it to another location.
For example:

$ bzr push BACKUP_URL

BACKUP_URL can be a path on an external disk, a path on a network share or network filesystem, or any remote URL. However, the push operation is not automatic; it must be executed manually every time you want to update the backup.

Another way is to bind the branch to the backup location, effectively using it in the centralized mode. In this case, all commits in the bound branch will be automatically applied to its master branch too, keeping the backup up to date at all times. You can convert the branch to this setup simply by binding to the push location:

$ bzr bind :push

Since this practically means switching to the centralized mode, it is important to have fast access to BACKUP_URL; otherwise, the delay at every commit might be annoying. If you need to break out of the centralized mode, for example, when BACKUP_URL is temporarily unavailable for some reason, then simply run bzr unbind. After BACKUP_URL becomes available again, you can bring the remote branch up to date with bzr push, and re-bind to it by using bzr bind without additional parameters to return to the centralized mode.

Summary

In this article, we explained the core principles of the centralized mode, with its advantages and disadvantages. Bazaar fully supports the centralized mode by using bound branches, and we have demonstrated, with examples, how you can switch in and out of this mode at any time. We have covered a few simple ways of setting up a central server, where team members can have shared write access to branches, and a few practical use cases.

The centralized mode in Bazaar is very flexible. It can be used for more than just imitating the workflow of centralized version control systems. Essentially, it provides automatic synchronization of two branches, which can be practical in many situations, even as a part of more sophisticated distributed workflows.

Motion Detection

Packt
12 Aug 2013
6 min read
(For more resources related to this topic, see here.)

Obtaining the frame difference

To begin with, we create a patch named Frame001.pd. Put in all the elements for displaying the live webcam image in a rectangle. We use a dimen 800 600 message for the gemwin object to show the GEM window at 800 x 600 pixels. We plan to display the video image at the full size of the window. The aspect ratio of the current GEM window is now 4:3. We use a rectangle of size 5.33 x 4 (a 4:3 aspect ratio) to cover the whole GEM window:

Now we have one single frame of the video image. To make a comparison with another frame, we have to store that frame in memory. In the following patch, you can click on the bang box to store a copy of the current video frame in the buffer. The latest video frame will be compared against the stored copy, as shown in the following screenshot:

The object that compares two frames is pix_diff. It is similar to the Difference layer option in Photoshop. Those pixels that are the same in both frames are black. The colored areas are those with changes across the two frames. Here is what you would expect in the GEM window:

To further simplify the image, we can get rid of the color and use only black and white to indicate the changes:

The pix_grey object converts a color image into grey scale. The pix_threshold object will zero out (black) the pixels whose color information is lower than a threshold value supplied by the horizontal slider, which has a value between 0 and 1. Refer to the following screenshot:

Note that a default slider has a value between 0 and 127. You have to change the range to 0 and 1 using the Properties window of the slider.

In this way, we can obtain information about those pixels that differ from the stored image.

Detecting presence

Based on the knowledge of which pixels have changed between the stored image and the current video image, we can detect the presence of a foreground subject in front of a static background. Point your webcam at a relatively static background; click on the bang box, which is next to the Store comment, to store the background image in the pix_buffer object. Anything that appears in front of the background will be shown in the GEM window. Now we can ask the question: how can we know if there is anything present in front of the background? The answer will be in the pix_blob object:

The pix_blob object calculates the centroid of an image. The centroid (http://en.wikipedia.org/wiki/Centroid) of an image is its center of mass. Imagine that you cut out the shape of the image in a piece of cardboard. The centroid is the center of mass of that piece of cardboard. You can balance the cardboard by using one finger to hold it at the center of mass. In our example, the image is mostly a black grey-scale image. The pix_blob object finds the center of the nonblack pixels and returns its position in its first and second outlets. The third outlet indicates the size of the nonblack pixel group.

To detect the presence of a foreground subject in front of the background, the first and second number boxes connected to the corresponding pix_blob outlets will return roughly the center of the foreground subject. The third number box will tell how big that foreground subject is. If you pay attention to the changes in the three number boxes, you can guess how we will implement presence detection. When you click on the store image bang button, the third number box (size) will turn zero immediately.
Once you enter the frame, in front of the background, the number increases. The bigger the portion of the frame you occupy, the larger the number. To complete the logic, we can check whether the third number box's value is greater than a predefined number. If it is, we conclude that something is present in front of the background. If it is not, there is nothing in front of the background. The following patch, Frame002.pd, will try to display a warning message when something is present:

A comparison object, > 0.002, detects the size of the grey area (blob). If it is true, it sends a value of 1 to the gemhead object so that the warning text displays. If it is false, it sends a value of 0. We'll use a new technique to turn the text on and off. Each gemhead object can accept a toggle input to turn it on or off. A value of 1 enables the rendering of that gemhead path; a value of 0 disables the rendering.

When you first click on the store image bang button, the third number box value drops to 0. Minor changes in the background will not trigger the text message:

If there is a significant change in front of the background, the size number box will have a value larger than 0.002. It thus enables the rendering of the text2d message to display the WARNING message. After you click on the Store bang box, you can drag the horizontal slider attached to the pix_threshold object. Drag it towards the right-hand side until the image in the GEM window turns completely black. That will roughly be the threshold value.

Note also that we use a number in each gemhead object. It is the rendering order. The default is 50. A larger number is rendered after a lower number. In this case, the gemhead object for the pix_video object will render first, and the gemhead object for the text2d object will render afterwards. In this way, we can guarantee that the text will always be on top of the video:

Actually, you can replace the previous version with a single pix_background object. A reset message will replace the bang button to store the background image. In the following patch, it will show either the clear or the warning message on the screen, depending on the presence of a subject in front of the background image:

The GEM window at this moment shows only a black screen when there isn't anything in front of the background. For most applications, it would be better to have the live video image on screen. In the following patch, we split the video signal into two streams: one to the pix_background object for detection, and one to the pix_texture object for display:

The patch requires two pix_separator objects to separate the two video streams from pix_video, in order not to let one affect the other. Here is the background image after clicking on the reset message:

The warning message shows up after the subject enters the frame, and is triggered by the comparison object > 0.005 in the patch:

We have been using the pix_blob object to detect presence in front of a static background image. The pix_blob object will also return the position of the subject (blob) in front of the webcam. We are going to look into this in the next section.

Interacting with the User

Packt
08 Aug 2013
23 min read
(For more resources related to this topic, see here.)

Creating actions, commands, and handlers

The first few releases of the Eclipse framework provided Action as a means of contributing menu items. These were defined declaratively via actionSets in the plugin.xml file, and many tutorials still reference those today. At the programming level, Actions are still used to provide context menus programmatically when creating views. They were replaced with commands in Eclipse 3, as a more abstract way of decoupling the operation of a command from its representation in the menu. To connect these two together, a handler is used.

E4: Eclipse 4.x uses the command model, and decouples it further using the @Execute annotation on the handler class. Commands and views are hooked up with entries in the application's model.

Time for action – adding context menus

A context menu can be added to the TimeZoneTableView class during the view's creation, so that the view can respond to it dynamically. The typical pattern for Eclipse 3 applications is to create a hookContextMenu() method, which is used to wire up the context menu operation with displaying the menu. A default implementation can be seen by creating an example view, or one can be created from first principles.

Eclipse menus are managed by a MenuManager. This is a specialized subclass of a more general ContributionManager, which looks after a dynamic set of contributions that can be made from other sources. When the menu manager is connected to a control, it responds in the standard ways for the platform for showing the menu (typically a context-sensitive click or shortcut key). Menus can also be displayed in other locations, such as a view's or the workspace's coolbar (toolbar). The same MenuManager approach works in these different locations.

1. Open the TimeZoneTableView class and go to the createPartControl() method. At the bottom of the method, add a new MenuManager with the ID #PopupMenu and associate it with the viewer's control:

MenuManager manager = new MenuManager("#PopupMenu");
Menu menu = manager.createContextMenu(tableViewer.getControl());
tableViewer.getControl().setMenu(menu);

2. If the Menu is empty, the MenuManager won't show any content, so this currently has no effect. To demonstrate this, an Action will be added to the Menu. An Action has text (for rendering in the pop-up menu, or the menu at the top of the screen), as well as a state (enabled/disabled, selected) and a behavior. These are typically created as subclasses with (although Action doesn't strictly require it) an implementation of the run() method. Add this to the bottom of the createPartControl() method:

Action deprecated = new Action() {
  public void run() {
    MessageDialog.openInformation(null, "Hello", "World");
  }
};
deprecated.setText("Hello");
manager.add(deprecated);

3. Run the Eclipse instance, open the Time Zone Table View, and right-click on the table. The Hello menu can be seen, and when selected, an informational dialog is shown.

What just happened?

The MenuManager (with the ID #PopupMenu) was bound to the control, which means that when that particular control's context-sensitive menu is invoked, the manager will be asked to display a menu. The manager is associated with a single Menu object (which is also stamped on the underlying control itself) and is responsible for updating the status of the menu.

Actions are deprecated.
They are included here since examples on the Internet may still reference them, but it's important to note that while they still work, the preferred way of building user interfaces is with commands and handlers, shown in the next section.

When the menu is shown, the actions that the menu contains are rendered in the order in which they were added. Actions are usually subclasses that implement a run() method, which performs a certain operation, and have text that is displayed. Action instances also have other metadata, such as whether they are enabled or disabled. Although it is tempting to override the accessor methods, this doesn't work: the setters cause an event to be sent out to registered listeners, which causes side effects, such as updating any displayed controls.

Time for action – creating commands and handlers

Since the Action class is deprecated, the supported mechanism is to create a command, a handler, and a menu to display the command in the menu bar.

1. Open the plug-in manifest for the project, or double-click on the plugin.xml file. Edit the source on the plugin.xml tab, and add a definition of a Hello command as follows:

<extension point="org.eclipse.ui.commands">
  <command name="Hello"
           description="Says Hello World"
           id="com.packtpub.e4.clock.ui.command.hello"/>
</extension>

2. This creates a command, which is just an identifier and a name. To specify what it does, it must be connected to a handler, which is done by adding the following extension:

<extension point="org.eclipse.ui.handlers">
  <handler class="com.packtpub.e4.clock.ui.handlers.HelloHandler"
           commandId="com.packtpub.e4.clock.ui.command.hello"/>
</extension>

3. The handler joins the processing of the command to a class that implements IHandler, typically AbstractHandler. Create a class HelloHandler in a new com.packtpub.e4.clock.ui.handlers package, which extends AbstractHandler (from the org.eclipse.core.commands package):

public class HelloHandler extends AbstractHandler {
  public Object execute(ExecutionEvent event) {
    MessageDialog.openInformation(null, "Hello", "World");
    return null;
  }
}

4. The command's ID, com.packtpub.e4.clock.ui.command.hello, is used to refer to it from menus or other locations. To place the contribution in an existing menu structure, it needs to be specified by its locationURI, which is a URL that begins with menu:, such as menu:window?after=additions or menu:file?after=additions. To place it in the Help menu, add this to the plugin.xml file:

<extension point="org.eclipse.ui.menus">
  <menuContribution allPopups="false" locationURI="menu:help?after=additions">
    <command commandId="com.packtpub.e4.clock.ui.command.hello"
             label="Hello"
             style="push">
    </command>
  </menuContribution>
</extension>

5. Run the Eclipse instance, and there will be a Hello menu item under the Help menu. When selected, it will pop up the Hello World message.

If the Hello menu is disabled, verify that the handler extension point is defined, which connects the command to the handler class.

What just happened?

The main issue with the actions framework was that it tightly coupled the state of the command with the user interface. Although an action could be used uniformly between different menu locations, the Action superclass lives in the JFace package, which has dependencies on both SWT and other UI components. As a result, Action cannot be used in a headless environment. Eclipse 3.x introduced the concept of commands and handlers as a means of separating their interface from their implementation.
This allows a generic command (such as Copy) to be overridden by specific views. Unlike the traditional command design pattern, which provides the implementation in subclasses, a command in Eclipse 3.x uses a final class and a retargetable IHandler to perform the actual execution.

E4: In Eclipse 4.x, the concepts of commands and handlers are used extensively to provide the components of the user interface. The key difference is in their definition; in Eclipse 3.x, this typically occurs in the plugin.xml file, whereas in E4 it is part of the application model.

In the example, a specific handler was defined for the command, which is valid in all contexts. The handler's class is the implementation; the command ID is the reference.

The org.eclipse.ui.menus extension point allows menuContributions to be added anywhere in the user interface. To address where the menu can be contributed to, the locationURI defines where the menu item can be created. The syntax for the URI is as follows:

- menu: Menus begin with the menu: protocol (this can also be toolbar: or popup:)
- identifier: This can be a known short name (such as file, window, and help), the global menu (org.eclipse.ui.main.menu), the global toolbar (org.eclipse.ui.main.toolbar), a view identifier (org.eclipse.ui.views.ContentOutline), or an ID explicitly defined in a pop-up menu's registerContextMenu() call
- ?after (or before)=key: This is the placement instruction to put this after or before other items; typically, additions is used as an extensible location for others to contribute to

The locationURI allows plug-ins to contribute to other menus, regardless of where they are ultimately located.

Note that if the handler implements the IHandler interface directly instead of subclassing AbstractHandler, the isEnabled() method will need to be overridden; otherwise the command won't be enabled, and the menu won't have any effect.

Time for action – binding commands to keys

To hook up the command to a keystroke, a binding is used. This allows a key (or series of keys) to be used to invoke the command, instead of only via the menu. Bindings are set up via the extension point org.eclipse.ui.bindings, and connect a sequence of keystrokes to a command ID.

1. Open the plugin.xml in the clock.ui project. On the plugin.xml tab, add the following:

<extension point="org.eclipse.ui.bindings">
  <key commandId="com.packtpub.e4.clock.ui.command.hello"
       sequence="M1+9"
       contextId="org.eclipse.ui.contexts.window"
       schemeId="org.eclipse.ui.defaultAcceleratorConfiguration"/>
</extension>

2. Run the Eclipse instance, and press Cmd + 9 (on OS X) or Ctrl + 9 (on Windows/Linux). The same Hello dialog should be displayed, as if it were shown from the menu. The same keystroke should be displayed in the Help menu.

What just happened?

The M1 key is the primary meta key, which is Cmd on OS X and Ctrl on Windows/Linux. This is typically used for the main operations; for example, M1+C is copy and M1+V is paste on all systems. The sequence notation M1+9 is used to indicate pressing both keys at the same time.

The command that gets invoked is referenced by its commandId. This may be defined in the same plug-in, but does not have to be; it is possible for one application to provide a set of commands and another plug-in to provide keystrokes that bind them.

It is also possible to set up a sequence of key presses; for example, M1+9 8 7 would require pressing Cmd + 9 or Ctrl + 9, followed by 8 and then 7, before the command is executed.
This allows a set of keystrokes to be used to invoke a command; for example, it's possible to emulate an Emacs quit operation by binding Ctrl+X Ctrl+C to the quit command.

Other modifier keys include M2 (Shift), M3 (Alt/Option), and M4 (Ctrl on OS X). It is possible to use CTRL, SHIFT, or ALT as long names, but the meta names are preferred, since M1 tends to be bound to different keys on different operating systems. The non-modifier keys themselves can be single characters (A to Z), numbers (0 to 9), or one of a set of longer-named key codes, such as F12, ARROW_UP, TAB, and PAGE_UP. Certain common variations are allowed; for example, ESC/ESCAPE, ENTER/RETURN, and so on.

Finally, bindings are associated with a scheme, which in the default case should be org.eclipse.ui.defaultAcceleratorConfiguration. Schemes exist to allow the user to switch in and out of keybindings and replace them with others, which is how tools such as "vrapper" (a vi emulator) and the Emacs bindings that come with Eclipse can be used. (This can be changed via the Window | Preferences | Keys menu in Eclipse.)

Time for action – changing contexts

The context is the location in which a binding is valid. Commands that are visible everywhere, typically the kind of options in the default menu, can be associated with the org.eclipse.ui.contexts.window context. If the command should also be invocable from dialogs, then the org.eclipse.ui.contexts.dialogAndWindow context would be used instead.

1. Open the plugin.xml file of the clock.ui project. To enable the command only for Java editors, go to the plugin.xml tab, and change the contextId from org.eclipse.ui.contexts.window to org.eclipse.jdt.ui.javaEditorScope:

<extension point="org.eclipse.ui.bindings">
  <key commandId="com.packtpub.e4.clock.ui.command.hello"
       sequence="M1+9"
       contextId="org.eclipse.jdt.ui.javaEditorScope"
       schemeId="org.eclipse.ui.defaultAcceleratorConfiguration"/>
</extension>

2. Run the Eclipse instance, and create a Java project, a test Java class, and an empty text file. Open both of these in editors. When the focus is on the Java editor, the Cmd + 9 or Ctrl + 9 operation will run the command, but when the focus is on the text editor, the keybinding will have no effect. Unfortunately, this also highlights the fact that just because the keybinding is limited to the Java editor scope, the underlying command itself is not disabled.

If there is no change in behavior, try cleaning the workspace of the test instance at launch, by going to the Run | Run... menu and choosing Clear on the workspace. This is sometimes necessary when making changes to the plugin.xml file, as some extensions are cached and may lead to strange behavior.

What just happened?

Context scopes allow bindings to be valid in certain situations, such as when a Java editor is open. This allows the same keybinding to be used for different situations, such as a Format operation, which may have a different effect in a Java editor than in an XML editor, for instance. Since scopes are hierarchical, they can specifically target the contexts in which they may be used. The Java editor context is a subcontext of the general text editor context, which in turn is a subcontext of the window context, which in turn is a subcontext of the dialogAndWindow context.
The available contexts can be seen by editing the plugin.xml file in the plug-in editor; on the Extensions tab, the binding is shown in an editor window with a form. Clicking on the Browse… button next to the contextId brings up a dialog, which presents the available contexts.

It's also possible to find out all the contexts programmatically via the running OSGi instance, by navigating to Window | Show View | Console, choosing New Host OSGi Console in the drop-down menu, and then running the following command:

osgi> pt -v org.eclipse.ui.contexts
Extension point: org.eclipse.ui.contexts [from org.eclipse.ui]
Extension(s):
-------------------
null [from org.eclipse.ant.ui]
  <context>
    name = Editing Ant Buildfiles
    description = Editing Ant Buildfiles Context
    parentId = org.eclipse.ui.textEditorScope
    id = org.eclipse.ant.ui.AntEditorScope
  </context>
null [from org.eclipse.compare]
  <context>
    name = Comparing in an Editor
    description = Comparing in an Editor
    parentId = org.eclipse.ui.contexts.window
    id = org.eclipse.compare.compareEditorScope
  </context>

Time for action – enabling and disabling the menu's items

The previous section showed how to hide or show a specific keybinding depending on the open editor type. However, it doesn't stop the command from being called via the menu, or stop it from showing up in the menu itself. Instead of just hiding the keybinding, the menu can be hidden as well by adding a visibleWhen block to the command. The expressions framework provides a number of variables, including activeContexts, which contains a list of the contexts active at the time. Since many contexts can be active simultaneously, activeContexts is a list (for example, [dialogAndWindow, window, textEditor, javaEditor]). So, to find an entry (in effect, a contains operation), an iterate operator with the equals expression is used.
1. Open up the plugin.xml file, and update the Hello command by adding a visibleWhen expression:

<extension point="org.eclipse.ui.menus">
  <menuContribution allPopups="false" locationURI="menu:help?after=additions">
    <command commandId="com.packtpub.e4.clock.ui.command.hello"
             label="Hello"
             style="push">
      <visibleWhen>
        <with variable="activeContexts">
          <iterate operator="or">
            <equals value="org.eclipse.jdt.ui.javaEditorScope"/>
          </iterate>
        </with>
      </visibleWhen>
    </command>
  </menuContribution>
</extension>

2. Run the Eclipse instance, and verify that the menu is hidden until a Java editor is opened. If this behavior is not seen, run the Eclipse application with the clean argument to clear the workspace. After clearing, it will be necessary to create a new Java project with a Java class, as well as an empty text file, to verify that the menu's visibility is correct.

What just happened?

Menus have a visibleWhen guard that is evaluated when the menu is shown. If it is false, the menu is hidden. The expressions syntax is based on nested XML elements with certain conditions. For example, an <and> block is true if all of its children are true, whereas an <or> block is true if at least one of its children is true. Variables can also be used with a property test, using a combination of a <with> block (which binds the specified variable onto the stack) and an <equals> block or other comparison. In the case of variables that hold lists, an <iterate> can be used to step through the elements, using either operator="or" or operator="and" to dynamically calculate enablement. To find out if a list contains an element, a combination of the <iterate> and <equals> operators is the standard pattern.

There are a number of variables that can be used in tests; they include the following:

- activeContexts: The list of context IDs that are active at the time
- activeShell: The active shell (dialog or window)
- activeWorkbenchWindow: The active window
- activeEditor: The current or last active editor
- activePart: The active part (editor or view)
- selection: The current selection
- org.eclipse.core.runtime.Platform: The Platform object

The Platform object is useful for performing dynamic tests using test, such as the following:

<test value="ACTIVE" property="org.eclipse.core.runtime.bundleState"
      args="org.eclipse.core.expressions"/>
<test property="org.eclipse.core.runtime.isBundleInstalled"
      args="org.eclipse.core.expressions"/>

Knowing whether a bundle is installed is often useful, but it's better to only enable functionality if a bundle is started (or, in OSGi terminology, ACTIVE). As a result, the use of isBundleInstalled has been replaced by bundleState=ACTIVE tests.

Time for action – reusing expressions

Although it's possible to copy and paste expressions between the places where they are used, it is preferable to re-use an identical expression.

1. Declare an expression using the expressions definitions extension point, by opening the plugin.xml file of the clock.ui project:

<extension point="org.eclipse.core.expressions.definitions">
  <definition id="when.hello.is.active">
    <with variable="activeContexts">
      <iterate operator="or">
        <equals value="org.eclipse.jdt.ui.javaEditorScope"/>
      </iterate>
    </with>
  </definition>
</extension>

If defined via the extension wizard, it will prompt to add a dependency on the org.eclipse.core.expressions bundle. This isn't strictly necessary for this example to work.

2. To use the definition, the enablement expression needs to use a reference. Replace the <with> block in the menu contribution's visibleWhen with the reference:

<extension point="org.eclipse.ui.menus">
  <menuContribution allPopups="false" locationURI="menu:help?after=additions">
    <command commandId="com.packtpub.e4.clock.ui.command.hello"
             label="Hello"
             style="push">
      <visibleWhen>
        <reference definitionId="when.hello.is.active"/>
      </visibleWhen>
    </command>
  </menuContribution>
</extension>

3. Now that the reference has been defined, it can be used to modify the handler as well, so that the handler and menu become active and visible together. Add the following to the Hello handler in the plugin.xml file:

<extension point="org.eclipse.ui.handlers">
  <handler class="com.packtpub.e4.clock.ui.handlers.Hello"
           commandId="com.packtpub.e4.clock.ui.command.hello">
    <enabledWhen>
      <reference definitionId="when.hello.is.active"/>
    </enabledWhen>
  </handler>
</extension>

4. Run the Eclipse application and exactly the same behavior will occur; but should the enablement need to change, it can now be done in one place.

What just happened?

The org.eclipse.core.expressions extension point defined a virtual condition that can be evaluated when the user's context changes, so both the menu and the handler can be made visible and enabled at the same time. The reference was bound in the enabledWhen condition for the handler, and the visibleWhen condition for the menu. Since references can be used anywhere, expressions can also be defined in terms of other expressions. As long as the expressions aren't recursive, they can be built up in any manner.

Time for action – contributing commands to pop-up menus

It's useful to be able to add contributions to pop-up menus so that they can be re-used in different places.
Fortunately, this can be done fairly easily with the menuContribution element and a combination of enablement tests. This allows the removal of the Action introduced in the first part of this article in favor of a more generic command and handler pairing.

There is a deprecated extension point, which still works in Eclipse 4.2 today, called objectContribution; it is a single specialized hook for contributing a pop-up menu to an object. It has been deprecated for some time, but older tutorials or examples may still refer to it.

1. Open the TimeZoneTableView class and add the hookContextMenu() method as follows:

private void hookContextMenu(Viewer viewer) {
  MenuManager manager = new MenuManager("#PopupMenu");
  Menu menu = manager.createContextMenu(viewer.getControl());
  viewer.getControl().setMenu(menu);
  getSite().registerContextMenu(manager, viewer);
}

2. Add the same hookContextMenu() method to the TimeZoneTreeView class.

3. In the TimeZoneTreeView class, at the end of the createPartControl() method, call hookContextMenu(tableViewer).

4. In the TimeZoneTableView class, at the end of the createPartControl() method, replace the MenuManager and Action code added earlier with a single call to hookContextMenu() instead:

hookContextMenu(tableViewer);

5. Running the Eclipse instance now and showing the menu results in nothing being displayed, because no menu items have been added to it yet.

6. Create a command and a handler, Show the Time:

<extension point="org.eclipse.ui.commands">
  <command name="Show the Time"
           description="Shows the Time"
           id="com.packtpub.e4.clock.ui.command.showTheTime"/>
</extension>
<extension point="org.eclipse.ui.handlers">
  <handler class="com.packtpub.e4.clock.ui.handlers.ShowTheTime"
           commandId="com.packtpub.e4.clock.ui.command.showTheTime"/>
</extension>

7. Create a class ShowTheTime, in the com.packtpub.e4.clock.ui.handlers package, which extends org.eclipse.core.commands.AbstractHandler, to show the time in a specific time zone:

public class ShowTheTime extends AbstractHandler {
  public Object execute(ExecutionEvent event) {
    ISelection sel = HandlerUtil.getActiveWorkbenchWindow(event)
        .getSelectionService().getSelection();
    if (sel instanceof IStructuredSelection && !sel.isEmpty()) {
      Object value = ((IStructuredSelection) sel).getFirstElement();
      if (value instanceof TimeZone) {
        SimpleDateFormat sdf = new SimpleDateFormat();
        sdf.setTimeZone((TimeZone) value);
        MessageDialog.openInformation(null, "The time is",
            sdf.format(new Date()));
      }
    }
    return null;
  }
}

8. Finally, to hook it up, a menu needs to be added to the special locationURI popup:org.eclipse.ui.popup.any:

<extension point="org.eclipse.ui.menus">
  <menuContribution allPopups="false"
                    locationURI="popup:org.eclipse.ui.popup.any">
    <command label="Show the Time"
             style="push"
             commandId="com.packtpub.e4.clock.ui.command.showTheTime">
      <visibleWhen checkEnabled="false">
        <with variable="selection">
          <iterate ifEmpty="false">
            <adapt type="java.util.TimeZone"/>
          </iterate>
        </with>
      </visibleWhen>
    </command>
  </menuContribution>
</extension>

9. Run the Eclipse instance, and open the Time Zone Table view or the Time Zone Tree view. Right-click on a TimeZone (that is, one of the leaves of the tree or one of the rows of the table), and the Show the Time command will be displayed.
Select the command and a dialog should show the time.

What just happened?

The views and the knowledge of how to wire up commands in this article provided a unified means of adding commands, based on the selected object type. This approach of registering commands is powerful, because any time a time zone is exposed as a selection in the future, it will have a Show the Time menu added to it automatically. The commands define a generic operation, and handlers bind those commands to implementations.

The context-sensitive menu is provided by the pop-up menu extension point using the locationURI popup:org.eclipse.ui.popup.any. This allows the menu to be added to any pop-up menu that uses a MenuManager, whenever the selection contains a TimeZone. The MenuManager is responsible for listening to the mouse gestures to show a menu, and for filling it with details when it is shown.

In the example, the command was enabled when the object was an instance of a TimeZone, and also if it could be adapted to a TimeZone. This would allow another object type (say, a contact card) to have an adapter to convert it to a TimeZone, and thus show the time in that contact's location.

Have a go hero – using view menus and toolbars

The way to add a view menu is similar to adding a pop-up menu; the locationURI used is the view's ID rather than the menu item itself. Add a Show the Time menu to the TimeZone view as a view menu. Another way of adding the menu is as a toolbar entry, which is an icon in the main Eclipse window. Add the Show the Time icon by adding it to the global toolbar instead. To facilitate the testing of views, add a menu item that shows the TimeZone views with PlatformUI.getWorkbench().getActiveWorkbenchWindow().getActivePage().showView(id).

Jobs and progress

Since the user interface is single-threaded, a command that takes a long time will block the user interface from being redrawn or processed. As a result, it is necessary to run long-running operations in a background thread to prevent the UI from hanging. Although the core Java library contains java.util.Timer, the Eclipse Jobs API provides a mechanism to both run jobs and report progress. It also allows jobs to be grouped together and paused or joined as a whole.
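As a quick taste of the API, the following is a minimal sketch of the usual pattern; the job name and the simulated work loop are illustrative assumptions, not code from this article's example project:

import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.Job;

Job job = new Job("Refreshing time zones") { // hypothetical job name
  @Override
  protected IStatus run(IProgressMonitor monitor) {
    monitor.beginTask("Refreshing", 10);
    for (int i = 0; i < 10; i++) {
      if (monitor.isCanceled()) {
        return Status.CANCEL_STATUS; // respond to cancellation promptly
      }
      // ... one unit of long-running work goes here ...
      monitor.worked(1); // report progress to the Progress view
    }
    monitor.done();
    return Status.OK_STATUS;
  }
};
job.schedule(); // runs in a background thread, not the UI thread

Calling schedule() queues the job with the workbench's job manager, which runs it off the UI thread and reports its progress in the Progress view.
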
Ext.NET – Understanding Direct Methods and Direct Events

Packt
08 Aug 2013
4 min read
(For more resources related to this topic, see here.)

How to do it...

The steps to handle events raised by different controls are as follows:

1. Open the Pack.Ext2.Examples solution.
2. Press F5 or click on the Start button to run the solution.
3. Click on the Direct Methods & Events hyperlink. This will run the example code for this recipe.
4. Familiarize yourself with the code-behind and the client-side markup.

How it works...

Applying the [DirectMethod(namespace="ExtNetExample")] attribute to the server-side method GetDateTime(int timeDiff) has exposed this method to our client-side code under the namespace ExtNetExample, which we prepend to the method name when calling it on the client side. As we can see in the example code, we call this server method in the markup using the Ext.NET button btnDateTime and the code ExtNetExamples.GetDateTime(3). When the call hits the server, we update the Ext.NET control lblDateTime's text property, which updates the control related to the property.

Adding namespace="ExtNetExample" allows us to neatly group the server-side methods and the JavaScript calls in our code. A good notation is CompanyName.ProjectName.BusinessDomain.MethodName. Without applying the namespace attribute, we would access our server-side method using the default namespace of App.direct. So, to call the GetDateTime method without the namespace attribute, we would use App.direct.GetDateTime(3).

We can also see how to return a response from a Direct Method to the client-side JavaScript. If a Direct Method returns a value, it is sent back to the success function defined in a configuration object. This configuration object contains a number of functions, properties, and objects. We have dealt with the two most common functions in our example, the success and failure responses. The server-side method GetCar() returns a custom object called Car. If the btnReturnResponse button is clicked and GetCar() successfully returns a response, we can access the value when Ext.NET calls the JavaScript function named in the success configuration object, CarResponseSuccess. This JavaScript function accepts the response parameter from the method, and we can process it accordingly. The response parameter is serialized into JSON, so object values can be accessed using the JavaScript object notation of object.propertyValue. Note that we alert the FirstRegistered property of the Car object returned. Likewise, if a failure response is received, we call the client-side method CarResponseFailure, alerting the response, which is a string value. There are a number of other properties that form part of the configuration object, which can be accessed as part of the callback, for example, on failure to return a response. Please refer to the Direct Methods Overview on the Ext.NET examples website (http://examples.ext.net/#/Events/DirectMethods/Overview/).

To demonstrate DirectEvent in action, we've declared a button called btnFireEvent and, secondly, a checkbox called chkFireEvent. Note that each control points to the same DirectEvent method, called WhoFiredMe. You'll notice that in the markup we declare the WhoFiredMe method using the OnEvent property of the controls. This means that when the Click event is fired on the btnFireEvent button or the Change event is fired on the chkFireEvent checkbox, a request is made to the server, where we call the WhoFiredMe method. From this, we can get the control that invoked the request via the object sender parameter, and the arguments of the event using the DirectEventArgs e parameter.
Note that we don't have to decorate the DirectEvent method, WhoFiredMe, with any attributes. Ext.NET takes care of all the plumbing; we just need to specify the method that needs to be called on the server.

There's more...

Raising Direct Methods is far more flexible in terms of being able to specify the parameters you want to send to the server. You also have the ability to send control objects to the server or to client-side functions using the #{controlId} notation. It is generally not a good idea, though, to send a whole control to the server from a Direct Method, as Ext.NET controls can contain references to themselves. Therefore, when Ext.NET encodes the control, it can end up in an infinite loop, and you will end up breaking your code.

With a DirectEvent method, you can send extra parameters to the server using the ExtraParams property inside the control's event element. These can then be accessed using the e parameter on the server.

Summary

In this article, we discussed how to connect client-side and server-side code.

Form customizations

Packt
08 Aug 2013
26 min read
(For more resources related to this topic, see here.)

Forms are probably the most important visual element of the Dynamics CRM 2011 interface. To see the underlying data in an entity record, the user has to open the form. Dynamics CRM 2011 supports two types of forms:

- The main form: Dynamics CRM 2011 uses this form to allow the user to enter and view data within the Dynamics CRM 2011 web user interface, as well as within the Dynamics CRM 2011 for Microsoft Outlook interface. One main form per entity exists by default; however, multiple main forms can be created for an entity. Dynamics CRM 2011 supports role-based forms, which means separate forms can be visible depending on the security roles of the current user. Usually, multiple main forms are created when role-based forms have to be supported.
- The mobile form: Dynamics CRM 2011 uses this form when a user is accessing CRM from a mobile device that is compatible with HTML 4.0, using a URL such as <CRM_server>/m, where <CRM_server> is the path of the Microsoft Dynamics CRM 2011 Server. A separate form for mobile devices is useful considering the limited space usually available on a mobile screen. A mobile form does not store data on a mobile device. If users try to access Dynamics CRM 2011 from an unsupported browser, they will be redirected to the mobile form.

The browsers supported by Microsoft Dynamics CRM 2011, with their version and other requirements, are as follows:

- Internet Explorer: IE7 (only for the on-premises version), IE8, IE9, and IE10 (desktop mode only)
- Mozilla Firefox: the latest publicly released version running on Windows 8, Windows 7, Windows Vista, or Windows XP
- Google Chrome: the latest publicly released version running on Windows 8, Windows 7, Windows Vista, or Windows XP
- Apple Safari: the latest publicly released version running on Mac OS X 10.7 (Lion) or 10.8 (Mountain Lion)

Detailed information about supported browsers can be found at http://technet.microsoft.com/en-us/library/hh699710.aspx.

Dynamics CRM 2011 also supports special variants of the main form, as follows:

- The read-optimized form: Introduced in Update Rollup 7, this form is designed for the fast display of a record by disabling the ribbon and form scripts. It displays the record in read-only mode. Read-optimized forms are disabled by default and can be enabled by going to System | Administration | System Settings | Customization | Form Mode. Update Rollup 12 has introduced the following changes in read-optimized forms:
  - The navigation pane for read-optimized forms is now enabled, and the navigation pane can be expanded or collapsed.
  - Support for web resources has been added. A new setting in the web resource properties, called Show this Web Resource in Read Optimized form, has been added. This setting must be enabled for the web resource to display in the read-optimized form. If the web resource depends on form resources that are not available in a read-optimized form, it should not be displayed.
  - Read-optimized forms honor all field-level security and role-based form definitions.
  - If an entity has more than one form enabled, the read-optimized form uses the form that the user last used.
- The process-driven form: The December 2012 Service Update (the Polaris update) of Dynamics CRM 2011 has introduced an enhanced read-optimized form, commonly known as the process-driven form, for the Account, Contact, Lead, Opportunity, and Case entities.
This new type of form is very useful, especially for touch devices, as it is designed to contain everything in one form; there is no need to open multiple pop-ups. However, this new form type cannot be used for any entity other than those listed above. For the Account, Contact, Lead, Opportunity, and Case entities, in addition to the information form, there will be a new form with the same name as the entity. The <entity name> form will always display using the updated presentation, regardless of the settings for read-optimized forms. However, if read-optimized forms are enabled for the organization, the information form will also display using the updated presentation. These new forms are not available in an on-premises deployment of Microsoft Dynamics CRM 2011.

Form editor

We need to use a form editor to customize a form within Dynamics CRM 2011. The form layout definition is actually stored as XML (FormXml) in the SystemForm entity. The customization.xml file exported with an unmanaged solution contains the definition of the entity forms.

Creating and customizing an entity main form

Almost all the business entities have a customizable main form. The Activity entity does not have any form, and some entity forms, such as the Case Resolution entity form, are not customizable. When a custom entity is created, one main and one mobile form are added automatically. In this recipe, we will focus our discussion on how to customize a main form.

Getting ready

Dynamics CRM 2011 introduced a flexible layout for form design. The following diagram outlines the typical main form layout within the Dynamics CRM 2011 system.

The major visible components of a standard main form are as follows:

- Ribbon: This is the top area of the form. We cannot customize this using the form editor.
- Entity icon: This displays the Icon for Entity Form icon of the entity. It is a 32 x 32 pixel image and can be updated for an entity.
- Header and footer: The header and footer are two read-only areas of the form layout. These two sections remain static when a user scrolls through the form data displayed by the various tabs and sections, so any data that is required to be available to the user irrespective of scrolling can be included in these sections.
- Form selector: When an entity has multiple forms and the current user's security role has access to more than one form, the form selector is displayed. The user can use the form selector to choose a form from the multiple forms available to them.
- Navigation: This section allows users to navigate to records related to the current record. We can add, modify, delete, or reorganize the links to the related entity records using the form editor. We can also include links to URLs or web resources by adding navigation links using the form editor.
- Form assistant: This helps when we set values for lookup fields. Dynamics CRM 2011 has introduced improved capabilities to filter the data returned in the lookup dialog, so the form assistant is no longer as useful; it has been turned off for all except the following three entity forms: Case, Product, and Service activity.
- Tabs and sections: Tabs and sections allow the grouping and laying out of controls in a form. A tab can contain multiple sections. Each form can have a maximum of 100 tabs. Tabs have a vertical collapse/expand feature.

We will now take a look at the various form-body elements that can be added to or associated with an entity form:

- Field: Each field represents an attribute of the entity.
A field can be added to a form using the form editor and the form editor allows us to add the same field multiple times in a form. Each instance of a field in a form is known as a control . The appearance and behavior of a control is driven by the type and formatting options of the attribute as well as display and formatting properties set on the control, using the form editor. Tab and section : As previously discussed, tabs and sections are used for grouping the controls in the form. A tab can contain multiple sections within it. Each tab or section can be assigned a name. We can choose to display the name of the tab or section on the form or include a separator line at the top of the tab or section, underneath the name. A tab can have one column or two columns; when two columns are specified, the width of each column is a percentage of the width of the tab. A section, on the other hand, may have up to four columns and we can control the width available for control labels to be displayed in the section as well as how labels for controls in the section should be aligned. Spacer : The Spacer element provides extra space between fields and controls in the form. This is used to improve the control layout in a section. Sub-Grid : Sub-Grid allows us to display a list of records, charts, or both. The first four subgrids can be populated with data in a form when it loads. If more than four subgrids exist on a form, the remaining subgrids require some user or form script action to retrieve data. This is for performance optimization. IFRAME : This control provides the HTML iFrame element in the form. Using the control, we can host another web page within the Dynamics CRM 2011 entity form. The form editor provides the ability to set regular iFrame properties along with properties specific to Dynamics CRM 2011. Web Resource : This control displays a form-enabled web resource to be displayed on the page. A form-enabled web resource includes a web page (HTML), image (JPG, PNG, GIF, ICO), or Silverlight (XAP) resource. The web resource contents are hosted within Dynamics CRM 2011. Notes : If the entity uses notes and attachments, we can add the Notes control into the form. This control can only be added if the entity has Notes enabled in the entity definition. Navigation Link : This control is available only within the Navigation section of the form. This control allows us to add a link to an external URL or web resource. How to do it… In this recipe, we will first discuss how to create a new main form and then discuss the form-customization options. The customization steps can be carried out on any main form. The entity main form can be customized by carrying out the following tasks: Editing tabs Editing sections Editing fields Editing header and footer Adding subgrids Adding iFrames Adding web resources Editing the Navigation area Editing form properties Making the form non-customizable In this recipe, we will discuss all the previously stated tasks one after the other. Please follow these steps to customize the main form for an entity: Log in to the Dynamics CRM 2011 system as a system administrator or with a relevant security role. Navigate to Settings | Customizations | Solutions and change the view to Unmanaged Solutions , if not already selected. Then double-click on the unmanaged solution to open it. On the expanded Solution page, navigate to Components | Entities | <Entity> | Forms . The next step is to create a new main form; this can be done in two ways. 
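As noted under Form editor, the form definition lives as FormXml on the SystemForm entity, so it can also be inspected programmatically. The following is a minimal sketch, not part of the original recipe; it assumes an authenticated IOrganizationService named service, and the entity logical name passed in is only an example:

// Sketch: reading the FormXml of an entity's main forms from the SystemForm entity.
// Assumes: an authenticated IOrganizationService instance named "service".
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static void PrintMainFormXml(IOrganizationService service, string entityLogicalName)
{
    var query = new QueryExpression("systemform")
    {
        ColumnSet = new ColumnSet("name", "formxml")
    };
    // 2 = Main in the systemform "type" option set.
    query.Criteria.AddCondition("type", ConditionOperator.Equal, 2);
    query.Criteria.AddCondition("objecttypecode", ConditionOperator.Equal, entityLogicalName);

    foreach (var form in service.RetrieveMultiple(query).Entities)
    {
        Console.WriteLine("{0}:", form.GetAttributeValue<string>("name"));
        Console.WriteLine(form.GetAttributeValue<string>("formxml"));
    }
}

// Example usage (the "account" entity is used purely for illustration):
// PrintMainFormXml(service, "account");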
How to do it…

In this recipe, we will first discuss how to create a new main form and then discuss the form-customization options. The customization steps can be carried out on any main form. The entity main form can be customized by carrying out the following tasks:

Editing tabs
Editing sections
Editing fields
Editing the header and footer
Adding subgrids
Adding iFrames
Adding web resources
Editing the Navigation area
Editing form properties
Making the form non-customizable

We will discuss all of these tasks one after the other. Please follow these steps to customize the main form for an entity:

Log in to the Dynamics CRM 2011 system as a system administrator or with a relevant security role.
Navigate to Settings | Customizations | Solutions and change the view to Unmanaged Solutions, if not already selected. Then double-click on the unmanaged solution to open it.
On the expanded Solution page, navigate to Components | Entities | <Entity> | Forms.
The next step is to create a new main form; this can be done in two ways:

Creating an entirely new main form: Go to New | Main Form in the actions toolbar. This creates a new form by copying the existing main form. When the new form pops up, click on the save button to save the form.
Creating a new form from an existing form: Open the existing form by double-clicking on it. When the form launches, click on Save As in the top ribbon. When the Save As -- Webpage Dialog window pops up, provide data for the Name and Description fields of the new form, then click on the OK button to save it.

Any newly created main form is assigned only to the system administrator and system customizer security roles by default.

To customize a main form, open the form by double-clicking on it in the forms list. The next step is editing the tabs on the form. Tabs are collapsible controls that can contain section controls. The following two points demonstrate adding a new tab and editing tab properties:

Adding a new tab to the form: Click on Body in the form ribbon and then click on the Insert tab in the form. In the Insert tab, under the Tab group, select One Column to create a one-column tab, or Two Columns to create a two-column tab. If we add a tab, Dynamics CRM 2011 automatically adds a section for each column. To remove any control on an entity form, use the Delete key on the keyboard; alternatively, the Remove button in the ribbon can be used.
Editing tab properties: Select the tab control and then click on the Change Properties button in the form ribbon. The Tab Properties page opens with the following properties being modifiable:

Under the Display tab:
- Name: The unique name of the tab.
- Label: The display label for this tab. This text will appear on the form.
- Show the label of this tab on the Form: Determines whether the label defined for this tab is displayed on the form. Select this option to enable the display of the tab's label.
- Expand this tab by default: If selected, the tab control is displayed in expanded mode by default.
- Visible by default: If selected, the tab control is visible by default on the form.

Under the Formatting tab:
- Select tab layout: Choose between One Column and Two Columns to define the layout of the tab.
- Column 1 width: If Two Columns is selected in the tab layout, specifies the width of column 1 as a percentage.
- Column 2 width: If Two Columns is selected in the tab layout, specifies the width of column 2 as a percentage.

The Events properties: Script libraries can be linked to the tab; their functions are called on the TabStateChange event.

Next we will see the editing of a section in a tab. A section contains the fields on the form. The following two points demonstrate adding a section to a form and editing the section's properties:

Adding a section to the form: Select the tab control where the new section is to be added and then click on the Insert tab in the form ribbon. Thereafter, click on One Column, Two Columns, Three Columns, or Four Columns under the Section group, depending on whether a section with one, two, three, or four columns is to be added.
Editing section properties: Select the section control and then click on the Change Properties button in the form ribbon. The Section Properties page opens and the following properties can be modified:

Under the Display tab:
- Name: The unique name of the section.
- Label: The display label for this section. This text will appear on the form.
- Show the label of this section on the Form: Determines whether the label defined for this section is displayed on the form. Select this option to enable the display of the section's label.
- Show a line at top of the section: If selected, a divider line is displayed underneath the name of the section.
- Width: Specifies the width of the label area of the fields in this section. The width must be set between 50 and 250 pixels.
- Visible by default: If selected, the section control is visible by default on the form.
- Lock the section on the Form: If selected, the section is locked on the form.

Under the Formatting tab:
- Layout: Choose from among One Column, Two Columns, Three Columns, and Four Columns to define the layout of the section control.
- Field label alignment: Select between the Left and Right alignments for the field labels in the section control.

Next we will take a look at editing a field in a section:

Adding a field to a section: Select the section where the field has to be added. Thereafter, find the field in the Field Explorer pane on the right-hand side. By default, the Field Explorer pane displays only the fields not yet used on the form; if we want to add a field that is already used on the form, uncheck the Only show unused fields checkbox. After selecting the field in Field Explorer, drag it with the left mouse button and drop it into the intended column of the section. A red line on top of a column indicates that the column has been selected; drop the field on the selected column.
Editing field properties: To edit the form-level properties of a field, select the field and then click on the Change Properties button in the form ribbon. The Field Properties pop up opens and the following properties can be modified:

Under the Display tab:
- Label: The display name of the field on the form. By default, the field's display name appears here; edit it to provide a new display name for the field on this form.
- Display label on the form: Determines whether the display name of the field is displayed on the form.
- Field is read-only: Determines whether the field is read-only for users on this form.
- Lock the field on the form: Determines whether the field is locked on the form.
- Visible by default: Determines the default visibility of the control on the form.

Under the Formatting tab:
- Layout: Determines the width of this field on the form. The width of a field depends on the layout settings of the section it is in.

The Details properties: Displays the details of the field definition. Click on the Edit button to modify those properties of the field definition that can be modified.
The Events properties: Script libraries can be linked to the field; their functions are called on the OnChange event.

If the field is of type Lookup (an N:1 relationship with another entity), there is an additional set of properties in the Field Properties list. These properties can be set to save the user's time, to help find the appropriate parent record, or to restrict the user to a subset of records in the parent entity.
The following form-level properties of the lookup field can be edited:

- Turn off automatic resolutions in the field: If this setting is disabled (not selected) and a user enters a partial value for the lookup field and tabs away, Dynamics CRM 2011 will try to autopopulate the lookup field.
- Disable most recently used items for this field: If this setting is disabled (not selected), Dynamics CRM 2011 automatically provides a list of recently selected values for the user to choose from. This property is not supported for process-driven forms of Microsoft Dynamics CRM 2011 Online.
- Related Record Filtering: Provides a way to limit the list of records that the user can choose from. The list under the Only show records where heading displays all the potential relationships that can be used to filter this lookup. Once a record is selected, the list under the Contains heading displays all relationships that connect the related entity (selected in the first list) to the target entity. Select the Allow users to turn off filter checkbox to give users the option of turning off the filter defined here, making it possible for them to view a wider range of records.
- Additional properties: Controls how much search flexibility the user has in terms of switching among views and searching for records with a search box. Select the Display Search Box in lookup dialog checkbox if you want a search box to be available in the lookup. In the Default View list, select the view whose results are displayed in the lookup by default. Finally, choose the views users should have access to in the lookup using the View Selector list.

Adding a new entity field and then adding it to the form: A new field can also be created from the form and then added to the entity. To create a new field, click on the New Field button at the bottom of the Field Explorer pane; this launches the new field pop up.

Next we will delve into editing headers and footers. To edit the header or footer of the form, click on the Header or Footer button in the form ribbon; the section is focused automatically. Then click on Change Properties in the ribbon. The Header Properties or Footer Properties page pops up, and we can edit the following settings:

Under the Display tab:
- Width: Specifies the width of the field label area. The width must be set between 50 and 250 pixels.
- Lock the section on the Form: Selected by default and cannot be modified; this setting determines that the section is locked on the form.

Under the Formatting tab:
- Layout: Choose from among One Column, Two Columns, Three Columns, and Four Columns to define the layout of the header/footer control.
- Field Label Alignment: Select from the Left (default), Right, or Center alignments for the field labels in the header/footer control.
- Field Label Position: Select between Side (default) and Top to specify whether the field labels in this section appear to the left of the fields or above them.

Fields can be added to the header or footer controls in the same way they are added to any section control on the form.

Next we will look at how to add subgrids. The Sub-Grid control displays related entity records in the form body. To add one, perform the following steps:

Select the section where the subgrid is to be added, then click on the Sub-Grid button under the Insert tab in the form ribbon.
This brings up the List or Chart Properties page, where we can specify the following properties of the subgrid:

Under the Display tab:
- Name: The unique name of the subgrid control.
- Label: The display text of the subgrid. This text will be displayed on the form.
- Display label on the Form: Select to confirm that the Label text is displayed on the form.
- Data Source: Specifies the primary data source of the subgrid. The Records list allows us to select between Only Related Records (only entities having a relationship to the current entity) and All Record Types (all available entities). We can choose the related entity from the Entity list, whose contents vary based on the earlier selection. The Default View list allows us to choose which view is displayed in the subgrid.
- Display Search Box: Select this setting to display the search box in the subgrid.
- Display Index: Select this setting to display the alphabetic index record selector in the subgrid. This property is not supported for process-driven forms of Microsoft Dynamics CRM 2011 Online.
- View Selector: Select this setting to display the view selector in the subgrid. This property is not supported for process-driven forms of Microsoft Dynamics CRM 2011 Online.
- Chart Options: Select whether to display a chart selector along with a default chart, or show only a specified chart in place of the subgrid. This property is not supported for process-driven forms of Microsoft Dynamics CRM 2011 Online.

Under the Formatting tab:
- Layout: Choose from among One Column, Two Columns, Three Columns, and Four Columns to define the layout of the subgrid control.
- Number of Rows: Select the maximum number of rows to be displayed in the subgrid control, between 2 and 250.
- Automatically expand to use available space: Select this setting to enable automatic expansion of the subgrid to use the available space on the form.

iFrames, or inline frames, are HTML documents embedded inside the Dynamics CRM entity form. The following steps will guide you through adding an iFrame to the form:

Select the section where the iFrame is to be added, then click on the IFRAME button under the Insert tab in the form ribbon.
This brings up the Add an IFRAME page, where we can specify the following properties of the iFrame:

Under the General tab:
- Name: The unique name of the iFrame control.
- URL: The URL of the HTML document to be displayed in the iFrame control.
- Pass record object-type code and unique identifier as parameters: Select this option to pass contextual information, namely the entity object-type code and the record's unique identifier, to the iFrame. Read more about this in the How it works... section of this recipe.
- Label: The display text for the iFrame.
- Display label on the Form: Select this setting to display the label on the form.
- Restrict cross-frame scripting, where supported: This checkbox is selected by default. We should remove this restriction only if we are certain that the HTML document or site used as the target of the iFrame can be trusted.
- Visible by default: Select this setting to make the iFrame visible by default on the form.

Under the Formatting tab:
- Layout: Choose from among One Column, Two Columns, Three Columns, and Four Columns to define the layout of the iFrame control.
- Number of Rows: Select the maximum number of rows the iFrame control occupies on the form, between 1 and 40.
- Automatically expand to use available space: Select this setting to enable automatic expansion of the iFrame control to use the available space on the form.
- Scrolling: Select the scrolling option for the iFrame content display.
- Display Border: Specify whether a border is displayed around the iFrame control.

Web resources represent files that can be used to extend the Microsoft Dynamics CRM 2011 web application, such as HTML files, image files, JScript libraries, and Silverlight applications. The following steps can be used to add a web resource to the form:

Select the section where the web resource is to be added, then click on the Web Resource button under the Insert tab in the form ribbon.
This brings up the Add Web Resource page, where we can specify the following properties of the web resource:

Under the General tab:
- Web Resource: A lookup to find a form-enabled web resource.
- Name: The unique name for the web resource.
- Label: The display text for the web resource.
- Display label on the Form: Select this setting to display the label on the form.
- Visibility by default: Select this setting to make the web resource visible by default on the form.
- Show this web resource in Read-Optimized Form: Select this setting if the web resource is to be displayed in the read-optimized form.

Under the Formatting tab:
- Layout: Choose from among One Column, Two Columns, Three Columns, and Four Columns to define the layout of the web resource control.
- Number of Rows: Select the maximum number of rows the web resource control occupies on the form, between 1 and 40.
- Automatically expand to use available space: Select this setting to enable automatic expansion of the web resource control to use the available space on the form.
- Scrolling: Select the scrolling option for the web resource content display.
- Display Border: Specify whether a border is displayed around the web resource control.

The Dependencies properties: Select the fields from the Available fields list that are required by the web resource, and then click on the (add selected records) button to move the selected fields to the Dependent fields list.

The navigation area displays entities that are related to the current entity. Each relationship has a Label property, and this Label is displayed in the navigation section by default. The display name for the related entity can be changed, however; doing so does not update the Label property of the relationship itself. To edit the navigation area, perform the following steps:

Select the Navigation button in the form ribbon; the navigation section is enabled.
Click on any relationship label and select Change Properties to edit the display text. This brings up the Relationship Properties page; modify the Label field here.

Next we will edit the form properties. Click on the Form Properties button in the form ribbon and the Form Properties page pops up. The following properties can be edited there:

The Events properties: Add or remove the JScript libraries that will be available for the form or field events.

Under the Display tab:
- Form Name: The display name for the form. Modify this to rename the form.
- Description: Specify a description for this form here.
- Show navigation items: Select this setting to display the page navigation on the form.

The Parameters properties: Add query string parameters to be passed to the form. Click on the green plus sign to add a query string parameter; we have to provide a Name value and select a Type value for it.

The Non-Event Dependencies properties: Select the fields from the Available fields list that are required by any external, non-event scripts, and then click on the (add selected records) button to move the selected fields to the Dependent fields list. These fields cannot be removed from the form.

Lastly, making a form non-customizable restricts any future customization of the form. To make a form non-customizable, perform the following steps:

Select the Managed Properties button in the form ribbon. The Managed Properties of System Form: Form web page dialog pops up.
On this page, mark Customizable as False.

After making any changes to an entity form, the form has to be saved and published. Use the Publish button in the form ribbon to publish the changes.

How it works…

Web resources and iFrames are not displayed in the Microsoft Dynamics CRM 2011 for Outlook reading pane, but iFrames are displayed in read-optimized forms. When the Pass record object-type code and unique identifier as parameters setting is enabled, the form passes the following contextual parameters to the iFrame:

- typename: The name of the entity.
- type: The entity type code, an integer value that uniquely identifies an entity in a specific organization.
- id: A GUID that represents the record.
- orgname: The organization's name.
- userlcid: The user's language code.
- orglcid: The organization's language code.

The list of entity type codes can be found at http://msdn.microsoft.com/en-us/library/gg328086.aspx. The key points about entity type codes are as follows:

- Type codes below 10,000 are reserved for out-of-the-box entities.
- Custom entities have a type code greater than or equal to 10,000.
- A custom entity's type code might change during solution import, so the type codes of a custom entity might differ between, say, the development and test environments (see the lookup sketch after this list).
- The entity type codes are stored in the Dynamics CRM database and can be retrieved from the EntityView table of the <OrganizationName>_MSCRM database.
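Because a custom entity's type code can change between environments, hardcoding it is fragile; resolving it at runtime from metadata is safer. Here is a minimal sketch, not part of the original recipe, assuming an authenticated IOrganizationService named service:

// Sketch: resolving an entity's object type code from metadata at runtime.
// Assumes: an authenticated IOrganizationService instance named "service";
// the logical name in the example comment is hypothetical.
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

public static int? GetObjectTypeCode(IOrganizationService service, string entityLogicalName)
{
    var request = new RetrieveEntityRequest
    {
        LogicalName = entityLogicalName,      // e.g. "new_myentity"
        EntityFilters = EntityFilters.Entity  // entity-level metadata only, no attributes
    };

    var response = (RetrieveEntityResponse)service.Execute(request);
    return response.EntityMetadata.ObjectTypeCode;
}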

Map Reduce

Packt
08 Aug 2013
10 min read
Map-reduce is a technique that is used to take large quantities of data and farm it out for processing. A somewhat trivial example might be: given 1 TB of HTTP log data, count the number of hits that come from a given country, and report those numbers. For example, suppose you have the following log entries:

204.12.226.2 - - [09/Jun/2013:09:12:24 -0700] "GET /who-we-are HTTP/1.0" 404 471 "-" "Mozilla/5.0 (compatible; MJ12bot/v1.4.3; http://www.majestic12.co.uk/bot.php?+)"
174.129.187.73 - - [09/Jun/2013:10:58:22 -0700] "GET /robots.txt HTTP/1.1" 404 452 "-" "CybEye.com/2.0 (compatible; MSIE 9.0; Windows NT 5.1; Trident/4.0; GTB6.4)"
157.55.35.37 - - [02/Jun/2013:23:31:01 -0700] "GET / HTTP/1.1" 200 483 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"
206.183.1.74 - - [02/Jun/2013:18:24:35 -0700] "GET / HTTP/1.1" 200 482 "-" "Mozilla/4.0 (compatible; http://search.thunderstone.com/texis/websearch/about.html)"
1.202.218.21 - - [02/Jun/2013:17:38:20 -0700] "GET /robots.txt HTTP/1.1" 404 471 "-" "Mozilla/5.0 (compatible; JikeSpider; +http://shoulu.jike.com/spider.html)"

Then the answer to the question would be as follows:

US: 4
China: 1

Clearly this example dataset does not warrant distributing the data processing among multiple machines, but imagine if, instead of five rows of log data, we had twenty-five billion rows. If it took a single computer half a second to process five records, it would take a little short of eighty years to process twenty-five billion records. To solve this, we could break the data up into smaller chunks, process those smaller chunks, and rejoin the results when we are finished.

To apply this to a slightly larger dataset, imagine you extrapolated these five records to one hundred records and then split those one hundred records into five groups, each containing twenty records. From those five groups we might compute the following results:

Group 1: US 5, Greece 4, Ireland 8, Canada 3
Group 2: Mexico 2, Scotland 6, Canada 9, Ireland 3
Group 3: US 15, China 2, Finland 3
Group 4: Italy 1, Greece 4, Scotland 10, US 5
Group 5: Finland 5, China 5, US 10

If we were to combine these data points by using the country name as a key and store them in a map, adding each value to any existing value, we would get the count per country across all one hundred records. Using Ruby, we can write a simple program to do this, first without using Gearman, and then with it. To demonstrate this, we will write the following:

A simple library that we can use in our non-distributed program and in our Gearman-enabled programs
An example program that demonstrates using the library
A client that uses the library to split up our data and submit jobs to our manager
A worker that uses the library to process the job requests and return the results

The shared library

First we will develop a library that we can reuse. This will demonstrate that you can reuse existing logic to quickly take advantage of Gearman, because it ensures the following things:

The program, client, and worker are much simpler, so we can see what's going on in them
The behavior between our program, client, and worker is guaranteed to be consistent

The shared library will have two methods, map_data and reduce_data. The map_data method is responsible for splitting up the data into chunks to be processed, and the reduce_data method processes those chunks of data and returns something that can be merged into an accurate answer. Take the following example and save it to a file named functions.rb for later use:

#!/bin/env ruby

# Generate sub-lists of the data
# each sub-list has size = blocksize
def map_data(lines, blocksize)
  blocks = []
  counter = 0
  block = []
  lines.each do |line|
    if (counter >= blocksize)
      blocks << block
      block = []
      counter = 0
    end
    block << line
    counter += 1
  end
  blocks << block if block.size > 0
  blocks
end

# Extract the number of times we see a unique line
# Result is a hash with key = line, value = count
def reduce_data(lines)
  results = {}
  lines.each do |line|
    results[line] ||= 0
    results[line] += 1
  end
  results
end

A simple program

To use this library, we can write a very simple program that demonstrates the functionality:

require './functions.rb'

countries = ["china", "us", "greece", "italy"]
lines = []
results = {}

(1..100).each { |i| lines << countries[i % 4] }

blocks = map_data(lines, 20)
blocks.each do |block|
  reduce_data(block).each do |k, v|
    results[k] ||= 0
    results[k] += v
  end
end

puts results.inspect

Put the contents of this example into a Ruby source file named mapreduce.rb, in the same directory as your functions.rb file, and execute it with the following:

[user@host:$] ruby ./mapreduce.rb

This script will generate a list with one hundred elements in it. Since there are four distinct elements, each will appear 25 times, as the following output shows:

{"us"=>25, "greece"=>25, "italy"=>25, "china"=>25}

Following in this vein, we can add Gearman to extend our example to operate using a client that submits jobs and a single worker that processes the results serially to generate the same answer. The reason we wrote these methods in a module separate from the driver application was to make them reusable in this fashion.

The client

The client in this example is responsible for the mapping phase; it splits apart the data and submits jobs for the blocks it needs processed. In this example worker/client setup, we use JSON as a simple way to serialize and deserialize the data being sent back and forth:

require 'rubygems'
require 'gearman'
require 'json'
require './functions.rb'

client = Gearman::Client.new('localhost:4730')
taskset = Gearman::TaskSet.new(client)

countries = ["china", "us", "greece", "italy"]
jobcount = 1
lines = []
results = {}

(1..100).each { |i| lines << countries[i % 4] }

blocks = map_data(lines, 20)
blocks.each do |block|
  # Generate a task with a unique id
  uniq = rand(36**8).to_s(36)
  task = Gearman::Task.new('count_countries', JSON.dump(block), :uniq => uniq)

  # When the task is complete, add its results into ours
  task.on_complete do |d|
    # We are passing data back and forth as JSON, so
    # decode it to a hash and then iterate over the
    # k=>v pairs
    JSON.parse(d).each do |k, v|
      results[k] ||= 0
      results[k] += v
    end
  end

  taskset.add_task(task)
  puts "Submitted job #{jobcount}"
  jobcount += 1
end

puts "Submitted all jobs, waiting for results."
start_time = Time.now
taskset.wait(100)
time_diff = (Time.now - start_time).to_i
puts "Took #{time_diff} seconds: #{results.inspect}"

This client uses a few concepts that were not used in the introductory examples, namely task sets and unique identifiers. In the Ruby client, a task set is a group of tasks that are submitted together and can be waited upon collectively.
To generate a task set, you construct it with the client that you want to submit the task set with:

taskset = Gearman::TaskSet.new(client)

Then you can create and add tasks to the task set:

task = Gearman::Task.new('count_countries', JSON.dump(block), :uniq => uniq)
taskset.add_task(task)

Finally, you tell the task set how long you want to wait for the results:

taskset.wait(100)

This blocks the program until the timeout passes or all the tasks in the task set complete (again, complete doesn't necessarily mean that the worker succeeded at the task, only that it saw it through to completion). In this example, it will wait 100 seconds for all the tasks to complete before giving up on them. This doesn't mean that the jobs won't complete if the client disconnects, just that the client won't see the end results (which may or may not be acceptable).

The worker

To complete the distributed MapReduce example, we need to implement the worker that is responsible for performing the actual data processing. The worker will perform the following tasks:

Receive a list of countries serialized as JSON from the manager
Decode that JSON data into a Ruby structure
Perform the reduce operation on the data, converting the list of countries into a corresponding hash of counts
Serialize the hash of counts as a JSON string
Return the JSON string to the manager (to be passed on to the client)

require 'rubygems'
require 'gearman'
require 'json'
require 'logger'
require './functions.rb'

Gearman::Util.logger.level = Logger::DEBUG

@servers = ['localhost:4730']

w = Gearman::Worker.new(@servers)
w.add_ability('count_countries') do |json_data, job|
  puts "Received: #{json_data}"
  data = JSON.parse(json_data)
  result = reduce_data(data)
  puts "Result: #{result.inspect}"
  returndata = JSON.dump(result)
  puts "Returning #{returndata}"
  sleep 4
  returndata
end

loop { w.work }

Notice that we have introduced a slight delay in returning the results by instructing our worker to sleep for four seconds before returning the data. This is here in order to simulate a job that takes a while to process.

To run this example, we will repeat the exercise from the first section. Save the contents of the client to a file called mapreduce_client.rb, and the contents of the worker to a file named mapreduce_worker.rb, in the same directory as the functions.rb file. Then start the worker first by running the following:

ruby mapreduce_worker.rb

And then start the client by running the following:

ruby mapreduce_client.rb

When you run these scripts, the worker will be waiting to pick up jobs; the client will then generate five jobs, each with a block containing a list of countries to be counted, and submit them to the manager. These jobs are picked up by the worker and processed, one at a time, until they are all complete. As a result, there will be a twenty-second difference between when the jobs are submitted and when they are completed.

Parallelizing the pipeline

Implementing the solution this way clearly doesn't gain us much performance over the original example. In fact, it will be slower (even ignoring the four-second sleep inside each job execution) than the original, because time is spent serializing and deserializing the data and transmitting the data and results between the actors. The goal of this exercise is to demonstrate building a system that can increase the number of workers and parallelize the processing of data, which we will see in the following exercise.

To demonstrate the power of parallel processing, we can now run two copies of the worker. Simply open a new shell and execute the worker via ruby mapreduce_worker.rb; this spins up a second copy of the worker, ready to process jobs. Now run the client a second time and observe the behavior. You will see that the client completes in twelve seconds instead of twenty. Why not ten? Remember that we submitted five jobs, each taking four seconds. Five jobs do not divide evenly between two workers, so one worker acquires three jobs instead of two, which takes it an additional four seconds to complete:

[user@host]% ruby mapreduce_client.rb
Submitted job 1
Submitted job 2
Submitted job 3
Submitted job 4
Submitted job 5
Submitted all jobs, waiting for results.
Took 12 seconds: {"us"=>25, "greece"=>25, "italy"=>25, "china"=>25}

Feel free to experiment with the various parameters of the system, such as running more workers, increasing the number of records being processed, or adjusting the amount of time the worker sleeps during a job. While this example does not involve processing enormous quantities of data, hopefully you can see how it can be expanded for future growth.

Summary

In this article, we have discussed the MapReduce technique. Hopefully this article gives you a glimpse of how the book flows.

.NET 4.5 Parallel Extensions – Async

Packt
07 Aug 2013
13 min read
Creating an async method

The TAP is a new pattern for asynchronous programming in .NET Framework 4.5. It is based on a task, but in this case a task doesn't represent work that will be performed on another thread; here, a task is used to represent an arbitrary asynchronous operation.

Let's start learning how async and await work by creating a Windows Presentation Foundation (WPF) application that accesses the web using HttpClient. This kind of network access is ideal for seeing the TAP in action. The application will get the contents of a classic book from the web and will provide a count of the number of words in the book.

How to do it…

Let's go to Visual Studio 2012 and see how to use the async and await keywords to maintain a responsive UI by doing the web communications asynchronously.

Start a new project using the WPF Application project template and assign WordCountAsync as the Solution name.

Begin by opening MainWindow.xaml and adding the following XAML to create a simple user interface containing a Button and a TextBlock:

<Window x:Class="WordCountAsync.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="WordCountAsync" Height="350" Width="525">
    <Grid>
        <Button x:Name="StartButton" Content="Start" HorizontalAlignment="Left" Margin="219,195,0,0" VerticalAlignment="Top" Width="75" RenderTransformOrigin="-0.2,0.45" Click="StartButton_Click"/>
        <TextBlock x:Name="TextResults" HorizontalAlignment="Left" Margin="60,28,0,0" TextWrapping="Wrap" VerticalAlignment="Top" Height="139" Width="411"/>
    </Grid>
</Window>

Next, open up MainWindow.xaml.cs. Go to the project and add a reference to System.Net.Http.

Add the following using directives to the top of your MainWindow class file:

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using System.Windows;

At the top of the MainWindow class, add a character array constant that will be used to split the contents of the book into a word array:

char[] delimiters = { ' ', ',', '.', ';', ':', '-', '_', '/', '\u000A' };

Add a button click event for the StartButton and add the async modifier to the method signature to indicate that this will be an async method. Please note that async methods that return void are normally only used for event handlers, and should otherwise be avoided.

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
}

Next, let's create an async method called GetWordCountAsync that returns Task<int>. This method creates an HttpClient and calls its GetStringAsync method to download the book contents as a string. It then uses the Split method to split the string into a wordArray, and returns the count of the wordArray as its return value.

public async Task<int> GetWordCountAsync()
{
    TextResults.Text += "Getting the word count for Origin of Species...\n";
    var client = new HttpClient();
    var bookContents = await client.GetStringAsync(@"http://www.gutenberg.org/files/2009/2009.txt");
    var wordArray = bookContents.Split(delimiters, StringSplitOptions.RemoveEmptyEntries);
    return wordArray.Count();
}

Finally, let's complete the implementation of our button click event. The Click event handler just calls GetWordCountAsync with the await keyword and displays the result in the TextBlock:

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
    var result = await GetWordCountAsync();
    TextResults.Text += String.Format("Origin of Species word count: {0}", result);
}

In Visual Studio 2012, press F5 to run the project, then click on the Start button; the application downloads the book and displays the word count.

How it works…

In the TAP, asynchronous methods are marked with an async modifier. The async modifier on a method does not mean that the method will be scheduled to run asynchronously on a worker thread. It means that the method contains control flow that involves waiting for the result of an asynchronous operation, and will be rewritten by the compiler to ensure that the asynchronous operation can resume this method at the right spot.

Let me try to put this a little more simply. When you add the async modifier to a method, it indicates that the method will wait on asynchronous code to complete. This is done with the await keyword. The compiler takes the code that follows the await keyword in an async method and turns it into a continuation that will run after the result of the async operation is available. In the meantime, the method is suspended, and control returns to the method's caller.

If you add the async modifier to a method and then don't await anything, it won't cause an error; the method will simply run synchronously.

An async method can have one of three return types: void, Task, or Task<TResult>. As mentioned before, a task in this context doesn't mean that the work will execute on a separate thread. In this case, the task is just a container for the asynchronous work, and in the case of Task<TResult>, it is a promise that a result value of type TResult will show up after the asynchronous operation completes.

In our application, we use the async keyword to mark the button click event handler as asynchronous, and then we wait for the GetWordCountAsync method to complete by using the await keyword:

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
    StartButton.IsEnabled = false;
    var result = await GetWordCountAsync();
    TextResults.Text += String.Format("Origin of Species word count: {0}", result);
    StartButton.IsEnabled = true;
}

The code that follows the await keyword, in this case the line that updates the TextBlock, is turned by the compiler into a continuation that runs after the integer result is available. If the Click event is fired again while this asynchronous task is in progress, another asynchronous task is created and awaited. To prevent this, it is a common practice to disable the button that is clicked, as the version of the handler above does.

It is a convention to name an asynchronous method with an Async postfix, as we have done with GetWordCountAsync.
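To make the three possible return types concrete, here is a minimal sketch; the method names are illustrative and not part of the recipe:

// Sketch: the three possible return types of an async method.
public async void FireAndForgetAsync()      // void: suitable for event handlers only
{
    await Task.Delay(100);
}

public async Task DoWorkAsync()             // Task: awaitable, produces no result value
{
    await Task.Delay(100);
}

public async Task<int> GetAnswerAsync()     // Task<TResult>: a promise of an int
{
    await Task.Delay(100);
    return 42;
}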
Handling Exceptions in asynchronous code

So how would you add exception handling to code that is executed asynchronously? In previous asynchronous patterns, this was very difficult to achieve. In C# 5.0 it is much more straightforward, because you just wrap the asynchronous call in a standard try/catch block. On the surface this sounds easy, and it is, but there is more going on behind the scenes, which will be explained right after we build our next example application.

For this recipe, we will return to our classic books word count scenario, and we will handle an exception thrown by HttpClient when it tries to get the book contents using an incorrect URL.

How to do it…

Let's build another WPF application and take a look at how to handle exceptions when something goes wrong in one of our asynchronous methods.

Start a new project using the WPF Application project template and assign AsyncExceptions as the Solution name.

Begin by opening MainWindow.xaml and adding the following XAML to create a simple user interface containing a Button and a TextBlock:

<Window x:Class="AsyncExceptions.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="AsyncExceptions" Height="350" Width="525">
    <Grid>
        <Button x:Name="StartButton" Content="Start" HorizontalAlignment="Left" Margin="219,195,0,0" VerticalAlignment="Top" Width="75" RenderTransformOrigin="-0.2,0.45" Click="StartButton_Click"/>
        <TextBlock x:Name="ResultsTextBlock" HorizontalAlignment="Left" Margin="60,28,0,0" TextWrapping="Wrap" VerticalAlignment="Top" Height="139" Width="411"/>
    </Grid>
</Window>

Next, open up MainWindow.xaml.cs. Go to the Project Explorer, right-click on References, click on Framework in the menu on the left side of the Reference Manager, and then add a reference to System.Net.Http.

Add the following using directives to the top of your MainWindow class file:

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using System.Windows;

At the top of the MainWindow class, add a character array constant that will be used to split the contents of the book into a word array:

char[] delimiters = { ' ', ',', '.', ';', ':', '-', '_', '/', '\u000A' };

Now let's create our GetWordCountAsync method. This method is very similar to the one in the last recipe, but it tries to access the book at an incorrect URL. The asynchronous code is wrapped in a try/catch block to handle the exception, and a finally block disposes of the HttpClient:

public async Task<int> GetWordCountAsync()
{
    ResultsTextBlock.Text += "Getting the word count for Origin of Species...\n";
    var client = new HttpClient();
    try
    {
        var bookContents = await client.GetStringAsync(@"http://www.gutenberg.org/files/2009/No_Book_Here.txt");
        var wordArray = bookContents.Split(delimiters, StringSplitOptions.RemoveEmptyEntries);
        return wordArray.Count();
    }
    catch (Exception ex)
    {
        ResultsTextBlock.Text += String.Format("An error has occurred: {0} \n", ex.Message);
        return 0;
    }
    finally
    {
        client.Dispose();
    }
}

Finally, let's create the Click event handler for our StartButton. This is pretty much the same as in the last recipe, just wrapped in a try/catch block. Don't forget to add the async modifier to the method signature:

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
    try
    {
        var result = await GetWordCountAsync();
        ResultsTextBlock.Text += String.Format("Origin of Species word count: {0}", result);
    }
    catch (Exception ex)
    {
        ResultsTextBlock.Text += String.Format("An error has occurred: {0} \n", ex.Message);
    }
}

Now, in Visual Studio 2012, press F5 to run the project and click on the Start button; the error message is displayed in the text block.

How it works…

Wrapping your asynchronous code in a try/catch block is pretty easy. In fact, it hides some of the complex work Visual Studio 2012 is doing for us. To understand this, you need to think about the context in which your code is running. When the TAP is used in Windows Forms or WPF applications, there is already a context that the code runs in, such as the message loop UI thread. When async calls are made in those applications, the awaited code goes off to do its work asynchronously and the async method exits back to its caller. In other words, program execution returns to the message loop UI thread.

Console applications don't have the concept of a context. When the code hits an awaited call inside the try block, it will exit back to its caller, which in this case is Main. If there is no more code after the awaited call, the application ends without the async method ever finishing. To alleviate this issue, Microsoft included an async-compatible context with the TAP that is used for console apps or unit test apps to prevent this inconsistent behavior. This new context is called GeneralThreadAffineContext.
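To see why this matters, consider how a console program typically keeps itself alive until the asynchronous work finishes: it blocks on the returned task explicitly. This is a minimal sketch, and the MainAsync name is just a common convention, not something from the recipe:

// Sketch: blocking a console Main on an async method (C# 5 has no async Main).
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Block until the asynchronous work completes, so the process
        // does not exit while an await is still pending.
        MainAsync().GetAwaiter().GetResult();
    }

    static async Task MainAsync()
    {
        await Task.Delay(1000); // stands in for a real awaited operation
        Console.WriteLine("Done.");
    }
}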
Do you really need to understand these context issues to handle async exceptions? No, not really. That's part of the beauty of the Task-based Asynchronous Pattern.

Cancelling an asynchronous operation

In .NET 4.5, asynchronous operations can be cancelled in the same way that parallel tasks can be cancelled: by passing in a CancellationToken and calling the Cancel method on the CancellationTokenSource. In this recipe, we are going to create a WPF application that gets the contents of a classic book over the web and performs a word count. This time, though, we are going to set up a Cancel button that we can use to cancel the async operation if we don't want to wait for it to finish.

How to do it…

Let's create a WPF application to show how we can add cancellation to our asynchronous methods.

Start a new project using the WPF Application project template and assign AsyncCancellation as the Solution name.

Begin by opening MainWindow.xaml and adding the following XAML to create our user interface. In this case, the UI contains a TextBlock, a StartButton, and a CancelButton:

<Window x:Class="AsyncCancellation.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="AsyncCancellation" Height="400" Width="599">
    <Grid Width="600" Height="400">
        <Button x:Name="StartButton" Content="Start" HorizontalAlignment="Left" Margin="142,183,0,0" VerticalAlignment="Top" Width="75" RenderTransformOrigin="-0.2,0.45" Click="StartButton_Click"/>
        <Button x:Name="CancelButton" Content="Cancel" HorizontalAlignment="Left" Margin="379,185,0,0" VerticalAlignment="Top" Width="75" Click="CancelButton_Click"/>
        <TextBlock x:Name="TextResult" HorizontalAlignment="Left" Margin="27,24,0,0" TextWrapping="Wrap" VerticalAlignment="Top" Height="135" Width="540"/>
    </Grid>
</Window>

Next, open up MainWindow.xaml.cs, go to the Project Explorer, and add a reference to System.Net.Http.

Add the following using directives to the top of your MainWindow class file (note that System.Threading is needed for CancellationToken):

using System;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Windows;

At the top of the MainWindow class, add the character array constant used to split the contents of the book into a word array, along with a CancellationTokenSource field that the two button handlers will share:

char[] delimiters = { ' ', ',', '.', ';', ':', '-', '_', '/', '\u000A' };
CancellationTokenSource cts;

Next, let's create the GetWordCountAsync method. This method is very similar to the one explained before. It needs to be marked as asynchronous with the async modifier and it returns Task<int>. This time, however, the method takes a CancellationToken parameter. We also need to use the GetAsync method of HttpClient instead of the GetStringAsync method, because the former supports cancellation, whereas the latter does not. We will add a small delay to the method so that we have time to cancel the operation before the download completes:

public async Task<int> GetWordCountAsync(CancellationToken ct)
{
    TextResult.Text += "Getting the word count for Origin of Species...\n";
    var client = new HttpClient();
    await Task.Delay(500);
    try
    {
        HttpResponseMessage response = await client.GetAsync(@"http://www.gutenberg.org/files/2009/2009.txt", ct);
        var words = await response.Content.ReadAsStringAsync();
        var wordArray = words.Split(delimiters, StringSplitOptions.RemoveEmptyEntries);
        return wordArray.Count();
    }
    finally
    {
        client.Dispose();
    }
}

Now, let's create the Click event handler for our CancelButton. This method just needs to check whether the CancellationTokenSource is null, and if it isn't, it calls the Cancel method:

private void CancelButton_Click(object sender, RoutedEventArgs e)
{
    if (cts != null)
    {
        cts.Cancel();
    }
}

OK, let's finish up by adding a Click event handler for the StartButton. This method is the same as explained before, except that it creates the CancellationTokenSource and also has a catch block that specifically handles OperationCanceledException. Don't forget to mark the method with the async modifier:

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
    cts = new CancellationTokenSource();
    try
    {
        var result = await GetWordCountAsync(cts.Token);
        TextResult.Text += String.Format("Origin of Species word count: {0}\n", result);
    }
    catch (OperationCanceledException)
    {
        TextResult.Text += "The word count operation was cancelled.\n";
    }
    finally
    {
        cts = null;
    }
}

In Visual Studio 2012, press F5 to run the project, then click on the Start button followed by the Cancel button; the cancellation message is displayed.

How it works…

Cancellation is an aspect of user interaction that you need to consider to build a professional async application. In this example, we implemented cancellation by using a Cancel button, which is one of the most common ways to surface cancellation functionality in a GUI application.

In this recipe, cancellation follows a very common flow. The caller (the Start button's Click event handler) creates a CancellationTokenSource object:

private async void StartButton_Click(object sender, RoutedEventArgs e)
{
    cts = new CancellationTokenSource();
    ...
}

The caller calls a cancellable method, passing in the CancellationToken from the CancellationTokenSource (CancellationTokenSource.Token):

public async Task<int> GetWordCountAsync(CancellationToken ct)
{
    ...
    HttpResponseMessage response = await client.GetAsync(@"http://www.gutenberg.org/files/2009/2009.txt", ct);
    ...
}

The Cancel button's Click event handler requests cancellation using the CancellationTokenSource object (CancellationTokenSource.Cancel()):

private void CancelButton_Click(object sender, RoutedEventArgs e)
{
    if (cts != null)
    {
        cts.Cancel();
    }
}

The task acknowledges the cancellation by throwing OperationCanceledException, which we handle in the catch block of the Start button's Click event handler.
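As a closing note on this recipe, a CancellationTokenSource can also request cancellation on a timer via its CancelAfter method, new in .NET 4.5. The sketch below is not part of the recipe; it assumes it runs inside an async method of the same window, reusing the GetWordCountAsync method shown above:

// Sketch: cancelling the download automatically after a 10-second timeout.
var cts = new CancellationTokenSource();
cts.CancelAfter(TimeSpan.FromSeconds(10));
try
{
    var count = await GetWordCountAsync(cts.Token);
    TextResult.Text += String.Format("Origin of Species word count: {0}\n", count);
}
catch (OperationCanceledException)
{
    TextResult.Text += "The operation timed out.\n";
}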

Creating a sample project

Packt
06 Aug 2013
6 min read
In this article, we will create a sample project based on a transit system. We will use AutoMapper to help us transform our domain objects into the ViewModel objects that we will use in the presentation layer.

Before we begin, here is a business overview of the domain objects used in the transit system. There are four main domain objects that we will work with. TransitStop is the main entry point to our system; it represents a bus stop in the real world. The TransitStop domain object provides a commuter with a GeoLocation giving the latitude and longitude of the stop, which allows a commuter to easily locate nearby bus stops on a map if they so desire. A commuter can also use the stop's UniqueNumber to find out when the next bus will be arriving at the bus stop. Each TransitStop has a UniqueNumber identifier so that a commuter can easily identify the stop without a GPS device. From there, we have the Transit object, which represents a vehicle arriving at the stop. There can be multiple vehicles arriving, and each has an ArrivalTime object representing the arrival time of the bus at the stop.

Step 1 – defining the ViewModel object

The first view model that we will define and use in our presentation layer is the BusStopViewModel object, which will be mapped from TransitStop. The code for it is defined as follows:

public class BusStopViewModel
{
    public string Name { get; set; }
    public GeoLocationViewModel Location { get; set; }
    public int TransitStopNumber { get; set; }
}

Compare that to our domain object:

public class TransitStop
{
    public string StationName { get; set; }
    public int UniqueNumber { get; set; }
    public GeoLocation Location { get; set; }
    public IEnumerable<Transit> Buses { get; set; }

    public string GetName()
    {
        return string.Format("{0} - {1}", UniqueNumber, StationName);
    }
}

Our ViewModel object will most likely have a different structure than our domain object, since the domain object carries properties and data that do not pertain to the presentation layer. Note that the naming convention in BusStopViewModel is mostly identical; in doing so, we let AutoMapper automatically map the data for us.

Step 2 – creating the mapping

In our mapping code, we will use a repository to get the data. Our repository holds in-memory data that we generate; in real-world situations, one would use a WCF web service, Web API, SQL, or a NoSQL solution:

public class TransitStopRepository
{
    private readonly IList<TransitStop> _data = new List<TransitStop>();

    /// <summary>
    /// Initializes a new instance of the <see cref="TransitStopRepository"/> class.
    /// </summary>
    public TransitStopRepository()
    {
        Populate();
    }

    /// <summary>
    /// Populates this instance.
    /// </summary>
    private void Populate()
    {
        var transitStop = new TransitStop
        {
            StationName = "Tsim Sha Tsui",
            UniqueNumber = 123,
            Location = new GeoLocation(114.171575, 22.293314),
            Buses = GenerateTSTBuses()
        };
        _data.Add(transitStop);
    }

We are using an IList that contains all our transit data. The Populate method fills the list with all the information needed; it acts like a database containing all the TransitStop information. The repository continues with the following code:

/// <summary>
/// Generates the TST KMB buses.
/// </summary>
/// <returns></returns>
private IEnumerable<Transit> GenerateTSTBuses()
{
    var buses = new List<Transit>
    {
        new Transit
        {
            Number = "23A",
            ArrivalTimes = GenerateTimeTable(DateTime.Now),
            TransitColor = "Green"
        },
        new Transit
        {
            Number = "23B",
            ArrivalTimes = GenerateTimeTable(DateTime.Now),
            TransitColor = "Yellow"
        }
    };
    return buses;
}

We use the GenerateTSTBuses method to generate two transit routes that will be arriving at the transit stop. It also generates a timetable for the transits arriving at the stop, so that our domain objects are fully populated when we query our in-memory database:

/// <summary>
/// Generates a timetable of ten arrival times, starting 2 minutes out
/// and doubling the interval for each subsequent arrival.
/// </summary>
/// <param name="from">From.</param>
/// <returns></returns>
private IEnumerable<ArrivalTime> GenerateTimeTable(DateTime from)
{
    //create a list
    var list = new List<ArrivalTime>();
    var everyXminutes = 2;

    //we will always generate 10 transits arriving at the stop
    for (var i = 0; i < 10; i++)
    {
        var arrivalTime = new ArrivalTime
        {
            Arrival = from.AddMinutes(everyXminutes)
        };

        //double it for the next arrival
        everyXminutes *= 2;
        list.Add(arrivalTime);
    }
    return list;
}

The GenerateTimeTable method generates the ArrivalTime objects acting as a timetable of buses for our in-memory database; we have hardcoded it to ten arrivals, with the interval doubling from an initial 2 minutes.

Now that our repository has its data fully populated, we can use it to get fully populated domain objects. The following example uses the repository's GetByUniqueNumber method to get a TransitStop object:

public static BusStopViewModel GetTransitStop(int uniqueNumber)
{
    var repo = new TransitStopRepository();
    var tStop = repo.GetByUniqueNumber(uniqueNumber);

    //create the map from -> to object
    AutoMapper.Mapper.CreateMap<TransitStop, BusStopViewModel>();

We then declare our mapping of the domain object to the ViewModel by calling CreateMap, which tells AutoMapper to automatically map the properties that follow the same naming convention. Creating a map is essential: it tells AutoMapper what to map and what not to map.

Step 3 – mapping the object

The mapping is done by calling the Map method in AutoMapper and passing in the data to map from. In the preceding example, the tStop variable contains the data that we pass to AutoMapper. By calling the Map method, we are essentially requesting that AutoMapper create another object based on the data we have provided, as illustrated in the following code:

    //map the object
    var result = AutoMapper.Mapper.Map<TransitStop, BusStopViewModel>(tStop);
    return result;
}

In the preceding code, we are requesting that AutoMapper map the TransitStop object into a BusStopViewModel object.
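One piece the article does not show is the repository's GetByUniqueNumber method used above. A minimal sketch, assuming the in-memory _data list from Step 2 and a using directive for System.Linq, might look like this:

/// <summary>
/// Finds the transit stop with the given unique number.
/// </summary>
public TransitStop GetByUniqueNumber(int uniqueNumber)
{
    // _data is the in-memory list populated in the repository constructor.
    return _data.FirstOrDefault(t => t.UniqueNumber == uniqueNumber);
}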
Step 4 – not mapping certain data

Let's assume that the Location object is very compute-intensive, requiring a lot of memory and CPU, or that we wish to obtain the geolocation from an external service such as Google or Bing Maps. We can request that AutoMapper skip this member in our CreateMap call by using the ForMember mapping-expression method, as follows:

AutoMapper.Mapper.CreateMap<TransitStop, BusStopViewModel>()
    .ForMember(dest => dest.Location, opt => opt.Ignore());

By providing the Ignore option in the ForMember method, AutoMapper will not map the Location object. ForMember is also where members whose names do not match are wired up; since UniqueNumber and TransitStopNumber share no naming convention, a call such as .ForMember(dest => dest.TransitStopNumber, opt => opt.MapFrom(src => src.UniqueNumber)) would map them explicitly.

Summary

In this article, we saw how to install AutoMapper, how to create a sample project, and how to transform our domain objects into ViewModel objects.

Resources for Article:

Further resources on this subject: ER Diagrams, Domain Model, and N-Layer Architecture with ASP.NET 3.5 (part2) [Article] Deploying .NET-based Applications on to Microsoft Windows CE Enabled Smart Devices [Article] ER Diagrams, Domain Model, and N-Layer Architecture with ASP.NET 3.5 (part1) [Article]

Setting Up Your Profile

Packt
06 Aug 2013
5 min read
(For more resources related to this topic, see here.)

Setting up your profile (Simple)

While it may seem trivial, setting up your profile is one of the most important steps in starting your Edmodo experience. Your profile gives others insight into the professional you! Remember that the users on Edmodo are fellow teachers, and the ability to connect with these educators around the world is an opportunity that should not be overlooked. Thus, it is important to take care to provide a snapshot of your educational expertise.

Getting ready

Create your teacher account at http://www.edmodo.com.

How to do it...

Creating an Edmodo account takes only minutes, but it is the most important step as you begin your Edmodo journey.

Click on I'm a Teacher.
Choose a username and password.
Connect to your educational institution to verify your teacher account.
Upload a photo of yourself.
Join online Edmodo communities (available now, but advisable to skip at this juncture).
Find teacher connections.
Fill in your About Me section.

How it works...

The Edmodo website looks like the following:

On the Edmodo home page, http://www.edmodo.com, click on I'm a Teacher to begin your Edmodo journey. Set up your teacher account using a unique username and a password that you can remember. If your first choice of username is not available, choose another until you are notified of a successful selection. The e-mail address that you attach to your Edmodo account should be your school e-mail. You will also need to choose your school affiliation at this time; if your school is not listed as a choice on Edmodo, you may do a manual search for it. Selecting your school ensures that fellow teachers within your district can easily connect with you, and it gives Edmodo the background to suggest other educators with whom you may want to connect. Additionally, once you become active in the Edmodo community, your school selection provides greater insight into your teaching background and a better context for other teachers to partner with you in collaborative endeavors.

Once you have created your account, you will be prompted to upload a photo of yourself. This is advisable in order to make yourself easier to distinguish when you interact in the professional communities, and it literally puts a face to your name. You certainly have the option of using one of Edmodo's generic pictures; however, you would then be sharing that generic picture with thousands of other users.

You will also be prompted to create a unique URL, which makes it easier for others to find you when making professional connections. Your username is probably the easiest option for this.

Next, you have the option of joining an array of online professional communities. We will come back to this step later, in the section on Edmodo Communities. However, you will notice that Edmodo has automatically enrolled you in its Help community. This community is designated with the question mark symbol, and once you have been redirected to your home page, you will see it in the left section of your screen, directly below your established Groups.

From your profile page, you can find teacher connections. You can choose from the teacher suggestions made by Edmodo, which are based on your school district selection. These suggestions are located on the left-hand side of the profile screen.
Simply click on a teacher with whom you would like to make a connection. If you would like to connect with teachers who are not yet on Edmodo, you may send them an invitation from your profile page: hover over the How to improve my profile? link on the right-hand side of the screen and enter the e-mail addresses of the educators you would like to invite to join Edmodo.

Your profile page also gives you the ability to write an About Me description. In this portion, include the courses you teach and any educational interests you might have that could pique the interest of your fellow educators. Note my personal Edmodo About Me description, as seen in the preceding screenshot.

There's more...

You have created the basics of your profile. However, in order to gain clout in the Edmodo online community, you may want to begin earning badges. Your first chance to do so is in your profile settings.

Earning teacher badges

You will notice on your Edmodo profile page that you have the opportunity to earn teacher badges. Simply having your teacher account verified as one that belongs to an educator will earn you the Verified Teacher badge, but you can collect many others. Joining one of the subject-area communities will net you a Community Member badge, and following a publisher community will score you the Publisher Collaborator badge. Connect with at least 10 other educators on Edmodo and you will be awarded the Connected badge; the more educators you connect with, the higher the levels of this badge you can earn. Another badge worth coveting is the Librarian badge, earned when you begin sharing resources on Edmodo that other educators find useful. (See Sharing Resources for additional information on how to do so.)

Summary

This article provided details on getting started with Edmodo, a simple yet effective classroom-environment setup, and walked through setting up your profile on Edmodo.

Resources for Article:

Further resources on this subject: Getting to Grips with the Facebook Platform [Article] Introduction to Moodle Modules [Article] Getting Started with Facebook Application Development using ColdFusion/Railo [Article]