
How-To Tutorials - Programming

1081 Articles

Liferay Portal 6: Build WAP sites and integrate with CRM and Netvibes widgets

Packt
18 May 2010
8 min read
(Read more interesting articles on Liferay Portal 6 here.)

WAP

Liferay goes mobile! As smart-phones continue to collapse the space between dialing a number, taking a picture, and discovering new music, mobile browsers offer the next frontier in the previously desktop-exclusive market of web design. A mobile browser is a web browser designed for use on a mobile device such as a mobile phone, PDA, or iPhone, and is optimized to display web content effectively on the small screens of portable devices. A WAP browser provides all of the basic services of a web browser, but is simplified to operate within the restrictions of a mobile phone, such as its smaller view screen.

The websites generated by the portal work with mobile browsers and WAP browsers, so you can browse portal websites, called WAP sites, through mobile devices. Themes (the look and feel of websites or WAP sites) in the portal will detect mobile devices dynamically. As mentioned earlier, each site may have its own look and feel, and each page could have its own look and feel. Under the Look and Feel tab, you will see Regular Browsers and Mobile Devices, where the available themes appear. Of course, you can develop your own mobile themes or WAP themes depending on your requirements. Here we're going to discuss several existing mobile themes or WAP themes and, going further, see what themes are and how they work.

Jedi Mobile theme

The theme Jedi Mobile has been applied on the home page of the community Guest. As you can see, the theme Jedi Mobile takes the original Jedi theme and packs it into a bite-sized, smart-phone punch.

Structure

The theme Jedi Mobile has the following folder structure at $AS_WEB_APP_HOME/jedi-mobile-theme:

- css: CSS files
- images: Image files
- javascripts: JavaScript files
- templates: Velocity template files
- WEB-INF: Web info specification, including the sub-folders classes, lib, and tld

As you can see, the web info specification covers liferay-look-and-feel.xml, liferay-plugin-package.properties, liferay-plugin-package.xml, and web.xml. Knowing the structure of the theme is helpful when customizing it.

How does it work?

You can bring the theme Jedi Mobile into the portal by following these steps:

1. Download the WAR file ${jedi.mobile.theme.war} from http://liferay.cignex.com/palm_tree/book/0387/chapter12/jedi-mobile-theme-6.0.0.1.war.
2. Drop the WAR file ${jedi.mobile.theme.war} into the folder $LIFERAY_HOME/deploy while the portal is running.
3. Then apply the theme as the current look and feel of pages.

What's happening?

The theme Jedi Mobile specifies the following script in $AS_WEB_APP_HOME/jedi-mobile-theme/templates/portal_normal.vm:

<script type="text/javascript">
iPhone = function() {
    setTimeout("window.scrollTo(0,1)", 100)
}
if (navigator.userAgent.indexOf('iPhone') != -1) {
    addEventListener("load", iPhone, false)
    addEventListener("onorientationchange", iPhone, false)
}
</script>

As shown in the preceding code, it inspects the navigator's user agent to detect whether the device is an iPhone.
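The template above performs the detection in client-side JavaScript; server-side code can inspect the same User-Agent header. The following is a minimal, hypothetical Java servlet sketch (not part of any theme; the class name, JSP paths, and forwarding behavior are invented for illustration) showing the same check:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical example: detect an iPhone browser by User-Agent,
// mirroring the check done in portal_normal.vm.
public class MobileDetectServlet extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String userAgent = request.getHeader("User-Agent");
        boolean isIphone = userAgent != null
                && userAgent.toLowerCase().indexOf("iphone") != -1;

        if (isIphone) {
            // Serve a mobile-optimized page for iPhone visitors.
            request.getRequestDispatcher("/mobile/home.jsp").forward(request, response);
        } else {
            request.getRequestDispatcher("/home.jsp").forward(request, response);
        }
    }
}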
iPhone theme

The theme iPhone takes a much more direct approach to web applications. With its indigenous appearance and feel, the WAP site starts to feel like a playlist: the user experience appears native, and navigation comes naturally.

How does it work?

You can bring the iPhone theme into the portal by following these steps:

1. Download the WAR file ${iphone.theme.war} from http://liferay.cignex.com/palm_tree/book/0387/chapter12/iphone-theme-6.0.0.1.war.
2. Drop the WAR file ${iphone.theme.war} into the folder $LIFERAY_HOME/deploy while the portal is running.
3. Then apply the theme as the current look and feel of pages.

What's happening?

The theme iPhone introduces a new browser detection mechanism for specialized mobile functionality. If you visit the site on an iPhone, you get the bare minimum—JavaScript, HTML, and CSS. If you visit the site on a regular browser, you get all the more advanced UI features. The theme iPhone specifies the following script in $AS_WEB_APP_HOME/iphone-theme/templates/portal_normal.vm:

#set ($isIphone = $request.getHeader("User-Agent").toLowerCase().indexOf("iphone") != -1)
<!-- ignore details -->
#if ($isIphone)
<!-- ignore details -->
#else
$theme.include($top_head_include)
#end
</head>
<!-- ignore details -->

iPhone Redirect theme

The theme iPhone Redirect takes the browser detection mechanism a step further with intelligent redirection. It is an unstyled theme that comes with a custom initialization feature—that is, it can detect an iPhone browser visiting the page, check for a Mobile community, and automatically redirect the iPhone user to that community if found. Moreover, it works with a virtual host.

How does it work?

You can bring the theme iPhone Redirect into the portal by following these steps:

1. Download the WAR file ${iphone.redirect.theme.war} from http://liferay.cignex.com/palm_tree/book/0387/chapter12/so-theme-6.0.0.1.war.
2. Drop the WAR file ${iphone.redirect.theme.war} into the folder $LIFERAY_HOME/deploy while the portal is running.
3. Then apply the theme as the current look and feel of pages.

What's happening?

The theme iPhone Redirect specifies the following code in $AS_WEB_APP_HOME/iphone-detect-theme/templates/init_custom.vm:

//ignore details
#set ($isIphone = $request.getHeader("User-Agent").toLowerCase().indexOf("iphone") != -1)
#if ($isIphone && $mobileGroup && $group_id != $mobileGroup.groupId)
<script type="text/javascript">
window.location.href = '${layoutSet.virtualHost}' ? 'http://' + '${layoutSet.virtualHost}' + ((window.location.port) ? ':' + window.location.port : '') : '/web${mobileGroup.friendlyURL}'
</script>
#end

Of course, you can customize the preceding themes according to your own requirements.

Reporting

JasperReports is an open source Java reporting tool that can write to the screen, a printer, or to PDF, HTML, Microsoft Excel, RTF, ODT, CSV (Comma Separated Values), and XML files. It can be used in Java-enabled applications, including Java EE or web applications, to generate dynamic content. It reads its instructions from an XML or .jasper file. Refer to http://www.jasperforge.org/jasperreports for more information.

The portal provides full integration of JasperReports with the reporting framework—a web plugin called reporting-jasper-web and a portlet called reports-console-portlet. The portal provides the ability to schedule reports and deliver them via the Document Library and e-mail. In addition, the portal has added support for a Jasper XLS data source to the reporting framework.

JasperReports Engine

The Liferay JasperReports Report Engine provides an implementation of Liferay BI using Jasper.
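To give a feel for what the engine wraps, here is a minimal, self-contained JasperReports sketch (an illustration independent of the Liferay integration; the template path, parameter names, and JDBC settings are assumptions) that compiles a JRXML template, fills it from a JDBC connection, and exports the result to PDF:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.HashMap;
import java.util.Map;

import net.sf.jasperreports.engine.JasperCompileManager;
import net.sf.jasperreports.engine.JasperExportManager;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.JasperReport;

public class ReportSketch {

    public static void main(String[] args) throws Exception {
        // Compile the report design (.jrxml) into a report object.
        JasperReport report = JasperCompileManager.compileReport("sample_report.jrxml");

        // Parameters referenced by the template (names are assumed).
        Map<String, Object> parameters = new HashMap<String, Object>();
        parameters.put("ReportTitle", "Sample Report");

        // Fill the report with data from a JDBC data source.
        Connection connection = DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/lportal", "user", "password");
        JasperPrint print = JasperFillManager.fillReport(report, parameters, connection);

        // Export the filled report to one of the supported formats.
        JasperExportManager.exportReportToPdfFile(print, "sample_report.pdf");
        connection.close();
    }
}

The same JasperPrint object could instead be exported to HTML or XML with the other JasperExportManager methods, which is the range of output formats the reporting framework schedules and delivers.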
You can bring the web plugin Reporting Jasper into the portal by following these steps:

1. Download the WAR file ${reporting.jasper.web.war} from http://liferay.cignex.com/palm_tree/book/0387/chapter12/reporting-jasper-web-6.0.0.1.war.
2. Drop the WAR file ${reporting.jasper.web.war} into the folder $LIFERAY_HOME/deploy while the portal is running.

Note that the currently integrated version of JasperReports is 3.6.2. You can upgrade it to the latest version of JasperReports at any time.

The Reports portlets

The plugin Reports Console defines two portlets: Reports Console in the Control Panel and Reports Display. The portlet Reports Display is instanceable—that is, you can add more than one instance of the portlet to a page. You can bring the plugin Reports Console into the portal by following these steps:

1. Download the WAR file ${reports.console.portlet.war} from http://liferay.cignex.com/palm_tree/book/0387/chapter12/reports-console-portlet-6.0.0.1.war.
2. Drop the WAR file ${reports.console.portlet.war} into the folder $LIFERAY_HOME/deploy while the portal is running.

As shown in the following screenshot, the portlet Reports Display provides the ability to search for reports and display search results with pagination. Search results are displayed with a set of columns: Report Definition Name, Report Format, Requested Date, and Reporting Date. As you can see, you can search for reports via basic search or advanced search. The advanced search covers the following items:

- Match: All or Any of the following fields
- Definition Name: User's input
- Datasource Name: Jasper Empty or Portal
- Format: CSV, Excel, HTML, PDF, RTF, Text, or XML
- Requesting User Name: User's input
- Start date: A date
- End date: A date
- Active: Yes or No

The portlet Reports Console provides the ability to manage reports in the Control Panel. By going to Content | Reports Console under the Control Panel, you can search for generated reports under the tab Generated Reports. You can also create report definitions under the tab Report Definitions, where you can search report definitions via basic search or advanced search. The advanced search covers the following items:

- Match: All or Any of the following fields
- Definition Name: User's input
- Description: User's input
- Datasource Name: Multiple checkboxes, Jasper Empty or Portal

Of course, you can add definitions with the following items:

- Definition Name: Input (required)
- Description: User's input
- Datasource Name: Empty or Portal
- Template: Uploaded template file (required)
- Report Parameters: Multiple (key, value) pairs; the key is the user's input and is optionally bound to report parameters, and the value is the user's input
- Permissions: A checkbox for public permissions configuration


Liferay Portal 6: Pluggable Enterprise Search and Plugin Management

Packt
17 May 2010
13 min read
(Read more interesting articles on Liferay Portal 6 here.)

Pluggable Enterprise Search

As an alternative to using Lucene, the portal supports pluggable search engines. The first implementation of this uses the open source search engine Solr, but in the future there will be many such plugins for the search engine of your choice, such as FAST, GSA, Coveo, and so on. In this section, we're going to discuss caching, indexing, and using Solr for search.

Caching settings

EHCache is a widely used cache implemented in Java, which the portal uses to provide distributed caching in a clustered environment. EHCache is also used in a non-clustered environment to speed up repeated data retrievals. The portal uses EHCache caching by default. At the same time, the portal uses Hibernate caching as well, and it provides the capability to configure both EHCache caching and Hibernate caching.

The portal has specified Hibernate as the default ORM (Object-Relational Mapping) persistence in portal.properties:

persistence.provider=hibernate

The preceding line sets hibernate as the provider used for ORM persistence. Of course, you can set this property to jpa (Java Persistence API), in which case the properties with the prefix jpa.* will be read. Similarly, if this property is set to hibernate, then the properties with the prefix hibernate.* will be read. Note that this property affects the loading of hibernate-spring.xml or jpa-spring.xml in the property spring.configs. For example, the portal has the following JPA configuration specified in portal.properties:

jpa.configs=META-INF/mail-orm.xml,META-INF/portal-orm.xml
jpa.provider=eclipselink
jpa.provider.property.eclipselink.allow-zero-id=true
jpa.load.time.weaver=org.springframework.instrument.classloading.ReflectiveLoadTimeWeaver

As shown in the preceding code, the property jpa.configs sets a list of comma-delimited JPA configurations. The default JPA provider is set to eclipselink via the property jpa.provider; you can set it to other values such as hibernate, openjpa, and toplink. Provider-specific properties are prefixed with jpa.provider.property.*, as the property jpa.provider.property.eclipselink.allow-zero-id illustrates. On the other hand, the LoadTimeWeaver interface specified via the property jpa.load.time.weaver is a Spring class that allows JPA ClassTransformer instances to be plugged in, in a manner specific to the environment. Note that not all JPA providers require a JVM agent; if your provider doesn't require an agent or you have other alternatives, the load-time weaver shouldn't be used.

Configure Hibernate caching

First of all, let's consider the Hibernate caching settings. The portal will automatically detect the Hibernate dialect. However, you can set the following property in portal-ext.properties to manually override the automatically detected dialect:

hibernate.dialect=

The portal also specifies the following properties related to Hibernate caching in portal.properties:
hibernate.configs=//ignore details
    META-INF/ext-hbm.xml
hibernate.cache.provider_class=com.liferay.portal.dao.orm.hibernate.EhCacheProvider
net.sf.ehcache.configurationResourceName=/ehcache/hibernate.xml
hibernate.cache.use_query_cache=true
hibernate.cache.use_second_level_cache=true
hibernate.cache.use_minimal_puts=true
hibernate.cache.use_structured_entries=false
hibernate.jdbc.batch_size=20
hibernate.jdbc.use_scrollable_resultset=true
hibernate.bytecode.use_reflection_optimizer=true
hibernate.query.factory_class=org.hibernate.hql.classic.ClassicQueryTranslatorFactory
hibernate.generate_statistics=false

As shown in the preceding code, the property hibernate.configs sets the Hibernate configurations; you may input a list of comma-delimited Hibernate configurations in portal-ext.properties. The property hibernate.cache.provider_class sets the Hibernate cache provider. The property net.sf.ehcache.configurationResourceName, in turn, is used when Hibernate is configured to use Ehcache's cache provider; Ehcache is recommended in a clustered environment. In a clustered environment, you need to set the property in portal-ext.properties as follows:

net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml

The portal has specified other Hibernate cache settings with properties starting with hibernate.cache.use_*. The property hibernate.jdbc.batch_size sets the JDBC batch size to improve performance. Note that if you're using a Hypersonic database or Oracle 9i, you should set the batch size to 0 as a workaround for a logging bug in the Hypersonic database driver or the Oracle 9i driver. In addition, the property hibernate.query.factory_class sets the classic query factory, and the portal sets the property hibernate.generate_statistics to false. Of course, you could set hibernate.generate_statistics to true in portal-ext.properties to enable Hibernate cache monitoring.

Setting up EHCache caching

The portal has specified the following EHCache caching settings in portal.properties:

ehcache.single.vm.config.location=/ehcache/liferay-single-vm.xml
ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm.xml
ehcache.portal.cache.manager.jmx.enabled=true
ehcache.blocking.cache.allowed=true

As shown in the preceding code, the property ehcache.single.vm.config.location sets the classpath location of the Ehcache configuration file /ehcache/liferay-single-vm.xml for internal caches of a single VM, whereas the property ehcache.multi.vm.config.location sets the classpath location of the Ehcache configuration file /ehcache/liferay-multi-vm.xml for internal caches of multiple VMs. In a clustered environment, you need to set the following:

ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml

In addition, the portal sets the property ehcache.portal.cache.manager.jmx.enabled to true to enable JMX integration in com.liferay.portal.cache.EhcachePortalCacheManager. Moreover, the portal sets the property ehcache.blocking.cache.allowed to true to allow Ehcache to use blocking caches. This improves performance significantly by locking on keys instead of the entire cache. The drawback is that threads can hang if the cache isn't used properly. Therefore, make sure that all queries that return a miss also immediately populate the cache, or else other threads that are blocked on a query of that same key will continue to hang. Of course, you can override the preceding properties in portal-ext.properties.
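For readers who haven't used Ehcache directly, the following standalone sketch (an illustration using the plain Ehcache 2.x API rather than portal code; the cache name and the loader method are assumptions) shows the get-miss-put cycle that these portal caches perform internally:

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class EhcacheSketch {

    public static void main(String[] args) {
        // Creates a CacheManager from ehcache.xml on the classpath,
        // or from the default configuration if none is found.
        CacheManager cacheManager = CacheManager.create();

        // "sampleCache" is an assumed name; in the portal, caches are
        // defined in files like liferay-multi-vm.xml.
        cacheManager.addCache("sampleCache");
        Cache cache = cacheManager.getCache("sampleCache");

        // A cache miss should immediately be followed by a put, which
        // matters even more when blocking caches are allowed.
        Element element = cache.get("user_42");
        if (element == null) {
            String value = loadFromDatabase("user_42"); // stand-in for a real query
            cache.put(new Element("user_42", value));
            element = cache.get("user_42");
        }
        System.out.println(element.getObjectValue());

        cacheManager.shutdown();
    }

    private static String loadFromDatabase(String key) {
        return "value for " + key;
    }
}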
Customization

As you can see, the property net.sf.ehcache.configurationResourceName can have the value /ehcache/hibernate.xml for a non-clustered environment and /ehcache/hibernate-clustered.xml for a clustered environment. In the same pattern, the property ehcache.single.vm.config.location can have the value /ehcache/liferay-single-vm.xml, and the property ehcache.multi.vm.config.location can have the value /ehcache/liferay-multi-vm.xml for a non-clustered environment or /ehcache/liferay-multi-vm-clustered.xml for a clustered environment.

In real cases, you may need to update both the Hibernate caching settings and the Ehcache caching settings, whether in a non-clustered environment or in a clustered one. The following is an example of how to do this:

1. Create a folder named ext-ehcache under the folder $PORTAL_ROOT_HOME/WEB-INF/classes/. Obviously, you can choose a different name ${ehcache.folder.name} for the folder; here we use ext-ehcache as an example.
2. Locate the JAR file portal-impl.jar under the folder $PORTAL_ROOT_HOME/WEB-INF/lib and unzip all the files under the folder ehcache into the folder $PORTAL_ROOT_HOME/WEB-INF/classes/ext-ehcache.
3. Update the following files according to your requirements for both a non-clustered environment and a clustered environment: hibernate.xml, hibernate-clustered.xml, liferay-single-vm.xml, liferay-multi-vm.xml, and liferay-multi-vm-clustered.xml.
4. Set the following for a non-clustered environment in portal-ext.properties:

net.sf.ehcache.configurationResourceName=/ext-ehcache/hibernate.xml
ehcache.single.vm.config.location=/ext-ehcache/liferay-single-vm.xml
ehcache.multi.vm.config.location=/ext-ehcache/liferay-multi-vm.xml

Otherwise, set the following for a clustered environment in portal-ext.properties:

net.sf.ehcache.configurationResourceName=/ext-ehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/ext-ehcache/liferay-multi-vm-clustered.xml

That's it! You have customized both the Hibernate caching settings and the Ehcache caching settings.

Indexing settings

Search engine indexing collects, parses, and stores data to facilitate fast and accurate information retrieval. Apache Lucene is a high-performance, full-featured text search engine library written entirely in Java. It is a technology suitable for nearly any application that requires full-text search, especially cross-platform. Refer to http://lucene.apache.org for more information. By default, the portal uses Lucene for search and indexing.

Lucene search

For faster performance, the portal disables indexing on startup by default, as the following settings in portal.properties show:

index.read.only=false
index.on.startup=false
index.on.startup.delay=60
index.with.thread=true

As you can see, the portal sets the property index.read.only to false to allow writes to the index. You should set it to true if you want to avoid any writes to the index. This is useful in some clustering environments where there is a shared index and only one node of the cluster updates it. The portal sets the property index.on.startup to false in order to avoid indexing on every startup. You could set this property to true if you want to index your entire library of files on startup.
This property is available so that automated test environments can index the files on startup. Don't set it to true on production systems, or else your data will be re-indexed on every startup. The property index.on.startup.delay adds a delay before indexing on startup; a delay may be necessary if a lot of plugins need to be loaded and re-indexed. Note that this property is only valid if the property index.on.startup is set to true. In addition, the portal sets the property index.with.thread to true so that indexing on startup is executed on a separate thread to speed up execution.

Of course, you can re-index either all resources or an individual resource through the web UI. For example, to re-index all search indexes, go to Control Panel | Server | Server Administration | Resources | Actions, and click on the Execute button next to the "Reindex all search indexes" option. Suppose that you're going to re-index an individual resource such as Users: go to Control Panel | Server | Plugin Installation | Portlet Plugins, and click on the Reindex button next to the portlet Users.

Index storage

Lucene stores indexes in the filesystem, the database, or RAM. The portal provides a set of properties to configure index storage, as follows, in portal.properties:

lucene.store.type=file
lucene.store.jdbc.auto.clean.up=false
lucene.store.jdbc.dialect.*
lucene.dir=${liferay.home}/data/lucene/
lucene.file.extractor=com.liferay.portal.search.lucene.LuceneFileExtractor
lucene.file.extractor.regexp.strip=
lucene.analyzer=org.apache.lucene.analysis.standard.StandardAnalyzer
lucene.commit.batch.size=0
lucene.commit.time.interval=0
lucene.buffer.size=16
lucene.merge.factor=10
lucene.optimize.interval=100

As shown in the preceding code, the property lucene.store.type designates whether Lucene stores indexes in a database via JDBC, in the filesystem, or in RAM; the default is the filesystem. When Lucene stores indexes via JDBC, temporary files don't get removed properly, which can eat up disk space over time. Set the property lucene.store.jdbc.auto.clean.up to true to automatically clean up the temporary files once a day. The property lucene.store.jdbc.dialect.* sets the JDBC dialect that Lucene uses to store indexes in the database; it is referenced only when Lucene stores indexes in the database, and the portal will attempt to load the proper dialect based on the URL of the JDBC connection.

The property lucene.dir sets the directory where Lucene indexes are stored. It is referenced only when Lucene stores indexes in the filesystem. In a clustered environment, you could point the property lucene.dir to a shared folder that is accessible to all nodes. More interestingly, you could set one node to allow writes to the indexes via the property index.read.only and set the rest of the nodes to be read-only.

The property lucene.file.extractor specifies a class called by Lucene to extract text from complex files so that they can be properly indexed. The file extractor can sometimes return text that isn't valid for Lucene, so the property lucene.file.extractor.regexp.strip expects a regular expression: any character that doesn't match the regular expression will be replaced with a blank space. You can set an empty regular expression to disable this feature. The property lucene.analyzer sets the default analyzer used for indexing and retrieval.
In addition, the property lucene.commit.batch.size sets how often index updates will be committed: the batch size configures how many consecutive updates trigger a commit, and if the value is 0, the index is committed on every update. The property lucene.commit.time.interval sets the time interval, in milliseconds, that configures how often to commit the index. The time interval isn't read unless the batch size is greater than 0, because the time interval works in conjunction with the batch size to guarantee that the index is committed after a specified time interval. The portal sets the time interval to 0 to disable committing the index by a time interval.

The property lucene.buffer.size sets Lucene's buffer size in megabytes, and the property lucene.merge.factor sets Lucene's merge factor. For both of these properties, higher numbers mean that indexing goes faster but uses more memory. The default value from Lucene is 10, and it should never be set to a number less than 2. The property lucene.optimize.interval sets how often to run Lucene's optimize method. Optimization speeds up searching but slows down writing. You can set this property to 0 to always optimize.

Indexer framework

As mentioned earlier, you can re-index either all resources or an individual resource through the web UI. For example, you can re-index out-of-the-box portlets like Users (portlet ID 125) and plugins like the Mail portlet. This is possible because, in $PORTAL_ROOT_HOME/WEB-INF/liferay-portlet.xml, the portlet Users (named enterprise_admin_users) specifies the following line:

<indexer-class>com.liferay.portlet.enterpriseadmin.util.UserIndexer</indexer-class>

As shown in the preceding code, the indexer-class value, which plugs into the indexer framework, must be a class that implements com.liferay.portal.kernel.search.Indexer and is called to create or update a search index for the portlet Users. Additionally, you can find the indexer framework in out-of-the-box portlets such as Organizations (portlet ID 126, named enterprise_admin_organizations), Web Content (portlet ID 15), Image Gallery (portlet ID 31), Document Library (portlet ID 20), and so on.

Similarly, the indexer-class value is also available in plugins. For example, the portlet Mail specifies the following line in $AS_WEB_APP_HOME/mail-portlet/WEB-INF/liferay-portlet.xml:

<indexer-class>com.liferay.mail.search.Indexer</indexer-class>

In the same pattern, you may add the indexer framework to other plugins, like the Knowledge Base portlet's KBIndexer at http://liferay.cignex.com/palm_tree/book/0387/chapter11/knowledge-base-portlet-6.0.0.1.war, which supports keyword search against titles, descriptions, content, tags, categories, and the category hierarchy, treating "San Francisco" as one word and 'San Francisco' as multiple words ("San" or "Francisco").
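Whatever indexer class a portlet registers, the work it ultimately does is building and writing Lucene documents. The following standalone sketch (an illustration against a Lucene 3.x-era API, roughly the generation Liferay 6 shipped with; the index path and the "title" field are assumptions) shows that core cycle using the StandardAnalyzer mentioned above:

import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class LuceneSketch {

    public static void main(String[] args) throws Exception {
        // A filesystem store, matching lucene.store.type=file.
        Directory directory = FSDirectory.open(new File("/tmp/lucene-sketch"));
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_30);

        // Index a single document with an assumed "title" field.
        IndexWriter writer = new IndexWriter(
            directory, analyzer, IndexWriter.MaxFieldLength.UNLIMITED);
        Document document = new Document();
        document.add(new Field(
            "title", "Pluggable Enterprise Search",
            Field.Store.YES, Field.Index.ANALYZED));
        writer.addDocument(document);
        writer.optimize(); // the step lucene.optimize.interval schedules
        writer.close();

        // Search the index with the same analyzer.
        IndexSearcher searcher = new IndexSearcher(directory);
        Query query = new QueryParser(
            Version.LUCENE_30, "title", analyzer).parse("enterprise");
        TopDocs hits = searcher.search(query, 10);
        System.out.println("Hits: " + hits.totalHits);
        searcher.close();
    }
}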


Liferay Portal 6: Employ federated search, OpenSearch, CSZ search, maps search, and Web Content search

Packt
17 May 2010
12 min read
(Read more interesting articles on Liferay Portal 6 here.)

Federated search

Federated search is the simultaneous searching of multiple online databases or web resources, and it is an emerging feature of automated, web-based library and information retrieval systems. Here, federated search refers to the portal, where it is very useful to provide federated search abilities, such as searches for Blogs entries, users, organizations, Calendar entries, Bookmarks entries, Document Library documents, Image Gallery images, Message Boards messages, Wiki articles, Web Content articles, the Directory, and so on. The portal provides a set of search portlets, and in this section we're going to take an in-depth look at them.

The Search portlet

The Search portlet (portlet ID 3) is a JSR-286-compliant portlet that can be used for federated search. By default, the portal itself is the search provider. As shown in the following screenshot, the Search portlet provides a federated search against Blogs entries, users, organizations, Calendar entries, Bookmarks entries, Document Library documents, Image Gallery images, Message Boards messages, Wiki articles, Web Content articles, the Directory, and so on. In addition, the Search portlet provides a federated search against plugin portlets like the Alfresco Content portlet.

The following is an example of how to use the Search portlet:

1. Add the Search portlet to the page Home of the community Guest where you want to carry out a search, if the Search portlet isn't already present.
2. Enter the search criterion, for example My.
3. Click on the Search icon.

Note that when searching for assets, you have the ability to specify the scope of the search results: Everything or This Community. Everything generates search results that come from any group in the current portal instance, such as communities, organizations, and my community. This Community generates search results that come from the current group in the portal instance, such as the community Guest, the organization "Palm Tree Enterprise", or My Community. The search results cover Blogs entries, users, organizations, Calendar entries, Bookmarks entries, Document Library documents, Image Gallery images, Message Boards messages, Wiki articles, Web Content articles, the Directory, and so on. Additionally, search results include assets from plugin portlets like the Alfresco Content portlet.

As you can see, search results are displayed as titles with links. If you have the proper permission on an asset, you can click on the title of the asset (which is a link to it) and view the asset. If you don't have the proper permission on an asset, clicking on the title will bring up a permission error message.

What's happening?

The portal provides many portlets that support the OpenSearch framework, such as Message Boards, Blogs, Wikis, Directory, Document Library, Users, Organizations, and so on. In addition, plugins like the Alfresco Content portlet also support the OpenSearch framework. Normally, these portlets have the following OpenSearch framework configuration:

<open-search-class>class-name</open-search-class>

The Search portlet obtains an OpenSearch instance from each portlet that has the tag <open-search-class> defined. For example, the portlet Directory (portlet ID 11) allows users to search for other users, organizations, or user groups. OpenSearch has been specified for the portlet Directory in $PORTAL_ROOT_HOME/WEB-INF/liferay-portlet.xml as follows:
<open-search-class>com.liferay.portlet.directory.util.DirectoryOpenSearchImpl</open-search-class>

As shown in the preceding code, the open-search-class value must be a class that implements com.liferay.portal.kernel.search.OpenSearch, which is called to get search results in the OpenSearch standard. Besides the OpenSearch framework, the portal provides a UI taglib to display search results. In $PORTAL_ROOT_HOME/html/portlet/search/view.jsp, you can find the following code:

<liferay-ui:search />

For more details on the UI taglib <liferay-ui:search>, check the JSP files start.jsp and end.jsp under the folder $PORTAL_ROOT_HOME/html/taglib/ui/search. In addition, the portal scopes OpenSearch results through the UI taglib <liferay-ui:search>. For example, the scope of search results, namely Everything or This Community, is specified in $PORTAL_ROOT_HOME/html/taglib/ui/search/start.jsp as follows:

<select name="<%= namespace %>groupId">
<!-- ignore details -->
<option value="<%= group.getGroupId() %>" <%= (groupId != 0) ? "selected" : "" %>>
<liferay-ui:message key='<%= "this-" + (group.isOrganization() ? "organization" : "community") %>' />
</option>
</select>

As you can see, the value of This Community can be an organization or a community. By default, the OpenSearch implementation in the portal supports both formats, ATOM and RSS, with ATOM as the default. Therefore, the search results from plugin portlets must be returned in the ATOM format. For example, in the portlet Alfresco Content, the format of search results must be ATOM. Why? The Search portlet specifies the following code in $PORTAL_ROOT_HOME/html/portlet/search/open_search_description.jsp:

<OpenSearchDescription>
<!-- ignore details -->
<Url type="application/rss+xml" template="<%= themeDisplay.getPortalURL() %><%= PortalUtil.getPathMain() %>/search/open_search?keywords={searchTerms}&amp;p={startPage?}&amp;c={count?}&amp;format=rss" />
</OpenSearchDescription>

In addition, search results are displayed with pagination through the search container. Fortunately, the search container is configurable; the portal specifies the following properties in portal.properties:

search.container.page.delta.values=5,10,20,30,50,75
search.container.page.iterator.max.pages=25

As shown in the preceding code, the property search.container.page.delta.values sets the available values for the number of entries to be displayed per page. An empty value, or commenting out the value, will disable delta resizing. The default of 20 will apply in all cases. Note that you need to always include 20 because it is the default page size when no delta is specified, and the absolute maximum allowed delta value is 200. The property search.container.page.iterator.max.pages sets the maximum number of pages available before and/or after the currently displayed page. Of course, you can override these properties at any time in portal-ext.properties.

Configuration

As mentioned previously, OpenSearch in the Search portlet covers the ins and outs of Blogs, Calendar, Bookmarks, Document Library, Image Gallery, Message Boards, Wiki, Web Content, Directory, and so on.
Fortunately, the portal adds the ability to remove these portlets from the list of portlets searched by the Search portlet, as follows:

com.liferay.portlet.blogs.util.BlogsOpenSearchImpl=true
## ignore details
com.liferay.portlet.wiki.util.WikiOpenSearchImpl=true

As shown in the preceding code, you can set any of these properties to false in portal-ext.properties to exclude the portlet from being searched by the Search portlet.

Customization

In real cases, you may be required to use the Search portlet in different ways, and you are able to customize it. Here we're going to discuss how to use the Search portlet in Social Office and how to use the Search portlet in themes. Social Office overrides the UI taglib <liferay-ui:search> in the portlet so-portlet through JSP file hooks in $AS_WEB_APP_HOME/so-portlet/META-INF/custom_jsps/html/taglib/ui/search/start.jsp as follows:

<liferay-util:include page="/html/taglib/ui/search/start.portal.jsp" />
<c:if test="<%= group.isUser() %>">
<script type="text/javascript">
var searchOptions = jQuery('select[name=<%= namespace %>groupId] option')
searchOptions.each( //ignore details )
</script>
</c:if>

As shown in the preceding code, Social Office overrides the look and feel of the Search portlet; for example, it will remove search options. Of course, you can add the Search portlet as a runtime portlet in themes. You could add the Velocity template call $theme.search() in the theme, specifically in the VM file portal_normal.vm or a VM file included in portal_normal.vm. For example, Social Office specifies the following lines in the theme so-theme, in $AS_WEB_APP_HOME/so-theme/templates/navigation_top.vm:

#if ($is_signed_in)
<div class="my-search">$theme.search()</div>
#end

As shown in the preceding code, when the user signs in, Social Office shows the customized Search portlet in the "my-search" style.

OpenSearch in plugins

In general, the portal provides an OpenSearch framework so that a user can create an OpenSearch implementation in the plugin environment. The portal will try to call this OpenSearch implementation when you hit the Search portlet, which goes through all registered implementations and tries to create an instance of each. We can search content from the Alfresco repository, just as we did for Blogs, Bookmarks, Calendar, Directory, and so on, via the OpenSearch framework of the Search portlet.

How does it work?

First of all, we need to install the Alfresco Web Client, and then we need to deploy the portlet Alfresco Content. By following these three steps, you bring the Alfresco Web Client into Tomcat:

1. Download the latest Alfresco-Tomcat bundle from http://www.alfresco.com and install it to the folder $ALFRESCO_HOME.
2. Locate the Alfresco Web Client application alfresco.war under the folder $ALFRESCO_HOME, and drop it into the folder $TOMCAT_AS_DIR/webapps.
3. Create a database alfresco in MySQL and restart Tomcat:

drop database if exists alfresco;
create database alfresco character set utf8;
grant all on alfresco.* to 'alfresco'@'localhost' identified by 'alfresco' with grant option;
grant all on alfresco.* to 'alfresco'@'localhost.localdomain' identified by 'alfresco' with grant option;

Then we can deploy the plugin Alfresco Content portlet. The following is an example of how to bring the portlet Alfresco Content into the portal:

1. Download the WAR file ${alfresco.content.portlet.war} from http://liferay.cignex.com/palm_tree/book/0387/chapter12/netvibes-widget-portlet-6.0.0.1.war.
2. Drop the WAR file ${alfresco.content.portlet.war} into the folder $LIFERAY_HOME/deploy while the portal is running.

That's it! When you search for content again in the Search portlet, you will be able to see assets coming from the Alfresco Web Client. In addition, you will see a message like "Searched Alfresco Content, Blogs …" in the Search portlet.

Web services

As you can see, the Alfresco Content plugin displays content from the Alfresco repository. Two kinds of services are involved—web services and RESTful services. A web service is a software system designed to support interoperable machine-to-machine interaction over a network. With the portlet Alfresco Content, you can search or navigate content of the Alfresco repository via web services. Simply go to More | Configuration | Setup | Current in the portlet Alfresco Content first; then enter a User ID like "admin" and a password like "admin", and click on the Save button. Now you will be able to see the root folder "Company Home". The following property is specified in the portlet Alfresco Content at $AS_WEB_APP_HOME/alfresco-content-portlet/WEB-INF/classes/portlet.properties:

content.server.url=http://localhost:8080

As shown in the preceding code, the property content.server.url sets the location of the Alfresco server URL.

RESTful services

Representational State Transfer (REST) is a style of software architecture for distributed hypermedia systems. Alfresco not only provides the ability to expose its search engines via OpenSearch, but also provides an aggregate OpenSearch feature in the Alfresco Web Client through RESTful services. To summarize, the Alfresco RESTful services-based keyword search mimics the keyword search of the Alfresco Web Client. The following search URL template is used for OpenSearch in the plugin Alfresco Content:

http://<host>:<port>/alfresco/service/api/search/keyword.atom?q={searchTerms}&p={startPage?}&c={count?}&l={language?}

In the preceding template, the URL takes the following values:

- searchTerms: The keyword or keywords to search
- startPage (optional): The page number of search results desired by the client
- count (optional): The number of search results per page (the default is 10)
- language (optional): The locale to search with (an XML 1.0 language ID, for example en-GB)

Besides the RESTful APIs for OpenSearch, Alfresco provides the following RESTful APIs built as Web Scripts:

- Repository API Reference: Remote services for interacting with the Alfresco repository
- CMIS API Reference: Content Management Interoperability Services
- Portlets such as My Inbox and My Checked-Out for hosting in any portal
- Office Integration for hosting in Microsoft Office

Moreover, in order to allow the Alfresco Content portlet to support OpenSearch, the portlet sets the open-search-class value in $AS_WEB_APP_HOME/alfresco-content-portlet/WEB-INF/liferay-portlet.xml as follows:

<open-search-class>com.liferay.portlet.alfrescocontent.util.AlfrescoOpenSearchImpl</open-search-class>

Finally, the portlet sets the following values in $AS_WEB_APP_HOME/alfresco-content-portlet/WEB-INF/classes/portlet.properties, which are used to query Alfresco via OpenSearch:

open.search.enabled=true
## ignore details
open.search.path=/alfresco/service/api/search/keyword.atom

Of course, you can override the preceding properties according to your own environment, for example, the server domain name, port number, search user name, search password, and so on.
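To see what such an OpenSearch query looks like outside the portal, here is a minimal standalone Java sketch (an illustration only; this is not how the portlet issues the call, and the host, port, and keywords are assumptions) that fills in the URL template above and fetches the ATOM feed:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class KeywordSearchSketch {

    public static void main(String[] args) throws Exception {
        // Fill in the OpenSearch URL template; host and port are assumed.
        String keywords = URLEncoder.encode("sample document", "UTF-8");
        URL url = new URL("http://localhost:8080"
            + "/alfresco/service/api/search/keyword.atom"
            + "?q=" + keywords + "&p=1&c=10");

        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        // The response is an ATOM feed, which the Search portlet would
        // parse into search results; here we just print the raw XML.
        BufferedReader reader = new BufferedReader(
            new InputStreamReader(connection.getInputStream(), "UTF-8"));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
        reader.close();
        connection.disconnect();
    }
}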
CMIS

Besides web services and OpenSearch, Alfresco supports CMIS as well. Content Management Interoperability Services (CMIS) is a specification that defines how Enterprise Content Management (ECM) systems exchange content. It defines a domain model and a set of bindings, such as Web Services and RESTful AtomPub, that can be used by applications to work with one or more content management repositories. Alfresco supports the CMIS REST API binding, the CMIS web services API binding, and the Web Services WSDL. The portal has specified the following properties for CMIS in portal.properties:

cmis.credentials.username=none
cmis.credentials.password=none
cmis.repository.url=http://localhost:8080/alfresco/service/api/cmis
cmis.repository.version=1.0
cmis.system.root.dir=Liferay Home

As mentioned earlier, we can use the CMIS hook to configure a repository. In addition, we can use these properties in the Alfresco Content portlet and provide OpenSearch capabilities based on CMIS.
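As a hedged illustration of what a CMIS client conversation looks like, the following sketch uses the Apache Chemistry OpenCMIS client library (an assumption for illustration; neither the portal nor the portlet is claimed to use OpenCMIS, and the credentials and URL are placeholders) to connect to an AtomPub URL like the one above and run a CMIS query:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.chemistry.opencmis.client.api.ItemIterable;
import org.apache.chemistry.opencmis.client.api.QueryResult;
import org.apache.chemistry.opencmis.client.api.Repository;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.client.api.SessionFactory;
import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
import org.apache.chemistry.opencmis.commons.SessionParameter;
import org.apache.chemistry.opencmis.commons.enums.BindingType;

public class CmisQuerySketch {

    public static void main(String[] args) {
        // Connection parameters mirroring the cmis.* portal properties;
        // the credentials and URL here are assumptions.
        Map<String, String> parameters = new HashMap<String, String>();
        parameters.put(SessionParameter.USER, "admin");
        parameters.put(SessionParameter.PASSWORD, "admin");
        parameters.put(SessionParameter.ATOMPUB_URL,
            "http://localhost:8080/alfresco/service/api/cmis");
        parameters.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());

        // Open a session against the first repository the server exposes.
        SessionFactory factory = SessionFactoryImpl.newInstance();
        List<Repository> repositories = factory.getRepositories(parameters);
        Session session = repositories.get(0).createSession();

        // A CMIS query against the document type defined by the spec.
        ItemIterable<QueryResult> results =
            session.query("SELECT cmis:name FROM cmis:document", false);
        for (QueryResult result : results) {
            System.out.println((Object) result.getPropertyValueByQueryName("cmis:name"));
        }
    }
}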


Creating an NHibernate session to access a database within ASP.NET

Packt
14 May 2010
7 min read
NHibernate is an open source object-relational mapper or, simply put, a way to rapidly retrieve data from your database into standard .NET objects. This article teaches you how to create NHibernate sessions, which use database sessions to retrieve and store data in the database. In this article by Aaron B. Cure, author of NHibernate 2 Beginner's Guide, we'll talk about:

- What is an NHibernate session?
- How does it differ from a regular database session?
- Retrieving and committing data
- Session strategies for ASP.NET

(Read more interesting articles on NHibernate 2 Beginner's Guide here.)

What is an NHibernate session?

Think of an NHibernate session as an abstract or virtual conduit to the database. Gone are the days when you have to create a Connection, open the Connection, pass the Connection to a Command object, create a DataReader from the Command object, and so on. With NHibernate, we ask the SessionFactory for a Session object, and that's it. NHibernate handles all of the "real" sessions to the database, the connections, pooling, and so on. We reap all the benefits without having to know the underlying intricacies of all of the database backends we are trying to connect to.

Time for action – getting ready

Before we actually connect to the database, we need to do a little "housekeeping". Just a note: if you run into trouble (that is, your code doesn't work like the walkthrough), don't panic—see the troubleshooting section at the end of this Time for action section. Before we get started, make sure that you have all of the Mapping and Common files and that your Mapping files are included as "Embedded Resources". Your project should look as shown in the following screenshot:

1. The first thing we need to do is create a new project to use to create our sessions. Right-click on the Solution 'Ordering' and click on Add | New Project. For our tests, we will use a Console Application and name it Ordering.Console. Use the same location as your previous project.
2. Next, we need to add a few references. Right-click on the References folder and click on Add Reference. In VB.NET, you need to right-click on the Ordering.Console project and click on Add Reference.
3. Select the Browse tab and navigate to the folder that contains your NHibernate DLLs. You should have six files in this folder. Select the NHibernate.dll, Castle.Core.dll, Castle.DynamicProxy2.dll, Iesi.Collections.dll, log4net.dll, and NHibernate.ByteCode.Castle.dll files, and click on OK to add them as references to the project.
4. Right-click on the References folder (or the project folder in VB.NET) and click on Add Reference again. Select the Projects tab, select the Ordering.Data project, and click on OK to add the data tier as a reference to our console application.
5. The last thing we need to do is create a configuration object. We will discuss configuration in a later chapter, so for now it suffices to say that this will give us everything we need to connect to the database. Your current Program.cs file in the Ordering.Console application should look as follows:

using System;
using System.Collections.Generic;
using System.Text;

namespace Ordering.Console
{
    class Program
    {
        static void Main(string[] args)
        {
        }
    }
}

Or, if you are using VB.NET, your Module1.vb file will look as follows:

Module Module1

    Sub Main()

    End Sub

End Module

At the top of the file, we need to import a few references to make our project compile.
Right above the namespace or Module declarations, add the using/Imports statements for NHibernate, NHibernate.Cfg, and Ordering.Data:

using NHibernate;
using NHibernate.Cfg;
using Ordering.Data;

In VB.NET, you need to use the Imports keyword as follows:

Imports NHibernate
Imports NHibernate.Cfg
Imports Ordering.Data

Inside the Main() block, we want to create the configuration object that will tell NHibernate how to connect to the database. Inside your Main() block, add the following code:

Configuration cfg = new Configuration();
cfg.Properties.Add(NHibernate.Cfg.Environment.ConnectionProvider,
    typeof(NHibernate.Connection.DriverConnectionProvider).AssemblyQualifiedName);
cfg.Properties.Add(NHibernate.Cfg.Environment.Dialect,
    typeof(NHibernate.Dialect.MsSql2008Dialect).AssemblyQualifiedName);
cfg.Properties.Add(NHibernate.Cfg.Environment.ConnectionDriver,
    typeof(NHibernate.Driver.SqlClientDriver).AssemblyQualifiedName);
cfg.Properties.Add(NHibernate.Cfg.Environment.ConnectionString,
    @"Server=(local)\SQLExpress;Database=Ordering;Trusted_Connection=true;");
cfg.Properties.Add(NHibernate.Cfg.Environment.ProxyFactoryFactoryClass,
    typeof(NHibernate.ByteCode.LinFu.ProxyFactoryFactory).AssemblyQualifiedName);
cfg.AddAssembly(typeof(Address).AssemblyQualifiedName);

For a VB.NET project, add the following code:

Dim cfg As New Configuration()
cfg.Properties.Add(NHibernate.Cfg.Environment.ConnectionProvider, _
    GetType(NHibernate.Connection.DriverConnectionProvider).AssemblyQualifiedName)
cfg.Properties.Add(NHibernate.Cfg.Environment.Dialect, _
    GetType(NHibernate.Dialect.MsSql2008Dialect).AssemblyQualifiedName)
cfg.Properties.Add(NHibernate.Cfg.Environment.ConnectionDriver, _
    GetType(NHibernate.Driver.SqlClientDriver).AssemblyQualifiedName)
cfg.Properties.Add(NHibernate.Cfg.Environment.ConnectionString, _
    "Server=(local)\SQLExpress;Database=Ordering;Trusted_Connection=true;")
cfg.Properties.Add(NHibernate.Cfg.Environment.ProxyFactoryFactoryClass, _
    GetType(NHibernate.ByteCode.LinFu.ProxyFactoryFactory).AssemblyQualifiedName)
cfg.AddAssembly(GetType(Address).AssemblyQualifiedName)

Lastly, right-click on the Ordering.Console project and select Set as Startup Project, as shown in the following screenshot. Press F5 or Debug | Start Debugging to test your project. If everything goes well, you should see a command prompt window pop up and then go away. Congratulations! You are done!

However, it is more than likely that you will get an error on the line that says cfg.AddAssembly(). This line instructs NHibernate to "take all of my HBM.xml files and compile them". This is where we will find out how well we hand-coded our HBM.xml files. The most common error that will show up is "MappingException was unhandled". If you get a mapping exception, see the next step for troubleshooting tips.

Troubleshooting: NHibernate will tell us where the errors are and why they are an issue. The first step in debugging these issues is to click on the View Detail link under Actions on the error pop-up. This will bring up the View Detail dialog, as shown in the following screenshot. If you look at the message, NHibernate says that it Could not compile the mapping document: Ordering.Data.Mapping.Address.hbm.xml. So now we know that the issue is in our Address.hbm.xml file, but this is not very helpful. If we look at the InnerException, it says "Problem trying to set property type by reflection".
Still not a specific issue, but if we click on the + next to the InnerException, we can see that there is an InnerException on this exception. The second InnerException says "class Ordering.Data.Address, Ordering.Data, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null not found while looking for property: Id". Now we are getting closer. It has something to do with the Id property. But wait, there is another InnerException, which says "Could not find a getter for property 'Id' in class 'Ordering.Data.Address'". How could that be? Looking at my Address.cs class, I see:

using System;
using System.Collections.Generic;
using System.Text;

namespace Ordering.Data
{
    public class Address
    {
    }
}

Oops! Apparently I stubbed out the class but forgot to add the actual properties. I need to put the rest of the properties into the file, which looks as follows:

using System;
using System.Collections.Generic;
using System.Text;

namespace Ordering.Data
{
    public class Address
    {
        #region Constructors

        public Address()
        {
        }

        public Address(string Address1, string Address2, string City,
            string State, string Zip) : this()
        {
            this.Address1 = Address1;
            this.Address2 = Address2;
            this.City = City;
            this.State = State;
            this.Zip = Zip;
        }

        #endregion

        #region Properties

        private int _id;
        public virtual int Id
        {
            get { return _id; }
            set { _id = value; }
        }

        private string _address1;
        public virtual string Address1
        {
            get { return _address1; }
            set { _address1 = value; }
        }

        private string _address2;
        public virtual string Address2
        {
            get { return _address2; }
            set { _address2 = value; }
        }

        private string _city;
        public virtual string City
        {
            get { return _city; }
            set { _city = value; }
        }

        private string _state;
        public virtual string State
        {
            get { return _state; }
            set { _state = value; }
        }

        private string _zip;
        public virtual string Zip
        {
            get { return _zip; }
            set { _zip = value; }
        }

        private Contact _contact;
        public virtual Contact Contact
        {
            get { return _contact; }
            set { _contact = value; }
        }

        #endregion
    }
}

By continuing to work my way through the errors that are presented in the configuration and starting the project in Debug mode, I can handle each exception until there are no more errors.

What just happened?

We have successfully created a project to test out our database connectivity, and an NHibernate Configuration object which will allow us to create sessions, session factories, and a whole litany of NHibernate goodness!


An Introduction to Flash Builder 4 - Network Monitor

Packt
13 May 2010
3 min read
Adobe Flash Builder 4 (formerly known as Adobe Flex Builder), which no doubt needs no introduction, has become a de facto standard in rich internet application development. The latest version is considered a groundbreaking release, not only for its dozens of new and enhanced features but also for its new component architectures such as Spark, the designer-developer workflow, integration with Flash and Catalyst, data-centric development, unit testing, debugging enhancements, and so on. In this article, we'll get acquainted with a brand new premium feature of Adobe Flash Builder 4 called the Network Monitor.

The Network Monitor enables developers to inspect and monitor client-server traffic in the form of textual, XML, AMF, or JSON data within Adobe Flash Builder 4. It shows real-time data traffic between the application and a local or remote server, along with a wealth of other information about the transferred data, such as its status, size, body, and so on. If you have used Firebug (a Firefox plugin), then you will appreciate the Network Monitor too. It is extremely handy during HTTP errors for checking the response, which is not accessible from the fault event object.

Creating a Sample Application

Enough talking; let's start and create a very simple application which will serve as groundwork to explore the Network Monitor. Assuming you are already equipped with basic knowledge of application creation, we will move on quickly without going through minor details. This sample application will read the Packt Publishing official RSS feed and display every news title along with its publishing date in a DataGrid control. The Network Monitor will go into action when the data request is triggered.

1. Go to the File menu and select New > Flex Project. Insert information in the New Flex Project dialog box according to the following screenshot and hit Enter.
2. In Flex 4, all the non-visual MXML components, such as RPC components, effects, validators, formatters, and so on, are declared inside the <fx:Declarations> tag. Declare an HTTPService component inside the <fx:Declarations> tag. Set its id property to newsService, its url property to https://www.packtpub.com/rss.xml, its showBusyCursor property to true, and its resultFormat property to e4x. Generate result and fault event handlers, though only the result event handler will be used. Your HTTPService code should look like the following:

<s:HTTPService id="newsService"
    url="https://www.packtpub.com/rss.xml"
    showBusyCursor="true"
    resultFormat="e4x"
    result="newsService_resultHandler(event)"
    fault="newsService_faultHandler(event)"/>

3. Now set the application layout to VerticalLayout:

<s:layout>
    <s:VerticalLayout verticalAlign="middle" horizontalAlign="center"/>
</s:layout>

4. Add a Label control and set its text property to Packt Publishing. Add a DataGrid control, set its id property to dataGrid, and add two DataGridColumn columns to it. Set the first column's dataField property to title and its headerText to Title; set the second column's dataField property to pubDate and its headerText to Date. Your controls should look like the following:

<s:Label text="Packt Publishing" fontWeight="bold" fontSize="22"/>
<mx:DataGrid id="dataGrid" width="600">
    <mx:columns>
        <mx:DataGridColumn dataField="title" headerText="Title"/>
        <mx:DataGridColumn dataField="pubDate" width="200" headerText="Date"/>
    </mx:columns>
</mx:DataGrid>

5. Finally, add the following code to newsService's result handler:

var xml:XML = XML(event.result);
dataGrid.dataProvider = xml..item;


Creating a New Publication using Mobile Database Workbench with Oracle Mobile Server

Packt
13 May 2010
8 min read
If you are a mobile device user, it is likely that you have performed a sync at one point in time (with or without being aware of it). We are all familiar with the convenience of being able to just dock our PDA devices and have our calendars, tasks, and contacts automatically synced to our desktop machines. The synchronization process is a necessity for any type of mobile device, whether it's a smart phone, an iPhone, or a Pocket PC. The core of this necessity is simple—people need to have access to their data when they're on the move and when they're back at the office, and this data needs to be consistent wherever they're accessing it from.

In a business scenario, the importance of this necessity increases manifold—it's not just about your personal data anymore. The data you've keyed in on your PDA needs to be synced to the server so that it can be shared with other users, used to generate reports, or even sent for number-crunching. With hundreds of mobile users synchronizing their data and server-side applications updating this data at the same time, things can quickly get messy. The synchronization process has to ensure that conflicts are gracefully handled, that auto-generated numbers don't overlap, that each user only syncs down the data they're meant to see, and so on.

The Oracle Mobile Server can be a bit tedious to set up for first-time beginners. Once you get going, however, it can be a powerful tool that can manage not only database synchronization but also mobile application deployment. A publication represents an application (and its database) in the Oracle Mobile Server. You can create a publication through the Mobile Database Workbench tool provided with Oracle Mobile Server.

Creating a new mobile project

1. Launch the Mobile Database Workbench tool from Start | All Programs | Oracle Database Lite 10g | Mobile Database Workbench.
2. Create a new project by clicking on the File | New | Project menu item in the Mobile Database Workbench window. A project creation wizard will run. Specify a name for your project and a location to store the project files.
3. The next screen will request you to key in the mobile repository particulars. Specify your mobile repository connection settings, and use the mobile server administrator password you specified earlier to log in.
4. In the next step, specify a schema to use for the application. As you've created the master tables in the MASTER schema, you can specify your MASTER account username and password here.
5. The next screen will show a summary of what you've configured so far. Click the Finish button to generate the project. If your project is generated successfully, you should be able to see your project and a tree list of its components in the left pane.

Adding publication items to your project

Each publication item corresponds to a database table that you intend to publish. For example, if your application contained five tables, you would need to create five publication items. Let's create the publication items now for the Accounts, AccountTasks, AccountHistories, AccountFiles, and Products tables.

1. Click on the File | New | Publication Item menu item to launch the Publication Item wizard.
2. In the first step of the wizard, specify a name for the publication item (use the table name as a rule of thumb). There are two options here worth noting:

Synchronization refresh type

This refers to the type of refresh used for a particular table:
This is the most common mode of refresh used.
Complete: In this type of refresh, all content is synced down from the server during each sync. It is comparatively more time consuming and resource intensive. You might use this option with tables containing small lists of data that change very frequently.
Queue based: This is a custom refresh in which the developer defines the entire logic for the sync. It can be used for custom scenarios that may not exactly require synchronization; for instance, you might need to simply collect data on the client and have it stored at the server. In such a case, the queue-based refresh works better because you can bypass the overhead of conflict detection.

Enable automatic synchronization
Automatic synchronization allows a sync to be initiated automatically in the background on the mobile device when a set of rules is met. For example, you might decide to use automatic synchronization if you wanted to spread out synchronization load over time and reduce peak load on the server.

In the next step, choose the table that you want to map the publication item to. Select the MASTER schema, and click the Search button to retrieve a list of the tables under this schema. Locate the Accounts table and highlight it. In the next screen, you will need to select all the columns you need from the Accounts table. As you need to sync every single column from the snapshot to the master table, include all columns. Move all columns from the Available list to the Selected list using the arrow buttons and click on the Next button to proceed.

The next step is one of the most important steps in creating a publication item. The SQL statement shown here defines how data is retrieved from the Accounts table at the server and synced down to the snapshot on the mobile device. This SQL statement is called the Publication Item Query. The first thing you need to do is to edit the default query. You need to include a filter to sync down only the accounts owned by the specific mobile device user. You can use a filter that looks like the following:

WHERE OwnerID = :OwnerID

The following screenshot shows how your Publication Item Query will look after editing. If any part of it is defined or formatted incorrectly, you will receive a notification. Click on Next after that to get to the summary screen, then click on the Finish button to generate the publication item.

After creating the publication item for the Accounts table, let's move on to a child table: the AccountTasks table. Create another publication item in the same fashion that maps to the AccountTasks table. At Step 4 of the wizard, the Publication Item Query that you need to specify will be a little different. The AccountTasks table does not contain the OwnerID field, so how do we filter what gets synced down to each specific mobile device? You obviously don't want to sync down every single record in this table, including those that are not meant to be accessible by the specific mobile device user. One way to still apply the OwnerID filter is to use a table join with the Accounts table.
You can specify such a table join in the following manner:

SELECT "TASKID", A."ACCOUNTGUID", "TASKSUBJECT", "TASKDESCRIPTION",
       "TASKCREATED", "TASKDATE", "TASKSTATUS"
FROM MASTER.ACCOUNTTASKS A, MASTER.ACCOUNTS B
WHERE A.ACCOUNTGUID = B.ACCOUNTGUID AND B.OWNERID = :OwnerID

If you try to save the above Publication Item Query in the Edit Query box, it may prompt you to select the primary base object for the publication item (as shown in the following screenshot). This should be set to AccountTasks, because we are creating a publication item that maps to this table. If you choose the Accounts table again, you will end up with two publication items that map to the same Accounts table, which will cause problems when you attempt to add both items to a publication. If you have typed in everything correctly, you will be able to see your Publication Item Query show up in the Query tab shown as follows. You can then click on the Next and Finish buttons to complete the wizard.

Now that you've seen how to create a publication item based on a child table, repeat the same steps for the other child tables: AccountFiles and AccountHistories. The last table, the Products table, deserves special mention because it's different. You do not need a filter for this table, simply because every mobile device user needs to see the full list of products. You can, therefore, use the default Publication Item Query for the Products table:

SELECT "PRODUCTID", "PRODUCTCODE", "PRODUCTNAME", "PRODUCTPRICE"
FROM MASTER.Products

After you've done this, you can move on to creating the "sequences" necessary in this mobile application.
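For reference, the edited Publication Item Query for the Accounts table, which appears only as a screenshot in the original article, might look like the following minimal sketch. Every column name other than ACCOUNTGUID and OWNERID is an illustrative assumption, so substitute the actual columns you selected in the wizard:

SELECT "ACCOUNTGUID", "ACCOUNTNAME", "OWNERID" -- remaining columns omitted; include every column chosen in the wizard
FROM MASTER.ACCOUNTS
WHERE OwnerID = :OwnerID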

How does OCS Inventory NG meet our needs?

Packt
12 May 2010
8 min read
OCS Inventory NG stands for Open Computer and Software Inventory Next Generation, and it is the name of an open source project that was started back in late 2005. The project matured into its first final release at the beginning of 2007. It is an undertaking that is still actively maintained, fully documented, and has support forums. It has all of the qualities that an open source application should have in order to be competitive.

There is a tricky part when it comes to open source solutions. Proposing them and getting them accepted by the management requires quite a bit of research. One side of the coin is that they are always favorable: everyone appreciates cutting down licensing costs. The other side is that you cannot always take their future support for granted. In order to take an educated guess on whether an open source solution could be beneficial for the company, we need to look at the following criteria: how frequently the project is updated, the download count, the feedback of the community, whether the application is thoroughly documented, and whether active community support exists. OCS-NG occupies a dominant position among open source projects in the area of inventorying computers and software.

Brief overview of OCS Inventory NG's architecture
The architecture of OCS-NG is based on the client-server model. The client program is called a network agent. These agents need to be deployed on the client computers that we want to include in our inventory. The management server is composed of four individual server roles: database server, communication server, deployment server, and the administration console server. More often than not, these can be run from the same machine. OCS Inventory NG is cross-platform and supports most Unices, BSD derivatives (including Mac OS X), and all kinds of Windows-based operating systems. The server can also be run on either platform. As an open source project, it is based on the popular LAMP or WAMP solution stack. This means that the main server-side prerequisites are the Apache web server, the MySQL database server, and PHP: the same components that make up a fully functional web server. The network agents communicate with the management server over the standard HTTP protocol, and the data that is exchanged is formatted under XML conventions. The screenshot below gives a general overview of the way clients communicate with the management server's sub-server components.

Rough performance evaluation of OCS-NG
The data collected for a fully inventoried computer amounts to around 5 KB. That is a small amount, and it will neither overload the server nor create network congestion. It is often said that around one million systems can be inventoried daily on a 3 GHz dual-Xeon server with 4 GB of RAM without any issues. Any modest old-generation server should suffice for the inventory of a few thousand systems. When scalability is necessary, such as with over 10,000-20,000 inventoried systems, it is recommended to split the four server-role components across two individual servers. Should this be the case, the database server is installed on the same machine as the communication server, while the administration server and the deployment server go on another system with a database replica. Any other combination is also possible. Although distributing the server components is possible, very rarely do we really need to do that.
In this day and age, we can seamlessly virtualize four or more servers on any dual- or quad-core new generation computer. OCS-NG's management server can be one of those VMs. If necessary, distributing server components later is still possible.

Meeting our inventory demands
First and foremost, OCS Inventory NG network agents are able to collect all of the must-have attributes of a client computer and many more. Let's do a quick checkup on these:

BIOS: System serial number, manufacturer, and model; BIOS manufacturer, version, and date
Processors: Type, count (how many of them), manufacturer, speed, and cache
Memory: Physical memory type, manufacturer, capacity, and slot number; total physical memory; total swap/paging memory
Video: Video adapter: chipset/model, manufacturer, memory size, speed, and screen resolution; display monitor: manufacturer, description, refresh rate, type, serial number, and caption
Storage/removable devices: Manufacturer, model, size, type, speed (all when applicable); drive letter, filesystem type, partition/volume size, free space
Network adapters/telephony: Manufacturer, model, type, speed, and description; MAC and IP address, mask and IP gateway, DHCP server used
Miscellaneous hardware: Input devices (keyboard, mouse, and pointing device); sound devices (manufacturer name, type, and description); system slots (name, type, and designation); system ports (type, name, caption, and description)
Software information: Operating system (name, version, comments, and registration info); installed software (name, publisher, version, from the Add/Remove Programs or Programs and Features menu); custom-specified registry queries (applicable to Windows OS)

Not only computers but also networking components can be inventoried. OCS Inventory NG detects and collects network-specific information about these (such as MAC address, IP address, subnet mask, and so on). Later on, we can set labels and organize them appropriately. Where OCS-NG comes as a surprise is its unique capability to inventory hosts that are not on the network. The network agent can be run manually on these offline hosts, and the results are then imported into the centralized management server. Its features include intelligent auto-discovery and the ability to detect hosts that have not yet been inventoried. It is based on popular network diagnostic and auditing tools such as nmap. The algorithm can decide whether a host is an actual workstation computer or rather just a printer; if it's the former, the agent needs to be deployed. The network scanning is not done by the management server; it is delegated to the network agents. This way the network is never overcrowded or congested. If the management server itself scanned populated networks spanning different subnets, the process would be disastrous. Delegation keeps the process seamless and simply practical. Another interesting part is the election mechanism, based on which the server is able to decide the most suited client to carry out the discovery. A rough sketch of this in action can be seen in the next figure.

Set of functions and what it brings to the table
At this moment, we're fully aware that the kind of information that the network agents feed into the database is relevant and more than enough for our inventorying needs. Nevertheless, we won't stop here. It's time to analyze and present its web interface.
We will also shed a bit of light on the set of features it supports out of the box, without any plugins or other mods yet. There will be a time for those too.

Taking a glance at the OCS-NG web interface
The web interface of OCS Inventory NG is slightly old-fashioned. One direct advantage of this is that the interface is really snappy: queries are displayed quickly, and the UI won't lag. The other side of the coin is that intuitiveness is not the interface's strongest point, and getting used to it might take a while. At least it does not make you feel that the interface is overcrowded. However, the location and naming of buttons leave plenty of room for improvement. Some people might prefer to see captions below the shortcuts, as the meaning of the icons is not always obvious. After the first few minutes, we will easily get used to them. A picture is worth a thousand words, so let's exemplify our claims. The buttons that appear in the previous screenshot, from left to right, are the following:

All computers
Tag/Number of PC repartition
Groups
All softwares
Search with various criteria

In the same fashion, the buttons in the previous screenshot stand for the following features:

Deployment
Security
Dictionary
Agent
Configuration (this one is intuitive!)
Registry (self-explanatory)
Admin Info
Duplicates
Users
Local Import
Help

When you click on the name of a specific icon, the drop-down menu appears right below the cursor. All in all, the web interface is not that bad. Its strongest point lies in its snappiness and the wealth of information presented in a fraction of a second, rather than its design or intuitiveness. We appreciate its overall simplicity and its quick response time. We are often struggling with new-generation Java-based and AJAX-based overcrowded interfaces of network equipment that seem slow as hell. So, we'll choose OCS Inventory NG's UI over those anytime!


Introduction to IT Inventory and Resource Management

Packt
12 May 2010
8 min read
For the past decade or so, we have come to realize that computers are an indispensable necessity. They're around us everywhere, from our comfortable households to rovers on other planets. Currently, it is not uncommon at all to have more than a few dozen office computers and other IT equipment in the infrastructure of a small company whose business is not directly related to IT. It should not surprise anyone that in business environments there has to be some streamlined inventory, especially when we consider that the network might have a total of several hundred, if not thousands, of workstation computers, servers, portable devices, and other office equipment such as printers, scanners, and networking components.

Resource management, in its essence, when viewed from an IT perspective, is providing a method to gather and store all kinds of information about items in our infrastructure, and then supporting the means to maintain the said inventory. It also covers performing routine tasks based on the collected data, such as generating reports, locating relevant information easily (like where a memory module with a specific model number resides), auditing the software installed on workstation computers, and more.

Our plan of action is going to be pretty straightforward: we analyze the IT inventorying needs and some general requisites when it comes to managing those assets. What's more, we'll be presenting the client-server model that is the underlying foundation on which most centralized management solutions work. This is when OCS Inventory NG pops into the picture, saving the day. Soon we will see why. We will get to know more about OCS Inventory NG soon; for now it's enough to realize that it's an open source project. No matter how successful a company is, open source solutions are always appreciated by the IT staff and management, as long as the project is actively developed, fairly popular, well documented, provides community support, and meets their needs. Among other things, open source projects tend to end up modular and flexible.

Inventorying requirements in the real world
One of the general requirements of an IT inventory is to be efficient and practical. The entire process should be seamless to the clients and require limited (or no) user interaction. Once set up, it should update the inventory database automatically, based on the latest changes, without requiring anyone to do so manually. Thereafter, the data gathered ought to be organized and labeled the way we want. Businesses everywhere have come to realize that process integration is the best method for querying, standardizing, and organizing information about the infrastructure. The age of hi-tech computing made this possible by speeding up routine tasks and saving employee time, eliminating bureaucracy and the unnecessary filing of papers that all lead to frustration and waste of resources. Implementing integrated processes can change the structure and behavior of an organization. But finding the correct integration often becomes a dilemma.

Feasible solution to avoid inevitable havoc
Drifting back to the case of the IT department, the necessity of having an integrated and centralized solution to manage numerous systems and other hardware equipment becomes obvious. The higher the number of systems and the bigger the volume to be managed, the more easily the situation can get out of control, thus leading to crisis.
Everyone runs around in panic like headless zombies, trying to figure out who can be held responsible and what there is to do in order to avoid such scenarios. Taking a rational approach soon enough can improve the stability of entire organizations. Chances are you already know this, but system administrators usually tend to dislike working with paper: filling in forms, storing them purely for archiving needs, and then, when they least expect it, having to dig out relevant information. A system like that won't make anyone happy. A centralized repository, in some shape or form of a database, gives almost instant access to results whenever such a query happens. That it is always up to date and reflects the actual state of the infrastructure can be guaranteed by implementing an updating mechanism. Later on, once the database is in a healthy state and the process is integrated, tried, and proven, it won't make any significant difference whether you are managing dozens of computers or thousands. A well-designed integrated process is future proof and scalable. This way, it won't become a setback if and when the company decides to expand.

Streamlining software auditing and license management
As mentioned earlier, it is important to understand that auditing workstation machines cannot be neglected. In certain environments, the users or employees have limited access and work within a sort of enclosed program area, and they can do little to nothing outside of their specialization. But there are situations when the employees are supposed to have administrative access and full permissions. It is for the good of both the user and the company to monitor and pay attention to what happens within each and every computer. An up-to-par auditing mechanism can integrate the system of license management as well. The persons responsible for this can track the total number of licenses used and owned by the company, calculate the balance, get notified when this number is about to run out, and so forth. It isn't all that uncommon to automate the purchasing of licenses either. The license management process description varies from firm to firm, but usually it's something similar to the following: a user requests a license, a supervisor agrees, and the request arrives at the relevant IT staff. After this step, the license request gets analyzed and, based on the result, the license is either handed out or ordered/acquired if necessary. If the process is not automated, all this would involve paperwork, and soon you will see frustrated employees running back and forth through departments asking who else needs to sign this paper. Automating the process and printing the end result is elegant and takes no trouble. The responsible department can then store the printed document for archiving purposes, if required. But the key to the process lies in integration. Inventorying can help here too.

More uses of an integrated IT inventory solution
The count of office consumables can also be tracked and maintained. This is a trickier process because it cannot be made totally unattended, short of installing some sort of sensor to track the count of printer cartridges inside the office furniture or the warehouse. However, you can update this field each time the said item gets restocked. A centralized method for consumables means the responsible parties can get notified before running out of stock. Once again, this eliminates unexpected scenarios and unnecessary tasks.
The beauty of centralized management solutions in the IT world is that, if they are done right, they can open doors to numerous other activities as well. For example, in the case of workstation PCs, the integrated process can be expanded into providing remote administration and similar activities to be carried out remotely on the client machine. Package deployment and execution of scripts are just a few distinctive examples. Think of it like this: a license is granted, the package is deployed, and a script is run to ensure proper registration of the application, if required. System administrators can usually help fix employees' common issues via remote execution of scripts. Surely there are other means to administer the machines, but we're focusing on all-in-one integrated solutions. Another possibility is integrating the help-desk and ticketing system within the centralized inventory's management control panel as well. This way, when an employee asks for help or reports a hardware issue, the system administrator can take a look at what's inside that system (hardware specifications, software installed, and so on). The system administrator therefore gets to know the situation beforehand and can use the right tools to troubleshoot the said issue.

Gathering relevant inventory information
We can conclude that in order to have a complete inventory, on top of which we can build and implement other IT-related and administrative tasks, we need at least the following (a small sketch of how these attributes might be modeled in code follows the list):

Collecting relevant hardware information in the case of workstation computers:
Manufacturer, serial number, model number of every component
When applicable, some of the following: revision number, size, speed, memory, type, description, designation, connection port, interface, slot number, driver, MAC and IP address, and so on

Collecting installed software/OS (licensing) information:
Operating system: name, version, and registration information
Application name, publisher, version, location
Custom queries from the Windows registry (if applicable)

Collecting information about networking equipment and office peripherals:
Manufacturer, serial number, model, type of component, and so on
MAC and IP address
When applicable: revision number, firmware, total uptime, and so on
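As an illustrative sketch only, the checklist above could be modeled along these lines; all type and field names are assumptions for the sake of the example, not part of any OCS-NG API:

using System.Collections.Generic;

public class HardwareComponent
{
    public string Manufacturer;
    public string SerialNumber;
    public string ModelNumber;
    // Optional attributes, populated only when applicable.
    public string RevisionNumber;
    public string MacAddress;
    public string IpAddress;
}

public class SoftwareEntry
{
    public string Name;
    public string Publisher;
    public string Version;
    public string Location;
}

public class InventoryRecord
{
    // One record per inventoried machine or peripheral.
    public List<HardwareComponent> Hardware = new List<HardwareComponent>();
    public List<SoftwareEntry> Software = new List<SoftwareEntry>();
}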


Setting up MSMQ on your Mobile and Writing MSMQ Application with .NET Compact Framework 3.5

Packt
29 Apr 2010
3 min read
Let's get started.

Setting up Microsoft Message Queuing (MSMQ) on your mobile device
MSMQ is not installed by default on the Windows Mobile platform. This section will guide you through installing MSMQ on your mobile device or device emulator. You will first need to download the Redistributable Server Components for Windows Mobile 5.0 package (which can also be used for Windows Mobile 6.0) from this location: http://www.microsoft.com/downloads/details.aspx?FamilyID=cdfd2bb2-fa13-4062-b8d1-4406ccddb5fd&displaylang=en

After downloading and unzipping this file, you will have access to the MSMQ.arm.cab file in the following folder: Optional Windows Mobile 5.0 Server Components\msmq

Copy this file via ActiveSync to your mobile device and run it on the device. This package contains two applications (and a bunch of other DLL components) that you will be using frequently on the device:

msmqadm.exe: This is the command-line tool that allows you to start and stop the MSMQ service on the mobile device and also configure MSMQ settings. It can also be invoked programmatically from code.
visadm.exe: This tool does the same thing as above, but provides a visual interface.

These two files will be unpacked into the \Windows folder of your mobile device. The following DLL files will also be unpacked into the \Windows folder:

msmqd.dll
msmqrt.dll

Verify that these files exist. The next thing you need to do is to change the name of your device (if you haven't done so earlier). In most cases, you are probably using the Windows Mobile Emulator, which comes with an unassigned device name by default. To change your device name, navigate to Settings | System | About on your mobile device. You can change the device name in the Device ID tab.

At this point, you have the files for MSMQ unpacked, but it isn't exactly installed yet. To do this, you must invoke either msmqadm.exe or visadm.exe. Launch the following application: \Windows\visadm.exe

A pop-up window will appear. This window contains a text box and a Run button that allow you to type in the desired command and execute it. The first command you need to issue is the register install command. Type in the command and click the Run button. No message will be displayed in the window. This command will install MSMQ (as a device driver) on your device. Next, run the following commands in the given order (one after the other):

register: You will need to run the register command one more time (without the install keyword) to create the MSMQ configuration keys in the registry.
enable binary: This command enables the proprietary MSMQ binary protocol for sending messages to remote queues.
enable srmp: This command enables SRMP (SOAP Reliable Messaging Protocol), for sending messages to remote queues over HTTP.
start: This command starts the MSMQ service.

Verify that the MSMQ service has been installed successfully by clicking on the Shortcuts button and then clicking the Verify button in the ensuing pop-up window. You will be presented with a pop-up dialog as shown in the following screenshot:

MSMQ log information
If you scroll down in this same window, you will find the Base Dir path, which contains the MSMQ auto-generated log file. This log file, named MQLOGFILE by default, contains useful MSMQ-related information and error messages. After you've done the preceding steps, you will need to do a soft reset of your device. The MSMQ service will automatically start upon boot-up.
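With the service installed and started, queue access from managed code goes through the System.Messaging namespace (add a reference to System.Messaging.dll). The article's companion listing is not reproduced here, so the following is only a minimal sketch under stated assumptions: the queue name is made up for illustration, and since the .NET Compact Framework implements only a subset of the desktop MessageQueue API, treat it as a starting point rather than a definitive implementation.

using System;
using System.Messaging;

public static class QueueDemo
{
    // Hypothetical local private queue, used for illustration only.
    private const string QueuePath = @".\private$\DemoQueue";

    public static void SendMessage(string body)
    {
        using (MessageQueue queue = new MessageQueue(QueuePath))
        {
            // Serialize the body as XML so the receiver can deserialize it as a string.
            queue.Formatter = new XmlMessageFormatter(new Type[] { typeof(string) });
            queue.Send(body, "Demo message");
        }
    }

    public static string ReceiveMessage()
    {
        using (MessageQueue queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new Type[] { typeof(string) });
            // Blocks until a message arrives or the timeout elapses.
            Message message = queue.Receive(TimeSpan.FromSeconds(10));
            return (string)message.Body;
        }
    }
}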


Service Oriented JBI: Invoking External Web Services from ServiceMix

Packt
29 Apr 2010
4 min read
You can use XFire to create stub classes based on the WSDL exposed by your external web service. You can then inject the stub into your JSR181 SU. The stub will be used by the proxy to generate the exchange with the HTTP provider (which should be referenced as the "service"). Using the JBI proxy, it is now possible to invoke web services in the RPC style from within the JBI bus. For this we leverage the stub classes generated from the web service WSDL using Axis.

Web Service Code Listing
We are interested in the proxy setup to access a remote web service, hence we will not discuss the details of the web service deployment in this section. Instead, we will just browse through the important web service interfaces and the associated WSDL, and then move on to binding the proxy. The web service implements the IHelloWeb remote interface, which in turn extends the IHello business interface. They are listed as follows:

IHello.java: IHello is a simple business interface (BI), having a single business method, hello.

public interface IHello
{
    String hello(String param);
}

IHelloWeb.java: In order to deploy a web service, we need an interface complying with the Java RMI semantics, and IHelloWeb serves this purpose.

public interface IHelloWeb extends IHello, java.rmi.Remote {}

HelloWebService.wsdl: The main sections of the web service WSDL are shown as follows:

<?xml version="1.0" encoding="UTF-8"?>
<wsdl:definitions targetNamespace="http://AxisEndToEnd.axis.apache.binildas.com" ...>
  <wsdl:types ... />
  <wsdl:message ... />
  <wsdl:portType name="IHelloWeb">
  </wsdl:portType>
  <wsdl:binding name="HelloWebServiceSoapBinding" type="impl:IHelloWeb">
  </wsdl:binding>
  <wsdl:service name="IHelloWebService">
    <wsdl:port binding="impl:HelloWebServiceSoapBinding" name="HelloWebService">
      <wsdlsoap:address location="http://localhost:8080/AxisEndToEnd/services/HelloWebService"/>
    </wsdl:port>
  </wsdl:service>
</wsdl:definitions>

This is enough about the web service, so we will move on to the next step.

Axis Generated Client Stubs
We use the org.apache.axis.wsdl.WSDL2Java class in the wsdl2java task to generate client-side binding classes and stubs. The main classes are available in the folder ch13\JbiProxy\3_AccessExternalWebService\1_wsgen\src and they are as follows:

HelloWebService.java
HelloWebServiceSoapBindingStub.java
IHelloWeb.java
IHelloWebService.java
IHelloWebServiceLocator.java

All the above artifacts are Axis-generated client-side stubs, hence we will not look into their details here. Instead, let us look into the structural relationship between the various developer-created and Axis-generated artifacts shown in the following figure. Referring to the diagram, let us understand the relevant artifacts. Here our aim is to generate a JBI proxy for an externally bound web service. We are doing this using the following classes:

ITarget.java: This interface is synonymous with the BI IHello, having a single business method, hello. We want to auto-route request-response through the JBI proxy, and in order to facilitate this we have retained the same method signature in both interfaces.

public interface ITarget
{
    String hello(String input);
}

TargetService.java: In TargetService, we auto-wire the web service stub. The helloWeb instance field in TargetService will hold a reference to the stub of the web service. When the hello method is invoked on TargetService, the call is delegated to the stub, which will invoke the remote web service.

public class TargetService implements ITarget
{
    private com.binildas.apache.axis.AxisEndToEnd.IHelloWeb helloWeb;
    public TargetService() {}

    public TargetService(com.binildas.apache.axis.AxisEndToEnd.IHelloWeb helloWeb)
    {
        this.helloWeb = helloWeb;
    }

    public String hello(String input)
    {
        System.out.println("TargetService.echo : String. this = " + this);
        try
        {
            return helloWeb.hello(input);
        }
        catch (Exception exception)
        {
            exception.printStackTrace();
            return exception.getMessage();
        }
    }
}

IHelloProxy.java: We now need to wire the JBI proxy to the web service stub. IHelloProxy is an interface defined for this purpose, and hence it has the same single business method, hello.

public interface IHelloProxy
{
    public String hello(String input);
}

HelloProxyService.java: HelloProxyService is a wrapper or adapter for the JBI proxy. In other words, the helloProxy instance field in HelloProxyService will refer to the JBI proxy.

public class HelloProxyService implements IHelloProxy
{
    private IHelloProxy helloProxy;

    public void setHelloProxy(IHelloProxy helloProxy)
    {
        this.helloProxy = helloProxy;
    }

    public String hello(String input)
    {
        System.out.println("HelloProxyService.hello. this = " + this);
        return helloProxy.hello(input);
    }
}

The bean wiring discussed in this section is done using Spring and is shown in the next section.
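The wiring itself falls outside this excerpt, so the following is only a rough sketch of what such Spring configuration might look like; the bean names, the omitted package of TargetService, and the use of the Axis locator's getHelloWebService() factory method are assumptions based on common Axis conventions, not the book's actual listing:

<beans>
  <!-- Axis-generated locator, used here as a factory for the stub (assumed usage) -->
  <bean id="helloWebLocator" class="com.binildas.apache.axis.AxisEndToEnd.IHelloWebServiceLocator"/>
  <bean id="helloWeb" factory-bean="helloWebLocator" factory-method="getHelloWebService"/>

  <!-- TargetService delegates to the stub via its constructor; package prefix omitted -->
  <bean id="targetService" class="TargetService">
    <constructor-arg ref="helloWeb"/>
  </bean>
</beans>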

Creating Data Forms in Silverlight 4

Packt
23 Apr 2010
4 min read
Collecting data
Now that we have created a business object and a WCF service (see http://www.packtpub.com/article/creating-wcf-service-business-object-data-submission-silverlight), we are ready to collect data from the customer through our Silverlight application. Silverlight provides all of the standard input controls that .NET developers have come to know from Windows and ASP.NET development, and of course the controls are customizable through styles.

Time for action – creating a form to collect data
We will begin by creating a form in Silverlight for collecting the data from the client. We are going to include a submission form to collect the name, phone number, email address, and date of the event for the person submitting the sketch. This will allow the client (Cake O Rama) to contact this individual and follow up on a potential sale.

We'll change the layout of MainPage.xaml to include a form for user input. We will need to open the CakeORama project in Expression Blend and then open MainPage.xaml for editing in the Blend art board. Our ink capture controls are contained within a Grid, so we will just add a column to the Grid and place our input form right next to the ink surface. To add columns in Blend, select the Grid from the Objects and Timeline panel, position your mouse in the highlighted area above the Grid, and click to add a column.

Blend will add a <Grid.ColumnDefinitions> node to our XAML:

<Grid.ColumnDefinitions>
    <ColumnDefinition Width="0.94*"/>
    <ColumnDefinition Width="0.06*"/>
</Grid.ColumnDefinitions>

Blend also added a Grid.ColumnSpan="2" attribute to both the StackPanel and InkPresenter controls that were already on the page. We need to modify the StackPanel and inkPresenter so that they do not span both columns, which will let us increase the size of our second column. In Blend, select the StackPanel from the Objects and Timeline panel. In the Properties panel, you will see a property called ColumnSpan with a value of 2. Change this value to 1 and press the Enter key. We can see that Blend moved the StackPanel into the first column, and we now have a little space next to the buttons. We need to do the same thing to the inkPresenter control so that it is also within the first column. Select the inkPresenter control from the Objects and Timeline panel and change the ColumnSpan from 2 to 1 to reposition the inkPresenter into the left column. The inkPresenter control should now be positioned in the left column and aligned with the StackPanel containing our ink sketch buttons.

Now that we have moved the existing controls into the first column, we will change the size of the second column so that we can start adding our input controls. We also need to change the overall size of the MainPage.xaml control to fit more information to the right side of the ink control. Click on the [UserControl] in the Objects and Timeline panel, and then in the Properties panel change the Width to 800. Now we need to change the size of our grid columns.
We can do this very easily in XAML, so switch to the XAML view in Blend by clicking on the XAML icon. In the XAML view, change the grid column settings to give both columns an equal width:

<Grid.ColumnDefinitions>
    <ColumnDefinition Width="0.5*"/>
    <ColumnDefinition Width="0.5*"/>
</Grid.ColumnDefinitions>

Switch back to the design view by clicking on the design button. Our StackPanel and inkPresenter controls are now positioned to the left of the page, and we have some empty space to the right for our input controls. Select the LayoutRoot control in the Objects and Timeline panel, and then double-click on the TextBlock control in the Blend toolbox to add a new TextBlock control. Drag the control to the top-right of the page. On the Properties panel, change the Text of the TextBlock to Customer Information, change the FontSize to 12pt, and click on the Bold indicator.
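To give an idea of where these steps are heading, the form column takes shape in XAML along the following lines. This is only a sketch: the control names are borrowed from the binding steps in the companion data validation article, the DatePicker namespace prefix is an assumption, and Blend's 12pt corresponds to FontSize="16" in XAML.

<StackPanel Grid.Column="1" Margin="8">
    <TextBlock Text="Customer Information" FontSize="16" FontWeight="Bold"/>
    <TextBlock Text="Name"/>
    <TextBox x:Name="customerName" MaxLength="40"/>
    <TextBlock Text="Phone"/>
    <TextBox x:Name="phoneNumber"/>
    <TextBlock Text="Email"/>
    <TextBox x:Name="emailAddress"/>
    <TextBlock Text="Event Date"/>
    <controls:DatePicker x:Name="eventDate"/>
</StackPanel>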


Creating a WCF Service, Business Object and Data Submission with Silverlight 4

Packt
23 Apr 2010
10 min read
Data applications
When building applications that utilize data, it is important to start by defining what data you are going to collect and how it will be stored once collected. In the last chapter, we created a Silverlight application to post a collection of ink strokes to the server. We are going to expand the inkPresenter control to allow a user to submit additional information. Most developers will have had experience building business object layers, and with Silverlight we can still make use of these objects, either by using referenced class projects/libraries or by consuming WCF services and utilizing the associated data contracts.

Time for action – creating a business object
We'll create a business object that can be used by both Silverlight and our ASP.NET application. To accomplish this, we'll create the business object in our ASP.NET application, define it as a data contract, and expose it to Silverlight via our WCF service.

Start Visual Studio and open the CakeORamaData solution. When we created the solution, we originally created a Silverlight application and an ASP.NET web project. In the web project, add a reference to the System.Runtime.Serialization assembly. Right-click on the web project and choose to add a new class. Name this class ServiceObjects and click OK. In the ServiceObjects class file, replace the existing code with the following code:

using System;
using System.Runtime.Serialization;

namespace CakeORamaData.Web
{
    [DataContract]
    public class CustomerCakeIdea
    {
        [DataMember]
        public string CustomerName { get; set; }
        [DataMember]
        public string PhoneNumber { get; set; }
        [DataMember]
        public string Email { get; set; }
        [DataMember]
        public DateTime EventDate { get; set; }
        [DataMember]
        public StrokeInfo[] Strokes { get; set; }
    }

    [DataContract]
    public class StrokeInfo
    {
        [DataMember]
        public double Width { get; set; }
        [DataMember]
        public double Height { get; set; }
        [DataMember]
        public byte[] Color { get; set; }
        [DataMember]
        public byte[] OutlineColor { get; set; }
        [DataMember]
        public StylusPointInfo[] Points { get; set; }
    }

    [DataContract]
    public class StylusPointInfo
    {
        [DataMember]
        public double X { get; set; }
        [DataMember]
        public double Y { get; set; }
    }
}

What we are doing here is defining the data that we'll be collecting from the customer.

What just happened?
We just added a business object that will be used by our WCF service and our Silverlight application. We added serialization attributes to our class so that it can be serialized with WCF and consumed by Silverlight. The [DataContract] and [DataMember] attributes are the serialization attributes that WCF will use when serializing our business object for transmission. WCF provides an opt-in model, meaning that types used with WCF must include these attributes in order to participate in serialization. The [DataContract] attribute is required on the type; you then mark whichever properties of the class you want serialized with the [DataMember] attribute. By default, WCF will use the System.Runtime.Serialization.DataContractSerializer to serialize the DataContract classes into XML. The .NET Framework also provides a NetDataContractSerializer, which includes CLR type information in the XML, and the DataContractJsonSerializer, which will convert the object into JavaScript Object Notation (JSON). The WebGet attribute provides an easy way to define which serializer is used.
For more information on these serializers and the WebGet attribute, visit the following MSDN web sites:

http://msdn.microsoft.com/en-us/library/system.runtime.serialization.datacontractserializer.aspx
http://msdn.microsoft.com/en-us/library/system.runtime.serialization.netdatacontractserializer.aspx
http://msdn.microsoft.com/en-us/library/system.runtime.serialization.json.datacontractjsonserializer.aspx
http://msdn.microsoft.com/en-us/library/system.servicemodel.web.webgetattribute.aspx

Windows Communication Foundation (WCF)
Windows Communication Foundation (WCF) provides a simplified development experience for connected applications using the service-oriented programming model. WCF builds upon and improves the web service model by providing flexible channels with which to connect and communicate with a web service. By utilizing these channels, developers can expose their services to a wide variety of client applications such as Silverlight, Windows Presentation Foundation, and Windows Forms. Service-oriented applications provide a scalable and reusable programming model, allowing applications to expose limited and controlled functionality to a variety of consuming clients such as web sites, enterprise applications, smart clients, and Silverlight applications. When building WCF applications, the service contract is typically defined by an interface decorated with attributes that declare the service and the operations. Using an interface allows the contract to be separated from the implementation and is the standard practice with WCF. You can read more about Windows Communication Foundation on the MSDN website at http://msdn.microsoft.com/en-us/netframework/aa663324.aspx.

Time for action – creating a Silverlight-enabled WCF service
Now that we have our business object, we need to define a WCF service that can accept the business object and save the data to an XML file. With the CakeORamaData solution open, right-click on the web project and choose to add a new folder; rename it to Services. Right-click on the web project again and choose to add a new item. Add a new WCF Service named CakeService.svc to the Services folder. This will create interface and implementation files for our WCF service. Avoid adding the Silverlight-enabled WCF service, as this adds a service that goes against the standard design patterns used with WCF.

The standard design practice with WCF is to create an interface that defines the ServiceContract and OperationContracts of the service. The interface is then given a default implementation on the server. When the service is exposed through metadata, the interface will be used to define the operations of the service and generate the client classes. The Silverlight-enabled WCF service does not create an interface, just an implementation; it is there as a quick entry point into WCF for developers new to the technology.

Replace the code in the ICakeService.cs file with the definition below. We are defining a contract with one operation that allows a client application to submit a CustomerCakeIdea instance:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

namespace CakeORamaData.Web.Services
{
    // NOTE: If you change the interface name "ICakeService" here, you must also update the reference to "ICakeService" in Web.config.
    [ServiceContract]
    public interface ICakeService
    {
        [OperationContract]
        void SubmitCakeIdea(CustomerCakeIdea idea);
    }
}

The CakeService.svc.cs file will contain the implementation of our service interface. Add the following code to the body of the CakeService.svc.cs file to save the customer information to an XML file:

using System;
using System.ServiceModel.Activation;
using System.Xml;

namespace CakeORamaData.Web.Services
{
    // NOTE: If you change the class name "CakeService" here, you must also update the reference to "CakeService" in Web.config.
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class CakeService : ICakeService
    {
        public void SubmitCakeIdea(CustomerCakeIdea idea)
        {
            if (idea == null)
                return;

            using (var writer = XmlWriter.Create(String.Format(@"C:\Projects\CakeORama\CustomerData\{0}.xml", idea.CustomerName)))
            {
                writer.WriteStartDocument();

                // <customer>
                writer.WriteStartElement("customer");
                writer.WriteAttributeString("name", idea.CustomerName);
                writer.WriteAttributeString("phone", idea.PhoneNumber);
                writer.WriteAttributeString("email", idea.Email);

                // <eventDate></eventDate>
                writer.WriteStartElement("eventDate");
                writer.WriteValue(idea.EventDate);
                writer.WriteEndElement();

                // <strokes>
                writer.WriteStartElement("strokes");
                if (idea.Strokes != null && idea.Strokes.Length > 0)
                {
                    foreach (var stroke in idea.Strokes)
                    {
                        // <stroke>
                        writer.WriteStartElement("stroke");
                        writer.WriteAttributeString("width", stroke.Width.ToString());
                        writer.WriteAttributeString("height", stroke.Height.ToString());

                        writer.WriteStartElement("color");
                        writer.WriteAttributeString("a", stroke.Color[0].ToString());
                        writer.WriteAttributeString("r", stroke.Color[1].ToString());
                        writer.WriteAttributeString("g", stroke.Color[2].ToString());
                        writer.WriteAttributeString("b", stroke.Color[3].ToString());
                        writer.WriteEndElement();

                        writer.WriteStartElement("outlineColor");
                        writer.WriteAttributeString("a", stroke.OutlineColor[0].ToString());
                        writer.WriteAttributeString("r", stroke.OutlineColor[1].ToString());
                        writer.WriteAttributeString("g", stroke.OutlineColor[2].ToString());
                        writer.WriteAttributeString("b", stroke.OutlineColor[3].ToString());
                        writer.WriteEndElement();

                        if (stroke.Points != null && stroke.Points.Length > 0)
                        {
                            writer.WriteStartElement("points");
                            foreach (var point in stroke.Points)
                            {
                                writer.WriteStartElement("point");
                                writer.WriteAttributeString("x", point.X.ToString());
                                writer.WriteAttributeString("y", point.Y.ToString());
                                writer.WriteEndElement();
                            }
                            writer.WriteEndElement();
                        }

                        // </stroke>
                        writer.WriteEndElement();
                    }
                }
                // </strokes>
                writer.WriteEndElement();

                // </customer>
                writer.WriteEndElement();
                writer.WriteEndDocument();
            }
        }
    }
}

We added the AspNetCompatibilityRequirements attribute to our CakeService implementation. This attribute is required in order to use a WCF service from within ASP.NET. Open Windows Explorer and create the path C:\Projects\CakeORama\CustomerData on your hard drive to store the customer XML files. One thing to note is that you will need to grant write permission on this directory to the ASP.NET user account when in a production environment.

When adding a WCF service through Visual Studio, binding information is added to the web.config file. The default binding for WCF is wsHttpBinding, which is not a valid binding for Silverlight. The valid bindings for Silverlight are basicHttpBinding, binaryHttpBinding (implemented with a customBinding), and netTcpBinding.
We need to modify the web.config so that Silverlight can consume the service. Open the web.config file and add this customBinding section to the <system.serviceModel> node:

<bindings>
  <customBinding>
    <binding name="customBinding0">
      <binaryMessageEncoding />
      <httpTransport>
        <extendedProtectionPolicy policyEnforcement="Never" />
      </httpTransport>
    </binding>
  </customBinding>
</bindings>

We'll then need to change the <service> node in the web.config to use our new customBinding (we use the customBinding to implement binary HTTP, which sends the information as a binary stream to the service) rather than the wsHttpBinding. Change it from:

<service behaviorConfiguration="CakeORamaData.Web.Services.CakeServiceBehavior" name="CakeORamaData.Web.Services.CakeService">
  <endpoint address="" binding="wsHttpBinding" contract="CakeORamaData.Web.Services.ICakeService">
    <identity>
      <dns value="localhost" />
    </identity>
  </endpoint>
  <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
</service>

To the following:

<service behaviorConfiguration="CakeORamaData.Web.Services.CakeServiceBehavior" name="CakeORamaData.Web.Services.CakeService">
  <endpoint address="" binding="customBinding" bindingConfiguration="customBinding0" contract="CakeORamaData.Web.Services.ICakeService" />
  <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
</service>

Set the start page to the CakeService.svc file, then build and run the solution. We will be presented with the following screen, which lets us know that the service and bindings are set up correctly. Our next step is to add the service reference to Silverlight. In the Silverlight project, right-click on the References node and choose Add Service Reference. On the dialog that opens, click the Discover button and choose the Services in Solution option. Visual Studio will search the current solution for any services. Visual Studio will find our CakeService, and all we have to do is change the Namespace to something that makes sense, such as Services, and click the OK button. We can see that Visual Studio has added some additional references and files to our project. Developers used to WCF or Web Services will notice the assembly references and the Service References folder.

Silverlight creates a ServiceReferences.ClientConfig file that stores the configuration for the service bindings. If we open this file, we can take a look at the client-side bindings to our WCF service. These bindings tell our Silverlight application how to connect to the WCF service and the URL where it is located:

<configuration>
  <system.serviceModel>
    <bindings>
      <customBinding>
        <binding name="CustomBinding_ICakeService">
          <binaryMessageEncoding />
          <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647">
            <extendedProtectionPolicy policyEnforcement="Never" />
          </httpTransport>
        </binding>
      </customBinding>
    </bindings>
    <client>
      <endpoint address="http://localhost:2268/Services/CakeService.svc"
                binding="customBinding" bindingConfiguration="CustomBinding_ICakeService"
                contract="Services.ICakeService" name="CustomBinding_ICakeService" />
    </client>
  </system.serviceModel>
</configuration>
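With the reference in place, calling the service from Silverlight happens asynchronously through the generated proxy. The article does not show this call, so the following is only a sketch: the proxy class name CakeServiceClient follows the usual Visual Studio naming convention but is an assumption here, as are the sample values.

var client = new Services.CakeServiceClient();
client.SubmitCakeIdeaCompleted += (s, e) =>
{
    if (e.Error != null)
    {
        // Handle the failure, e.g. surface a message to the user.
    }
};
// All Silverlight service calls are asynchronous; this returns immediately.
client.SubmitCakeIdeaAsync(new Services.CustomerCakeIdea
{
    CustomerName = "Jane Doe",          // illustrative data only
    PhoneNumber = "555-0100",
    Email = "jane@example.com",
    EventDate = DateTime.Now.AddDays(14)
});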


Data Validation in Silverlight 4

Packt
23 Apr 2010
7 min read
With Silverlight, data validation has been fully implemented, allowing controls to be bound to data objects, with those data objects handling the validation of data and providing feedback to the controls via the Visual State Machine. The Visual State Machine is a feature of Silverlight used to render views of a control based on its state. For instance, the mouse-over state of a button can change the color of the button, show or hide parts of the control, and so on. Controls that participate in data validation contain a ValidationStates group that includes Valid, InvalidUnfocused, and InvalidFocused states. We can implement custom styles for these states to provide visual feedback to the user.

Data object
In order to take advantage of the data validation in Silverlight, we need to create a data object, or client-side business object, that can handle the validation of data.

Time for action – creating a data object
We are going to create a data object that we will bind to our input form to provide validation. Silverlight can bind to any properties of an object, but for validation we need two-way binding, which requires both a get and a set accessor for each of our properties. In order to use two-way binding, we will need to implement the INotifyPropertyChanged interface, which defines a PropertyChanged event that Silverlight will use to update the binding when a property changes. Firstly, we will need to switch over to Visual Studio and add a new class named CustomerInfo to the Silverlight project. Replace the body of the CustomerInfo.cs file with the following code:

using System;
using System.ComponentModel;

namespace CakeORamaData
{
    public class CustomerInfo : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged = delegate { };

        private string _customerName = null;
        public string CustomerName
        {
            get { return _customerName; }
            set
            {
                if (value == _customerName)
                    return;
                _customerName = value;
                OnPropertyChanged("CustomerName");
            }
        }

        private string _phoneNumber = null;
        public string PhoneNumber
        {
            get { return _phoneNumber; }
            set
            {
                if (value == _phoneNumber)
                    return;
                _phoneNumber = value;
                OnPropertyChanged("PhoneNumber");
            }
        }

        private string _email = null;
        public string Email
        {
            get { return _email; }
            set
            {
                if (value == _email)
                    return;
                _email = value;
                OnPropertyChanged("Email");
            }
        }

        private DateTime _eventDate = DateTime.Now.AddDays(7);
        public DateTime EventDate
        {
            get { return _eventDate; }
            set
            {
                if (value == _eventDate)
                    return;
                _eventDate = value;
                OnPropertyChanged("EventDate");
            }
        }

        private void OnPropertyChanged(string propertyName)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

Code Snippets
Code snippets are a convenient way to stub out repetitive code and increase productivity by removing the need to type the same syntax over and over. The following is a code snippet used to create properties that execute the OnPropertyChanged method; it can be very useful when implementing properties on a class that implements the INotifyPropertyChanged interface. To use the snippet, save the file as propnotify.snippet to your hard drive. In Visual Studio, go to Tools | Code Snippets Manager (Ctrl + K, Ctrl + B) and click the Import button. Find the propnotify.snippet file and click Open; this will add the snippet. To use the snippet in code, simply type propnotify and hit the Tab key; a property will be stubbed out, allowing you to change the name and type of the property.
<?xml version="1.0" encoding="utf-8" ?><CodeSnippets > <CodeSnippet Format="1.0.0"> <Header> <Title>propnotify</Title> <Shortcut>propnotify</Shortcut> <Description>Code snippet for a property that raises the PropertyChanged event in a class.</Description> <Author>Cameron Albert</Author> <SnippetTypes> <SnippetType>Expansion</SnippetType> </SnippetTypes> </Header> <Snippet> <Declarations> <Literal> <ID>type</ID> <ToolTip>Property type</ToolTip> <Default>int</Default> </Literal> <Literal> <ID>property</ID> <ToolTip>Property name</ToolTip> <Default>MyProperty</Default> </Literal> <Literal> <ID>field</ID> <ToolTip>Private field</ToolTip> <Default>_myProperty</Default> </Literal> <Literal> <ID>defaultValue</ID> <ToolTip>Default Value</ToolTip> <Default>null</Default> </Literal> </Declarations> <Code Language="csharp"> <![CDATA[private $type$ $field$ = $defaultValue$; public $type$ $property$ { get { return $field$; } set { if (value == $field$) return; $field$ = value; OnPropertyChanged("$property$"); } } $end$]]> </Code> </Snippet> </CodeSnippet></CodeSnippets> What just happened? We created a data object or client-side business object that we can use to bind to our input controls. We implemented the INotifyPropertyChanged interface, so that our data object can raise the PropertyChanged event whenever the value of one of its properties is changed. We also defined a default delegate value for the PropertyChanged event to prevent us from having to do a null check when raising the event. Not to mention we have a nice snippet for stubbing out properties that raise the PropertyChanged event. Now we will be able to bind this object to Silverlight input controls and the controls can cause the object values to be updated so that we can provide data validation from within our data object, rather than having to include validation logic in our user interface code. Data binding We are going to bind our CustomerInfo object to our data entry form, using Blend. Be sure to build the solution before switching back over to Blend. With MainPage.xaml open in Blend, select the LayoutRoot control. In the Properties panel enter DataContext in the search field and click the New button: In the dialog that opens, select the CustomerInfo class and click OK: Blend will set the DataContext of the LayoutRoot to an instance of a CustomerInfo class: Blend inserts a namespace to our class; set the Grid.DataContext in the XAML of MainPage.xaml: <Grid.DataContext> <local:CustomerInfo/></Grid.DataContext> Now we will bind the value of CustomerName to our customerName textbox. Select the customerName textbox and then on the Properties panel enter Text in the search field. Click on the Advanced property options icon, which will open a context menu for choosing an option: Click on the Data Binding option to open the Create Data Binding dialog: In the Create Data Binding dialog (on the Explicit Data Context tab), click the arrow next to the CustomerInfo entry in the Fields list and select CustomerName: At the bottom of the Create Data Binding dialog, click on the Show advanced properties arrow to expand the dialog and display additional binding options: Ensure that TwoWay is selected in the Binding direction option and that Update source when is set to Explicit. This creates a two-way binding, meaning that when the value of the Text property of the textbox changes the underlying property, bound to Text will also be updated. 
In our case, that is the CustomerName property of the CustomerInfo class.

9. Click OK to close the dialog; we can now see that Blend indicates this property is bound by the yellow border around the property input field.
10. Repeat this process for both the phoneNumber and emailAddress textbox controls to bind their Text properties to the PhoneNumber and Email properties of the CustomerInfo class.

You will see that Blend has modified our XAML to use a binding expression:

<TextBox x:Name="customerName" Margin="94,8,8,0"
    Text="{Binding CustomerName, Mode=TwoWay, UpdateSourceTrigger=Explicit}"
    TextWrapping="Wrap" VerticalAlignment="Top"
    Grid.Column="1" Grid.Row="1" MaxLength="40"/>

In the binding expression, the Binding is using the CustomerName property as the binding Path. The Path (Path=CustomerName) attribute can be omitted because the Binding class constructor accepts the path as an argument. The UpdateSourceTrigger is set to Explicit, which means the bound source property is only updated when UpdateSource() is called on the binding expression, rather than every time the control's value changes.

11. For the eventDate control, enter SelectedDate into the Properties panel search field and, following the same process of data binding, select the EventDate property of the CustomerInfo class. Remember to ensure that TwoWay/Explicit binding is selected in the advanced options.
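Because Update source when is set to Explicit on each binding, the textboxes never push their values into the CustomerInfo object on their own. The following is a minimal, hypothetical sketch of how the form might commit its values from code-behind — the submitButton_Click handler is invented for illustration (this excerpt doesn't show a submit button), while GetBindingExpression and UpdateSource are the standard Silverlight calls for explicit source updates:

using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;

namespace CakeORamaData
{
    public partial class MainPage : UserControl
    {
        // Hypothetical submit handler: with UpdateSourceTrigger=Explicit,
        // the CustomerInfo object is only updated when UpdateSource() is
        // called on each control's binding expression.
        private void submitButton_Click(object sender, RoutedEventArgs e)
        {
            BindingExpression expression =
                customerName.GetBindingExpression(TextBox.TextProperty);

            if (expression != null)
            {
                // Pushes the textbox value into CustomerInfo.CustomerName;
                // any validation logic in the property setter runs here.
                expression.UpdateSource();
            }
        }
    }
}

Repeating the same call for the phoneNumber and emailAddress textboxes, and for the eventDate control's SelectedDate binding, would commit the whole form at once.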
Animation in Silverlight 4

Silverlight sports a rich animation system that is surprisingly easy to use. The animation model in Silverlight is time-based, meaning that movements occur based on a set timeline. At the heart of every animation is a Storyboard, which contains all the animation data and an independent timeline. Silverlight controls can contain any number of Storyboards. Storyboards contain one or more key frame elements, which are responsible for making objects on screen change position, color, or any number of properties. There are four general types of key frames in Silverlight 4:

- Linear: the value changes at a constant, linear rate between key frames.
- Discrete: the value jumps from one key frame's value to the next with no interpolation.
- Spline: the value changes along a cubic Bezier curve, so the animation can accelerate and decelerate.
- Easing: the value changes according to an easing function, which can produce effects such as bouncing or elasticity.

Very different from Flash

The animation model in Silverlight is markedly different from the one found in Adobe Flash. Animations in Flash are frame-based, whereas in Silverlight they are time-based. The term storyboard comes from the motion picture industry, where scenes are drawn out before they are filmed.

Time for action – animation time

The client would like to transform their text-only logo into something a little more elaborate. The designers have once again given us a XAML snippet of code exported from their graphic design tool. We will need to do the following:

1. Open up the CakeORama logo project in Blend. Blend should have automatically loaded the MainControl.xaml file and your screen should look like this:
2. In the Objects and Timeline tab, you'll see a list of objects that make up this vector drawing; there is a Path object for every character. Let's add an animation.
3. On the Objects and Timeline tab, click the plus sign (+) to create a new Storyboard.
4. In the Create Storyboard Resource dialog, type introAnimationStoryboard into the text box and click OK.
5. You'll notice a couple of changes to your screen. For one, the art board is surrounded by a red border and a notification that introAnimationStoryboard timeline recording is on, just like in this screenshot:
6. If you take a look at the Objects and Timeline tab, you'll see the timeline for our newly created introAnimationStoryboard:
7. Let's add a key frame at the very beginning. The vertical yellow line is the play head, which marks where you currently are in the timeline. Select the canvas1 object. You can switch to the Animation Workspace in Blend by pressing F6.
8. Click on the square icon with a green plus sign to create a new key frame here at position 0. A white oval appears, representing the key frame that you just created. It should look similar to the following screenshot:
9. Move the play head to 0.7 seconds by clicking on the tick mark to the immediate left of the number 1.
10. Click the same button you did in step 8 to create a new key frame here, so that your timeline looks like this:
11. Move the play head back to zero. Make sure the canvas1 object is still selected, then click and drag the logo graphic up so that all of it is in the grey area. This moves the logo "off stage".
12. Hit the play button highlighted in the screenshot below to preview the animation and enjoy the show!
13. Now all we need to do is tell Silverlight to run the animation when our control loads, but first we need to get out of recording mode. To do this, click the x button on the Objects and Timeline tab.
14. Click on [UserControl] in the Objects and Timeline tab. On the Properties tab, you'll see an icon with a lightning bolt on it.
15. Click on it to see the events associated with a UserControl object.
16. To wire up an event handler for the Loaded event, type UserControl_Loaded in the text box next to Loaded, as shown in the next screenshot. Once you hit Enter, the code-behind will immediately pop up with your cursor inside the event handler method. Add this line of code to the method:

   introAnimationStoryboard.Begin();

17. Run the solution via the menu bar or by pressing F5. You should see the logo graphic smoothly and evenly animate into view. If for some reason the animation doesn't get displayed, refresh the page in your browser; you should see it now.

What just happened?

You just created your first animation in Silverlight. First you created a Storyboard and then added a couple of key frames. You changed the properties of the canvas on one key frame, and Silverlight automatically interpolated the in-between points to create a nice, smooth animation.

If your animation didn't show up on the initial page load, but did when you reloaded the page, then you've just experienced how seriously the Silverlight animation engine respects time. Since our animation's length is relatively short (0.7 seconds), it's possible that more than that amount of time elapsed between the call to the Begin method and the moment your computer was ready to render the first frame. Silverlight noticed that and "jumped" ahead to that part of the timeline to keep everything on schedule.

Just like we did before, let's take a look at the XAML to get a better feel for what's really going on. You'll find the Storyboard XAML in the UserControl.Resources section towards the top of the document. Don't worry if the values are slightly different in your project:

<Storyboard x:Name="introAnimationStoryboard">
    <DoubleAnimationUsingKeyFrames BeginTime="00:00:00"
        Storyboard.TargetName="canvas1"
        Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[3].(TranslateTransform.Y)">
        <EasingDoubleKeyFrame KeyTime="00:00:00" Value="-229"/>
        <EasingDoubleKeyFrame KeyTime="00:00:00.7000000" Value="0"/>
    </DoubleAnimationUsingKeyFrames>
    <DoubleAnimationUsingKeyFrames BeginTime="00:00:00"
        Storyboard.TargetName="canvas1"
        Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[3].(TranslateTransform.X)">
        <EasingDoubleKeyFrame KeyTime="00:00:00" Value="1"/>
        <EasingDoubleKeyFrame KeyTime="00:00:00.7000000" Value="0"/>
    </DoubleAnimationUsingKeyFrames>
</Storyboard>

There are a couple of things going on here, so let's dissect the animation XAML, starting with the Storyboard declaration, which creates a Storyboard and assigns it the name we gave it in the dialog box:

<Storyboard x:Name="introAnimationStoryboard">

That's easy enough, but what about the next node? This line tells the Storyboard that we will be modifying a Double value starting at 0 seconds. It also specifies the target of our animation, canvas1, and a property on that target:

<DoubleAnimationUsingKeyFrames BeginTime="00:00:00"
    Storyboard.TargetName="canvas1"
    Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[3].(TranslateTransform.Y)">

Clear enough, but what does the TargetProperty value mean? Here is that value highlighted below:

(UIElement.RenderTransform).(TransformGroup.Children)[3].(TranslateTransform.Y)

We know that the net effect of the animation is that the logo moves from above the visible area back to its original position.
If we're familiar with X,Y coordinates, where X represents a horizontal coordinate and Y a vertical coordinate, then the TranslateTransform.Y part makes sense. We are changing or, in Silverlight terms, transforming the Y property of the canvas. But what's all this TransformGroup about? Take a look at our canvas1 node further down in the XAML. You should see the following lines of XAML that weren't there earlier:

<Canvas.RenderTransform>
    <TransformGroup>
        <ScaleTransform/>
        <SkewTransform/>
        <RotateTransform/>
        <TranslateTransform/>
    </TransformGroup>
</Canvas.RenderTransform>

Blend automatically inserted them into the Canvas when we created the animation. They have no properties; think of them as stubbed declarations of these objects. If you remove them, Silverlight will throw an exception at runtime complaining about not being able to resolve the TargetProperty.

Clearly this code is important, but what's really going on here? The TranslateTransform object is a type of Transform object, which determines how an object can change in Silverlight. Transforms are packaged in a TransformGroup, which can be set as the RenderTransform property on any object descending from UIElement, the base class for any kind of visual element. With that bit of knowledge, we now see that (TransformGroup.Children)[3] refers to the fourth element in a zero-based collection. Not so coincidentally, the TranslateTransform node is the fourth item inside the TransformGroup in our XAML. Changing the order of the transforms in the XAML will also cause an exception at runtime.

That line of XAML just tells the Silverlight runtime what we're going to animate; now we tell it how and when with our two EasingDoubleKeyFrame nodes:

<EasingDoubleKeyFrame KeyTime="00:00:00" Value="-229"/>
<EasingDoubleKeyFrame KeyTime="00:00:00.7000000" Value="0"/>

The first EasingDoubleKeyFrame node tells Silverlight that, at zero seconds, we want the value to be -229. This corresponds to when the logo was above the visible area. The second tells Silverlight that, at 0.7 seconds, we want the value of the property to be 0. This corresponds to the initial state of the logo, where it was before any transformations were applied. Silverlight handles all changes to the value between the start and the end point.

Silverlight's default frame rate is 60 frames per second, but Silverlight will adjust its frame rate based on the hardware it is running on. Silverlight can also adjust the amount by which it changes the values to keep the animation on schedule. If you had to reload the web page to see the animation run, then you've already experienced this. Once again, notice how few lines (technically only one line) of procedural code you had to write.
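If you are curious what the fully procedural alternative looks like, here is a hypothetical code-behind equivalent of the Blend-generated storyboard. It is not part of the original walkthrough — the PlayIntroAnimation method name is invented for illustration — and it assumes canvas1 still carries the four stubbed transforms in its TransformGroup, since the property path resolves against them:

using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media.Animation;

public partial class MainControl : UserControl
{
    // Hypothetical helper: builds and starts a storyboard in code that
    // slides canvas1's TranslateTransform.Y from -229 (off stage) to 0.
    private void PlayIntroAnimation()
    {
        Storyboard storyboard = new Storyboard();

        DoubleAnimation slide = new DoubleAnimation
        {
            From = -229,  // above the visible area
            To = 0,       // the logo's original position
            Duration = new Duration(TimeSpan.FromSeconds(0.7))
        };

        // Index 3 is the fourth child of the TransformGroup: the
        // TranslateTransform that Blend stubbed out on canvas1.
        Storyboard.SetTarget(slide, canvas1);
        Storyboard.SetTargetProperty(slide, new PropertyPath(
            "(UIElement.RenderTransform).(TransformGroup.Children)[3].(TranslateTransform.Y)"));

        storyboard.Children.Add(slide);
        storyboard.Begin();
    }
}

Calling PlayIntroAnimation() from the UserControl_Loaded handler would produce the same slide-in as introAnimationStoryboard.Begin(), minus the X animation that Blend also recorded.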
Understanding Expression Blend and How to Use it with Silverlight 4

Creating applications in Expression Blend

What we've done so far falls short of some of the things you may have already seen and done in Silverlight. Hand editing XAML, assisted by Intellisense, works just fine to a point, but creating anything complex requires another tool to assist with turning our vision into code. Intellisense is a feature of Visual Studio and Blend that auto-completes text when you start typing a keyword, method, or variable name.

Expression Blend may scare off developers at first with its radically different interface, but if you look more closely, you'll see that Blend has a lot in common with Visual Studio. For starters, both tools use the same Solution and Project file format. That means they are 100% compatible, which enables tighter integration between developers and designers. You could even have the same project open in both Visual Studio and Blend at the same time. Just be prepared to see the File Modified dialog box like the one below when switching between the two applications:

If you've worked with designers on a project before, they typically mock up an interface in a graphics program and ship it off to the development team. Many times, a simple graphic embellishment can cause us developers to develop heartburn. Anyone who's ever had to implement a rounded corner in HTML knows the special kind of frustration that it brings along. Here's the good news: those days are over with Silverlight.

A crash course in Expression Blend

In the following screenshot, our CakeNavigationButtons project is loaded into Expression Blend. Blend can be a bit daunting at first for developers who are used to Visual Studio, as Blend's interface is dense with a lot of subtle cues. Solutions and projects are opened in Blend in the same manner as in Visual Studio. Just like in Visual Studio, you can customize Expression Blend's interface to suit your preference: you can move tabs around, and dock and undock them to create a workspace that works best for you, as the following screenshot demonstrates:

On the left-hand side of the application window is the toolbar, which is substantially different from the toolbox in Visual Studio. The toolbar in Blend more closely resembles the toolbar in graphics editing software such as Adobe Photoshop or Adobe Illustrator. If you move the mouse over each button, you will see a tooltip that tells you what that button does, as well as the button's keyboard shortcut.

In the upper-left corner, you'll notice a tab labeled Projects. This is functionally equivalent to the Solution Explorer in Visual Studio. The asterisk next to MainPage.xaml indicates that the file has not been saved. Examine the next screenshot to see Blend's equivalent to Visual Studio's Solution Explorer:

If we look at the following screenshot, we find the Document tab control and the design surface, which Blend calls the art board. On the upper right of the art board, there are three small buttons to switch between Design view, XAML view, and Split view. On the lower edge of the art board, there are controls to modify the view of the design surface: you can zoom in to take a closer look, turn on snap grid visibility, and toggle snapping to snap lines.

If we then move to the upper-right corner of the next screen, we will see the Properties tab, which is a much more evolved version of the Properties tab in Visual Studio. As you can see in this screenshot, the color picker has a lot more to offer.
There's also a search feature that narrows down the items in the tab based on the property name you type in.

At the lower-left side of the next screen is the Objects and Timeline view, which shows the object hierarchy of the open document. Since we have the MainPage.xaml of our CakeNavigationButtons project open, the view shows a StackPanel with six Buttons, all inside a Grid named LayoutRoot, inside a UserControl. Clicking on an item in this view selects the item on the art board, and vice versa. Expression Blend is an intricate and rich application.

Time for action – styles revisited

Earlier in this chapter, we created and referenced a style directly in the XAML in Visual Studio. Let's modify the style we made in Blend to see how to do it graphically. To do this, we will need to:

1. Open up the CakeNavigationButtons solution in Expression Blend.
2. In the upper-right corner, there are three tabs (Properties, Resources, and Data). On the Resources tab, expand the tree node marked [UserControl] and click on the button highlighted below to edit the [Button default] resource.
3. Your art board should look something like this: