
How-To Tutorials - Programming

1081 Articles

Building Applications with Spring Data Redis

Packt
03 Dec 2012
9 min read
(For more resources related to Spring, see here.)

Designing a Redis data model

The most important rules of designing a Redis data model are these: Redis does not support ad hoc queries, and it does not support relations in the same way as relational databases do. Thus, designing a Redis data model is a totally different ballgame than designing the data model of a relational database. The basic guidelines of Redis data model design are as follows:

Instead of simply modeling the information stored in our data model, we also have to think about how we want to search for information in it. This often leads to a situation where we have to duplicate data in order to fulfill the requirements given to us. Don't be afraid to do this.
We should not concentrate on normalizing our data model. Instead, we should combine the data that we need to handle as a unit into an aggregate.
Since Redis does not support relations, we have to design and implement these relations by using the supported data structures. This means that we have to maintain these relations manually when they are changed. Because this might require a lot of effort and code, it can be wise to simply duplicate the information instead of using relations.

It is always wise to spend a moment verifying that we are using the correct tool for the job. NoSQL Distilled, by Martin Fowler, contains explanations of different NoSQL databases and their use cases, and can be found at http://martinfowler.com/books/nosql.html.

Redis supports multiple data structures. However, one question remains unanswered: which data structure should we use for our data? This question is addressed in the following table:

String: A string is a good choice for storing information that is already converted to a textual form. For instance, if we want to store HTML, JSON, or XML, a string should be our weapon of choice.
List: A list is a good choice if we will access it only near its start or end. This means that we should use it for representing queues or stacks.
Set: We should use a set if we need to get the size of a collection or check whether a certain item belongs to it. Also, if we want to represent relations, a set is a good choice (for example, "who are John's friends?").
Sorted set: Sorted sets should be used in the same situations as sets when the ordering of items is important to us.
Hash: A hash is a perfect data structure for representing complex objects.

Key components

Spring Data Redis provides certain components that are the cornerstones of each application that uses it. This section provides a brief introduction to the components that we will later use to implement our example applications.

Atomic counters

Atomic counters are for Redis what sequences are for relational databases. Atomic counters guarantee that the value received by a client is unique, which makes these counters a perfect tool for creating unique IDs for the data that is stored in Redis. At the moment, Spring Data Redis offers two atomic counters: RedisAtomicInteger and RedisAtomicLong. These classes provide atomic counter operations for integers and longs.

RedisTemplate

The RedisTemplate<K,V> class is the central component of Spring Data Redis. It provides methods that we can use to communicate with a Redis instance. This class requires that two type parameters be given during its instantiation: the type of the Redis key and the type of the Redis value.
Operations

The RedisTemplate class provides two kinds of operations that we can use to store, fetch, and remove data from our Redis instance:

Operations that require the key and the value to be given every time an operation is performed. These operations are handy when we have to execute a single operation by using a key and a value.
Operations that are bound to a specific key that is given only once. We should use this approach when we have to perform multiple operations using the same key.

The methods that require a key and a value to be given every time an operation is performed are described in the following list:

HashOperations<K,HK,HV> opsForHash(): This method returns the operations that are performed on hashes
ListOperations<K,V> opsForList(): This method returns the operations performed on lists
SetOperations<K,V> opsForSet(): This method returns the operations performed on sets
ValueOperations<K,V> opsForValue(): This method returns the operations performed on simple values
ZSetOperations<K,V> opsForZSet(): This method returns the operations performed on sorted sets

The methods of the RedisTemplate class that allow us to execute multiple operations using the same key are described in the following list:

BoundHashOperations<K,HK,HV> boundHashOps(K key): This method returns hash operations that are bound to the key given as a parameter
BoundListOperations<K,V> boundListOps(K key): This method returns list operations bound to the key given as a parameter
BoundSetOperations<K,V> boundSetOps(K key): This method returns set operations that are bound to the given key
BoundValueOperations<K,V> boundValueOps(K key): This method returns operations performed on simple values that are bound to the given key
BoundZSetOperations<K,V> boundZSetOps(K key): This method returns operations performed on sorted sets that are bound to the key given as a parameter

The differences between these operations will become clear when we start building our example applications.

Serializers

Because data is stored in Redis as bytes, we need a method for converting our data to bytes and vice versa. Spring Data Redis provides an interface called RedisSerializer<T>, which is used in the serialization process. This interface has one type parameter that describes the type of the serialized object. Spring Data Redis provides several implementations of this interface, described in the following table:

GenericToStringSerializer<T>: Serializes strings to bytes and vice versa. Uses the Spring ConversionService to transform objects to strings and vice versa.
JacksonJsonRedisSerializer<T>: Converts objects to JSON and vice versa.
JdkSerializationRedisSerializer: Provides Java-based serialization for objects.
OxmSerializer: Uses the Object/XML mapping support of Spring Framework 3.
StringRedisSerializer: Converts strings to bytes and vice versa.

We can customize the serialization process of the RedisTemplate class by using the described serializers. The RedisTemplate class provides flexible configuration options that can be used to set the serializers used for keys, values, hash keys, hash values, and string values. The default serializer of the RedisTemplate class is JdkSerializationRedisSerializer. However, the string serializer is an exception to this rule: StringRedisSerializer is used by default to serialize string values.
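To make the difference between the two styles of operations concrete, here is a rough, illustrative sketch (the key names and the String/String template are assumptions for this example, not taken from the article) that also shows a set being used for a relation and a hash for an aggregate:

import org.springframework.data.redis.core.BoundHashOperations;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.core.SetOperations;
import org.springframework.data.redis.core.ValueOperations;

public class OperationsExample {

    private final RedisTemplate<String, String> template;

    public OperationsExample(RedisTemplate<String, String> template) {
        this.template = template;
    }

    public void storeExamples() {
        // Key-and-value style: the key is passed on every call.
        ValueOperations<String, String> values = template.opsForValue();
        values.set("greeting", "Hello, Redis");

        // A set is a natural fit for a relation ("who are John's friends?").
        SetOperations<String, String> sets = template.opsForSet();
        sets.add("friends:john", "mary");
        sets.add("friends:john", "peter");
        Boolean isFriend = sets.isMember("friends:john", "mary");
        System.out.println("mary is John's friend: " + isFriend);

        // Bound style: the key is given once and reused for several operations.
        BoundHashOperations<String, String, String> contact = template.boundHashOps("contact:1");
        contact.put("name", "John Smith");
        contact.put("email", "john@example.com");
        System.out.println("Stored contact name: " + contact.get("name"));
    }
}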
Implementing a CRUD application

This section describes two different ways of implementing a CRUD application that is used to manage contact information. First, we will learn how to implement a CRUD application by using the default serializers of the RedisTemplate class. Second, we will learn how to use value serializers and implement a CRUD application that stores our data in JSON format.

Both of these applications share the same domain model. This domain model consists of two classes: Contact and Address. We have removed the JPA-specific annotations from them, and we use these classes in our web layer as form objects; they no longer have any methods other than getters and setters. The domain model is not the only thing that is shared by these examples. They also share the interface that declares the service methods for the Contact class. The source code of the ContactService interface is given as follows:

public interface ContactService {
    public Contact add(Contact added);
    public Contact deleteById(Long id) throws NotFoundException;
    public List<Contact> findAll();
    public Contact findById(Long id) throws NotFoundException;
    public Contact update(Contact updated) throws NotFoundException;
}

Both of these applications will communicate with the used Redis instance by using the Jedis connector. Regardless of the approach used, we can implement a CRUD application with Spring Data Redis by following these steps:

Configure the application context.
Implement the CRUD functions.

Let's get started and find out how we can implement the CRUD functions for contact information.

Using default serializers

This subsection describes how we can implement a CRUD application by using the default serializers of the RedisTemplate class. This means that StringRedisSerializer is used to serialize string values, and JdkSerializationRedisSerializer serializes other objects.

Configuring the application context

We can configure the application context of our application by making the following changes to the ApplicationContext class:

Configure the Redis template bean.
Configure the Redis atomic long bean.

Configuring the Redis template bean

We can configure the Redis template bean by adding a redisTemplate() method to the ApplicationContext class and annotating this method with the @Bean annotation. We can implement this method by following these steps:

Create a new RedisTemplate object.
Set the used connection factory to the created RedisTemplate object.
Return the created object.

The source code of the redisTemplate() method is given as follows:

@Bean
public RedisTemplate redisTemplate() {
    RedisTemplate<String, String> redis = new RedisTemplate<String, String>();
    redis.setConnectionFactory(redisConnectionFactory());
    return redis;
}

Configuring the Redis atomic long bean

We start the configuration of the Redis atomic long bean by adding a method called redisAtomicLong() to the ApplicationContext class and annotating the method with the @Bean annotation. Our next task is to implement this method by following these steps:

Create a new RedisAtomicLong object.
Pass the name of the used Redis counter and the Redis connection factory as constructor parameters.
Return the created object.

The source code of the redisAtomicLong() method is given as follows:

@Bean
public RedisAtomicLong redisAtomicLong() {
    return new RedisAtomicLong("contact", redisConnectionFactory());
}

If we need to create IDs for instances of different classes, we can use the same Redis counter.
Thus, we have to configure only one Redis atomic long bean.
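The excerpt ends before the service implementation itself. Purely as an illustration of how the two configured beans might fit together, the following rough sketch (the RedisContactService class, its field names, the key format, and the Contact accessors are invented for this example and are not the book's implementation) uses the atomic counter to assign an ID and a bound hash to persist the contact:

import org.springframework.data.redis.core.BoundHashOperations;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.support.atomic.RedisAtomicLong;

public class RedisContactService {

    private final RedisTemplate<String, String> redisTemplate;
    private final RedisAtomicLong contactIdCounter;

    public RedisContactService(RedisTemplate<String, String> redisTemplate,
                               RedisAtomicLong contactIdCounter) {
        this.redisTemplate = redisTemplate;
        this.contactIdCounter = contactIdCounter;
    }

    public Contact add(Contact added) {
        // The atomic counter plays the role a sequence plays in a relational database.
        long id = contactIdCounter.incrementAndGet();
        added.setId(id);

        // A hash is a natural structure for a complex object such as a contact.
        BoundHashOperations<String, String, String> hash =
                redisTemplate.boundHashOps("contact:" + id);
        hash.put("id", String.valueOf(id));
        hash.put("firstName", added.getFirstName());
        hash.put("lastName", added.getLastName());
        return added;
    }
}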


Basic Coding with HornetQ: Creating and Consuming Messages

Packt
28 Nov 2012
4 min read
(For more resources related to this topic, see here.)

Installing Eclipse on Windows

You can download the Eclipse IDE for Java EE developers (in our case, the ZIP file eclipse-jee-indigo-SR1-win32.zip) from http://www.eclipse.org/downloads/. Once downloaded, you have to unzip the eclipse folder inside the archive to the destination folder so that you have a folder structure like the one illustrated in the following screenshot. Now a double-click on the eclipse.exe file will fire the first run of Eclipse.

Installing NetBeans on Windows

NetBeans is one of the most frequently used IDEs for Java development purposes. It mimics the Eclipse plugin module installation, so you can download the Java EE version from http://netbeans.org/downloads/. But remember that this version also comes with an integrated GlassFish application server and a Tomcat server. Even in this case, you only need to download the .exe file (java_ee_sdk-6u3-jdk7-windows.exe, in our case) and launch the installer. Once finished, you should be able to run the IDE by clicking on the NetBeans icon in your Windows Start menu.

Installing NetBeans on Linux

If you are using a Debian-based version of Linux like Ubuntu, installing both NetBeans and Eclipse is nothing more than typing a command in the bash shell and waiting for the installation process to finish. As we are using Ubuntu version 11, we will type the following command from a non-root user account to install Eclipse:

sudo apt-get install eclipse

The NetBeans installation procedure is slightly different, because the Ubuntu repositories do not have a package for a NetBeans installation. So, to install NetBeans, you have to download a script and then run it. If you are using a non-root user account, you need to type the following commands in a terminal:

sudo wget http://download.netbeans.org/netbeans/7.1.1/final/bundles/netbeans-7.1.1-ml-javaee-linux.sh
sudo chmod +x netbeans-7.1.1-ml-javaee-linux.sh
./netbeans-7.1.1-ml-javaee-linux.sh

During the first run of the IDE, Eclipse will ask which default workspace new projects should be stored in. Choose the one suggested and, in case you are not planning to change it, check the Use this as the default and do not ask again checkbox so that the question is not asked again, as shown in the following screenshot. The same happens with NetBeans, but during the installation procedure.

Post installation

Both Eclipse and NetBeans have an integrated system for upgrading them to the latest version, so once you have completed the first-time run, keep your IDE updated. For Eclipse, you can access the Update window by using the menu Help | Check for updates. This will pop up the window shown in this screenshot. NetBeans has the same functionality, which can be launched from the menu.

A 10,000-foot view of HornetQ

Before moving on to the coding phase, it is time to review some concepts that will allow the user and the coder to better understand how HornetQ manages messages. HornetQ is only a set of Plain Old Java Objects (POJOs) compiled and grouped into JAR files. The software developer can easily grasp that this characteristic leaves HornetQ with no dependencies on third-party libraries. It is possible to use and even start HornetQ from any Java class; this is a great advantage over other frameworks. HornetQ deals internally only with its own set of classes, called the HornetQ core, avoiding any dependency on the JMS dialect and specifications.
Nevertheless, a client that connects to the HornetQ server can speak the JMS language, so the HornetQ server also uses a JMS-to-core-API translator. This means that when you send a JMS message to a HornetQ server, it is received as JMS and then translated into the core API dialect to be managed internally by HornetQ. The following figure illustrates this concept. The core messaging concepts of HornetQ are somewhat simpler than those of JMS:

Message: This is a unit of data that can be sent/delivered from a producer to a consumer. To cite only a few of its attributes, a message can have durability, a priority, an expiry time, and a size.
Address: HornetQ maintains an association between an address and the queues available at that address, so a message is sent to an address rather than directly to a queue.
Queue: This is nothing more than a set of messages. Like messages, queues have attributes, such as whether they are durable or temporary, and filter expressions.
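The article turns to actual coding in later sections; as a taste of what the core (non-JMS) API looks like, here is a minimal, hypothetical sketch of producing and consuming one message. The address and queue names are made up, and a HornetQ server is assumed to be running locally with the default Netty acceptor:

import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.*;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

public class CoreApiExample {
    public static void main(String[] args) throws Exception {
        // Connect to a HornetQ server through the Netty transport.
        ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(NettyConnectorFactory.class.getName()));
        ClientSessionFactory factory = locator.createSessionFactory();
        ClientSession session = factory.createSession();

        // Bind a queue to an address (both names are illustrative).
        session.createQueue("example.address", "example.queue", true);

        // Produce one durable message on the address.
        ClientProducer producer = session.createProducer("example.address");
        ClientMessage message = session.createMessage(true);
        message.getBodyBuffer().writeString("Hello HornetQ");
        producer.send(message);

        // Consume it back from the queue.
        session.start();
        ClientConsumer consumer = session.createConsumer("example.queue");
        ClientMessage received = consumer.receive(5000);
        System.out.println(received.getBodyBuffer().readString());
        received.acknowledge();

        session.close();
        factory.close();
        locator.close();
    }
}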


Troubleshooting your BAM Applications

Packt
21 Nov 2012
9 min read
(For more resources related to this topic, see here.)

Understanding BAM logging and troubleshooting methodologies

In many cases, enabling BAM logging is the prerequisite for troubleshooting BAM issues. It is critical to set the correct loggers to appropriate levels (for example, SEVERE, WARNING, INFO, CONFIG, FINE, FINER, and FINEST), so that you can collect the information needed to identify the actual problem and determine the root cause. Apart from logging, it is also important to have proven methodologies in place, so that you can follow these methods/procedures to conduct your troubleshooting practice.

Understanding BAM logging concepts

BAM provides a set of loggers, which are associated with BAM Java packages/classes. As in Java, these loggers are named following the dot convention and are organized hierarchically. Let's take a look at an example. oracle.bam.adc is the root logger for the Active Data Cache. All the loggers for the sub-modules within the Active Data Cache are named after oracle.bam.adc, and therefore become descendants of this root logger. For instance, oracle.bam.adc.security, which is the logger that tracks Active Data Cache security logs, is a child logger of oracle.bam.adc. The logging level of a descendant/child logger (for example, oracle.bam.adc.security) is inherited from its ancestor/parent (for example, oracle.bam.adc) by default, unless its logging level is explicitly specified. Thus, you should be careful when setting a root or parent logger to a low level (for example, TRACE:32), which may produce a large number of log entries in the log file.

The following table lists the major root-level loggers for troubleshooting key BAM components:

oracle.bam.adapter: This is the logger for troubleshooting BAM Adapter issues
oracle.bam.adc: This is the logger for troubleshooting BAM Active Data Cache operations, such as data persistence, the ADC APIs, Active Data processing within ADC, and so on
oracle.bam.common: This is the logger for debugging BAM common components, for example, BAM Security or the BAM Messaging Framework
oracle.bam.ems: This is the logger for debugging BAM Enterprise Message Sources (EMS)
oracle.bam.eventengine: This is the logger for debugging the Event Engine
oracle.bam.reportcache: This is the logger for debugging the Report Cache
oracle.bam.web: This is the logger for debugging the BAM web applications, which include the Report Server
oracle.bam.webservices: This is the logger for debugging the BAM web services interface

Enabling logging for BAM

To set up loggers for BAM, perform the following steps:

Log in to Enterprise Manager 11g Fusion Middleware Control.
Click on OracleBamServer(bam_server1) in the left pane, and select BAM Server | Logs | Log Configuration.
Expand oracle.bam to set the BAM loggers. To ensure that the log-level changes are persistent, check the checkbox for Persist log level state across component restarts.
Click on Apply.

The logs are written to the <server_name>-diagnostic.log file in the <mserver_domain_dir>/servers/<server_name>/logs directory. By default, the log file follows a size-based rotation policy, and the rotation size is 10 MB. You can change the default behavior by editing the log file configuration as follows:

Log in to Enterprise Manager 11g Fusion Middleware Control.
Click on OracleBamServer(bam_server1) in the left pane, and select BAM Server | Logs | Log Configuration.
In the Log Configuration screen, click on the Log Files tab.
Select odl-handler, and then click on Edit Configuration.
Edit the log file configuration, and click on OK.

Setting BAM loggers to appropriate values

As most Fusion Middleware components do, Oracle BAM uses the log levels specified in the Oracle Diagnostic Logging (ODL) standard to control the level of detail in the diagnostic log file. The ODL log levels, their Java counterparts, and their descriptions are listed in the following table:

SEVERE+100 / INCIDENT_ERROR:1: This log level enables the BAM Server to report critical issues or fatal errors.
SEVERE / ERROR:1: This log level enables the BAM Server components to report issues (system errors, exceptions, or malfunctions) that may prevent the system from working properly.
WARNING / WARNING:1: This log level enables the BAM Server components to report events or conditions that should be reviewed and may require action.
INFO / NOTIFICATION:1: This is the default setting for all the BAM loggers. This log level is used to capture the lifecycle events of BAM Server components and the key messages for notification purposes. For example, if you need to verify the cache location or the running status of the BAM Report Cache, you can set the report cache logger to this log level.
CONFIG / NOTIFICATION:16: This log level enables the BAM Server components to write more detailed configuration information.
FINE / TRACE:1: This log level enables the BAM Server components to write debug information to the log file. To troubleshoot the BAM Server components, you may start with this log level and increase it to FINER or FINEST if needed.
FINER / TRACE:16: This log level enables the BAM Server components to write fairly detailed debug information to the log file.
FINEST / TRACE:32: This log level enables the BAM Server components to write highly detailed debug information.
Inherited from parent: Specify this value if you want a specific logger to inherit the log level from its parent logger.

The default setting for all BAM loggers is NOTIFICATION:1. For troubleshooting purposes, it is recommended to set the appropriate loggers to TRACE:1, TRACE:16, or TRACE:32. The logging configuration is persisted in the following location on your BAM host: <domain_dir>/config/fmwconfig/servers/<server_name>/logging.xml. In theory, you can edit this file to modify the BAM loggers. However, it is not recommended to do so unless you have a good understanding of the configuration file. If you have multiple BAM instances in your environment, you can easily duplicate the logging configuration by copying the logging.xml file to all BAM instances, rather than making the changes through EM.

Introducing the methodologies for troubleshooting BAM

Oracle BAM utilizes different technologies, such as EMS, the BAM Adapter, web services, and ODI, to integrate different enterprise information systems. Business data received from various data sources is then pushed all the way from the Active Data Cache, through the Report Cache and the Report Server, to web browsers for rendering reports in real time. Due to the complexity of the system and the various technologies involved, it is critical to use the right troubleshooting methodologies to analyze and resolve issues.
The following are the basic rules of thumb for troubleshooting your BAM applications:

Understand the key BAM terminology and architecture
Identify the problem
Set up the BAM loggers
Collect the information and debug

Understanding the key terminology and the BAM architecture

Understanding the key terminology and the BAM architecture is a prerequisite for troubleshooting BAM applications. The key terms for BAM include Active Data, Active Data Cache, Report Cache, Report Server, ChangeList, Data Objects, Report, and so on.

Identifying the problem

Different issues may require different troubleshooting techniques. For example, for a report design issue (for example, calculated fields do not show the correct values), you should focus on the building blocks of the report design; enabling logging for the BAM Server does not provide any help at all. A BAM issue typically falls into one of the following categories:

Report design and report loading (static rendering)
Report rendering with Active Data (Active Data processing)
Issues with the key BAM Server components (Active Data Cache, security, Report Cache, and Event Engine)
BAM web applications (Active Viewer, Active Studio, Architect, Administrator, and Report Server)
Issues with the BAM Integration Framework (EMS, web services APIs, SOA Suite integration, and ODI integration)

To fully identify a problem, you first need to understand the problem description and the category to which the issue belongs. Then, you can gather relevant information, such as the time frame of when the issue happened, the BAM Server release information and patch level, the BAM deployment topology (single node or HA), and so on.

Setting up BAM loggers

Setting up BAM loggers with appropriate logging levels is the key to troubleshooting BAM issues. BAM loggers can be set to the following levels (Java logging): SEVERE, WARNING, INFO, CONFIG, FINE, FINER, and FINEST. In a normal situation, all the BAM loggers are set to INFO by default. When debugging, it is recommended to increase the level to FINER or FINEST. Loggers form hierarchies, so you need to be careful when setting root-level loggers to FINE, FINER, or FINEST. Suppose that you want to troubleshoot a login issue with the BAM start page using the root logger oracle.bam.adc, which is set to FINEST. In this case, all the descendants that inherit from it have the same logging level. As a result, a large number of irrelevant log entries are produced, which is not helpful for troubleshooting and can also impact overall performance. Therefore, you should set up the corresponding child logger (oracle.bam.adc.security) without enabling the root logger.

Collecting information and debugging

Once the problem is identified and the BAM loggers are set up appropriately, it is time to collect the logs and analyze the problem. The following table lists the files that can be used to troubleshoot BAM issues:

<server_name>.out: This is the standard output file redirected by the Node Manager. If the server is started using the startManagedWebLogic.sh script directly, you need to refer to the standard output instead (either the command prompt output or another redirected file that you specified). This file is located in the following directory: <mserver_domain_dir>/servers/<server_name>/logs. <mserver_domain_dir> refers to the domain home directory for the Managed Server for BAM; <server_name> refers to the name of the Managed Server for BAM, for example, WLS_BAM1.
Use this file to collect the server startup information and standard output.
<server_name>.log: This log provides information for the WebLogic Server that hosts BAM. This file is located in the following directory: <mserver_domain_dir>/servers/<server_name>/logs.
<server_name>-diagnostic.log: Unlike the <server_name>.log file, this log file keeps track of the BAM-specific logs produced by the BAM loggers. The location of this file is as follows: <mserver_domain_dir>/servers/<server_name>/logs.

Debugging becomes much easier once you have all this relevant information in place.


Dispatchers and Routers

Packt
12 Nov 2012
5 min read
(For more resources related to this topic, see here.)

Dispatchers

In the real world, dispatchers are the communication coordinators that are responsible for receiving and passing messages. For the emergency services (for example, 911 in the U.S.), the dispatchers are the people responsible for taking the call and passing on the message to the other departments (medical, police, fire station, or others). The dispatcher coordinates the route and activities of all these departments to make sure that the right help reaches the destination as early as possible.

Another example is how an airport manages airplanes taking off. The air traffic controllers (ATCs) coordinate the use of the runways between the various planes taking off and landing. On one side, air traffic controllers manage the runways (usually ranging from 1 to 3), and on the other, aircraft of different sizes and capacities from different airlines are ready to take off and land. An air traffic controller coordinates the various airplanes, gets them lined up, and allocates the runways for take-off and landing. As we can see, there are multiple runways available and multiple airlines, each having a different set of airplanes needing to take off. It is the responsibility of the air traffic controller(s) to coordinate the take-off and landing of planes from each airline and to do this activity as fast as possible.

Dispatcher as a pattern

Dispatcher is a well-recognized and widely used pattern in the Java world. Dispatchers are used to control the flow of execution. Based on the dispatching policy, dispatchers route the incoming message or request to the appropriate business process. Dispatchers as a pattern provide the following advantages:

Centralized control: Dispatchers provide a central place from which various messages/requests are dispatched. The word "centralized" means code is reused, leading to improved maintainability and reduced duplication of code.
Application partitioning: There is a clear separation between the business logic and the display logic. There is no need to intermingle business logic with the display logic.
Reduced inter-dependencies: Separation of the display logic from the business logic means there are reduced inter-dependencies between the two. Reduced inter-dependencies mean less contention on the same resources, leading to a scalable model.

Dispatcher as a concept provides a centralized control mechanism that decouples different processing logic within the application, which in turn reduces inter-dependencies.

Executor in Java

In Akka, dispatchers are based on the Java Executor framework (part of java.util.concurrent). Executor provides the framework for the execution of asynchronous tasks. It is based on the producer-consumer model, meaning the act of task submission (producer) is decoupled from the act of task execution (consumer): the threads that submit tasks are different from the threads that execute the tasks. Two important implementations of the Executor framework are as follows:

ThreadPoolExecutor: It executes each submitted task using a thread from a predefined and configured thread pool.
ForkJoinPool: It uses the same thread pool model, supplemented with work stealing. Threads in the pool will find and execute tasks (work stealing) created by other active tasks or tasks allocated to other threads in the pool that are pending execution. Fork/join is based on a fine-grained, parallel, divide-and-conquer style of parallelism.
The idea is to break down large data chunks into smaller chunks and process them in parallel to take advantage of the underlying processor cores. Executor is backed by constructs that allow you to define and control how the tasks are executed. Using these Executor constructs, one can specify the following:

How many threads will be running? (thread pool size)
How are the tasks queued until they come up for processing?
How many tasks can be executed concurrently?
What happens if the system overloads and tasks have to be rejected, and how are those tasks selected?
What is the order of execution of tasks? (LIFO, FIFO, and so on)
Which pre- and post-task execution actions can be run?

In the book Java Concurrency in Practice, Addison-Wesley Publishing, the authors describe the Executor framework and its usage very nicely. It is worth reading that book for more details on the concurrency constructs provided by the Java language.

Dispatchers in Akka

In the Akka world, the dispatcher controls and coordinates the dispatching of messages to the actors mapped onto the underlying threads. It makes sure that the resources are optimized and messages are processed as fast as possible. Akka provides multiple dispatch policies that can be customized according to the underlying hardware resources (number of cores or memory available) and the type of application workload. If we take our example of the airport and map it to the Akka world, we can see that the runways are mapped to the underlying resources, the threads. The airlines with their planes are analogous to mailboxes with messages. The ATC tower employs a dispatch policy to make sure the runways are optimally utilized and the planes spend minimum time waiting for clearance to take off or land. For Akka, the dispatchers, actors, mailboxes, and threads look like the following diagram: the dispatchers run on their threads; they dispatch the actors and messages from the attached mailbox and allocate them to the executor threads. The executor threads are configured and tuned to the underlying processor cores that are available for processing the messages.
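The executor threads mentioned above are ordinary java.util.concurrent threads. As a brief refresher on the Executor constructs listed earlier (the pool size and task bodies below are arbitrary), a sketch of the producer-consumer split looks like this:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorExample {
    public static void main(String[] args) throws Exception {
        // A fixed thread pool: pool size, queuing, and rejection policy are
        // decided by the Executor, not by the code submitting the tasks.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        List<Future<Integer>> results = new ArrayList<Future<Integer>>();
        for (int i = 0; i < 10; i++) {
            final int n = i;
            // Submitting (producing) a task is decoupled from executing (consuming) it;
            // the main thread never runs the task body itself.
            results.add(pool.submit(new Callable<Integer>() {
                @Override
                public Integer call() {
                    return n * n;
                }
            }));
        }

        for (Future<Integer> result : results) {
            System.out.println(result.get()); // blocks until the task has run
        }
        pool.shutdown();
    }
}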


Low-level C# Practices

Packt
21 Oct 2012
14 min read
Working with generics Visual Studio 2005 included .NET version 2.0 which included generics. Generics give developers the ability to design classes and methods that defer the specification of specific parts of a class or method's specification until declaration or instantiation. Generics offer features previously unavailable in .NET. One benefit to generics, that is potentially the most common, is for the implementation of collections that provide a consistent interface to collections of different data types without needing to write specific code for each data type. Constraints can be used to restrict the types that are supported by a generic method or class, or can guarantee specific interfaces. Limits of generics Constraints within generics in C# are currently limited to a parameter-less constructor, interfaces, or base classes, or whether or not the type is a struct or a class (value or reference type). This really means that code within a generic method or type can either be constructed or can make use of methods and properties. Due to these restrictions types within generic types or methods cannot have operators. Writing sequence and iterator members Visual Studio 2005 and C# 2.0 introduced the yield keyword. The yield keyword is used within an iterator member as a means to effectively implement an IEnumerable interface without needing to implement the entire IEnumerable interface. Iterator members are members that return a type of IEnumerable or IEnumerable<T>, and return individual elements in the enumerable via yield return, or deterministically terminates the enumerable via yield break. These members can be anything that can return a value, such as methods, properties, or operators. An iterator that returns without calling yield break has an implied yield break, just as a void method has an implied return. Iterators operate on a sequence but process and return each element as it is requested. This means that iterators implement what is known as deferred execution. Deferred execution is when some or all of the code, although reached in terms of where the instruction pointer is in relation to the code, hasn't entirely been executed yet. Iterators are methods that can be executed more than once and result in a different execution path for each execution. Let's look at an example: public static IEnumerable<DateTime> Iterator() { Thread.Sleep(1000); yield return DateTime.Now; Thread.Sleep(1000); yield return DateTime.Now; Thread.Sleep(1000); yield return DateTime.Now; }   The Iterator method returns IEnumerable which results in three DateTime values. The creation of those three DateTime values is actually invoked at different times. The Iterator method is actually compiled in such a way that a state machine is created under the covers to keep track of how many times the code is invoked and is implemented as a special IEnumerable<DateTime> object. The actual invocation of the code in the method is done through each call of the resulting IEnumerator. MoveNext method. The resulting IEnumerable is really implemented as a collection of delegates that are executed upon each invocation of the MoveNext method, where the state, in the simplest case, is really which of the delegates to invoke next. It's actually more complicated than that, especially when there are local variables and state that can change between invocations and is used across invocations. But the compiler takes care of all that. 
Effectively, iterators are broken up into individual bits of code between yield return statements that are executed independently, each potentially using shared local data. What are iterators good for, other than being a really cool interview question? Well, first of all, due to the deferred execution, we can technically create sequences that don't need to be stored in memory all at one time. This is often useful when we want to project one sequence into another. Couple that with a source sequence that is also implemented with deferred execution, and we end up creating and processing IEnumerables (also known as collections) whose content is never all in memory at the same time. We can process large (or even infinite) collections without a huge strain on memory. For example, if we wanted to model the set of positive integer values (an infinite set), we could write an iterator method shown as follows:

static IEnumerable<BigInteger> AllThePositiveIntegers()
{
    var number = new BigInteger(0);
    while (true)
        yield return number++;
}

We can then chain this iterator with another iterator, say something that gets all of the positive squares:

static IEnumerable<BigInteger> AllThePositiveIntegerSquares(
    IEnumerable<BigInteger> sourceIntegers)
{
    foreach (var value in sourceIntegers)
        yield return value * value;
}

Which we could use as follows:

foreach (var value in AllThePositiveIntegerSquares(AllThePositiveIntegers()))
    Console.WriteLine(value);

We've now effectively modeled two infinite collections of integers. Of course, our AllThePositiveIntegerSquares method could just as easily be used with finite sequences of values, for example:

foreach (var value in AllThePositiveIntegerSquares(
    Enumerable.Range(0, int.MaxValue)
        .Select(v => new BigInteger(v))))
    Console.WriteLine(value);

In this example we go through all of the positive Int32 values and square each one without ever holding a complete collection of the set of values in memory. As we see, this is a useful technique for composing multiple steps that operate on, and result in, sequences of values. We could have easily done this without IEnumerable<T>, or created an IEnumerator class whose MoveNext method performed calculations instead of navigating an array. However, this would be tedious and is likely to be error-prone. In the case of not using IEnumerable<T>, we'd be unable to operate on the data as a collection with constructs such as foreach.

Context: When modeling a sequence of values that is either known only at runtime, or where each element can be reliably calculated at runtime.
Practice: Consider using an iterator.

Working with lambdas

Visual Studio 2008 introduced C# 3.0. In this version of C#, lambda expressions were introduced. Lambda expressions are another form of anonymous function. Lambdas were added to the language syntax primarily as an easier anonymous function syntax for LINQ queries. Although you can't really think of LINQ without lambda expressions, lambda expressions are a powerful aspect of the C# language in their own right. They are concise expressions that use implicitly typed, optional input parameters whose types are implied through the context of their use, rather than through explicit definition as with anonymous methods. Along with C# 3.0 in Visual Studio 2008, the .NET Framework 3.5 was introduced, which included many new types to support LINQ expressions, such as Action<T> and Func<T>. These delegates are used primarily as definitions for different types of anonymous methods (including lambda expressions).
The following is an example of passing a lambda expression to a method that takes a Func<T1, T2, TResult> delegate and the two arguments to pass along to the delegate:

ExecuteFunc((f, s) => f + s, 1, 2);

The same statement with an anonymous method:

ExecuteFunc(delegate(int f, int s) { return f + s; }, 1, 2);

It's clear that the lambda syntax has a tendency to be much more concise, replacing the delegate keyword and braces with the "goes to" operator (=>). Prior to anonymous functions, member methods would need to be created to pass as delegates to methods. For example:

ExecuteFunc(SomeMethod, 1, 2);

This, presumably, would use a method named SomeMethod that looked similar to:

private static int SomeMethod(int first, int second)
{
    return first + second;
}

Lambda expressions are more powerful in their type inference abilities, as we've seen from our examples so far. We need to explicitly type the parameters within anonymous methods, which is only optional for parameters in lambda expressions. LINQ statements don't use lambda expressions exactly in their syntax; the lambda expressions are somewhat implicit. For example, if we wanted to create a new collection of integers from another collection of integers, with each value incremented by one, we could use the following LINQ statement:

var x = from i in arr select i + 1;

The i + 1 expression isn't really a lambda expression, but it gets processed as if it were first converted to method syntax using a lambda expression:

var x = arr.Select(i => i + 1);

The same with an anonymous method would be:

var x = arr.Select(delegate(int i) { return i + 1; });

What we see in the LINQ statement is much closer to a lambda expression. Using lambda expressions for all anonymous functions means that you have more consistent-looking code.

Context: When using anonymous functions.
Practice: Prefer lambda expressions over anonymous methods.

Parameters to lambda expressions can be enclosed in parentheses. For example:

var x = arr.Select((i) => i + 1);

The parentheses are only mandatory when there is more than one parameter:

var total = arr.Aggregate(0, (l, r) => l + r);

Context: When writing lambdas with a single parameter.
Practice: Prefer no parentheses around the parameter declaration.

Sometimes when using lambda expressions, the expression is being used as a delegate that takes an argument. The corresponding parameter in the lambda expression may not be used within the right-hand expression (or statements). In these cases, to reduce the clutter in the statement, it's common to use the underscore character (_) for the name of the parameter. For example:

task.ContinueWith(_ => ProcessSecondHalfOfData());

The task.ContinueWith method takes an Action<Task> delegate. This means the previous lambda expression is actually given a Task instance (the antecedent task). In our example, we don't use that task and just perform some completely independent operation. In this case, we use the underscore (_) not only to signify that we know we don't use that parameter, but also to reduce the clutter and potential name collisions a little bit.

Context: When writing a lambda expression that takes a single parameter that is not used.
Practice: Use an underscore (_) for the name of the parameter.

There are two types of lambda expressions. So far, we've seen expression lambdas. Expression lambdas are a single expression on the right-hand side that evaluates to a value or void. There is another type of lambda expression called statement lambdas.
These lambdas have one or more statements and are enclosed in braces. For example: task.ContinueWith(_ => { var value = 10; value += ProcessSecondHalfOfData(); ProcessSomeRandomValue(value); });   As we can see, statement lambdas can declare variables, as well as have multiple statements. Working with extension methods Along with lambda expressions and iterators, C# 3.0 brought us extension methods. These static methods (contained in a static class whose first argument is modified with the this modifier) were created for LINQ so IEnumerable types could be queried without needing to add copious amounts of methods to the IEnumerable interface. An extension method has the basic form of: public static class EnumerableExtensions { public static IEnumerable<int> IntegerSquares( this IEnumerable<int> source) { return source.Select(value => value * value); } }   As stated earlier, extension methods must be within a static class, be a static method, and the first parameter must be modified with the this modifier. Extension methods extend the available instance methods of a type. In our previous example, we've effectively added an instance member to IEnumerable<int> named IntegerSquares so we get a sequence of integer values that have been squared. For example, if we created an array of integer values, we will have added a Cubes method to that array that returns a sequence of the values cubed. For example: var values = new int[] {1, 2, 3}; foreach (var v in values.Cubes()) { Console.WriteLine(v); }   Having the ability to create new instance methods that operate on any public members of a specific type is a very powerful feature of the language. This, unfortunately, does not come without some caveats. Extension methods suffer inherently from a scoping problem. The only scoping that can occur with these methods is the namespaces that have been referenced for any given C# source file. For example, we could have two static classes that have two extension methods named Cubes. If those static classes are in the same namespace, we'd never be able to use those extensions methods as extension methods because the compiler would never be able to resolve which one to use. For example: public static class IntegerEnumerableExtensions { public static IEnumerable<int> Squares( this IEnumerable<int> source) { return source.Select(value => value * value); } public static IEnumerable<int> Cubes( this IEnumerable<int> source) { return source.Select(value => value * value * value); } } public static class EnumerableExtensions { public static IEnumerable<int> Cubes( this IEnumerable<int> source) { return source.Select(value => value * value * value); } }   If we tried to use Cubes as an extension method, we'd get a compile error, for example: var values = new int[] {1, 2, 3}; foreach (var v in values.Cubes()) { Console.WriteLine(v); }   This would result in error CS0121: The call is ambiguous between the following methods or properties. 
To resolve the problem, we'd need to move one (or both) of the classes to another namespace, for example:

namespace Integers
{
    public static class IntegerEnumerableExtensions
    {
        public static IEnumerable<int> Squares(
            this IEnumerable<int> source)
        {
            return source.Select(value => value * value);
        }

        public static IEnumerable<int> Cubes(
            this IEnumerable<int> source)
        {
            return source.Select(value => value * value * value);
        }
    }
}

namespace Numerical
{
    public static class EnumerableExtensions
    {
        public static IEnumerable<int> Cubes(
            this IEnumerable<int> source)
        {
            return source.Select(value => value * value * value);
        }
    }
}

Then, we can scope to a particular namespace to choose which Cubes to use:

namespace ConsoleApplication
{
    using Numerical;

    internal class Program
    {
        private static void Main(string[] args)
        {
            var values = new int[] {1, 2, 3};
            foreach (var v in values.Cubes())
            {
                Console.WriteLine(v);
            }
        }
    }
}

Context: When considering extension methods, due to potential scoping problems.
Practice: Use extension methods sparingly.

Context: When designing extension methods.
Practice: Keep all extension methods that operate on a specific type in their own class.

Context: When designing classes to contain methods to extend a specific type, TypeName.
Practice: Consider naming the static class TypeNameExtensions.

Context: When designing classes to contain methods to extend a specific type, in order to scope the extension methods.
Practice: Consider placing the class in its own namespace.

Generally, there isn't much need to use extension methods on types that you own. You can simply add an instance method to contain the logic that you want to have. Where extension methods really shine is in effectively creating instance methods on interfaces. Typically, when code is necessary for shared implementations of interfaces, an abstract base class is created so each implementation of the interface can derive from it to implement these shared methods. This is a bit cumbersome in that it uses the one-and-only inheritance slot in C#, so an interface implementation would not be able to derive from or extend any other class. Additionally, there's no guarantee that a given interface implementation will derive from the abstract base, which runs the risk of the implementation not being usable in the way it was designed. Extension methods get around this problem by being entirely independent of the implementation of an interface while still being able to extend it. One of the most notable examples of this might be the System.Linq.Enumerable class introduced in .NET 3.5. The static Enumerable class almost entirely consists of extension methods that extend IEnumerable. It is easy to develop the same sort of thing for our own interfaces. For example, say we have an ICoordinate interface to model a three-dimensional position in relation to the Earth's surface:

public interface ICoordinate
{
    /// <summary>North/south degrees from equator.</summary>
    double Latitude { get; set; }
    /// <summary>East/west degrees from meridian.</summary>
    double Longitude { get; set; }
    /// <summary>Distance from sea level in meters.</summary>
    double Altitude { get; set; }
}

We could then create a static class to contain extension methods that provide shared functionality to any implementation of ICoordinate.

Context: When designing interfaces that require shared code.
Practice: Consider providing extension methods instead of abstract base implementations.


OData on Mobile Devices

Packt
02 Aug 2012
8 min read
With the continuous evolution of mobile operating systems, smart mobile devices (such as smartphones or tablets) play increasingly important roles in everyone's daily work and life. The iOS (from Apple Inc., for iPhone, iPad, and iPod Touch devices), Android (from Google) and Windows Phone 7 (from Microsoft) operating systems have shown us the great power and potential of modern mobile systems. In the early days of the Internet, web access was mostly limited to fixed-line devices. However, with the rapid development of wireless network technology (such as 3G), Internet access has become a common feature for mobile or portable devices. Modern mobile OSes, such as iOS, Android, and Windows Phone have all provided rich APIs for network access (especially Internet-based web access). For example, it is quite convenient for mobile developers to create a native iPhone program that uses a network API to access remote RSS feeds from the Internet and present the retrieved data items on the phone screen. And to make Internet-based data access and communication more convenient and standardized, we often leverage some existing protocols, such as XML or JSON, to help us. Thus, it is also a good idea if we can incorporate OData services in mobile application development so as to concentrate our effort on the main application logic instead of the details about underlying data exchange and manipulation. In this article, we will discuss several cases of building OData client applications for various kinds of mobile device platforms. The first four recipes will focus on how to deal with OData in applications running on Microsoft Windows Phone 7. And they will be followed by two recipes that discuss consuming an OData service in mobile applications running on the iOS and Android platforms. Although this book is .NET developer-oriented, since iOS and Android are the most popular and dominating mobile OSes in the market, I think the last two recipes here would still be helpful (especially when the OData service is built upon WCF Data Service on the server side). Accessing OData service with OData WP7 client library What is the best way to consume an OData service in a Windows Phone 7 application? The answer is, by using the OData client library for Windows Phone 7 (OData WP7 client library). Just like the WCF Data Service client library for standard .NET Framework based applications, the OData WP7 client library allows developers to communicate with OData services via strong-typed proxy and entity classes in Windows Phone 7 applications. Also, the latest Windows Phone SDK 7.1 has included the OData WP7 client library and the associated developer tools in it. In this recipe, we will demonstrate how to use the OData WP7 client library in a standard Windows Phone 7 application. Getting ready The sample WP7 application we will build here provides a simple UI for users to view and edit the Categories data by using the Northwind OData service. The application consists of two phone screens, shown in the following screenshot: Make sure you have installed Windows Phone SDK 7.1 (which contains the OData WP7 client library and tools) on the development machine. You can get the SDK from the following website: http://create.msdn.com/en-us/home/getting_started The source code for this recipe can be found in the ch05ODataWP7ClientLibrarySln directory. How to do it... Create a new ASP.NET web application that contains the Northwind OData service. 
Add a new Windows Phone Application project in the same solution (see the following screenshot). Select Windows Phone OS 7.1 as the Target Windows Phone OS Version in the New Windows Phone Application dialog box (see the following screenshot). Click on the OK button to finish the WP7 project creation. The following screenshot shows the default WP7 project structure created by Visual Studio.
Create a new Windows Phone Portrait Page (see the following screenshot) and name it EditCategory.xaml.
Create the OData client proxy (against the Northwind OData service) by using the Visual Studio Add Service Reference wizard.
Add the XAML content for the MainPage.xaml page (see the following XAML fragment).

<Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
    <ListBox x:Name="lstCategories" ItemsSource="{Binding}">
        <ListBox.ItemTemplate>
            <DataTemplate>
                <Grid>
                    <Grid.ColumnDefinitions>
                        <ColumnDefinition Width="60" />
                        <ColumnDefinition Width="260" />
                        <ColumnDefinition Width="140" />
                    </Grid.ColumnDefinitions>
                    <TextBlock Grid.Column="0" Text="{Binding Path=CategoryID}" FontSize="36" Margin="5"/>
                    <TextBlock Grid.Column="1" Text="{Binding Path=CategoryName}" FontSize="36" Margin="5" TextWrapping="Wrap"/>
                    <HyperlinkButton Grid.Column="2" Content="Edit" HorizontalAlignment="Right" NavigateUri="{Binding Path=CategoryID, StringFormat='/EditCategory.xaml?ID={0}'}" FontSize="36" Margin="5"/>
                </Grid>
            </DataTemplate>
        </ListBox.ItemTemplate>
    </ListBox>
</Grid>

Add the code for loading the Category list in the code-behind file of the MainPage.xaml page (see the following code snippet).

public partial class MainPage : PhoneApplicationPage
{
    ODataSvc.NorthwindEntities _ctx = null;
    DataServiceCollection _categories = null;
    ......

    private void PhoneApplicationPage_Loaded(object sender, RoutedEventArgs e)
    {
        Uri svcUri = new Uri("http://localhost:9188/NorthwindOData.svc");
        _ctx = new ODataSvc.NorthwindEntities(svcUri);
        _categories = new DataServiceCollection(_ctx);
        _categories.LoadCompleted += (o, args) =>
        {
            if (_categories.Continuation != null)
                _categories.LoadNextPartialSetAsync();
            else
            {
                this.Dispatcher.BeginInvoke(() =>
                {
                    ContentPanel.DataContext = _categories;
                    ContentPanel.UpdateLayout();
                });
            }
        };
        var query = from c in _ctx.Categories select c;
        _categories.LoadAsync(query);
    }
}

Add the XAML content for the EditCategory.xaml page (see the following XAML fragment).

<Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
    <StackPanel>
        <TextBlock Text="{Binding Path=CategoryID, StringFormat='Fields of Categories({0})'}" FontSize="40" Margin="5" />
        <Border>
            <StackPanel>
                <TextBlock Text="Category Name:" FontSize="24" Margin="10" />
                <TextBox x:Name="txtCategoryName" Text="{Binding Path=CategoryName, Mode=TwoWay}" />
                <TextBlock Text="Description:" FontSize="24" Margin="10" />
                <TextBox x:Name="txtDescription" Text="{Binding Path=Description, Mode=TwoWay}" />
            </StackPanel>
        </Border>
        <StackPanel Orientation="Horizontal" HorizontalAlignment="Center">
            <Button x:Name="btnUpdate" Content="Update" HorizontalAlignment="Center" Click="btnUpdate_Click" />
            <Button x:Name="btnCancel" Content="Cancel" HorizontalAlignment="Center" Click="btnCancel_Click" />
        </StackPanel>
    </StackPanel>
</Grid>

Add the code for editing the selected Category item in the code-behind file of the EditCategory.xaml page. In the PhoneApplicationPage_Loaded event, we will load the properties of the selected Category item and display them on the screen (see the following code snippet).
private void PhoneApplicationPage_Loaded(object sender, RoutedEventArgs e)
{
    EnableControls(false);

    Uri svcUri = new Uri("http://localhost:9188/NorthwindOData.svc");
    _ctx = new ODataSvc.NorthwindEntities(svcUri);

    var id = int.Parse(NavigationContext.QueryString["ID"]);
    var query = _ctx.Categories.Where(c => c.CategoryID == id);

    _categories = new DataServiceCollection<ODataSvc.Category>(_ctx);
    _categories.LoadCompleted += (o, args) =>
    {
        if (_categories.Count <= 0)
        {
            MessageBox.Show("Failed to retrieve Category item.");
            NavigationService.GoBack();
        }
        else
        {
            EnableControls(true);
            ContentPanel.DataContext = _categories[0];
            ContentPanel.UpdateLayout();
        }
    };

    _categories.LoadAsync(query);
}

The code for updating changes (against the Category item) is put in the Click event of the Update button (see the following code snippet).

private void btnUpdate_Click(object sender, RoutedEventArgs e)
{
    EnableControls(false);

    _ctx.UpdateObject(_categories[0]);
    _ctx.BeginSaveChanges(
        (ar) =>
        {
            this.Dispatcher.BeginInvoke(() =>
            {
                try
                {
                    var response = _ctx.EndSaveChanges(ar);
                    NavigationService.Navigate(new Uri("/MainPage.xaml", UriKind.Relative));
                }
                catch (Exception ex)
                {
                    MessageBox.Show("Failed to save changes.");
                    EnableControls(true);
                }
            });
        }, null
    );
}

Select the WP7 project and launch it in Windows Phone Emulator (see the following screenshot). Depending on the performance of the development machine, it might take a while to start the emulator. Running a WP7 application in Windows Phone Emulator is very helpful, especially when the phone application needs to access web services (such as WCF Data Services) hosted on the local machine (via the Visual Studio test web server). How it works... Since the OData WP7 client library (and tools) is installed together with Windows Phone SDK 7.1, we can directly use the Visual Studio Add Service Reference wizard to generate the OData client proxy in Windows Phone applications. The generated OData proxy is the same as the one we use in standard .NET applications. Similarly, all network access code (such as the OData service consumption code in this recipe) has to follow the asynchronous programming pattern in Windows Phone applications. There's more... In this recipe, we used the Windows Phone Emulator for testing. If you want to deploy and test your Windows Phone application on a real device, you need to obtain a Windows Phone developer account so as to unlock your Windows Phone device. Refer to the walkthrough: App Hub - windows phone developer registration walkthrough, available at http://go.microsoft.com/fwlink/?LinkID=202697
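Returning to the code for a moment: beyond editing, the same generated proxy and asynchronous pattern can be used to create new entities. The following is only an illustrative sketch, not part of the recipe; it assumes the proxy exposes an AddToCategories method for the Categories entity set (which the Add Service Reference wizard normally generates) and reuses the _ctx context created in the page's Loaded handler.

private void btnAdd_Click(object sender, RoutedEventArgs e)
{
    // Assumes _ctx is the ODataSvc.NorthwindEntities context created earlier
    var newCategory = new ODataSvc.Category
    {
        CategoryName = "Beverages (new)",
        Description = "Added from the WP7 client"
    };

    // Queue the insert locally, then push it to the service asynchronously
    _ctx.AddToCategories(newCategory);
    _ctx.BeginSaveChanges(ar =>
    {
        // Marshal back to the UI thread before touching any controls
        Dispatcher.BeginInvoke(() =>
        {
            try
            {
                _ctx.EndSaveChanges(ar);
                MessageBox.Show("Category added successfully.");
            }
            catch (Exception)
            {
                MessageBox.Show("Failed to add the category.");
            }
        });
    }, null);
}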
Packt
22 Mar 2012
6 min read

Combining Silverlight and Windows Azure projects

Standard Silverlight applications require that they be hosted on HTML pages, so that they can be loaded in a browser. Developers who work with the .Net framework will usually host this page within an ASP.Net website. The easiest way to host a Silverlight application on Azure is to create a single web role that contains an ASP.Net application to host the Silverlight application. Hosting the Silverlight application in this way enables you, as a developer, to take advantage of the full .Net framework to support your Silverlight application. Supporting functionality, such as hosted WCF services, RIA Services, Entity Framework, and so on, can then be provided. In the upcoming chapters, we will explore ways by which RIA Services, OData, Entity Framework, and a few other technologies can be used together. For the rest of this chapter, we will focus on the basics of hosting a Silverlight application within Azure and integrating a hosted WCF service. Creating a Silverlight or Azure solution Your system should already be fully configured with all Silverlight and Azure tools. In this section, we are going to create a simple Silverlight application that is hosted inside an Azure web role. This will be the basic template that is used throughout the book as we explore different ways in which we can integrate the technologies together: Start Visual Studio as an administrator. You can do this by opening the Start Menu and finding Visual Studio, then right-clicking on it, and selecting Run as Administrator. This is required for the Azure compute emulator to run successfully. Create a new Windows Azure Cloud Service. The solution name used in the following example screenshot is Chapter3Exercise1: Add a single ASP.Net Web Role as shown in the following screenshot. For this exercise, the default name of WebRole1 will be used. The name of the role can be changed by clicking on the pencil icon next to the WebRole1 name: Visual Studio should now be loaded with a single Azure project and an ASP.Net project. In the following screenshot, you can see that Visual Studio is opened with a solution named Chapter3Exercise1. The solution contains a Windows Azure Cloud project, also called Chapter3Exercise1. Finally, the ASP.Net project is named WebRole1: Right-click on the ASP.Net project named WebRole1 and select Properties. In the WebRole1 properties screen, click on the Silverlight Applications tab. Click on Add to add a new Silverlight project into the solution. The Add button has been highlighted in the following screenshot: For this exercise, rename the project to HelloWorldSilverlightProject. Click on Add to create the Silverlight project. The rest of the options can be left at their default settings, as shown in the following screenshot. Visual Studio will now create the Silverlight project and add it to the solution. The resulting solution should now have three projects, as shown in the following screenshot. These include the original Azure project, Chapter3Exercise1; the ASP.Net web role, WebRole1; and the third, new project, HelloWorldSilverlightProject: Open MainPage.xaml in design view, if not already open. Change the grid to a StackPanel. Inside the StackPanel, add a button named button1 with a height of 40 and content that displays Click me!.
Inside the StackPanel, underneath button1, add a text block named textBlock1 with a height of 20. The final XAML should look similar to this code snippet:

<UserControl>
  <StackPanel x:Name="LayoutRoot" Background="White">
    <Button x:Name="button1" Height="40" Content="Click me!" />
    <TextBlock x:Name="textBlock1" Height="20" />
  </StackPanel>
</UserControl>

Double-click on button1 in the designer to have Visual Studio automatically create a click event. The final XAML in the designer should look similar to the following screenshot: Open the MainPage.xaml.cs code-behind file and find the button1_Click method. Add code that will update textBlock1 to display Hello World and the current time, as follows:

private void button1_Click(object sender, RoutedEventArgs e)
{
    textBlock1.Text = "Hello World at " + DateTime.Now.ToLongTimeString();
}

Build the project to ensure that everything compiles correctly. Now that the solution has been built, it is ready to be run and debugged within the Windows Azure compute emulator. The next section will explore what happens while running an Azure application on the compute emulator. Running an Azure application on the Azure compute emulator With the solution built, it is ready to run on the Azure simulation: the compute emulator. The compute emulator is the local simulation of the Windows Azure compute environment that Microsoft runs on the Azure servers it hosts. When you start debugging by pressing F5 (or by selecting Debug | Start Debugging from the menu), Visual Studio will automatically package the Azure project, then start the Azure compute emulator simulation. The package will be copied to a local folder used by the compute emulator. The compute emulator will then start a Windows process to host or execute the roles, starting one process per instance requested for each role. Once the compute emulator has been successfully initialized, Visual Studio will then launch the browser and attach the debugger to the correct places. This is similar to the way Visual Studio handles debugging of an ASP.Net application with the ASP.Net Development Server. The following steps will take you through the process of running and debugging applications on top of the compute emulator: In Solution Explorer, inside the HelloWorldSilverlightProject, right-click on HelloWorldSilverlightProjectTestPage.aspx, and select Set as startup page. Ensure that the Azure project (Chapter3Exercise1) is still set as the start-up project. In Visual Studio, press F5 to start debugging (or from the menu select Debug | Start Debugging). Visual Studio will compile the project and, if successful, begin to launch the Azure compute emulator as shown in the following screenshot: Once the compute emulator has been started and the Azure package deployed to it, Visual Studio will launch Internet Explorer. Internet Explorer will display the page set as the start-up page (which was set in an earlier step to HelloWorldSilverlightProjectTestPage.aspx). Once the Silverlight application has been loaded, click on the Click me! button. The TextBlock should be updated with the current time, as shown in the following screenshot: Upon completion, you should now have successfully deployed a Silverlight application on top of the Windows Azure compute emulator. You can now use this base project to build more advanced features and integration with other services.
Consuming an Azure-hosted WCF service within a Silverlight application A standalone Silverlight application will not be able to do much by itself. Most applications will require that they consume data from a data source, such as to get a list of products or customer orders. A common way to send data between .Net applications is through WCF services.
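To give a feel for where that leads (the service and method names below are placeholders invented for illustration, not part of this chapter's exercise), a Silverlight-compatible WCF service hosted in WebRole1 could be as simple as the following; the Silverlight client would then consume it through an asynchronously generated proxy over a binding it supports, such as basicHttpBinding.

using System.ServiceModel;
using System.ServiceModel.Activation;

[ServiceContract]
public interface IProductService
{
    [OperationContract]
    string[] GetProductNames();
}

[AspNetCompatibilityRequirements(
    RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class ProductService : IProductService
{
    public string[] GetProductNames()
    {
        // Hard-coded data keeps the sketch self-contained;
        // a real service would query a data store instead.
        return new[] { "Widget", "Gadget", "Gizmo" };
    }
}

Because Silverlight only allows asynchronous networking, the client side would call the generated proxy's GetProductNamesAsync method and handle the corresponding GetProductNamesCompleted event rather than making a blocking call.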

Packt
21 Mar 2012
9 min read

Introduction to Logging in Tomcat 7

(For more resources on Apache, see here.) JULI Previous versions of Tomcat (till 5.x) use Apache common logging services for generating logs. A major disadvantage with this logging mechanism is that it can handle only single JVM configuration and makes it difficult to configure separate logging for each class loader for independent application. In order to resolve this issue, Tomcat developers have introduced a separate API for Tomcat 6 version, that comes with the capability of capturing each class loader activity in the Tomcat logs. It is based on java.util.logging framework. By default, Tomcat 7 uses its own Java logging API to implement logging services. This is also called as JULI. This API can be found in TOMCAT_HOME/bin of the Tomcat 7 directory structures (tomcat-juli.jar). The following screenshot shows the directory structure of the bin directory where tomcat-juli.jar is placed. JULI also provides the feature for custom logging for each web application, and it also supports private per-application logging configurations. With the enhanced feature of separate class loader logging, it also helps in detecting memory issues while unloading the classes at runtime. For more information on JULI and the class loading issue, please refer to http://tomcat.apache.org/tomcat-7.0-doc/logging.html and http://tomcat.apache.org/tomcat-7.0-doc/class-loader-howto.html respectively. Loggers, appenders, and layouts There are some important components of logging which we use at the time of implementing the logging mechanism for applications. Each term has its individual importance in tracking the events of the application. Let's discuss each term individually to find out their usage: Loggers:It can be defined as the logical name for the log file. This logical name is written in the application code. We can configure an independent logger for each application. Appenders: The process of generation of logs are handled by appenders. There are many types of appenders, such as FileAppender, ConsoleAppender, SocketAppender, and so on, which are available in log4j. The following are some examples of appenders for log4j: log4j.appender.CATALINA=org.apache.log4j.DailyRollingFileAppender log4j.appender.CATALINA.File=${catalina.base}/logs/catalina.out log4j.appender.CATALINA.Append=true log4j.appender.CATALINA.Encoding=UTF-8 The previous four lines of appenders define the DailyRollingFileAppender in log4j, where the filename is catalina.out . These logs will have UTF-8 encoding enabled. If log4j.appender.CATALINA.append=false, then logs will not get updated in the log files. # Roll-over the log once per day log4j.appender.CATALINA.DatePattern='.'dd-MM-yyyy'.log' log4j.appender.CATALINA.layout = org.apache.log4j.PatternLayout log4j.appender.CATALINA.layout.ConversionPattern = %d [%t] %-5p %c- %m%n The previous three lines of code show the roll-over of log once per day. Layout: It is defined as the format of logs displayed in the log file. The appender uses layout to format the log files (also called as patterns). The highlighted code shows the pattern for access logs: <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/> Loggers, appenders, and layouts together help the developer to capture the log message for the application event. Types of logging in Tomcat 7 We can enable logging in Tomcat 7 in different ways based on the requirement. 
There are a total of five types of logging that we can configure in Tomcat, such as application, server, console, and so on. The following figure shows the different types of logging for Tomcat 7. These methods are used in combination with each other based on environment needs. For example, if you have issues where Tomcat services are not displayed, then console logs are very helpful to identify the issue, as we can verify the real-time boot sequence. Let's discuss each logging method briefly. Application log These logs are used to capture the application event while running the application transaction. These logs are very useful in order to identify the application level issues. For example, suppose your application performance is slow on a particular transition, then the details of that transition can only be traced in application log. The biggest advantage of application logs is we can configure separate log levels and log files for each application, making it very easy for the administrators to troubleshoot the application. Log4j is used in 90 percent of the cases for application log generation. Server log Server logs are identical to console logs. The only advantage of server logs is that they can be retrieved anytime but console logs are not available after we log out from the console. Console log This log gives you the complete information of Tomcat 7 startup and loader sequence. The log file is named as catalina.out and is found in TOMCAT_HOME/logs. This log file is very useful in checking the application deployment and server startup testing for any environment. This log is configured in the Tomcat file catalina.sh, which can be found in TOMCAT_HOME/bin. The previous screenshot shows the definition for Tomcat logging. By default, the console logs are configured as INFO mode. There are different levels of logging in Tomcat such as WARNING, INFORMATION, CONFIG, and FINE. The previous screenshot shows the Tomcat log file location, after the start of Tomcat services. The previous screenshot shows the output of the catalina.out file, where Tomcat services are started in 1903 ms. Access log Access logs are customized logs, which give information about the following: Who has accessed the application What components of the application are accessed Source IP and so on These logs play a vital role in traffic analysis of many applications to analyze the bandwidth requirement and also helps in troubleshooting the application under heavy load. These logs are configured in server.xml in TOMCAT_HOME/conf. The following screenshot shows the definition of access logs. You can customize them according to the environment and your auditing requirement. Let's discuss the pattern format of the access logs and understand how we can customize the logging format: <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/> Class Name: This parameter defines the class name used for generation of logs. By default, Apache Tomcat 7 uses the org.apache.catalina.valves.AccessLogValve class for access logs. Directory: This parameter defines the directory location for the log file. All the log files are generated in the log directory—TOMCAT_HOME/logs—but we can customize the log location based on our environment setup and then update the directory path in the definition of access logs. 
Prefix: This parameter defines the prefix of the access log filename, that is, by default, access log files are generated by the name localhost_access_log.yy-mm-dd.txt. Suffix: This parameter defines the file extension of the log file. Currently it is in .txt format. Pattern: This parameter defines the format of the log file. The pattern is a combination of values defined by the administrator, for example, %h = remote host address. The following screenshot shows the default log format for Tomcat 7. Access logs show the remote host address, date/time of request, method used for response, URI mapping, and HTTP status code. In case you have installed the web traffic analysis tool for application, then you have to change the access logs to a different format. Host manager These logs define the activity performed using Tomcat Manager, such as various tasks performed, status of application, deployment of application, and lifecycle of Tomcat. These configurations are done on the logging.properties, which can be found in TOMCAT_HOME/conf. The previous screenshot shows the definition of host, manager, and host-manager details. If you see the definitions, it defines the log location, log level, and prefix of the filename. In logging.properties, we are defining file handlers and appenders using JULI. The log file for manager looks similar to the following: I28 Jun, 2011 3:36:23 AM org.apache.catalina.core.ApplicationContext log INFO: HTMLManager: list: Listing contexts for virtual host 'localhost' 28 Jun, 2011 3:37:13 AM org.apache.catalina.core.ApplicationContext log INFO: HTMLManager: list: Listing contexts for virtual host 'localhost' 28 Jun, 2011 3:37:42 AM org.apache.catalina.core.ApplicationContext log INFO: HTMLManager: undeploy: Undeploying web application at '/sample' 28 Jun, 2011 3:37:43 AM org.apache.catalina.core.ApplicationContext log INFO: HTMLManager: list: Listing contexts for virtual host 'localhost' 28 Jun, 2011 3:42:59 AM org.apache.catalina.core.ApplicationContext log INFO: HTMLManager: list: Listing contexts for virtual host 'localhost' 28 Jun, 2011 3:43:01 AM org.apache.catalina.core.ApplicationContext log INFO: HTMLManager: list: Listing contexts for virtual host 'localhost' 28 Jun, 2011 3:53:44 AM org.apache.catalina.core.ApplicationContext log INFO: HTMLManager: list: Listing contexts for virtual host 'localhost' Types of log levels in Tomcat 7 There are seven levels defined for Tomcat logging services (JULI). They can be set based on the application requirement. The following figure shows the sequence of log levels for JULI: Every log level in JULI had its own functionality. The following table shows the functionality of each log level in JULI: Log level Description SEVERE(highest) Captures exception and Error WARNING Warning messages INFO Informational message, related to server activity CONFIG Configuration message FINE Detailed activity of server transaction (similar to debug) FINER More detailed logs than FINE FINEST(least) Entire flow of events (similar to trace) For example, let's take an appender from logging.properties and find out the log level used; the first log appender for localhost is using FINE as the log level, as shown in the following code snippet: localhost.org.apache.juli.FileHandler.level = FINE localhost.org.apache.juli.FileHandler.directory = ${catalina.base}/logs localhost.org.apache.juli.FileHandler.prefix = localhost. The following code shows the default file handler configuration for logging in Tomcat 7 using JULI. 
The properties are mentioned and the log levels are highlighted:

############################################################
# Facility specific properties.
# Provides extra control for each logger.
############################################################
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].handlers = 2localhost.org.apache.juli.FileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].handlers = 3manager.org.apache.juli.FileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].handlers = 4host-manager.org.apache.juli.FileHandler

Packt
09 Mar 2012
8 min read

Silverlight 5 LOB Development : Validation, Advanced Topics, and MVVM

(For more resources on Silverlight, see here.) Validation One of the most important parts of a Silverlight application is the correct implementation of validations in our business logic. These can be simple details, such as the fact that the client must provide their name and e-mail address to sign up, or that before selling a book, it must be in stock. In RIA Services, validations can be defined on two levels: in entities, via DataAnnotations, and in our Domain Service, as server or asynchronous validations via Invoke. DataAnnotations The namespace System.ComponentModel.DataAnnotations implements a series of attributes that allow us to add validation rules to the properties of our entities. The following table shows the most notable ones:

DataTypeAttribute: Specifies a particular type of data, such as a date or an e-mail
EnumDataTypeAttribute: Ensures that the value exists in an enumeration
RangeAttribute: Designates minimum and maximum constraints
RegularExpressionAttribute: Uses a regular expression to determine valid values
RequiredAttribute: Specifies that a value must be provided
StringLengthAttribute: Designates a maximum and minimum number of characters
CustomValidationAttribute: Uses a custom method for validation

The following code shows how to mark a field as "required":

[Required()]
public string Name
{
    get { return this._name; }
    set { (...) }
}

In the UI layer, the control linked to this field (a TextBox, in this case) automatically detects and displays the error, and its presentation can be customized. These validations are based on throwing exceptions: they are captured by the user controls bound to the data elements and, if there are errors, these are shown in a friendly way. When executing the application in debug mode with Visual Studio, you may find that the IDE breaks on these exceptions. To avoid this, refer to the following link, where the IDE configuration is explained: http://bit.ly/riNdmp. Where can validations be added? The answer is in the metadata definition of the entities in our Domain Service, within the server project. Going back to our example, the server project is SimpleDB.Web and the Domain Service metadata file is MyDomainService.metadata.cs. These validations are automatically copied to the entities definition file and the context found on the client side. In the Simple.DB.Web.g.cs file, found when the hidden Generated Code folder is opened, you will be surprised to find that some validations are already implemented, for example, the required field, the field length, and so on. These are inferred from the Entity Framework model. Simple validations For validations that are already generated, let's see a simple example of how to implement the "required" and "maximum length" rules:

[Required()]
[StringLength(60)]
public string Name
{
    get { return this._name; }
    set { (...) }
}

Now, we will implement the syntactic validation for credit cards (format dddd-dddd-dddd-dddd). To do so, use the regular expression validator and add it to the server file MyDomainService.metadata.cs, as shown in the following code:

[RegularExpression(@"\d{4}-\d{4}-\d{4}-\d{4}", ErrorMessage = "Credit card not valid format should be: 9999-9999-9999-9999")]
public string CreditCard { get; set; }

To know how regular expressions work, refer to the following link: http://bit.ly/115Td0, and refer to this free tool to try them in a quick way: http://bit.ly/1ZcGFC.
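To show how some of the other attributes from the preceding table read in practice, here is a purely illustrative fragment for the same metadata class; the Discount property and the exact attribute values are assumptions made for the example, not part of the generated model:

// Illustrative only -- these annotations are not part of the chapter's model
[Required()]
[StringLength(120)]
[DataType(DataType.EmailAddress)]
public string PayPalAccount { get; set; }

[Range(0, 100, ErrorMessage = "Discount must be between 0 and 100")]
public double Discount { get; set; }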
Custom and shared validations Basic validations are acceptable for 70 percent of validation scenarios, but there are still 30 percent of validations which do not fit these patterns. What do you do then? RIA Services offers CustomValidationAttribute. It permits the creation of a method that performs a validation defined by the developer. The benefits are: it is your code, so any logic necessary for the validation can be implemented; it can be written so that the validation is reusable in other modules (for instance, the validation of an IBAN [International Bank Account Number]); and you can choose whether a validation is executed only on the server side (for example, a validation requiring database reads) or is also copied to the client. To validate the checksum of the CreditCard field, follow these steps: Add to the SimpleDB.Web project a class named ClientCustomValidation. Within this class, define a static method returning ValidationResult, which accepts the value of the field to evaluate as a parameter and returns the validation result.

public class ClientCustomValidation
{
    public static ValidationResult ValidMasterCard(string strcardNumber)
}

Implement the validation method (summarized in the following snippet; the checksum calculation is elided and only the part that builds and returns the result is shown).

public static ValidationResult ValidMasterCard(string strcardNumber)
{
    // Let us remove the "-" separator
    string cardNumber = strcardNumber.Replace("-", "");

    // We need to keep track of the entity fields that are affected, so the UI
    // controls that have this property bound can display the error message when it applies
    List<string> AffectedMembers = new List<string>();
    AffectedMembers.Add("CreditCard");

    (...)

    // Validation succeeded returns success
    // Validation failed provides error message and indicates
    // the entity fields that are affected
    return (sum % 10 == 0)
        ? ValidationResult.Success
        : new ValidationResult("Failed to validate", AffectedMembers);
}

To make validation simpler, only the MasterCard has been covered. To know more and cover more card types, refer to the page http://bit.ly/aYx39u. In order to find examples of valid numbers, go to http://bit.ly/gZpBj. Go to the file MyDomainService.metadata.cs and, in the Client entity, add the following to the CreditCard field:

[CustomValidation(typeof(ClientCustomValidation), "ValidMasterCard")]
public string CreditCard { get; set; }

If it is executed now and you try to enter an invalid value in the CreditCard field, it won't be marked as an error. What happens? The validation is only executed on the server side. If it is intended to be executed on the client side as well, rename the file called ClientCustomValidation.cs to ClientCustomValidation.shared.cs. In this way, the validation will be copied to the Generated_Code folder and the validation will be launched. In the code generated on the client side, the validation is associated with the entity.

/// <summary>
/// Gets or sets the 'CreditCard' value.
/// </summary>
[CustomValidation(typeof(ClientCustomValidation), "ValidMasterCard")]
[DataMember()]
[RegularExpression("\\d{4}-\\d{4}-\\d{4}-\\d{4}", ErrorMessage = "Credit card not valid format should be: 9999-9999-9999-9999")]
[StringLength(30)]
public string CreditCard
{

This is quite interesting. However, what happens if more than one field has to be checked in the validation? In this case, one more parameter is added to the validation method. It is ValidationContext, and through this parameter, the instance of the entity we are dealing with can be accessed.
public static ValidationResult ValidMasterCard(string strcardNumber, ValidationContext validationContext)
{
    client currentClient = (client)validationContext.ObjectInstance;

Entity-level validations Field-level validation is quite interesting, but sometimes rules have to be applied at a higher level, that is, at the entity level. RIA Services implements some machinery to perform this kind of validation: a custom validation simply has to be defined in the appropriate entity class declaration. Following the sample we are working on, let us implement a validation which checks that at least one of the two payment methods (PayPal or credit card) is provided. To do so, go to ClientCustomValidation.shared.cs (in the SimpleDB.Web project) and add the following static function to the ClientCustomValidation class:

public static ValidationResult ValidatePaymentInformed(client CurrentClient)
{
    bool atLeastOnePaymentInformed =
        ((CurrentClient.PayPalAccount != null && CurrentClient.PayPalAccount != string.Empty) ||
         (CurrentClient.CreditCard != null && CurrentClient.CreditCard != string.Empty));

    return (atLeastOnePaymentInformed)
        ? ValidationResult.Success
        : new ValidationResult("One payment method must be informed at least");
}

Next, open the MyDomainService.metadata.cs file and add, at the class level, the following annotation to enable that validation:

[CustomValidation(typeof(ClientCustomValidation), "ValidatePaymentInformed")]
[MetadataTypeAttribute(typeof(client.clientMetadata))]
public partial class client

When executing and trying the application, you will realize that the validation is not performed. This is due to the fact that, unlike validations at the field level, entity validations are only launched client-side when calling EndEdit or TryValidateObject. The logic is to first check that the fields are properly filled in and then run the appropriate validations. In this case, a button will be added that runs the validation and forces it to the entity level. To know more about validation on entities, go to http://bit.ly/qTr9hz. Define the command launching the validation on the current entity in the ViewModel, as in the following code:

private RelayCommand _validateCommand;
public RelayCommand ValidateCommand
{
    get
    {
        if (_validateCommand == null)
        {
            _validateCommand = new RelayCommand(() =>
            {
                // Let us clear the current validation list
                CurrentSelectedClient.ValidationErrors.Clear();

                var validationResults = new List<ValidationResult>();
                ValidationContext vcontext = new ValidationContext(CurrentSelectedClient, null, null);

                // Let us run the validation
                Validator.TryValidateObject(CurrentSelectedClient, vcontext, validationResults);

                // Add the errors to the entity's validation error list
                foreach (var res in validationResults)
                {
                    CurrentSelectedClient.ValidationErrors.Add(res);
                }
            },
            (() => (CurrentSelectedClient != null)));
        }
        return _validateCommand;
    }
}

Define the button in the window and bind it to the command:

<Button Content="Validate" Command="{Binding Path=ValidateCommand}"/>

When the application runs, nothing seems to happen when we click the button, even if the fields are blank. Nonetheless, if we add a breakpoint, we can see that the validation is executed. What happens is that there is no element showing the result of that validation. In this case, the choice will be to add a header whose DataContext points to the current entity. If entity validations fail, they will be shown in this element. For more information on how to show errors, check the link http://bit.ly/ad0JyD. The TextBox added will show the entity validation errors.
The final result will look as shown in the following screenshot:

Packt
27 Feb 2012
4 min read

Introduction to Enterprise Business Messages

(For more resources on Oracle, see here.) Before we jump into the AIA Enterprise Business Message (EBM) standards, let us understand a little more about Business Messages. In general, Business Message is information shared between people, organizations, systems, or processes. Any information communicated to any object in a standard understandable format are called messages. In the application integration world, there are various information-sharing approaches that are followed. Therefore, we need not go through it again, but in a service-oriented environment, message-sharing between systems is the fundamental characteristic. There should be a standard approach followed across an enterprise, so that every existing or new business system could understand and follow the uniform method. XML technology is a widely-accepted message format by all the technologies and tools. Oracle AIA framework provides a standard messaging format to share the information between AIA components. Overview of Enterprise Business Message (EBM) Enterprise Business Messages (EBMs) are business information exchanged between enterprise business systems as messages. EBMs define the elements that are used to form the messages in service-oriented operations. EBM payloads represent specific content of an EBO that is required to perform a specific service. In an AIA infrastructure, EBMs are messages exchanged between all components in the Canonical Layer. Enterprise Business Services (EBS) accepts EBM as a request message and responds back to EBM as an output payload. However, in Application Business Connector Service (ABCS), the provider ABCS accepts messages in the EBM format and translates them into the application provider's Application Business Message (ABM) format. Alternatively, the requester ABCS receives ABM as a request message, transforms it into an EBS, and calls the EBS to submit the EBM message. Therefore, EBM has been a widely-accepted message standard within AIA components. The context-oriented EBMs are built using a set of common components and EBO business components. Some EBMs may require more than one EBO to fulfill the business integration needs. The following diagram describes the role of an EBM in the AIA architecture: EBM characteristics The fundamentals of EBM and its characteristics are as follows: Each business service request and response should be represented in an EBM format using a unique combination of an action and an EBO instance. One EBM can support only one action or verb. EBM component should import the common component to make use of metadata and data types across the EBM structure. EBMs are application interdependencies. Any requester application that invokes Enterprise Business Services (EBS) through ABCS should follow the EBM format standards to pass as payload in integration. The action that is embedded in the EBM is the only action that sender or requester application can execute to perform integration. The action in the EBM may also carry additional data that has to be done as part of service execution. For example, the update action may carry information about whether the system should notify after successful execution of update. The information that exists in the EBM header is common to all EBMs. However, information existing in the data area and corresponding actions are specific to only one EBM. EBM headers may carry tracking information, auditing information, source and target system information, and error-handling information. 
EBM components do not rely on the underlying transport protocol. Any service protocols such as HTTP, HTTPs, SMTP, SOAP, and JMS should carry EBM payload documents. Exploring AIA EBMs We explored the physical structure of the Oracle AIA EBO in the previous chapter; EBMs do not have a separate structure. EBMs are also part of the EBO's physical package structure. Every EBO is bound with an EBM. The following screenshot will show the physical structure of the EBM groups as directories: As EBOs are grouped as packages based on the business model, EBMs are also a part of that structure and can be located along with the EBO schema under the Core EBO package.
Packt
24 Feb 2012
12 min read

Java 7: Managing Files and Directories

(For more resources on Java, see here.) Introduction It is often necessary to perform file manipulations, such as creating files, manipulating their attributes and contents, or removing them from the filesystem. The addition of the java.lang.object.Files class in Java 7 simplifies this process. This class relies heavily on the use of the new java.nio.file.Path interface. The methods of the class are all static in nature, and generally assign the actual file manipulation operations to the underlying filesystem. Many of the operations described in this chapter are atomic in nature, such as those used to create and delete files or directories. Atomic operations will either execute successfully to completion or fail and result in an effective cancellation of the operation. During execution, they are not interrupted from the standpoint of a filesystem. Other concurrent file operations will not impact the operation. To execute many of the examples in this chapter, the application needs to run as administrator. To run an application as administrator under Windows, right-click on the Command Prompt menu and choose Run as administrator. Then navigate to the appropriate directory and execute using the java.exe command. To run as administrator on a UNIX system, use the sudo command in a terminal window followed by the java command. Basic file management is covered in this chapter. The methods required for the creation of files and directories are covered in the Creating Files and Directories recipe. This recipe focuses on normal files. The creation of temporary files and directories is covered in the Managing temporary files and directories recipe and the creation of linked files is covered in the Managing symbolic links recipe. The options available for copying files and directories are found in the Controlling how a file is copied recipe. The techniques illustrated there provide a powerful way of dealing with file replication. Moving and deleting files and directories are covered in the Moving a file or directory and Deleting files and directories recipes, respectively. The Setting time-related attributes of a file or directory recipe illustrates how to assign time attributes to a file. Related to this effort are other attributes, such as file ownership and permissions. File ownership is addressed in the Managing file ownership recipe. File permissions are discussed in two recipes: Managing ACL file permissions and Managing POSIX file permissions. Creating files and directories The process of creating new files and directories is greatly simplified in Java 7. The methods implemented by the Files class are relatively intuitive and easy to incorporate into your code. In this recipe, we will cover how to create new files and directories using the createFile and createDirectory methods. Getting ready In our example, we are going to use several different methods to create a Path object that represents a file or directory. We will do the following: Create a Path object. Create a directory using the Files class' createDirectory method. Create a file using the Files class' createFile method. The FileSystem class' getPath method can be used to create a Path object, as can the Paths class' get method. The Paths class' static get method returns an instance of a Path based on a string sequence or a URI object. The FileSystem class' getPath method also returns a Path object, but only uses a string sequence to identify the file. How to do it... Create a console application with a main method. 
In the main method, add the following code that creates a Path object for the directory /home/test in the C directory. Within a try block, invoke the createDirectory method with your Path object as the parameter. This method will throw an IOException if the path is invalid. Next, create a Path object for the file newFile.txt using the createFile method on this Path object, again catching the IOException as follows: try{ Path testDirectoryPath = Paths.get("C:/home/test"); Path testDirectory = Files.createDirectory(testDirectoryPath); System.out.println("Directory created successfully!"); Path newFilePath = FileSystems.getDefault(). getPath("C:/home/test/newFile.txt"); Path testFile = Files.createFile(newFilePath); System.out.println("File created successfully!");}catch (IOException ex){ ex.printStackTrace();} Execute the program. Your output should appear as follows: Directory created successfully! File created successfully! Verify that the new file and directory exists in your filesystem. Next, add a catch block prior to the IOException after both methods, and catch a FileAlreadyExistsException: }catch (FileAlreadyExistsException a){System.out.println("File or directory already exists!");}catch (IOException ex){ ex.printStackTrace();} When you execute the program again, your output should appear as follows: File or directory already exists! How it works... The first Path object was created and then used by the createDirectory method to create a new directory. After the second Path object was created, the createFile method was used to create a file within the directory, which had just been created. It is important to note that the Path object used in the file creation could not be instantiated before the directory was created, because it would have referenced an invalid path. This would have resulted in an IOException. When the createDirectory method is invoked, the system is directed to check for the existence of the directory first, and if it does not exist, create it. The createFile method works in a similar fashion. The method fails if the file already exists. We saw this when we caught the FileAlreadyExistsException. Had we not caught that exception, an IOException would have been thrown. Either way, the existing file would not be overwritten. There's more... The createFile and createDirectory methods are atomic in nature. The createDirectories method is available to create directories, as discussed next. All three methods provide the option to pass file attribute parameters for more specific file creation. Using the createDirectories method to create a hierarchy of directories The createDirectories method is used to create a directory and potentially other intermediate directories. In this example, we build upon the previous directory structure by adding a subtest and a subsubtest directory to the test directory. Comment out the previous code that created the directory and file and add the following code sequence: Path directoriesPath = Paths. get("C:/home/test/subtest/subsubtest"); Path testDirectory = Files.createDirectories(directoriesPath); Verify that the operation succeeded by examining the resulting directory structure. See also Creating temporary files and directories is covered in the Managing temporary files and directories recipe. The creation of symbolic files is illustrated in the Managing symbolic links recipe. Controlling how a file is copied The process of copying files is also simplified in Java 7, and allows for control over the manner in which they are copied. 
The Files class' copy method supports this operation and is overloaded providing three techniques for copying those which differ by their source or destination. Getting ready In our example, we are going to create a new file and then copy it to another target file. This process involves: Creating a new file using the createFile method. Creating a path for the destination file. Copying the file using the copy method. How to do it... Create a console application with a main method. In the main method, add the following code sequence to create a new file. Specify two Path objects, one for your initial file and one for the location where it will be copied. Then add the copy method to copy that file to the destination location as follows: Path newFile = FileSystems.getDefault(). getPath("C:/home/docs/newFile.txt"); Path copiedFile = FileSystems.getDefault(). getPath("C:/home/docs/copiedFile.txt"); try{ Files.createFile(newFile); System.out.println("File created successfully!"); Files.copy(newFile, copiedFile); System.out.println("File copied successfully!");}catch (IOException e){ System.out.println("IO Exception.");} Execute the program. Your output should appear as follows: File created successfully! File copied successfully! When you execute the program again, your output should appear as follows: File copied successfully! How it works... The createFile method created your initial file, and the copy method copied that file to the location specified by the copiedFile variable. If you were to attempt to run that code sequence twice in a row, you would have encountered an IOException, because the copy method will not, by default, replace an existing file. The copy method is overloaded. The second form of the copy method used the java.lang.enum.StandardCopyOption enumeration value of REPLACE_EXISTING, which allowed the file to be replaced. The three enumeration values for StandardCopyOption are listed in the following table: Value Meaning ATOMIC_MOVE Perform the copy operation atomically COPY_ATTRIBUTES Copy the source file attributes to the destination file REPLACE_EXISTING Replace the existing file if it already exists Replace the copy method call in the previous example with the following: Files.copy(newFile, copiedFile, StandardCopyOption.REPLACE_EXISTING); When the code executes, the file should be replaced. Another example of the use of the copy options is found in the There's more... section of the Moving a file and directory recipe. There's more... If the source file and the destination file are the same, then the method completes, but no copy actually occurs. The copy method is not atomic in nature. There are two other overloaded copy methods. One copies a java.io.InputStream to a file and the other copies a file to a java.io.OutputStream. In this section, we will examine, in more depth, the processes of: Copying a symbolic link file Copying a directory Copying an input stream to a file Copying a file to an output stream Copying a symbolic link file When a symbolic link file is copied, the target of the symbolic link is copied. To illustrate this, create a symbolic link file called users.txt in the music directory to the users.txt file in the docs directory. Use the following code sequence to perform the copy operation: Path originalLinkedFile = FileSystems.getDefault(). getPath("C:/home/music/users.txt"); Path newLinkedFile = FileSystems.getDefault(). 
getPath("C:/home/music/users2.txt"); try{ Files.copy(originalLinkedFile, newLinkedFile); System.out.println("Symbolic link file copied successfully!");}catch (IOException e){ System.out.println("IO Exception.");} Execute the code. You should get the following output: Symbolic link file copied successfully! Examine the resulting music directory structure. The user2.txt file has been added and is not connected to either the linked file or the original target file. Modification of the user2.txt does not affect the contents of the other two files. Copying a directory When a directory is copied, an empty directory is created. The files in the original directory are not copied. The following code sequence illustrates this process: Path originalDirectory = FileSystems.getDefault(). getPath("C:/home/docs"); Path newDirectory = FileSystems.getDefault(). getPath("C:/home/tmp"); try{ Files.copy(originalDirectory, newDirectory); System.out.println("Directory copied successfully!");} catch (IOException e){ e.printStackTrace();} When this sequence is executed, you should get the following output: Directory copied successfully! Examine the tmp directory. It should be empty as any files in the source directory are not copied. Copying an input stream to a file The copy method has a convenient overloaded version that permits the creation of a new file based on the input from an InputStream. The first argument of this method differs from the original copy method, in that it is an instance of an InputStream. The following example uses this method to copy the jdk7.java.net website to a file: Path newFile = FileSystems.getDefault(). getPath("C:/home/docs/java7WebSite.html"); URI url = URI.create("http://jdk7.java.net/"); try (InputStream inputStream = url.toURL().openStream()) Files.copy(inputStream, newFile); System.out.println("Site copied successfully!");} catch (MalformedURLException ex){ ex.printStackTrace();}catch (IOException ex){ ex.printStackTrace();} When the code executes, you should get the following output: Site copied successfully! A java.lang.Object.URI object was created to represent the website. Using the URI object instead of a java.lang.Object.URL object immediately avoids having to create a separate try-catch block to handle the MalformedURLException exception. The URL class' openStream method returns an InputStream, which is used as the first parameter of the copy method. The copy method was then executed. The new file can now be opened with a browser or otherwise can be processed as needed. Notice that the method returns a long value representing the number of bytes written. Copying a file to an output stream The third overloaded version of the copy method will open a file and write its contents to an OutputStream. This can be useful when the content of a file needs to be copied to a non-file object such as a PipedOutputStream. It can also be useful when communicating to other threads or writing to an array of bytes, as illustrated here. In this example, the content of the users.txt file is copied to an instance of a ByteArrayOutputStream>. Its toByteArray method is then used to populate an array as follows: Path sourceFile = FileSystems.getDefault(). 
getPath("C:/home/docs/users.txt"); try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) { Files.copy(sourceFile, outputStream); byte arr[] = outputStream.toByteArray(); System.out.println("The contents of " + sourceFile.getFileName()); for(byte data : arr) { System.out.print((char)data);} System.out.println();}catch (IOException ex) { ex.printStackTrace();} Execute this sequence. The output will depend on the contents of your file, but should be similar to the following: The contents of users.txt Bob Jennifer Sally Tom Ted Notice the use of the try-with-resources block that handles the opening and closing of the file. It is always a good idea to close the OutputStream, when the copy operation is complete or exceptions occur. The try-with-resources block handles this nicely. The method may block until the operation is complete in certain situations. Much of its behavior is implementation-specific. Also, the output stream may need to be flushed since it implements the Flushable interface. Notice that the method returns a long value representing the number of bytes written. See also See the Managing symbolic links recipe for more details on working with symbolic links.

Packt
31 Jan 2012
8 min read

New Features in Notes/Domino 8.5.3 Development

Composite applications Composite applications are applications that consist of two or more components that may have been independently developed, working together to perform tasks that none of the member applications could perform by itself. Each component publishes and consumes messages from other components, and performs actions based upon user interaction or information received from other components. Support for composite applications is one of the central points for Notes/Domino 8. Composite applications in Notes 8 can wire together multiple components from Notes applications, Lotus Component Designer applications, and Eclipse into a single application context for the end user. Composite applications, whether they are based on Notes/Domino 8, Web Sphere Portal, or Lotus Expeditor, are the frontend or user interface to an enterprise's SOA strategy. They, in effect, consume the services that are offered by the composite architectures put in place to support SOA. An example of a composite application would be a simple customer relationship management application. This application needs to display a list of accounts, opportunities, and contacts to end users. The accounts component should display accounts owned by the end user. When the end user selects an account in the account component, the opportunities for that account should be displayed in the opportunities component, and all of the contacts for the first opportunity should be displayed in the contacts component. In the application described, the components are "communicating" with each other by publishing and consuming properties via a property broker. When the user clicks on an account, the account component publishes the accountkey property to the property broker. The opportunities component has been written to "listen" for the accountkey property to be published, and when it is, it performs a lookup into a data store, pulling back all the specific opportunities for the published account key. Once it has displayed all of the opportunities for the account, it selects the first opportunity for display and then publishes the opportunitykey property to the property broker. The contacts component then performs a lookup to display all of the contacts for the opportunity. When the user selects a different opportunity, the opportunity component again publishes an opportunitykey property and the contacts component receives this new opportunitykey property and displays the correct contacts for the selected opportunity Using component applications, developers can respond quickly to requests from the line of business for functionality changes. For example, in the case of the customer relationship management application described, the line of business may decide to purchase a telephony component to dial the phone and log all phone calls made. The developers of the application would need to simply modify the contact component to publish the phone number of a contact with a name that the new telephony component listens for and the call could be made on behalf of the user. In addition to being used within the customer relationship management application, the components developed could be put together with other components to form entirely different applications. Each component already understands what data it needs to publish and consume to perform its actions, and contains the code to perform those specific actions on backend systems. The reuse of the components will save the developers and the organization time whenever they are reused. 
Composite applications also require a new programming model for Notes/Domino 8. This model mirrors the model within WebSphere Portal 6, in that multiple components are aggregated into a single UI with the property broker acting as the "glue" that allows the various components to interact and share data, even if the components are from different systems. This programming model is something new in Notes 8 and required some changes to Domino Designer 8. As a side note, the new programming model of composite applications will most probably bring its own set of problems. For example, what happens in a composite application when one of the components fails? In this "composite crash" situation, what does the composite application need to do in order to recover? Additionally, from an infrastructure point of view, composite applications will only be as available as their weakest component. What good would a reservations system, implemented with many components, be if one of the components were not hosted by a highly available infrastructure, while the others were? We see these sorts of issues being dealt with currently by customers venturing into the composite world via SOAs. There are two main categories of change for development related to composite applications in Notes/Domino 8 application design and programming. We will look at both of them in the following sections. Application design In order to allow your Notes or Domino application to participate within a composite application, you must first decide which design elements need to be accessible to other components. To make these components available to other components within your composite application, they are specified within a Web Services Description Language (WSDL) file. The composite application property broker then uses this WSDL file as a map into your application and its published properties and actions. To allow this mapping to occur, the Composite Application Editor is used. Without making changes to legacy Notes/Domino application functionality, the Composite Application Editor can be used to surface the elements of the application such as forms, views, documents, and other Notes elements to the composite application. Another element of composite application design is deciding where the application components will reside. Composite applications can be hosted within a local NSF file on a Notes client, on a Domino 8 application server, in the WebSphere Portal, or in Lotus Expeditor. The Notes/Domino application components are created with the Composite Application Editor, while WebSphere Portal composite applications can be created with the Composite Application Editor or the Portal Application Template Editor. Programming As mentioned earlier, the addition of composite applications to the development strategy for Notes/Domino 8 required some changes and additions to the existing programming model. Within a composite application, the components must be able to interact even if they were defined with different tools and technologies. Some components may even be stored within different database technologies. One component may be NSF-based while another may be stored within a relational database store. The components need a standardized way to define the properties and actions that they support, so that an application developer can wire them together into a composite application. The standard way to define these properties and actions is via a WSDL file. 
Let's take a quick look at properties, actions, and wires.

Properties

Component properties are the data items that a given component produces or consumes: they are either input properties (consumed by the component) or output properties (produced by the component). Each property is assigned a data type, which is based on the W3C primitive data types. These include String, Boolean, decimal, time, and date. The primitive data types can also be used to build new data types. For example, within Notes 8, some new data types for components are available that map to common data within the mail, calendar, and contacts applications. Some of these new data types are listed below:

- mailTo (extends String): A list of people to receive an e-mail. Example: "mailto:[email protected]?subject=Our Dogs are Smart&[email protected],[email protected]&[email protected]"
- e-mailAddress822 (extends String): An e-mail address following RFC 822. Examples: "My Gerbil <[email protected]>", "Little Man <[email protected]>"

Actions

Actions are the logic that is used to consume a property. For example, a component may implement an action that sends an e-mail when it receives a mailTo type property from another component. The code within the component that sends the e-mail based on the information consumed from the property is the action for the component. Components can contain multiple actions, depending on the business logic required for the component.

It is easy to confuse a web services action with a Notes action. The web services action is a name in a WSDL file that represents functionality that will consume a property. Notes actions can be coupled with a web services action so that the Notes action gets called to consume a property; the LotusScript in the Notes action can then implement code to act on the property. The following screenshot shows a Notes action in the Notes 8 mail template that is coupled with a web services action, NewMemoUsingMailtoURL. You can see that in the code, the LotusScript is using a property broker to gain access to the property:

Wires

Wires are the construct by which components interact within a composite application. A wire is simply a programmatic definition of which components talk to each other. The components must share common properties and then produce and consume them via actions. More simply put, wires connect properties to actions. For example, an application developer could wire together a contact list component with an e-mail component. When the user selects a contact from the contact list, the contact list component would produce and publish a mailTo type property, which could then be consumed by the e-mail component. The e-mail component would consume the published mailTo property and compose an e-mail using the data contained within the property. The following screenshot shows the components available within the Notes 8 mail template, which can be used in other composite applications as well, shown from the Component Palette within the new Composite Application Editor.
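To make the notion of an action more concrete, here is a rough, hypothetical sketch of what "consume a mailTo property and act on it" could look like. It does not use the Notes property broker or LotusScript APIs; the addresses are made-up placeholders, and a real component would compose and send the memo instead of printing the parsed values.

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical action handler for a mailTo-style property value.
public class SendMailAction {

    // Parse a mailto-style value (see the mailTo example above) into its parts.
    static Map<String, String> parseMailTo(String mailTo) {
        Map<String, String> parts = new LinkedHashMap<>();
        String body = mailTo.startsWith("mailto:") ? mailTo.substring("mailto:".length()) : mailTo;
        int queryStart = body.indexOf('?');
        parts.put("to", queryStart < 0 ? body : body.substring(0, queryStart));
        if (queryStart >= 0) {
            for (String pair : body.substring(queryStart + 1).split("&")) {
                int eq = pair.indexOf('=');
                if (eq > 0) {
                    parts.put(pair.substring(0, eq), pair.substring(eq + 1));
                }
            }
        }
        return parts;
    }

    public static void main(String[] args) {
        // Placeholder property value shaped like the mailTo example in the table above.
        String property = "mailto:jane@example.com?subject=Our Dogs are Smart&cc=joe@example.com";
        Map<String, String> mail = parseMailTo(property);
        // A real action would compose and send the memo; here we only print the parts.
        System.out.println("To: " + mail.get("to"));
        System.out.println("Subject: " + mail.getOrDefault("subject", ""));
        System.out.println("Cc: " + mail.getOrDefault("cc", ""));
    }
}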
.NET Generics 4.0: Container Patterns and Best Practices

Packt
24 Jan 2012
6 min read
(For more resources on .NET, see here.)

Generic container patterns

There are several generic containers, such as List<T>, Dictionary<TKey,TValue>, and so on. Now, let's take a look at some of the patterns involving these generic containers that show up often in code.

How these are organized

Each pattern discussed in this article has a few sections. First is the title. This is written against the pattern sequence number; for example, the title for Pattern 1 is One-to-one mapping. The Pattern interface section denotes the interface implementation of the pattern, so anything that conforms to that interface is a concrete implementation of that pattern. For example, Dictionary<TKey,TValue> is a concrete implementation of IDictionary<TKey,TValue>. The Example usages section shows some implementations where TKey and TValue are replaced with real data types such as string or int. The last section, as the name suggests, showcases some ideas where the pattern can be used.

Pattern 1: One-to-one mapping

One-to-one mapping maps one element to another.

Pattern interface

The following is an interface implementation of this pattern:

IDictionary<TKey,TValue>

Some concrete implementations

Some concrete implementations of this pattern are as follows:

Dictionary<TKey,TValue>
SortedDictionary<TKey,TValue>
SortedList<TKey,TValue>

Example usages

The following are examples where TKey and TValue are replaced with real data types such as string or int:

Dictionary<string,int>
SortedDictionary<int,string>
SortedList<string,string>
Dictionary<string,IClass>

Some situations where this pattern can be used

One-to-one mapping can be used in the following situations:

- Mapping some class objects with a string ID
- Converting an enum to a string
- General conversion between types
- Find and replace algorithms, where the find and replace strings become key and value pairs
- Implementing a state machine, where each state has a description, which becomes the key, and the concrete implementation of the IState interface becomes the value of a structure such as Dictionary<string,IState>

Pattern 2: One-to-many unique value mapping

One-to-many unique value mapping maps one element to a set of unique values.

Pattern interface

The following is an interface implementation of this pattern:

IDictionary<TKey,ISet<TValue>>

Some concrete implementations

Some concrete implementations of this pattern are as follows:

Dictionary<TKey,HashSet<TValue>>
SortedDictionary<TKey,HashSet<TValue>>
SortedList<TKey,SortedSet<TValue>>
Dictionary<TKey,SortedSet<TValue>>

Example usages

The following are examples where TKey and TValue are replaced with real data types such as string or int:

Dictionary<int,HashSet<string>>
SortedDictionary<string,HashSet<int>>
Dictionary<string,SortedSet<int>>

Some situations where this pattern can be used

One-to-many unique value mapping can be used in the following situations:

- Mapping all the anagrams of a given word
- Creating a spell checker, where all spelling mistakes can be pre-calculated and stored as unique values

Pattern 3: One-to-many value mapping

One-to-many value mapping maps an element to a list of values, which might contain duplicates.
Pattern interface

The following are the interface implementations of this pattern:

IDictionary<TKey,ICollection<TValue>>
IDictionary<TKey,IList<TValue>>

Some concrete implementations

Some concrete implementations of this pattern are as follows:

Dictionary<TKey,List<TValue>>
SortedDictionary<TKey,Queue<TValue>>
SortedList<TKey,Stack<TValue>>
Dictionary<TKey,LinkedList<TValue>>

Example usages

The following are examples where TKey and TValue are replaced with real data types such as string or int:

Dictionary<string,List<DateTime>>
SortedDictionary<string,Queue<int>>
SortedList<int,Stack<float>>
Dictionary<string,LinkedList<int>>

Some situations where this pattern can be used

One-to-many value mapping can be used in the following situations:

- Mapping all the grades obtained by a student. The ID of the student can be the key, and the grades obtained in each subject (which may contain duplicates) can be stored as values in a list.
- Tracking all the followers of a Twitter account. The user ID for the account will be the key, and all follower IDs can be stored as values in a list.
- Scheduling all the appointments for a patient, whose user ID will serve as the key.

Pattern 4: Many-to-many mapping

Many-to-many mapping maps many elements of a group to many elements in other groups. Both can have duplicate entries.

Pattern interface

The following are the interface implementations of this pattern:

IEnumerable<Tuple<T1,T2,..,ISet<TResult>>>
IEnumerable<Tuple<T1,T2,..,ICollection<TResult>>>

Some concrete implementations

A concrete implementation of this pattern is as follows:

IList<Tuple<T1,T2,T3,HashSet<TResult>>>

Example usages

The following are examples where the type parameters are replaced with real data types such as string or int:

List<Tuple<string,int,int,int>>
List<Tuple<string,int,int,int,HashSet<float>>>

Some situations where this pattern can be used

Many-to-many mapping can be used in the following situations:

- If many independent values can be mapped to a set of values, then these patterns should be used. ISet<T> implementations don't allow duplicates, while ICollection<T> implementations, such as IList<T>, do.
- Imagine a company wants to give a pay hike to its employees based on certain conditions. In this situation, the parameters for the conditions can be the independent variables of the Tuples, and the IDs of employees eligible for the hike can be stored in an ISet<T> implementation.

For concurrency support, replace non-concurrent implementations with their concurrent cousins. For example, replace Dictionary<TKey,TValue> with ConcurrentDictionary<TKey,TValue>.
Oracle JDeveloper 11gR2: Application Modules

Packt
20 Jan 2012
12 min read
(For more resources on JDeveloper, see here.)

Creating and using generic extension interfaces

In this recipe, we will go over how to expose common functionality as a generic extension interface. By doing so, this generic interface becomes available to all derived business components, which in turn can expose it through their own client interfaces and make it available to the ViewController layer through the bindings layer.

How to do it…

1. Open the SharedComponents workspace in JDeveloper.
2. Create an interface called ExtApplicationModule as follows:

public interface ExtApplicationModule {
    // return some user authority level, based on the user's name
    public int getUserAuthorityLevel();
}

3. Locate and open the custom application module framework extension class ExtApplicationModuleImpl. Modify it so that it implements the ExtApplicationModule interface.
4. Then, add the following method to it:

public int getUserAuthorityLevel() {
    // return some user authority level, based on the user's name
    return ("anonymous".equalsIgnoreCase(this.getUserPrincipalName())) ?
            AUTHORITY_LEVEL_MINIMAL : AUTHORITY_LEVEL_NORMAL;
}

5. Rebuild the SharedComponents workspace and deploy it as an ADF Library JAR.
6. Now, open the HRComponents workspace.
7. Locate and open the HrComponentsAppModule application module definition.
8. Go to the Java section and click on the Edit application module client interface button (the pen icon in the Client Interface section).
9. On the Edit Client Interface dialog, shuttle the getUserAuthorityLevel() interface from the Available to the Selected list.

How it works…

In steps 1 and 2, we opened the SharedComponents workspace and created an interface called ExtApplicationModule. This interface contains a single method called getUserAuthorityLevel(). Then, we updated the application module framework extension class ExtApplicationModuleImpl so that it implements the ExtApplicationModule interface (step 3). We also implemented the method getUserAuthorityLevel() required by the interface (step 4). For the sake of this recipe, this method returns a user authority level based on the authenticated user's name. We retrieve the authenticated user's name by calling getUserPrincipal().getName() on the SecurityContext, which we retrieve from the current ADF context (ADFContext.getCurrent().getSecurityContext()). If security is not enabled for the ADF application, the user's name defaults to anonymous. In this example, we return AUTHORITY_LEVEL_MINIMAL for anonymous users, and for all others we return AUTHORITY_LEVEL_NORMAL.

We rebuilt and redeployed the SharedComponents workspace in step 5. In steps 6 through 9, we opened the HRComponents workspace and added the getUserAuthorityLevel() method to the HrComponentsAppModule client interface. By doing this, we exposed the getUserAuthorityLevel() generic extension interface on a derived application module, while keeping its implementation in the base framework extension class ExtApplicationModuleImpl.
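To see the exposed method in action outside the bindings layer, a small standalone client can instantiate the application module and call it through the generic interface. The following is only an illustrative sketch: the application module definition name and the HrComponentsAppModuleLocal configuration name are assumptions based on this recipe's naming, and the cast to ExtApplicationModule relies on running against the local implementation class; in a real application you would normally go through the generated client interface or the bindings layer.

import oracle.jbo.ApplicationModule;
import oracle.jbo.client.Configuration;
// The import for ExtApplicationModule (from the SharedComponents library) is omitted;
// use the actual package of your shared components project.

public class UserAuthorityLevelClient {
    public static void main(String[] args) {
        // Assumed application module definition and configuration names; adjust
        // them to match the actual names used in your HRComponents workspace.
        String amDef = "com.packt.jdeveloper.cookbook.hr.components.model.application.HrComponentsAppModule";
        String config = "HrComponentsAppModuleLocal";

        ApplicationModule am = Configuration.createRootApplicationModule(amDef, config);
        try {
            // The framework extension class implements ExtApplicationModule, so the
            // generic extension method is available on the derived application module.
            ExtApplicationModule module = (ExtApplicationModule) am;
            System.out.println("Authority level: " + module.getUserAuthorityLevel());
        } finally {
            Configuration.releaseRootApplicationModule(am, true);
        }
    }
}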
There's more…

Note that the steps followed in this recipe to expose an application module framework extension class method to a derived class' client interface can be followed for other business components framework extension classes as well.

Exposing a custom method as a web service

Service-enabling an application module allows you, among other things, to expose custom application module methods as web services. This is one way for service consumers to consume the service-enabled application module. The other possibilities are accessing the application module from another application module, and accessing it through a Service Component Architecture (SCA) composite. Service-enabling an application module allows access to the same application module both through web service clients and interactive web user interfaces. In this recipe, we will go over the steps involved in service-enabling an application module by exposing a custom application module method to its service interface.

Getting ready

The HRComponents workspace requires a database connection to the HR schema.

How to do it…

1. Open the HRComponents project in JDeveloper.
2. Double-click on the HRComponentsAppModule application module in the Application Navigator to open its definition.
3. Go to the Service Interface section and click on the Enable support for Service Interface button (the green plus sign icon in the Service Interface section). This will start the Create Service Interface wizard.
4. In the Service Interface page, accept the defaults and click Next.
5. In the Service Custom Methods page, locate the adjustCommission() method and shuttle it from the Available list to the Selected list. Click on Finish.
6. Observe that the adjustCommission() method is shown in the Service Interface Custom Methods section of the application module's Service Interface. The service interface files were generated in the serviceinterface package under the application module and are shown in the Application Navigator.
7. Double-click on the weblogic-ejb-jar.xml file under the META-INF package in the Application Navigator to open it.
8. In the Beans section, select the com.packt.jdeveloper.cookbook.hr.components.model.application.common.HrComponentsAppModuleServiceBean bean and click on the Performance tab. For the Transaction timeout field, enter 120.

How it works…

In steps 1 through 6, we exposed the adjustCommission() custom application module method to the application module's service interface. This is a custom method that adjusts all the Sales department employees' commissions by the percentage specified. As a result of exposing the adjustCommission() method to the application module service interface, JDeveloper generates the following files:

- HrComponentsAppModuleService.java: Defines the service interface
- HrComponentsAppModuleServiceImpl.java: The service implementation class
- HrComponentsAppModuleService.xsd: The service schema file describing the input and output parameters of the service
- HrComponentsAppModuleService.wsdl: The Web Services Description Language (WSDL) file, describing the web service
- ejb-jar.xml: The EJB deployment descriptor, located in the src/META-INF directory
- weblogic-ejb-jar.xml: The WebLogic-specific EJB deployment descriptor, located in the src/META-INF directory

In steps 7 and 8, we adjust the service's Java Transaction API (JTA) transaction timeout to 120 seconds (the default is 30 seconds). This will avoid any exceptions related to transaction timeouts when invoking the service. This is an optional step added specifically for this recipe, as the process of adjusting the commission for all sales employees might take longer than the default 30 seconds, causing the transaction to time out.
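Steps 7 and 8 effectively add a transaction-descriptor entry for the service bean in weblogic-ejb-jar.xml. As a rough orientation only, the relevant fragment looks something like the following; the descriptor JDeveloper generates contains additional elements, and the bean name is assumed here to match the JNDI name used later in this article:

<weblogic-ejb-jar xmlns="http://xmlns.oracle.com/weblogic/weblogic-ejb-jar">
  <weblogic-enterprise-bean>
    <ejb-name>HrComponentsAppModuleServiceBean</ejb-name>
    <transaction-descriptor>
      <!-- JTA transaction timeout in seconds (the default is 30) -->
      <trans-timeout-seconds>120</trans-timeout-seconds>
    </transaction-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>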
To test the service using the JDeveloper integrated WebLogic application server, right-click on the HrComponentsAppModuleServiceImpl.java service implementation file in the Application Navigator and select Run or Debug from the context menu. This will build and deploy the HrComponentsAppModuleService web service into the integrated WebLogic server. Once the deployment process is completed successfully, you can click on the service URL in the Log window to test the service. This will open a test window in JDeveloper and also enable the HTTP Analyzer. Otherwise, copy the target service URL from the Log window and paste it into your browser's address field. This will bring up the service's endpoint page. On this page, select the adjustCommission method from the Operation drop-down, specify the commissionPctAdjustment parameter amount, and click on the Invoke button to execute the web service. Observe how the employees' commissions are adjusted in the EMPLOYEES table in the HR schema.

There's more…

For more information on service-enabling application modules, consult the chapter Integrating Service-Enabled Application Modules in the Fusion Developer's Guide for Oracle Application Development Framework, which can be found at http://docs.oracle.com/cd/E24382_01/web.1112/e16182/toc.htm.

Accessing a service interface method from another application module

In the recipe Exposing a custom method as a web service in this article, we went through the steps required to service-enable an application module and expose a custom application module method as a web service. We will continue in this recipe by explaining how to invoke the custom application module method, exposed as a web service, from another application module.

Getting ready

This recipe will call the adjustCommission() custom application module method that was exposed as a web service in the Exposing a custom method as a web service recipe in this article. It requires that the web service is deployed in WebLogic and that it is accessible. The recipe also requires that both the SharedComponents and HRComponents workspaces are deployed as ADF Library JARs and that they are added to the workspace used by this specific recipe. Additionally, a database connection to the HR schema is required.

How to do it…

1. Ensure that you have built and deployed both the SharedComponents and HRComponents workspaces as ADF Library JARs.
2. Create a File System connection in the Resource Palette to the directory path where the SharedComponents.jar and HRComponents.jar ADF Library JARs are located.
3. Create a new Fusion Web Application (ADF) called HRComponentsCaller using the Create Fusion Web Application (ADF) wizard.
4. Create a new application module called HRComponentsCallerAppModule using the Create Application Module wizard. In the Java page, check the Generate Application Module Class checkbox to generate a custom application module implementation class. JDeveloper will ask you for a database connection during this step, so make sure that a new database connection to the HR schema is created.
5. Expand the File System | ReUsableJARs connection in the Resource Palette and add both the SharedComponents and HRComponents libraries to the project. You do this by right-clicking on the JAR file and selecting Add to Project… from the context menu.
6. Bring up the business components Project Properties dialog and go to the Libraries and Classpath section. Click on the Add Library… button and add the BC4J Service Client and JAX-WS Client extensions.
7. Double-click on the HRComponentsCallerAppModuleImpl.java custom application module implementation file in the Application Navigator to open it in the Java editor.
8. Add the following method to it:

public void adjustCommission(BigDecimal commissionPctAdjustment) {
    // get the service proxy
    HrComponentsAppModuleService service =
        (HrComponentsAppModuleService) ServiceFactory.getServiceProxy(
            HrComponentsAppModuleService.NAME);
    // call the adjustCommission() service
    service.adjustCommission(commissionPctAdjustment);
}

9. Expose adjustCommission() to the HRComponentsCallerAppModule client interface.
10. Finally, in order to be able to test the HRComponentsCallerAppModule application module with the ADF Model Tester, locate the connections.xml file in the Application Resources section of the Application Navigator under the Descriptors | ADF META-INF node, and add the following configuration to it:

<Reference name="{/com/packt/jdeveloper/cookbook/hr/components/model/application/common/}HrComponentsAppModuleService"
           className="oracle.jbo.client.svc.Service">
  <Factory className="oracle.jbo.client.svc.ServiceFactory"/>
  <RefAddresses>
    <StringRefAddr addrType="serviceInterfaceName">
      <Contents>com.packt.jdeveloper.cookbook.hr.components.model.application.common.serviceinterface.HrComponentsAppModuleService</Contents>
    </StringRefAddr>
    <StringRefAddr addrType="serviceEndpointProvider">
      <Contents>ADFBC</Contents>
    </StringRefAddr>
    <StringRefAddr addrType="jndiName">
      <Contents>HrComponentsAppModuleServiceBean#com.packt.jdeveloper.cookbook.hr.components.model.application.common.serviceinterface.HrComponentsAppModuleService</Contents>
    </StringRefAddr>
    <StringRefAddr addrType="serviceSchemaName">
      <Contents>HrComponentsAppModuleService.xsd</Contents>
    </StringRefAddr>
    <StringRefAddr addrType="serviceSchemaLocation">
      <Contents>com/packt/jdeveloper/cookbook/hr/components/model/application/common/serviceinterface/</Contents>
    </StringRefAddr>
    <StringRefAddr addrType="jndiFactoryInitial">
      <Contents>weblogic.jndi.WLInitialContextFactory</Contents>
    </StringRefAddr>
    <StringRefAddr addrType="jndiProviderURL">
      <Contents>t3://localhost:7101</Contents>
    </StringRefAddr>
  </RefAddresses>
</Reference>

How it works…

In steps 1 and 2, we made sure that both the SharedComponents and HRComponents ADF Library JARs are deployed and that a file system connection was created, so that both of these libraries could be added to the newly created project (in step 5). Then, in steps 3 and 4, we create a new Fusion web application based on ADF, and an application module called HRComponentsCallerAppModule. It is from this application module that we intend to call the adjustCommission() custom application module method, exposed as a web service by the HrComponentsAppModule service-enabled application module in the HRComponents library JAR. For this reason, in step 4, we generated a custom application module implementation class. We proceed by adding the necessary libraries to the new project in steps 5 and 6. Specifically, the following libraries were added: SharedComponents.jar, HRComponents.jar, BC4J Service Client, and JAX-WS Client.

In steps 7 through 9, we create a custom application module method called adjustCommission(), in which we write the necessary glue code to call our web service. In it, we first retrieve the web service proxy, as a HrComponentsAppModuleService interface, by calling ServiceFactory.getServiceProxy() and specifying the name of the web service, which is indicated by the constant HrComponentsAppModuleService.NAME in the service interface. Then we call the web service through the retrieved interface.
In the last step, we provided the necessary configuration in connections.xml so that we can call the web service from an RMI client (the ADF Model Tester). This file is used by the web service client to locate the web service. For the most part, the Reference information that was added to it was generated automatically by JDeveloper in the Exposing a custom method as a web service recipe, so it was copied from there. The extra configuration information that had to be added is the necessary JNDI context properties, jndiFactoryInitial and jndiProviderURL, that are needed to resolve the web service on the deployed server. You should change these appropriately for your deployment. Note that these parameters are the same as the initial context parameters used to look up the service when running in a managed environment.

To test calling the web service, ensure that you have first deployed it and that it is running. You can then use the ADF Model Tester, select the adjustCommission method, and execute it.

There's more…

For additional information related to such topics as securing the ADF web service, enabling support for binary attachments, deploying to WebLogic, and more, refer to the Integrating Service-Enabled Application Modules section in the Fusion Developer's Guide for Oracle Application Development Framework, which can be found at http://docs.oracle.com/cd/E24382_01/web.1112/e16182/toc.htm.
Working with Dashboards in Dynamics CRM

Packt
19 Jan 2012
5 min read
(For more resources on Microsoft Dynamics CRM, see here.)

Editing a user dashboard

After creating a user dashboard or getting access to another user's dashboard, you may still need to adjust the layout and settings of the dashboard.

Getting ready

Navigate to the Dashboards section in the Dynamics CRM 2011 Workplace area.

How to do it...

Carry out the following steps in order to complete this recipe:

1. Select the Dashboards link from the Workplace area.
2. Select one of your user dashboards, as shown in the following screenshot:
3. From the Dashboards menu in the Dynamics CRM 2011 ribbon, click on the Edit button, as highlighted in the following screenshot:
4. The dashboard editor screen will open, and the dashboard is now in Edit mode, as shown in the following screenshot:
5. In order to edit the components on the dashboard, select a component by clicking on it with the mouse, and then click on the Edit Component ribbon button, as shown in the following screenshot:

There's more...

Dynamics CRM has a robust security system that combines role-based security and user permissions. These security settings allow the administrator to control access to data and functionality in the Dynamics CRM system.

Security roles for editing user dashboards

In order for a Dynamics CRM user to edit user dashboards, they must have a security role that grants the Write privilege for the User Dashboard entity. If a user's security role does not have this privilege, then they will not see the Edit button on the dashboard ribbon.

Editing a system dashboard

The system dashboards are intended to be viewed by all users of Dynamics CRM. These dashboards are created and managed by users with the System Customizer or System Administrator security roles (by default, these roles have the Write privilege for the System Forms entity). Edits made to these dashboards are seen by all users.

Getting ready

Editing a system dashboard requires you to first navigate to the Customization section in the Dynamics CRM 2011 Settings area.

How to do it...

Carry out the following steps in order to complete this recipe:

1. From the Customization section, click on the Customize the System link, as shown in the following screenshot:
2. This will launch the solution editor dialog showing the Default Solution for Dynamics CRM 2011. Click on the Dashboards link located in the left-hand side navigation section, as shown in the following screenshot:
3. A listing of system dashboards will be shown. Double-click on the Microsoft Dynamics CRM Overview dashboard record. This will launch the dashboard editor screen.
4. In order to edit the components on the dashboard, select a component by clicking on it with the mouse, and then click on the Edit Component ribbon button, as shown in the following screenshot:

There's more...

Dynamics CRM has a robust security system that combines role-based security and user permissions. These security settings allow the administrator to control access to data and functionality in the Dynamics CRM system.

Security roles for editing system dashboards

In order for a Dynamics CRM user to edit system dashboards, they must have a security role that grants the Write privilege for the System Form entity. If a user's security role does not have this privilege, then they will not be able to edit the dashboard when customizing the system. By default, the System Forms are only editable by users with the System Customizer or System Administrator security roles, as they both have full privileges to the System Form entity.
Deleting a user dashboard

Creating new dashboards in Dynamics CRM is an excellent feature; however, the ongoing management of dashboards may require you to remove or delete some dashboards that are no longer needed. Deleting dashboards in Dynamics CRM cannot be undone; users should understand that deleting a dashboard is permanent.

Getting ready

Navigate to the Dashboards section in the Dynamics CRM 2011 Workplace area.

How to do it...

Carry out the following steps in order to complete this recipe:

1. Select the Dashboards link from the Workplace area, as shown in the following screenshot:
2. The user dashboards will be in the My Dashboards section of this list. Once you have selected a user dashboard, the Delete button in the Dashboards bar will be enabled. Click on the Delete button, as shown in the following screenshot:
3. You will be prompted with a Confirm Deletion dialog. As the message in this dialog states, deleting a dashboard cannot be undone. If you want to continue and delete this dashboard from your system, click on the OK button.
4. When the operation is finished, the screen will refresh and that dashboard will no longer be available.

How it works...

The layouts and settings used to generate user dashboards are stored as records in the Dynamics CRM database. Deleting the dashboard will remove this record from the CRM database and cannot be reversed. Deleting the dashboard will only remove the dashboard layout and settings, not the associated data.