
How-To Tutorials - Programming


Google Guice

Packt
24 Sep 2013
13 min read
Structure of the flightsweb application

Our application has two servlets: IndexServlet, a trivial example that forwards any request mapped to "/" to index.jsp, and FlightServlet, which processes the request using the functionality we developed in the previous section and forwards the response to response.jsp. In FlightServlet, we simply declare FlightEngine and SearchRequest as class attributes and annotate them with @Inject. FlightSearchFilter is a filter whose only responsibility is validating the request parameters. index.jsp is the landing page of this application and presents the user with a form to search for flights; response.jsp is the results page.

In order to build the application, we need to execute the following command in the directory where the pom.xml file for the project resides:

shell> mvn clean package

Since the project for this article is a web application project, it compiles and assembles a WAR file, flightsweb.war, in the target directory. We can deploy this file to Tomcat.

Using GuiceFilter

Let's start with a typical web application development scenario. We need to write a JSP to render a form for searching flights and subsequently a response JSP page. The search form posts the request parameters to a processing servlet, which processes the parameters and renders the response.

Let's have a look at web.xml. A web.xml file for an application intending to use Guice for dependency injection needs to declare the following filter:

<filter>
  <filter-name>guiceFilter</filter-name>
  <filter-class>com.google.inject.servlet.GuiceFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>guiceFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

It simply says that all requests need to pass through the Guice filter. This is essential, since we need to use various servlet scopes in our application, as well as dispatch requests to injectable filters and servlets. All other servlet- and filter-related declarations can be made programmatically using Guice-provided APIs.
Rolling out our ServletContextListener interface

Let's move on to another important piece: a servlet context listener for our application. Why do we need a servlet context listener in the first place? A servlet context listener comes into the picture once the application is deployed, and that event is the best time to bind and inject our dependencies. Guice provides an abstract class that implements the ServletContextListener interface. This class takes care of initializing the injector once the application is deployed and destroying it once it is undeployed. We add to this functionality by providing our own configuration for the injector and leave the initialization and destruction to the superclass provided by Guice. To accomplish this, we need to implement the following API in our subclass:

protected abstract Injector getInjector();

Let's have a look at what the implementation looks like:

package org.packt.web.listener;

import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.servlet.GuiceServletContextListener;
import com.google.inject.servlet.ServletModule;

public class FlightServletContextListener extends GuiceServletContextListener {

  @Override
  protected Injector getInjector() {
    return Guice.createInjector(new ServletModule() {
      @Override
      protected void configureServlets() {
        // overridden method contains various configurations
      }
    });
  }
}

Here, we are returning the injector instance using the API:

public static Injector createInjector(Module... modules)

Next, we need to declare our custom FlightServletContextListener in web.xml:

<listener>
  <listener-class>
    org.packt.web.listener.FlightServletContextListener
  </listener-class>
</listener>

ServletModule – the entry point for configurations

As the modules argument, we pass an instance of an anonymous class that extends ServletModule. A ServletModule configures the servlets and filters programmatically, which is a replacement for declaring the servlets, filters, and their corresponding mappings in web.xml.

Why do we need a replacement for web.xml in the first place? Think of it in different terms. We need to give our servlet singleton scope, and we need to use various web scopes such as RequestScope and SessionScope for classes such as SearchRequest and SearchResponse. These things cannot be done simply via declarations in web.xml; programmatic configuration is a far more logical choice. Let's have a look at a few configurations we write in our anonymous class extending ServletModule:

new ServletModule() {
  @Override
  protected void configureServlets() {
    install(new MainModule());
    serve("/response").with(FlightServlet.class);
    serve("/").with(IndexServlet.class);
  }
}

A servlet module first of all provides a way to install our other modules using the install() API. Here, we install MainModule, which is reused from the previous section; all other modules are installed from MainModule.
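The contents of MainModule come from the previous section and are not repeated in this excerpt. Purely to illustrate what a module installed this way might look like, here is a minimal sketch; the FlightEngine binding and the commented-out nested module are assumptions, not code from the article:

```java
import com.google.inject.AbstractModule;

// Hypothetical sketch of MainModule; the article does not list it, so the
// bindings below are assumed purely for illustration.
public class MainModule extends AbstractModule {

  @Override
  protected void configure() {
    // Install any further application modules from one place, for example:
    // install(new FlightModule());

    // Bind application services used by the servlets, e.g. the FlightEngine
    // injected into FlightServlet (assumed to be a concrete class).
    bind(FlightEngine.class);
  }
}
```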
Binding language

ServletModule presents APIs that can be used to configure filters and servlets. Using these expressive APIs, known as an EDSL, we can configure the mappings between servlets, filters, and their respective URLs. Guice uses an embedded domain-specific language, or EDSL, to help us create bindings simply and readably. We have already been using this notation while creating various sorts of bindings with the bind() APIs. Readers can refer to the Binder javadoc, where the EDSL is discussed with several examples.

Mapping servlets

The following statement maps the /response path in the application to the FlightServlet class's instance:

serve("/response").with(FlightServlet.class);

serve() returns an instance of ServletKeyBindingBuilder, which provides various APIs for mapping a URL to a servlet instance. serve() also takes a variable number of arguments, which helps to avoid repetition. For example, in order to map both /response and /response-quick to FlightServlet.class, we can use the following statement:

serve("/response", "/response-quick").with(FlightServlet.class);

serveRegex() is similar to serve(), but accepts regular expressions for URL patterns rather than concrete URLs. For instance, an easier way to map both of the preceding URL patterns would be:

serveRegex("^response").with(FlightServlet.class);

ServletKeyBindingBuilder.with() is an overloaded API. Let's have a look at the various signatures:

void with(Class<? extends HttpServlet> servletKey);
void with(Key<? extends HttpServlet> servletKey);

To use the key-binding option, we will develop a custom annotation, @FlightServe, and annotate FlightServlet with it. The following binding maps a URL pattern to a key:

serve("/response").with(Key.get(HttpServlet.class, FlightServe.class));

After this, we just need to declare a binding between @FlightServe and FlightServlet, which goes in a module:

bind(HttpServlet.class).annotatedWith(FlightServe.class).to(FlightServlet.class);

What is the advantage of binding indirectly using a key? First of all, it is the only way to separate an interface from an implementation. It also lets us assign a scope as part of the configuration: a servlet or a filter must be in at least singleton scope, and in this case we can assign the scope directly in the configuration. The option of annotating a filter or a servlet with @Singleton is also available.

Guice 3.0 provides the following overloaded versions, which additionally facilitate providing initialization parameters in a type-safe way:

void with(HttpServlet servlet);
void with(Class<? extends HttpServlet> servletKey, Map<String, String> initParams);
void with(Key<? extends HttpServlet> servletKey, Map<String, String> initParams);
void with(HttpServlet servlet, Map<String, String> initParams);

An important point to note here is that ServletModule not only provides a programmatic API to configure servlets, but also a type-safe, idiomatic API to configure their initialization parameters. It is not possible to ensure type safety while declaring initialization parameters in web.xml.
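Before moving on to filters, note that the declaration of @FlightServe itself is not shown in this excerpt. A Guice binding annotation is typically declared as in the following sketch; the exact set of targets chosen here is an assumption:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import com.google.inject.BindingAnnotation;

// Sketch of the @FlightServe binding annotation used in the key-based mapping above.
// @BindingAnnotation marks it as a Guice binding annotation; runtime retention is
// required so Guice can read it at injection time. The target list is an assumption.
@BindingAnnotation
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
public @interface FlightServe {
}
```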
Mapping filters

Similar to servlets, filters can be mapped to URL patterns or regular expressions. Here, the filter() API is used to map a URL pattern to a filter. For example:

filter("/response").through(FlightSearchFilter.class);

filter() returns an instance of FilterKeyBindingBuilder, which provides various APIs for mapping a URL to a filter instance. The filter() and filterRegex() APIs take exactly the same kinds of arguments as serve() and serveRegex() do when it comes to handling plain URLs or regular expressions.

Let's have a look at the FilterKeyBindingBuilder.through() APIs. Similar to ServletKeyBindingBuilder.with(), it provides various overloaded versions:

void through(Class<? extends Filter> filterKey);
void through(Key<? extends Filter> filterKey);

A key mapped to a URL, which is then bound via an annotation to an implementation, can be exemplified as:

filter("/response").through(Key.get(Filter.class, FlightFilter.class));

The binding is done through the annotation. Also note that the filter implementation is given singleton scope:

bind(Filter.class).annotatedWith(FlightFilter.class).to(FlightSearchFilter.class).in(Singleton.class);

Guice 3.0 provides the following overloaded versions, which additionally facilitate providing initialization parameters in a type-safe way:

void through(Filter filter);
void through(Class<? extends Filter> filterKey, Map<String, String> initParams);
void through(Key<? extends Filter> filterKey, Map<String, String> initParams);
void through(Filter filter, Map<String, String> initParams);

Again, these type-safe APIs provide a better configuration option than the declaration-driven web.xml.

Web scopes

Aside from dependency injection and configuration via programmatic APIs, Guice provides the ability to scope various classes depending on their role in the business logic. As we saw while developing the custom scope, a scope comes into the picture during the binding phase; later, when the scope API is invoked, it brings the provider into the picture. It is actually the provider that is the key to the complete implementation of a scope. The same applies to the web scopes.

@RequestScoped

Whenever we annotate a class with one of the servlet scopes, @RequestScoped or @SessionScoped, a call to the scope API of the respective scope is made. This results in eager preparation of the Provider<T> instances. So, to harness these providers, we need not configure any binding; these are implicit bindings. We just need to inject the providers wherever we need instances of the respective types. Let's discuss a few examples related to these servlet scopes.

Classes scoped to @RequestScoped are instantiated on every request. A typical example would be to instantiate SearchRequest on every request. We need to annotate SearchRequest with @RequestScoped:

@RequestScoped
public class SearchRequest { …… }

Next, in FlightServlet, we need to inject the implicit provider:

@Inject
private Provider<SearchRequest> searchRQProvider;

The instance can be fetched simply by invoking the provider's .get() API:

SearchRequest searchRequest = searchRQProvider.get();

@SessionScoped

The same applies to the @SessionScoped annotation. In FlightSearchFilter, we need an instance of RequestCounter (a class for keeping track of the number of requests in a session). RequestCounter needs to be annotated with @SessionScoped and is fetched in the same way as SearchRequest; however, the provider takes care of instantiating it whenever a new session is created:

@SessionScoped
public class RequestCounter implements Serializable { …… }

Next, in FlightSearchFilter, we need to inject the implicit provider:

@Inject
private Provider<RequestCounter> sessionCountProvider;

The instance can be fetched simply by invoking the provider's .get() API.
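To make the provider usage concrete, here is a minimal sketch of how FlightServlet might fetch the request-scoped instance inside doGet(), as the article recommends, rather than storing it in a field. The body of doGet() and the forward target are assumptions; the real search and rendering logic is not shown in this excerpt:

```java
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.google.inject.Inject;
import com.google.inject.Provider;
import com.google.inject.Singleton;

// Minimal sketch only: the real FlightServlet's search and forwarding logic is
// not shown in the article, so the doGet() body here is an assumption.
@Singleton
public class FlightServlet extends HttpServlet {

  @Inject
  private Provider<SearchRequest> searchRQProvider;

  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws ServletException, IOException {
    // Fetch the request-scoped instance inside the method so it remains a
    // stack-level reference and is not retained by this singleton servlet.
    SearchRequest searchRequest = searchRQProvider.get();

    // ... populate searchRequest from req, run the search, then render the result
    req.getRequestDispatcher("/response.jsp").forward(req, resp);
  }
}
```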
@RequestParameters

Guice also provides a @RequestParameters annotation, which can be used to inject the request parameters directly. Let's look at an example in FlightSearchFilter. Here, we inject the provider for the type Map<String, String[]> into a field:

@Inject
@RequestParameters
private Provider<Map<String, String[]>> reqParamMapProvider;

As the provider is bound internally via InternalServletModule (Guice installs this module internally), we can harness the implicit binding and inject the provider.

An important point to note is that if we try to inject classes annotated with servlet scopes such as @RequestScoped or @SessionScoped outside of the ServletContext, or via a non-HTTP request such as RPC, Guice throws the following exception:

SEVERE: Exception starting filter guiceFilter
com.google.inject.ProvisionException: Guice provision errors:
Error in custom provider, com.google.inject.OutOfScopeException: Cannot access scoped object. Either we are not currently inside an HTTP Servlet request, or you may have forgotten to apply com.google.inject.servlet.GuiceFilter as a servlet filter for this request.

This happens because the providers associated with these scopes necessarily work with a ServletContext, and hence Guice cannot complete the dependency injection. We need to make sure that our dependencies annotated with servlet scopes come into the picture only when we are in a web scope.

Another way to make the scoped dependencies available is to use the injector.getInstance() API. This, however, requires injecting the injector itself (using @Inject Injector) into the dependent class. This is not advisable, as it mixes dependency injection logic with application logic; we should avoid this approach.

Exercising caution while scoping

Our examples illustrate cases where we inject dependencies with a narrower scope into dependencies with a wider scope. For example, RequestCounter (which is @SessionScoped) is injected into FlightSearchFilter (which is a singleton). This needs to be designed very carefully, and only when we are absolutely sure that the narrowly scoped dependency will always be present; otherwise it creates a problem. It results in scope widening, which means that we are effectively widening the scope of session-scoped objects to that of the singleton-scoped object, the servlet. If not managed properly, this can lead to memory leaks, as the garbage collector cannot collect references to narrowly scoped objects that are held in widely scoped objects. Sometimes this is unavoidable; in such cases, we need to make sure we follow a few basic rules:

Inject the narrowly scoped dependency using providers. By following this strategy, we never allow the widely scoped class to hold a reference to the narrowly scoped dependency once it goes out of scope. Do not get the injector instance injected into the widely scoped class to fetch the narrowly scoped dependency directly; it can result in hard-to-debug bugs.

Make sure to use the narrowly scoped dependencies inside APIs only. This lets them live as stack variables rather than heap variables; once method execution finishes, the stack variables are garbage collected. Assigning the object fetched from the provider to a class-level reference can affect garbage collection adversely and result in memory leaks. Here, we use these narrowly scoped dependencies in the doGet() and doFilter() APIs, which makes sure that they are always available.

Conversely, injecting widely scoped dependencies into narrowly scoped dependencies works well. For example, injecting a @SessionScoped dependency into a @RequestScoped class is much better, since it is always guaranteed that the dependency will be available for injection, and once the narrowly scoped object goes out of scope it is garbage collected properly.

We retrofitted our flight search application into a web environment. In doing so, we learned about many aspects of the integration facilities Guice offers us:

We learned how to set up the application to use dependency injection using GuiceFilter and a custom ServletContextListener.
We saw how to avoid servlet and filter mappings in web.xml and follow a safer, programmatic approach using ServletModule.
We saw the usage of the various mapping APIs for the same, as well as certain newly introduced features in Guice 3.0.
We discussed how to use the various web scopes.

The EventBus Class

Packt
23 Sep 2013
14 min read
When developing software, the idea of objects sharing information or collaborating with each other is a must. The difficulty lies in ensuring that communication between objects is done effectively, but not at the cost of having highly coupled components. Objects are considered highly coupled when they have too much detail about other components' responsibilities. When we have high coupling in an application, maintenance becomes very challenging, as any change can have a rippling effect. To help us cope with this software design issue, we have event-based programming. In event-based programming, objects can either subscribe/listen for specific events, or publish events to be consumed. In Java, we have had the idea of event listeners for some time: an event listener is an object whose purpose is to be notified when a specific event occurs.

In this article, we are going to discuss the Guava EventBus class and how it facilitates the publishing and subscribing of events. The EventBus class will allow us to achieve the level of collaboration we desire, while doing so in a manner that results in virtually no coupling between objects. It's worth noting that EventBus is a lightweight, in-process publish/subscribe style of communication, and is not meant for inter-process communication. We are going to cover several classes in this article that have an @Beta annotation, indicating that their functionality may be subject to change in future releases of Guava.

EventBus

The EventBus class (found in the com.google.common.eventbus package) is the focal point for establishing the publish/subscribe programming paradigm with Guava. At a very high level, subscribers register with EventBus to be notified of particular events, and publishers send events to EventBus for distribution to interested subscribers. All the subscribers are notified serially, so it's important that any code performed in an event-handling method executes quickly.

Creating an EventBus instance

Creating an EventBus instance is accomplished by merely making a call to the EventBus constructor:

EventBus eventBus = new EventBus();

We could also provide an optional string argument to create an identifier (for logging purposes) for the EventBus:

EventBus eventBus = new EventBus(TradeAccountEvent.class.getName());

Subscribing to events

The following three steps are required for an object to receive notifications from EventBus:

1. The object defines a public method that accepts only one argument. The argument should be of the event type for which the object is interested in receiving notifications.
2. The method exposed for event notification is annotated with the @Subscribe annotation.
3. Finally, the object registers with an instance of EventBus, passing itself as an argument to the EventBus.register method.

Posting the events

To post an event, we need to pass an event object to the EventBus.post method. EventBus will call the handler methods of registered subscribers that take arguments assignable to the event object's type. This is a very powerful concept, because interfaces, superclasses, and interfaces implemented by superclasses are included, meaning we can easily make our event handlers as coarse- or fine-grained as we want, simply by changing the type accepted by the event-handling method.
Defining handler methods

Methods used as event handlers must accept only one argument: the event object. As mentioned before, EventBus calls event-handling methods serially, so it's important that those methods complete quickly. If any extended processing needs to be done as a result of receiving an event, it's best to run that code in a separate thread.

Concurrency

EventBus will not call handler methods from multiple threads, unless the handler method is marked with the @AllowConcurrentEvents annotation. By marking a handler method with @AllowConcurrentEvents, we are asserting that our handler method is thread-safe. Annotating a handler method with @AllowConcurrentEvents by itself will not register the method with EventBus.

Now that we have defined how we can use EventBus, let's look at some examples.

Subscribe – An example

Let's assume we have defined the following TradeAccountEvent class:

public class TradeAccountEvent {

  private double amount;
  private Date tradeExecutionTime;
  private TradeType tradeType;
  private TradeAccount tradeAccount;

  public TradeAccountEvent(TradeAccount account, double amount,
                           Date tradeExecutionTime, TradeType tradeType) {
    checkArgument(amount > 0.0, "Trade can't be less than zero");
    this.amount = amount;
    this.tradeExecutionTime = checkNotNull(tradeExecutionTime, "ExecutionTime can't be null");
    this.tradeAccount = checkNotNull(account, "Account can't be null");
    this.tradeType = checkNotNull(tradeType, "TradeType can't be null");
  }
  //Details left out for clarity
}

So whenever a buy or sell transaction is executed, we will create an instance of the TradeAccountEvent class. Now let's assume we need to audit the trades as they are being executed, so we have the following SimpleTradeAuditor class:

public class SimpleTradeAuditor {

  private List<TradeAccountEvent> tradeEvents = Lists.newArrayList();

  public SimpleTradeAuditor(EventBus eventBus) {
    eventBus.register(this);
  }

  @Subscribe
  public void auditTrade(TradeAccountEvent tradeAccountEvent) {
    tradeEvents.add(tradeAccountEvent);
    System.out.println("Received trade " + tradeAccountEvent);
  }
}

Let's quickly walk through what is happening here. In the constructor, we receive an instance of an EventBus class and immediately register the SimpleTradeAuditor class with that EventBus instance to receive notifications of TradeAccountEvents. We have designated auditTrade as the event-handling method by placing the @Subscribe annotation on it. In this case, we simply add the TradeAccountEvent object to a list and print an acknowledgement to the console that we received the trade.
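The TradeAccountEvent constructor refers to TradeType and TradeAccount, which are not listed in this excerpt. Minimal assumed versions, just sufficient for the examples to compile, might look like this; the contents of TradeAccount in particular are purely illustrative:

```java
// Assumed supporting types, not listed in the article excerpt. TradeType is
// clearly an enum of BUY/SELL given how it is used; TradeAccount's contents
// are a guess, kept minimal so the examples compile.
enum TradeType { BUY, SELL }

class TradeAccount {
  private final String accountId;

  TradeAccount(String accountId) {
    this.accountId = accountId;
  }

  @Override
  public String toString() {
    return "TradeAccount{" + accountId + "}";
  }
}
```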
Event Publishing – An example

Now let's take a look at a simple event-publishing example. For executing our trades, we have the following class:

public class SimpleTradeExecutor {

  private EventBus eventBus;

  public SimpleTradeExecutor(EventBus eventBus) {
    this.eventBus = eventBus;
  }

  public void executeTrade(TradeAccount tradeAccount, double amount, TradeType tradeType) {
    TradeAccountEvent tradeAccountEvent = processTrade(tradeAccount, amount, tradeType);
    eventBus.post(tradeAccountEvent);
  }

  private TradeAccountEvent processTrade(TradeAccount tradeAccount, double amount, TradeType tradeType) {
    Date executionTime = new Date();
    String message = String.format("Processed trade for %s of amount %s type %s @ %s",
        tradeAccount, amount, tradeType, executionTime);
    TradeAccountEvent tradeAccountEvent = new TradeAccountEvent(tradeAccount, amount, executionTime, tradeType);
    System.out.println(message);
    return tradeAccountEvent;
  }
}

Like the SimpleTradeAuditor class, we take an instance of the EventBus class in the SimpleTradeExecutor constructor. But unlike the SimpleTradeAuditor class, we store a reference to the EventBus for later use. While this may seem obvious to most, it is critical for the same instance to be passed to both classes. We will see in future examples how to use multiple EventBus instances, but in this case we are using a single instance.

Our SimpleTradeExecutor class has one public method, executeTrade, which accepts all of the information required to process a trade in our simple example. In this case, we call the processTrade method, passing along the required information, print to the console that our trade was executed, and return a TradeAccountEvent instance. Once the processTrade method completes, we make a call to EventBus.post with the returned TradeAccountEvent instance, which notifies any subscribers of the TradeAccountEvent object. If we take a quick view of both our publishing and subscribing examples, we see that although both classes participate in the sharing of required information, neither has any knowledge of the other.
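To make the "same instance" point concrete, a small wiring sketch (not from the article, and using the assumed TradeAccount/TradeType sketched earlier) could look like this:

```java
import com.google.common.eventbus.EventBus;

// Hypothetical wiring example: the article does not show a main method, but it
// stresses that the auditor and the executor must share the same EventBus instance.
public class TradeDemo {

  public static void main(String[] args) {
    EventBus eventBus = new EventBus("trades");

    // Register the auditor and hand the very same bus to the executor.
    SimpleTradeAuditor auditor = new SimpleTradeAuditor(eventBus);
    SimpleTradeExecutor executor = new SimpleTradeExecutor(eventBus);

    // Executing a trade publishes a TradeAccountEvent, which the auditor receives.
    executor.executeTrade(new TradeAccount("ACC-1"), 100.0, TradeType.BUY);
  }
}
```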
Finer-grained subscribing

We have just seen examples of publishing and subscribing using the EventBus class. If we recall, EventBus publishes events based on the type accepted by the subscribed method. This gives us some flexibility to send events to different subscribers by type. For example, let's say we want to audit the buy and sell trades separately. First, let's create two separate types of events:

public class SellEvent extends TradeAccountEvent {

  public SellEvent(TradeAccount tradeAccount, double amount, Date tradeExecutionTime) {
    super(tradeAccount, amount, tradeExecutionTime, TradeType.SELL);
  }
}

public class BuyEvent extends TradeAccountEvent {

  public BuyEvent(TradeAccount tradeAccount, double amount, Date tradeExecutionTime) {
    super(tradeAccount, amount, tradeExecutionTime, TradeType.BUY);
  }
}

Now we have created two discrete event classes, SellEvent and BuyEvent, both of which extend the TradeAccountEvent class. To enable separate auditing, we will first create a class for auditing SellEvent instances:

public class TradeSellAuditor {

  private List<SellEvent> sellEvents = Lists.newArrayList();

  public TradeSellAuditor(EventBus eventBus) {
    eventBus.register(this);
  }

  @Subscribe
  public void auditSell(SellEvent sellEvent) {
    sellEvents.add(sellEvent);
    System.out.println("Received SellEvent " + sellEvent);
  }

  public List<SellEvent> getSellEvents() {
    return sellEvents;
  }
}

Here we see functionality that is very similar to the SimpleTradeAuditor class, with the exception that this class will only receive SellEvent instances. Then we will create a class for auditing only the BuyEvent instances:

public class TradeBuyAuditor {

  private List<BuyEvent> buyEvents = Lists.newArrayList();

  public TradeBuyAuditor(EventBus eventBus) {
    eventBus.register(this);
  }

  @Subscribe
  public void auditBuy(BuyEvent buyEvent) {
    buyEvents.add(buyEvent);
    System.out.println("Received TradeBuyEvent " + buyEvent);
  }

  public List<BuyEvent> getBuyEvents() {
    return buyEvents;
  }
}

Now we just need to refactor our SimpleTradeExecutor class to create the correct TradeAccountEvent instance based on whether it's a buy or sell transaction:

public class BuySellTradeExecutor {

  // ... details left out for clarity, same as SimpleTradeExecutor
  // The executeTrade() method is unchanged from SimpleTradeExecutor

  private TradeAccountEvent processTrade(TradeAccount tradeAccount, double amount, TradeType tradeType) {
    Date executionTime = new Date();
    String message = String.format("Processed trade for %s of amount %s type %s @ %s",
        tradeAccount, amount, tradeType, executionTime);
    TradeAccountEvent tradeAccountEvent;
    if (tradeType.equals(TradeType.BUY)) {
      tradeAccountEvent = new BuyEvent(tradeAccount, amount, executionTime);
    } else {
      tradeAccountEvent = new SellEvent(tradeAccount, amount, executionTime);
    }
    System.out.println(message);
    return tradeAccountEvent;
  }
}

Here we've created a new BuySellTradeExecutor class that behaves in exactly the same manner as our SimpleTradeExecutor class, with the exception that, depending on the type of transaction, we create either a BuyEvent or a SellEvent instance. However, the EventBus class is completely unaware of any of these changes. We have registered different subscribers and we are posting different events, but these changes are transparent to the EventBus instance.

Also, take note that we did not have to create separate classes for the notification of events. Our SimpleTradeAuditor class would have continued to receive the events as they occurred; if we wanted to do separate processing depending on the type of event, we could simply add a check for the type of event. Finally, if needed, we could also have a class that defines multiple subscribe methods:

public class AllTradesAuditor {

  private List<BuyEvent> buyEvents = Lists.newArrayList();
  private List<SellEvent> sellEvents = Lists.newArrayList();

  public AllTradesAuditor(EventBus eventBus) {
    eventBus.register(this);
  }

  @Subscribe
  public void auditSell(SellEvent sellEvent) {
    sellEvents.add(sellEvent);
    System.out.println("Received TradeSellEvent " + sellEvent);
  }

  @Subscribe
  public void auditBuy(BuyEvent buyEvent) {
    buyEvents.add(buyEvent);
    System.out.println("Received TradeBuyEvent " + buyEvent);
  }
}

Here we've created a class with two event-handling methods. The AllTradesAuditor class will receive notifications about all trade events; it's just a matter of which method gets called by EventBus, depending on the type of event. Taken to an extreme, we could create an event-handling method that accepts a type of Object; as Object is the base class for all other objects in Java, we would receive notifications of any and all events processed by EventBus.

Finally, there is nothing preventing us from having more than one EventBus instance. If we were to refactor the BuySellTradeExecutor class into two separate classes, we could inject a separate EventBus instance into each class. Then it would be a matter of injecting the correct EventBus instance into the auditing classes, and we could have complete event publishing-subscribing independence.
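As a small illustration of the catch-all idea mentioned above, a handler accepting Object (not shown in the article) would look like this:

```java
import com.google.common.eventbus.EventBus;
import com.google.common.eventbus.Subscribe;

// Hypothetical catch-all subscriber: because every event is assignable to Object,
// this handler is invoked for every event posted on the bus it registers with.
public class CatchAllAuditor {

  public CatchAllAuditor(EventBus eventBus) {
    eventBus.register(this);
  }

  @Subscribe
  public void onAnyEvent(Object event) {
    System.out.println("Saw event of type " + event.getClass().getSimpleName());
  }
}
```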
Unsubscribing from events

Just as we want to subscribe to events, it may be desirable at some point to turn off the receiving of events. This is accomplished by passing the subscribed object to the eventBus.unregister method. For example, if we know at some point that we want to stop processing events, we could add the following method to our subscribing class:

public void unregister() {
  this.eventBus.unregister(this);
}

Once this method is called, that particular instance will stop receiving events for whatever it had previously registered. Other instances that are registered for the same event will continue to receive notifications.

AsyncEventBus

We stated earlier the importance of keeping the processing in our event-handling methods light, because EventBus processes all events serially. However, we have another option with the AsyncEventBus class. The AsyncEventBus class offers exactly the same functionality as EventBus, but uses a provided java.util.concurrent.Executor instance to execute handler methods asynchronously.

Creating an AsyncEventBus instance

We create an AsyncEventBus instance in a manner similar to the EventBus instance:

AsyncEventBus asyncEventBus = new AsyncEventBus(executorService);

Here we are creating an AsyncEventBus instance by providing a previously created ExecutorService instance. We also have the option of providing a String identifier in addition to the ExecutorService instance. AsyncEventBus is very helpful in situations where we suspect the subscribers are performing heavy processing when events are received.

DeadEvents

When EventBus receives a notification of an event through the post method and there are no registered subscribers, the event is wrapped in an instance of the DeadEvent class. Having a class that subscribes to DeadEvent instances can be very helpful when trying to ensure that all events have registered subscribers. The DeadEvent class exposes a getEvent method that can be used to inspect the original event that was undelivered. For example, we could provide a very simple class, shown as follows:

public class DeadEventSubscriber {

  private static final Logger logger = Logger.getLogger(DeadEventSubscriber.class);

  public DeadEventSubscriber(EventBus eventBus) {
    eventBus.register(this);
  }

  @Subscribe
  public void handleUnsubscribedEvent(DeadEvent deadEvent) {
    logger.warn("No subscribers for " + deadEvent.getEvent());
  }
}

Here we are simply registering for any DeadEvent instances and logging a warning for the original unclaimed event.
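A short sketch (not from the article) of wiring an AsyncEventBus together with the DeadEventSubscriber might look like the following; the pool size and the reuse of the earlier assumed TradeAccount are arbitrary choices:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.google.common.eventbus.AsyncEventBus;

// Hypothetical wiring of an AsyncEventBus: handler methods now run on the pool's
// threads instead of the posting thread.
public class AsyncTradeDemo {

  public static void main(String[] args) {
    ExecutorService executorService = Executors.newFixedThreadPool(4);
    AsyncEventBus asyncEventBus = new AsyncEventBus("async-trades", executorService);

    // Catch undelivered events and audit trades as before, now asynchronously.
    new DeadEventSubscriber(asyncEventBus);
    new SimpleTradeAuditor(asyncEventBus);

    new SimpleTradeExecutor(asyncEventBus)
        .executeTrade(new TradeAccount("ACC-2"), 250.0, TradeType.SELL);

    // Previously submitted handler tasks still run; no new tasks are accepted.
    executorService.shutdown();
  }
}
```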
Dependency injection

To ensure we register our subscribers and publishers with the same instance of an EventBus class, using a dependency injection framework (Spring or Guice) makes a lot of sense. In the following example, we show how to use Spring Framework Java configuration with the SimpleTradeAuditor and SimpleTradeExecutor classes. First, we need to make the following changes to the SimpleTradeAuditor and SimpleTradeExecutor classes:

@Component
public class SimpleTradeExecutor {

  private EventBus eventBus;

  @Autowired
  public SimpleTradeExecutor(EventBus eventBus) {
    this.eventBus = checkNotNull(eventBus, "EventBus can't be null");
  }
  // ... rest of the class unchanged
}

@Component
public class SimpleTradeAuditor {

  private List<TradeAccountEvent> tradeEvents = Lists.newArrayList();

  @Autowired
  public SimpleTradeAuditor(EventBus eventBus) {
    checkNotNull(eventBus, "EventBus can't be null");
    eventBus.register(this);
  }
  // ... rest of the class unchanged
}

Here we've simply added an @Component annotation at the class level for both classes. This enables Spring to pick up these classes as beans that we want to inject. In this case, we want to use constructor injection, so we added an @Autowired annotation to the constructor of each class. Having the @Autowired annotation tells Spring to inject an instance of an EventBus class into the constructor of both objects. Finally, we have our configuration class, which instructs the Spring Framework where to look for components to wire up with the beans defined in the configuration class:

@Configuration
@ComponentScan(basePackages = {"bbejeck.guava.article7.publisher",
                               "bbejeck.guava.article7.subscriber"})
public class EventBusConfig {

  @Bean
  public EventBus eventBus() {
    return new EventBus();
  }
}

Here we have the @Configuration annotation, which identifies this class to Spring as a context that contains the beans to be created and injected if need be. We defined the eventBus method, which constructs and returns an instance of an EventBus class to be injected into other objects. In this case, since we placed the @Autowired annotation on the constructors of the SimpleTradeAuditor and SimpleTradeExecutor classes, Spring will inject the same EventBus instance into both classes, which is exactly what we want. While a full discussion of how the Spring Framework functions is beyond the scope of this book, it is worth noting that Spring creates singletons by default, which is exactly what we want here. As we can see, using a dependency injection framework can go a long way in ensuring that our event-based system is configured properly.

Summary

In this article, we covered how to use event-based programming to reduce coupling in our code by using the Guava EventBus class. We covered how to create an EventBus instance and register subscribers and publishers. We also explored the powerful concept of using types to register the events we are interested in receiving. We learned about the AsyncEventBus class, which allows us to dispatch events asynchronously. We saw how we can use the DeadEvent class to ensure we have subscribers for all of our events. Finally, we saw how we can use dependency injection to ease the setup of our event-based system. In the next article, we will take a look at working with files in Guava.

About Test Studio

Packt
20 Sep 2013
13 min read
Testing concepts

The following is a conceptual overview of some fundamental testing terminologies and principles used in day-to-day testing activities.

Test case

A test case is a scenario that will be executed by the tester, or by an automation tool such as Test Studio, for any software testing purpose, such as uncovering potential errors in the system. It contains:

Test case identifier: This identifier uniquely distinguishes a test case.
Priority: The priority holds a value indicating the importance of a test case, so that the most important ones are executed first, and so on.
Preconditions: The preconditions describe the initial application state in which the test case is to be executed. They include actions that need to be completed before starting the execution of the test case, such as performing certain configurations on the application, or other details about the application's state that are found relevant.
Procedure: The procedure of a test case is the set of steps that the tester or automated testing tool needs to follow.
Expected behavior: It is important to set an expected behavior resulting from the procedure. How else would you verify the functionality you are testing? The expected behavior of a test case is specified before running a test, and it describes a logical and friendly response to your input from the system. When you compare the actual response of the system to the preset expected behavior, you determine whether the test case was a success or a failure.

Executing a test case

When executing a test case, you add at least one field to your test case description: the actual behavior, which logs the response of the system to the procedure. If the actual behavior deviates from the expected behavior, an incident report is created. This incident report is further analyzed, and in case a flaw is identified in the system, a fix is provided to solve the issue. The information that an incident report includes is the details of the test case, in addition to the actual behavior describing the anomalous events. The following example demonstrates the basic fields found in a sample incident report. It describes a transaction carried out at a bank's ATM:

Incident report identifier: ATM-398
Preconditions: User account balance is $1000
Procedure:
  1. User inserts a card.
  2. User enters the PIN.
  3. User attempts to withdraw a sum of $500.
Expected behavior: Operation is allowed
Actual behavior: Operation is rejected, insufficient funds in account!
Procedure results: Fail

The exit criteria

The following definition appears in the ISTQB (International Software Testing Qualification Board) glossary:

"The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task, which have not been finished. Exit criteria are used to report against and to plan when to stop testing. [After Gilb and Graham]"

The pesticide paradox

Software testing is governed by a set of principles, among which is the pesticide paradox. The following definition appears in the ISTQB glossary: if the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this "pesticide paradox", the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.
Element recognition

Element recognition is a pillar of automated test execution, as the tool can't perform an action on an object unless it recognizes it and knows how to find it. Element identification is important in making automated scripts less fragile during execution. This topic is revisited later in this article.

Testing phases

The following set of fundamental testing phases is based on their definition by the ISTQB. Other organizations might name them differently or include different activities in them.

Test planning and control: Test objectives and activities are set during test planning and a test plan is created. It can include the test strategy (the general approach to testing the application), test tools (reporting tools, the automated testing tool, and so on), test techniques (discussed in the next section), and human resources (the personnel needed to carry out the testing). As for test control, it should be exercised during all the phases to monitor progress and amend the test plan as needed.

Test analysis and design: During this phase, the system specifications are analyzed and test cases, along with their data, are designed. They are also prioritized, and the testing environment is identified.

Test implementation and execution: When implementing your tests and before executing them, you should set up your environment, generate the detailed test cases, run them, and then log and report the results of your findings.

Evaluating the exit criteria and reporting: Evaluating exit criteria is important in order to know when to stop testing. Occasionally, we find that more tests are needed if the risk in one or more application areas hasn't been fully covered. Once it is decided to stop test implementation and execution, reports are generated and submitted to the implicated persons.

Test closure activities: The test closure activities are designed to facilitate reusing the test data across different versions and products, as well as to promote evaluating and enhancing the testing process. These activities include saving all the test data and testware in a secure repository, evaluating the testing process, and logging suggested amendments.

Testing techniques

Ranging from easy and straightforward to complex and machine-computed, many testing techniques guide the design and generation of your test cases. In this section, we will describe the most basic of these techniques based on the ISTQB standards.

Equivalence classes: By definition, an equivalence class is a single class of inputs generating an equivalent output or, vice versa, a single class of outputs generated from equivalent inputs. For example, imagine you need to test a simple numeric field that accepts values from 0 to 100. During your testing, you cannot possibly exhaust all the values, so we would identify one valid equivalence partition and three invalid partitions as follows:

Valid partition: Values between 0 and 100 inclusive
Invalid partitions: Values less than zero; values greater than 100; nonnumeric inputs

As a result, you now choose tests from the four equivalence classes instead of testing all the options. The value of equivalence class analysis lies in the reduction of testing time and effort.
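Test Studio coded steps are normally written in C# or VB.NET, so the following is only a language-neutral illustration, sketched here in Java with JUnit, of how the four equivalence classes above might translate into concrete test cases; the isAccepted() helper is hypothetical and stands in for the system under test:

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Illustration only: one representative test per equivalence class for a field
// that accepts numeric values from 0 to 100. isAccepted() is a hypothetical
// validation helper standing in for the real application logic.
public class NumericFieldEquivalenceClassTest {

  @Test
  public void acceptsValueInsideValidPartition() {
    assertTrue(isAccepted("50"));
  }

  @Test
  public void rejectsValueBelowZero() {
    assertFalse(isAccepted("-1"));
  }

  @Test
  public void rejectsValueAboveOneHundred() {
    assertFalse(isAccepted("101"));
  }

  @Test
  public void rejectsNonNumericInput() {
    assertFalse(isAccepted("abc"));
  }

  // Hypothetical stand-in for the real validation logic.
  private boolean isAccepted(String input) {
    try {
      int value = Integer.parseInt(input);
      return value >= 0 && value <= 100;
    } catch (NumberFormatException e) {
      return false;
    }
  }
}
```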
Boundary values: When choosing boundary value analysis, you study the limits of your system input. Typically, these are the logical minimum and maximum values, in addition to technical or computational limits such as register sizes, buffer sizes, or memory availability. After determining your logical and technical limits, you test the system by inputting the actual boundary, the boundary decremented by the smallest possible unit, and the boundary incremented by the smallest possible unit. Assuming our system is an application form where you need to enter your first name in one of the fields, you can proceed with a boundary value analysis on the length of the first name string. Considering that the smallest input is one character and the largest input is one hundred, our boundary value analysis leads to tests for strings having the following numbers of characters: zero (empty input), one, two, ninety-nine, one hundred, and one hundred and one.

Decision tables: In certain systems, many rules may interact with each other to produce the output, such as a security matrix. For instance, let's assume your system is a document management system. The possible factors determining whether a user has view rights are: belonging to user groups with a permission set for each group, having an individual permission for each user, and having access to the documents' file path. These factors are called the conditions of the decision table, where the actions might be reading, editing, or deleting a document. A decision table allows you to test and verify every combination of the listed conditions. Certain rules might simplify your table, but they are outside the scope of this article. The resulting decision table for this document management system example is shown in the figure "Decision table for user rights".

State transition diagram: In some systems, not only do the actions performed determine the output and the routing of the application, but also the state the system was in before these actions. For such systems, a state transition diagram is used to generate test cases. Firstly, the state transition diagram is drawn with every state as a circle and every possible action as an arrow; conditions are written between square brackets, and the output is preceded by a forward slash. Secondly, each action represented in the diagram is attempted from an initial state. Thirdly, test cases are generated by looping around the state transition diagram and choosing different possible paths while varying the conditions. The expected behavior in state transition test cases is both the output of the system and the transition to the next expected state. A sample state transition diagram of a login module is shown in the figure "State transition diagram for user authentication to the system".

About Test Studio

This section lists the features provided in Test Studio:

Functional test automation: The Test Studio solution for functional test automation is covered through the following topics: building automated tests, using translators and inserting verifications, adding coded steps, executing tests and logging, adding custom logging, inserting manual steps, assigning and reading variables in tests, debugging errors, and integrating automated test creation with Visual Studio.
Data-driven architecture: Test Studio offers built-in integration with data sources, allowing you to apply a data-driven architecture during test automation. This feature includes binding tests to SQL, MS Excel, XML, and local data sources, creating data-driven verifications, and integrating the data-driven architecture with normal automated execution contexts.

Element recognition: Element recognition is a powerful feature in Test Studio from which it derives additional test reliability. Element recognition topics are covered through Test Studio Find expressions for UI elements, element repository consolidation and maintenance, and specialized chained Find expressions.

Manual testing: In addition to automated testing, Test Studio guides the manual testing process. Manual testing includes creating manual test steps, integrating with MS Excel, converting manual tests to hybrid, and executing these two types of tests.

Organizing the test repository and source control: Tests within the Test Studio project can be organized and reorganized using the features embedded in the tool. Its integration with external source control systems also adds to this management process. The underlying topics are managing tests under folders, setting test properties, and binding your test project to source control from both Test Studio and Visual Studio.

Test suites execution and reporting: Grouping tests into test suites is achievable through Test Studio test lists. This feature comprises creating static and dynamic test lists, executing them, logging their execution results, viewing standard reports, and extending with custom reports.

Extended libraries: Extending the automation functionality of the testing framework is an option available through the creation of Test Studio plugin libraries.

Performance testing: In Test Studio, nonfunctional testing is first addressed with performance testing. This feature covers developing performance tests, executing them, gathering performance counters, and analyzing and baselining execution results.

Load testing: Nonfunctional testing in Test Studio is augmented with another type of test, load testing. This topic covers configuring Test Studio load testing services, developing load tests, recording HTTP traffic, creating user profiles and workloads, monitoring machines, gathering performance metrics, executing load tests, and creating custom charts.

Mobile testing: Test Studio is extended with a version specialized in iOS web, native, and hybrid app testing. It includes preparing applications for testing within Test Studio, creating automated tests, inserting verifications on UI elements, registering applications on the web portal, syncing test projects, sending and viewing built-in feedback messages, sending and viewing crash reports, and managing and monitoring registered applications through web portals.

Approach

While reading this article, you will find a problem-based approach to automating tests with Test Studio. The following general approach might vary slightly between the different examples:

General problem: We start by stating the general problem that you may face in real-life automation.
Real-life example: We then give a real-life example based on our previous experience in software testing.
Solution using the Test Studio IDE: Having described the problem, a solution using the Test Studio IDE is provided.
Solution using code: Finally, some solutions are provided by writing code.
Setting up your environment

You will get a list of files with this article to help you try the examples properly. The following explains how to set up the environment to practice the automation examples against the applications under test.

The File Comparer application

To configure this application's environment, you need to:

1. Run the FC_DB-Database Scripts.sql file in SQL Management Studio.
2. Open the settings.xml file from the solution bin and edit the ConnectionString parameter.

Reports

The data source files for these reports can be found in the ODCs folder. In order to properly display the charts in the workbook:

1. Edit the ConnectionString parameter inside the ODC extension files.
2. Bind the pivot tables inside the Excel workbook to these files as follows: the Execution Metrics for Last Run sheet to the FC_DB-L-EMLR.odc file, the Execution Metrics over Time sheet to the FC_DB-MOT.odc file, the Feature Coverage sheet to the FC_DB-FC.odc file, and the Test Execution Duration sheet to the FC_DB-TED.odc file.

Additional files

The following are the additional files used in this article:

The Test Studio Automated Solutions folder contains the Test Studio automated solution for the examples in the article.
The TestStudio.Extension folder is a Visual Studio solution and corresponds to the Test Studio extension library.

Other reference sources

Refer to the Telerik online documentation for:

Test Studio standalone and VS plugin editions, found at http://www.telerik.com/automated-testing-tools/support/documentation/user-guide/test-execution/test-list-settings.aspx
Mobile testing using the Test Studio extension for iOS testing, found at http://www.telerik.com/automated-testing-tools/support/documentation/mobile-testing/testing.aspx

Also, for software testing and automation concepts, you can refer to:

The ISTQB-BCS Certified Tester Foundation Level book, Foundations of Software Testing by Dorothy Graham, Erik Van Veenendaal, Isabel Evans, and Rex Black
The ISTQB glossary of testing terms 2.2

Summary

This article gave a brief introduction to Test Studio and its features, providing a basic understanding of what Test Studio is, along with some useful links.

Testing Backbone.js Application

Packt
20 Sep 2013
6 min read
Testing Backbone applications is no different from testing any other application: you are still going to drive your code from your specs, except that Backbone is already leveraging a lot of functionality for you, for free. So expect to write less code, and consequently fewer specs.

Testing Backbone Views

We have already seen some of the advantages of using the View pattern in Testing Frontend Code, and we are already creating our interface components in such a manner. So how is a Backbone View different from what we have done so far? It retains a lot of the patterns that we have discussed as best practices for creating maintainable browser code, but with some syntax sugar and automation to make our life easier. Views are the glue code between the HTML and the model, and a Backbone View's main responsibility is to add behavior to the interface while keeping it in sync with a model or collection. As we will see, Backbone's biggest triumph is how easy it makes DOM event delegation, a task usually done with jQuery.

Declaring a new View

Declaring a new View is a matter of extending the base Backbone.View object. To demonstrate how it works, we need an example. We are going to create a new View whose responsibility is to render a single investment on the screen. We are going to create it in such a way that allows its use by the InvestmentListView component. This is a new component and spec, written in src/InvestmentView.js and spec/InvestmentViewSpec.js respectively. In the spec file, we can write something similar to this:

describe("InvestmentView", function() {
  var view;

  beforeEach(function() {
    view = new InvestmentView();
  });

  it("should be a Backbone View", function() {
    expect(view).toEqual(jasmine.any(Backbone.View));
  });
});

This translates into an implementation that extends the base Backbone View component:

(function (Backbone) {
  var InvestmentView = Backbone.View.extend();

  this.InvestmentView = InvestmentView;
})(Backbone);

And now we are ready to explore some of the new functionality provided by Backbone.

The el property

Like the View pattern, a Backbone View also has an attribute containing the reference to its DOM element. The difference here is that Backbone provides it by default:

view.el: The DOM element
view.$el: The jQuery object for that element
view.$: A scoped jQuery lookup function (the same way we have implemented it)

And if you don't provide an element in the constructor, it creates an element for you automatically. Of course, the element it creates is not attached to the document, and it is up to the View's user code to attach it. Here is a common pattern you see while using Views:

1. Instantiate it: var view = new InvestmentView();
2. Call the render function to draw the View's components: view.render();
3. Append its element to the page document: $('body').append(view.el);

Given our clean implementation of InvestmentView, if you were to execute the preceding code on a clean page, you would get the following result:

<body>
  <div></div>
</body>

An empty div element; that is the default element created by Backbone. But we can change that with a few configuration parameters in the InvestmentView declaration. Let's say we want the DOM element of InvestmentView to be a list item (li) with an investment CSS class.
We could write this spec using the familiar Jasmine jQuery matchers:

describe("InvestmentView", function() {
  var view;

  beforeEach(function() {
    view = new InvestmentView();
  });

  it("should be a list item with 'investment' class", function() {
    expect(view.$el).toBe('li.investment');
  });
});

You can see that we didn't use the setFixtures function, since we can run this test against the element instance available on the View. Now to the implementation: all we have to do is define two simple attributes in the View definition, and Backbone will use them to create the View's element:

var InvestmentView = Backbone.View.extend({
  className: 'investment',
  tagName: 'li'
});

Looking at the implementation, you might be wondering whether we shouldn't test it directly. Here I would recommend against it, since you wouldn't get any benefit from that approach, and this spec is much more solid.

That is great, but how do we add content to that DOM element? That is up to the render function, which we are going to see next. We could also pass an existing element to the View's constructor:

var view = new InvestmentView({ el: $('body') });

But by letting the View handle its own rendering, we get better componentization, and we can also gain on performance.

Rendering

Now that we understand that it is a good idea to have an empty element available on the View, we must get into the details of how to draw on this empty canvas. Backbone Views already come with an available render function, but it is a dummy implementation, so it is up to you to define how it works. Going back to the InvestmentView example, let's add a new acceptance criterion to describe how it should be rendered. We are going to start by expecting that it renders the return of investment as a percentage value. Here is the spec implementation:

describe("InvestmentView", function() {
  var view, investment;

  beforeEach(function() {
    investment = new Investment();
    view = new InvestmentView({ model: investment });
  });

  describe("when rendering", function() {
    beforeEach(function() {
      investment.set('roi', 0.1);
      view.render();
    });

    it("should render the return of investment", function() {
      expect(view.$el).toContainHtml('10%');
    });
  });
});

That is a very standard spec with concepts that we have seen before, and the implementation is just a matter of defining the render function in the InvestmentView declaration:

var InvestmentView = Backbone.View.extend({
  className: 'investment',
  tagName: 'li',

  render: function () {
    this.$el.html('<p>' + formatedRoi.call(this) + '</p>');
    return this;
  }
});

function formatedRoi () {
  return (this.model.get('roi') * 100) + '%';
}

It uses the this.$el property to add some HTML content to the View's element. There are some details that are important to notice regarding the render function implementation:

We are using the jQuery.html function, so that we can invoke the render function multiple times without duplicating the View's content.
The render function returns the View instance once it has completed rendering. This is a common pattern to allow chained calls, such as: $('body').append(new InvestmentView().render().el);

Now back to the test. You can see that we weren't testing for a specific HTML snippet, but rather that the text 10% was rendered. You could have written a more thorough spec by checking for the exact HTML in the expectation, but that ends up adding test complexity with little benefit.

Summary

In this article, you have seen how to use Backbone to do some heavy lifting, allowing you to focus more on your application code.
I showed you the power of events, and how they make integration between different components much easier, allowing you to keep your models and Views in sync. Resources for Article: Further resources on this subject: The architecture of JavaScriptMVC [Article] Working with JavaScript in Drupal 6: Part 1 [Article] Syntax Validation in JavaScript Testing [Article]
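Before moving on, it may help to see the InvestmentView example from this article gathered into one place. The following is a minimal recap sketch, assuming Backbone and jQuery are loaded and that the Investment model from the sample project provides a numeric roi attribute; the commented usage line at the end shows the render-and-append pattern discussed above.

// Recap sketch of the InvestmentView discussed in this article.
// Assumes Backbone, Underscore, and jQuery are loaded, and that an
// Investment model with a numeric 'roi' attribute exists in the project.
(function (Backbone) {
  // Formats the model's return of investment as a percentage string.
  function formatedRoi() {
    return (this.model.get('roi') * 100) + '%';
  }

  var InvestmentView = Backbone.View.extend({
    tagName: 'li',
    className: 'investment',
    render: function () {
      // jQuery.html() keeps repeated render() calls from duplicating content.
      this.$el.html('<p>' + formatedRoi.call(this) + '</p>');
      return this; // returning this allows chained calls
    }
  });

  this.InvestmentView = InvestmentView;
})(Backbone);

// Typical usage: render the View and attach its element to the page.
// $('body').append(new InvestmentView({ model: investment }).render().el);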

Editing attributes

Packt
19 Sep 2013
4 min read
(For more resources related to this topic, see here.) There are three main use cases for attribute editing. First, we might want to edit the attributes of one specific feature, for example, to fix a wrong name. Second, we might want to edit attributes of a group of features. Or third, we might want to change the attributes of all the features within a layer. All these use cases are covered by functionality available through the attribute table. We can access it by going to Layer | Open Attribute Table, the Open Attribute Table button present in the Attributes toolbar, or in the layer name context menu. To change attribute values, we always have to first enable editing. Then we can double-click on any cell in the attribute table to activate the input mode. Clicking on Enter confirms the change, but to save the new value permanently, we have to also click on the Save Edit(s) button or press Ctrl + S. In the bottom-right corner of the attribute table dialog, we can switch from the table to the form view, as shown in the following screenshot, and start editing there. Another option for editing the attributes of one feature is to open the attribute form directly by clicking on the feature on the map using the Identify tool. By default, the Identify tool displays the attribute values in the read mode, but we can enable Open feature form if a single feature is identified by going to Settings | Options | Map Tools. In the attribute table, we also find tools to handle selections (from left to right starting at the third button): Delete selected features, Select by expression, Cancel the selection, Move selected features to the top of the table, Invert the selection, Pan to the selected features, Zoom to the selected features, and Copy the selected features. Another way to select features in the attribute table is to click on the row number. The next two buttons allow us to add and remove columns. When we click on the delete column button, we get a list of columns to choose from. Similarly, the add columns button brings up a dialog to specify the name and data type of the new column. If we want to change attributes of multiple or all features in a layer, editing them manually usually isn't an option. That is what Field Calculator is good for. We can access it using the Open field calculator button in the attribute table or using the Ctrl + I keys. In Field Calculator, we can choose to only update selected features or to update all the features in the layer. Besides updating an existing field, we can also create a new field. The function list is the same one we already explored when we selected features by expression. We can use any of these functions to populate a new field or update an existing one. Here are some example expressions that are used often: We can create an id column using the $rownum function, which populates a column with the row numbers as shown in the following screenshot Another common use case is to calculate line length or polygon area using the geometry functions $length and $area respectively Similarly, we can get point coordinates using $x and $y If we want to get the start or end points of a line, we can use xat(0) and yat(0) or xat(-1) and yat(-1) Summary Thus, in this article we have learned how to edit the attributes in QGIS. Resources for Article : Further resources on this subject: Geo-Spatial Data in Python: Working with Geometry [Article] Web Frameworks for Python Geo-Spatial Development [Article] Plotting Geographical Data using Basemap [Article]
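The Field Calculator expressions shown above can also be reproduced from the QGIS Python console. The following PyQGIS sketch is not part of the article's workflow and its API calls are quoted from memory for QGIS 2.x, so treat it as an illustration only: it populates an area field (an assumed name) on the currently active polygon layer, mirroring the $area expression.

# Hypothetical PyQGIS sketch (QGIS 2.x Python console): populate an 'area'
# field for all features of the currently active polygon layer.
# The layer choice and the field name are assumptions for illustration.
from PyQt4.QtCore import QVariant
from qgis.core import QgsField
from qgis.utils import iface

layer = iface.activeLayer()
layer.startEditing()

# Add the 'area' field if it does not exist yet.
if layer.fieldNameIndex('area') == -1:
    layer.dataProvider().addAttributes([QgsField('area', QVariant.Double)])
    layer.updateFields()

idx = layer.fieldNameIndex('area')
for feature in layer.getFeatures():
    # Equivalent of the $area expression in the Field Calculator.
    layer.changeAttributeValue(feature.id(), idx, feature.geometry().area())

layer.commitChanges()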

Using XML Facade for DOM

Packt
13 Sep 2013
25 min read
(For more resources related to this topic, see here.) The Business Process Execution Language (BPEL) is based on XML, which means that all the internal variables and data are presented in XML. BPEL and Java technologies are complementary, we seek ways to ease the integration of the technologies. In order to handle the XML content from BPEL variables in Java resources (classes), we have a couple of possibilities: Use DOM (Document Object Model) API for Java, where we handle the XML content directly through API calls. An example of such a call would be reading from the input variable: oracle.xml.parser.v2.XMLElement input_cf= (oracle.xml.parser.v2.XMLElement)getVariableData("inputVariable","payload","/client:Cashflows"); We receive the XMLElement class, which we need to handle further, either be assignment, reading of content, iteration, or something else. As an alternative, we can use XML facade though Java Architecture for XML Binding (JAXB). JAXB provides a convenient way of transforming XML to Java or vice-versa. The creation of XML facade is supported through the xjc utility and of course via the JDeveloper IDE. The example code for accessing XML through XML facade is: java.util.List<org.packt.cashflow.facade.PrincipalExchange>princEx= cf.getPrincipalExchange(); We can see that there is neither XML content nor DOM API anymore. Furthermore, we have to access the whole XML structure represented by Java classes. The latest specification of JAXB at the time of writing is 2.2.7, and its specification can be found at the following location: https://jaxb.java.net/. The purpose of an XML facade operation is the marshalling and un-marshalling of Java classes. When the originated content is presented in XML, we use un-marshalling methods in order to generate the correspondent Java classes. In cases where we have content stored in Java classes and we want to present the content in XML, we use the marshalling methods. JAXB provides the ability to create XML facade from an XML schema definition or from the WSDL (Web Service Definition/Description Language). The latter method provides a useful approach as we, in most cases, orchestrate web services whose operations are defined in WSDL documents. Throughout this article, we will work on a sample from the banking world. On top of this sample, we will show how to build the XML facade. The sample contains the simple XML types, complex types, elements, and cardinality, so we cover all the essential elements of functionality in XML facade. Setting up an XML facade project We start generating XML facade by setting up a project in a JDeveloper environment which provides convenient tools for building XML facades. This recipe will describe how to set up a JDeveloper project in order to build XML facade. Getting ready To complete the recipe, we need the XML schema of the BPEL process variables based on which we build XML facade. Explore the XML schema of our banking BPEL process. 
We are interested in the structure of the BPEL request message: <xsd:complexType name="PrincipalExchange"><xsd:sequence><xsd:element minOccurs="0"name="unadjustedPrincipalExchangeDate" type="xsd:date"/><xsd:element minOccurs="0"name="adjustedPrincipalExchangeDate" type="xsd:date"/><xsd:element minOccurs="0" name="principalExchangeAmount"type="xsd:decimal"/><xsd:element minOccurs="0" name="discountFactor"type="xsd:decimal"/></xsd:sequence><xsd:attribute name="id" type="xsd:int"/></xsd:complexType><xsd:complexType name="CashflowsType"><xsd:sequence><xsd:element maxOccurs="unbounded" minOccurs="0"name="principalExchange" type="prc:PrincipalExchange"/></xsd:sequence></xsd:complexType><xsd:element name="Cashflows" type="prc:CashflowsType"/> The request message structure presents just a small fragment of cash flows modeled in the banks. The concrete definition of a cash flow is much more complex. However, our definition contains all the right elements so that we can show the advantages of using XML facade in a BPEL process. How to do it... The steps involved in setting up a JDeveloper project for XML façade are as follows: We start by opening a new Java Project in JDeveloper and naming it CashflowFacade. Click on Next. In the next window of the Create Java Project wizard, we select the default package name org.packt.cashflow.facade. Click on Finish. We now have the following project structure in JDeveloper: We have created a project that is ready for XML facade creation. How it works... After the wizard has finished, we can see the project structure created in JDeveloper. Also, the corresponding file structure is created in the filesystem. Generating XML facade using ANT This recipe explains how to generate XML facade with the use of the Apache ANT utility. We use the ANT scripts when we want to build or rebuild the XML facade in many iterations, for example, every time during nightly builds. Using ANT to build XML façade is very useful when XML definition changes are constantly in phases of development. With ANT, we can ensure continuous synchronization between XML and generated Java code. The official ANT homepage along with detailed information on how to use it can be found at the following URL: http://ant.apache.org/. Getting ready By completing our previous recipe, we built up a JDeveloper project ready to create XML facade out of XML schema. To complete this recipe, we need to add ANT project technology to the project. We achieve this through the Project Properties dialog: How to do it... The following are the steps we need to take to create a project in JDeveloper for building XML façade with ANT: Create a new ANT build file by right-clicking on the CashflowFacade project node, select New, and choose Buildfile from Project (Ant): The ANT build file is generated and added into the project under the Resources folder. Now we need to amend the build.xml file with the code to build XML facade. We will first define the properties for our XML facade: <property name="schema_file" location="../Banking_BPEL/xsd/Derivative_Cashflow.xsd"/><property name="dest_dir" location="./src"/><property name="package" value="org.packt.cashflow.facade"/> We define the location of the source XML schema (it is located in the BPEL process). Next, we define the destination of the generated Java files and the name of the package. Now, we define the ANT target in order to build XML facade classes. The ANT target presents one closed unit of ANT work. 
We define the build task for the XML façade as follows: <target name="xjc"><delete dir="src"/><mkdir dir="src"/><echo message="Compiling the schema..." /><exec executable="xjc"><arg value="-xmlschema"/><arg value="${schema_file}"/><arg value="-d"/><arg value="${dest_dir}"/><arg value="-p"/><arg value="${package}"/></exec></target> Now we have XML facade packaged and ready to be used in BPEL processes. How it works… ANT is used as a build tool and performs various tasks. As such, we can easily use it to build XML facade. Java Architecture for XML Binding provides the xjc utility, which can help us in building XML facade. We have provided the following parameters to the xjc utility: Xmlschema: This is the threat input schema as XML schema d: This specifies the destination directory of the generated classes p: This specifies the package name of the generated classes There are a number of other parameters, however we will not go into detail about them here. Based on the parameters we provided to the xjc utility, the Java representation of the XML schema is generated. If we examine the generated classes, we can see that there exists a Java class for every type defined in the XML schema. Also, we can see that the ObjectFactory class is generated, which eases the generation of Java class instances. There's more... There is a difference in creating XML facade between Versions 10g and 11g of Oracle SOA Suite. In Oracle SOA Suite 10g, there was a convenient utility named schema, which is used for building XML facade. However, in Oracle SOA Suite 11g, the schema utility is not available anymore. To provide a similar solution, we create a template class, which is later copied to a real code package when needed to provide functionality for XML facade. We create a new class Facade in the called facade package. The only method in the class is static and serves as a creation point of facade: public static Object createFacade(String context, XMLElement doc)throws Exception {JAXBContext jaxbContext;Object zz= null;try {jaxbContext = JAXBContext.newInstance(context);Unmarshaller unmarshaller = jaxbContext.createUnmarshaller();zz = unmarshaller.unmarshal(doc);return zz;} catch (JAXBException e) {throw new Exception("Cannot create facade from the XML content. "+ e.getMessage());}} The class code implementation is simple and consists of creating the JAXB context. Further, we un-marshall the context and return the resulting class to the client. In case of problems, we either throw an exception or return a null object. Now the calling code is trivial. For example, to create XML facade for the XML content, we call as follows: Object zz = facade.Facade.createFacade("org.packt.cashflow.facade",document.getSrcRoot()); Creating XML facade from XSD This recipe describes how to create XML facade classes from XSD. Usually, the necessity to access XML content out of Java classes comes from already defined XML schemas in BPEL processes. How to do it... We have already defined the BPEL process and the XML schema (Derivative_Cashflow.xsd) in the project. The following steps will show you how to create the XML facade from the XML schema: Select the CashflowFacade project, right-click on it, and select New. Select JAXB 2.0 Content Model from XML Schema. Select the schema file from the Banking_BPEL project. Select the Package Name for Generated Classes checkbox and click on the OK button. The corresponding Java classes for the XML schema were generated. How it works... 
Now compare the classes generated via the ANT utility in the Generating XML facade using ANT recipe with this one. In essence, the generated files are the same. However, we see the additional file jaxb.properties, which holds the configuration of the JAXB factory used for the generation of Java classes. It is recommended to create the same access class (Facade.java) in order to simplify further access to XML facade. Creating XML facade from WSDL It is possible to include the definitions of schema elements into WSDL. To overcome the extraction of XML schema content from the WSDL document, we would rather take the WSDL document and create XML facade for it. This recipe explains how to create XML facade out of the WSDL document. Getting ready To complete the recipe, we need the WSDL document with the XML schema definition. Luckily, we already have one automatically generated WSDL document, which we received during the Banking_BPEL project creation. We will amend the already created project, so it is recommended to complete the Generating XML facade using ANT recipe before continuing with this recipe. How to do it... The following are the steps involved in creating XML façade from WSDL: Open the ANT configuration file (build.xml) in JDeveloper. We first define the property which identifies the location of the WSDL document: <property name="wsdl_file" location="../Banking_BPEL/Derivative_Cashflow.wsdl"/> Continue with the definition of a new target inside the ANT configuration file in order to generate Java classes from the WSDL document: <target name="xjc_wsdl"><delete dir="src/org"/><mkdir dir="src/org"/><echo message="Compiling the schema..." /><exec executable="xjc"><arg value="-wsdl"/><arg value="${schema_file}"/><arg value="-d"/><arg value="${dest_dir}"/><arg value="-p"/><arg value="${package}"/></exec></target> From the configuration point of view, this step completes the recipe. To run the newly defined ANT task, we select the build.xml file in the Projects pane. Then, we select the xjc_wsdl task in the Structure pane of JDeveloper, right-click on it, and select Run Target "xjc_wsdl": How it works... The generation of Java representation classes from WSDL content works similar to the generation of Java classes from XSD content. Only the source of the XML input content is different from the xjc utility. In case we execute the ANT task with the wrong XML or WSDL content, we receive a kind notification from the xjc utility. For example, if we run the utility xjc with the parameter –xmlschema over the WSDL document, we get a warning that we should use different parameters for generating XML façade from WSDL. Note that generation of Java classes from the WSDL document via JAXB is only available through ANT task definition or the xjc utility. If we try the same procedure with JDeveloper, an error is reported. Packaging XML facade into JAR This recipe explains how to prepare a package containing XML facade to be used in BPEL processes and in Java applications in general. Getting ready To complete this recipe, we need the XML facade created out of the XML schema. Also, the generated Java classes need to be compiled. How to do it... The steps involved for packaging XML façade into JAR are as follows: We open the Project Properties by right-clicking on the CashflowFacade root node. From the left-hand side tree, select Deployment and click on the New button. The Create Deployment Profile window opens where we set the name of the archive. Click on the OK button. 
The Edit JAR Deployment Profile Properties dialog opens where you can configure what is going into the JAR archive. We confirm the dialog and deployment profile as we don't need any special configuration. Now, we right-click on the project root node (CashflowFacade), then select Deploy and CFacade. The window requesting the deployment action appears. We simply confirm it by pressing the Finish button: As a result, we can see the generated JAR file created in the deploy folder of the project. There's more... In this article, we also cover the building of XML facade with the ANT tool. To support an automatic build process, we can also define an ANT target to build the JAR file. We open the build.xml file and define a new target for packaging purposes. With this target, we first recreate the deploy directory and then prepare the package to be utilized in the BPEL process: <target name="pack" depends="compile"><delete dir="deploy"/><mkdir dir="deploy"/><jar destfile="deploy/CFacade.jar"basedir="./classes"excludes="**/*data*"/></target> To automate the process even further, we define the target to copy generated JAR files to the location of the BPEL process. Usually, this means copying the JAR files to the SCA-INF/lib directory: <target name="copyLib" depends="pack"><copy file="deploy/CFacade.jar" todir="../Banking_BPEL/SCAINF/lib"/></target> The task depends on the successful creation of a JAR package, and when the JAR package is created, it is copied over to the BPEL process library folder. Generating Java documents for XML facade Well prepared documentation presents important aspect of further XML facade integration. Suppose we only receive the JAR package containing XML facade. It is virtually impossible to use XML facade if we don't know what the purpose of each data type is and how we can utilize it. With documentation, we receive a well-defined XML facade capable of integrating XML and Java worlds together. This recipe explains how to document the XML facade generated Java classes. Getting ready To complete this recipe, we only need the XML schema defined. We already have the XML schema in the Banking_BPEL project (Derivative_Cashflow.xsd). How to do it... The following are the steps we need to take in order to generate Java documents for XML facade: We open the Derivative_Cashflow.xsd XML schema file. Initially, we need to add an additional schema definition to the XML schema file: <xsd:schema attributeFormDefault="unqualified"elementFormDefault="qualified"targetNamespace="http:// jxb_version="2.1"></xsd:schema> In order to put documentation at the package level, we put the following code immediately after the <xsd:schema> tag in the XML schema file: <xsd:annotation><xsd:appinfo><jxb:schemaBindings><jxb:package name="org.packt.cashflow.facade"><jxb:javadoc>This package represents the XML facadeof the cashflows in the financial derivativesstructure.</jxb:javadoc></jxb:package></jxb:schemaBindings></xsd:appinfo></xsd:annotation> In order to add documentation at the complexType level, we need to put the following lines into the XML schema file. The code goes immediately after the complexType definition: <xsd:annotation><xsd:appinfo><jxb:class><jxb:javadoc>This class defines the data for theevents, when principal exchange occurs.</jxb:javadoc></jxb:class></xsd:appinfo></xsd:annotation> The elements of the complexType definition are annotated in a similar way. 
We put the annotation data immediately after the element definition in the XML schema file: <xsd:annotation><xsd:appinfo><jxb:property><jxb:javadoc>Raw principal exchangedate.</jxb:javadoc></jxb:property></xsd:appinfo></xsd:annotation> In JDeveloper, we are now ready to build the javadoc documentation. So, select the project CashflowFacade root node. Then, from the main menu, select the Build and Javadoc CashflowFacade.jpr option. The javadoc content will be built in the javadoc directory of the project. How it works... During the conversion from XML schema to Java classes, JAXB is also processing possible annotations inside the XML schema file. When the conversion utility (xjc or execution through JDeveloper) finds the annotation in the XML schema file, it decorates the generated Java classes according to the specification. The XML schema file must contain the following declarations. In the <xsd:schema> element, the following declaration of the JAXB schema namespace must exist: jxb:version="2.1" Note that the xjb:version attribute is where the Version of the JAXB specification is defined. The most common Version declarations are 1.0, 2.0, and 2.1. The actual definition of javadoc resides within the <xsd:annotation> and <xsd:appinfo> blocks. To annotate at package level, we use the following code: <jxb:schemaBindings><jxb:package name="PKG_NAME"><jxb:javadoc>TEXT</jxb:javadoc></jxb:package></jxb:schemaBindings> We define the package name to annotate and a javadoc text containing the documentation for the package level. The annotation of javadoc at class or attribute level is similar to the following code: <jxb:class|property><jxb:javadoc>TEXT</jxb:javadoc></jxb:class|property> If we want to annotate the XML schema at complexType level, we use the <jaxb:class> element. To annotate the XML schema at element level, we use the <jaxb:property> element. There's more... In many cases, we need to annotate the XML schema file directly for various reasons. The XML schema defined by different vendors is automatically generated. In such cases, we would need to annotate the XML schema each time we want to generate Java classes out of it. This would require additional work just for annotation decoration tasks. In such situations, we can separate the annotation part of the XML schema to a separate file. With such an approach, we separate the annotating part from the XML schema content itself, over which we usually don't have control. For that purpose, we create a binding file in our CashflowFacade project and name it extBinding.xjb. We put the annotation documentation into this file and remove it from the original XML schema. We start by defining the binding file header declaration: <jxb:bindings version="1.0"><jxb:bindings schemaLocation="file:/D:/delo/source_code/Banking_BPEL/xsd/Derivative_Cashflow.xsd" node="/xs:schema"> We need to specify the name of the schema file location and the root node of the XML schema which corresponds to our mapping. We continue by declaring the package level annotation: <jxb:schemaBindings><jxb:package name="org.packt.cashflow.facade"><jxb:javadoc><![CDATA[<body>This package representsthe XML facade of the cashflows in the financialderivatives structure.</body>]]></jxb:javadoc></jxb:package><jxb:nameXmlTransform><jxb:elementName suffix="Element"/></jxb:nameXmlTransform></jxb:schemaBindings> We notice that the structure of the package level annotation is identical to those in the inline XML schema annotation. 
To annotate the class and its attribute, we use the following declaration: <jxb:bindings node="//xs:complexType[@name='CashflowsType']"><jxb:class><jxb:javadoc><![CDATA[This class defines the data for the events, whenprincipal exchange occurs.]]></jxb:javadoc></jxb:class><jxb:bindingsnode=".//xs:element[@name='principalExchange']"><jxb:property><jxb:javadoc>TEST prop</jxb:javadoc></jxb:property></jxb:bindings></jxb:bindings> Notice the indent annotation of attributes inside the class annotation that naturally correlates to the object programming paradigm. Now that we have the external binding file, we can regenerate the XML facade. Note that external binding files are not used only for the creation of javadoc. Inside the external binding file, we can include various rules to be followed during conversion. One such rule is aimed at data type mapping; that is, which Java data type will match the XML data type. In JDeveloper, if we are building XML facade for the first time, we follow either the Creating XML facade from XSD or the Creating XML facade from WSDL recipe. To rebuild XML facade, we use the following procedure: Select the XML schema file (Cashflow_Facade.xsd) in the CashflowFacade project. Right-click on it and select the Generate JAXB 2.0 Content Model option. The configuration dialog opens with some already pre-filled fields. We enter the location of the JAXB Customization File (in our case, the location of the extBinding.xjb file) and click on the OK button. Next, we build the javadoc part to get the documentation. Now, if we open the generated documentation in the web browser, we can see our documentation lines inside. Invoking XML facade from BPEL processes This recipe explains how to use XML facade inside BPEL processes. We can use XML façade to simplify access of XML content from Java code. When using XML façade, the XML content is exposed over Java code. Getting ready To complete the recipe, there are no special prerequisites. Remember that in the Packaging XML facade into JAR recipe, we defined the ANT task to copy XML facade to the BPEL process library directory. This task basically presents all the prerequisites for XML facade utilization. How to do it... Open a BPEL process (Derivative_Cashflow.bpel) in JDeveloper and insert the Java Embedding activity into it: We first insert a code snippet. The whole code snippet is enclosed by a try catch block: try { Read the input cashflow variable data: oracle.xml.parser.v2.XMLElement input_cf= (oracle.xml.parser.v2.XMLElement)getVariableData("inputVariable","payload","/client:Cashflows"); Un-marshall the XML content through the XML facade: Object obj_cf = facade.Facade.createFacade("org.packt.cashflow.facade", input_cf); We must cast the serialized object to the XML facade class: javax.xml.bind.JAXBElement<org.packt.cashflow.facade.CashflowsType> cfs = (javax.xml.bind.JAXBElement<org.packt.cashflow.facade.CashflowsType>)obj_cf; Retrieve the Java class out of the JAXBElement content class: org.packt.cashflow.facade.CashflowsType cf= cfs.getValue(); Finally, we close the try block and handle any exceptions that may occur during processing: } catch (Exception e) {e.printStackTrace();addAuditTrailEntry("Error in XML facade occurred: " +e.getMessage());} We close the Java Embedding activity dialog. Now, we are ready to deploy the BPEL process and test the XML facade. Actually, the execution of the BPEL process will not produce any output, since we have no output lines defined. 
In case some exception occurs, we will receive information about the exception in the audit trail as well as the BPEL server console. How it works... We add the XML facade JAR file to the BPEL process library directory (<BPEL_process_home>SCA-INFlib). Before we are able to access the XML facade classes, we need to extract the XML content from the BPEL process. To create the Java representation classes, we transform the XML content through the JAXB context. As a result, we receive an un-marshalled Java class ready to be used further in Java code. Accessing complex types through XML facade The advantage of using XML facade is to provide the ability to access the XML content via Java classes and methods. This recipe explains how to access the complex types through XML facade. Getting ready To complete the recipe, we will amend the example BPEL process from the Invoking XML facade from BPEL processes recipe. How to do it... The steps involved in accessing the complex types through XML façade are as follows: Open the Banking_BPEL process and double-click on the XML_facade_node Java Embedding activity. We amend the code snippet with the following code to access the complex type: java.util.List<org.packt.cashflow.facade.PrincipalExchange>princEx= cf.getPrincipalExchange(); We receive a list of principal exchange cash flows that contain various data. How it works... In the previous example, we receive a list of cash flows. The corresponding XML content definition states: <xsd:complexType name="PrincipalExchange"><xsd:sequence></xsd:sequence><xsd:attribute name="id" type="xsd:int"/></xsd:complexType> We can conclude that each of the principle exchange cash flows is modeled as an individual Java class. Depending on the hierarchy level of the complex type, it is modeled either as a Java class or as a Java class member. Complex types are organized in the Java object hierarchy according to the XML schema definition. Mostly, complex types can be modeled as a Java class and at the same time as a member of an other Java class. Accessing simple types through XML facade This recipe explains how to access simple types through XML facade. Getting ready To complete the recipe, we will amend the example BPEL process from our previous recipe, Accessing complex types through XML facade. How to do it... Open the Banking_BPEL process and double-click on the XML_facade_node Java Embedding activity. We amend the code snippet with the code to access the XML simple types: for (org.packt.cashflowfacade.PrincipalExchange pe: princEx) {addAuditTrailEntry("Received cashflow with id: " + pe.getId() +"n" +" Unadj. Principal Exch. Date ...: " + pe.getUnadjustedPrincipalExchangeDate() + "n" +" Adj. Principal Exch. Date .....: " + pe.getAdjustedPrincipalExchangeDate() + "n" +" Discount factor ...............: " +pe.getDiscountFactor() + "n" +" Principal Exch. Amount ........: " +pe.getPrincipalExchangeAmount() + "n");} With the preceding code, we output all Java class members to the audit trail. Now if we run the BPEL process, we can see the following part of output in the BPEL flow trace: How it works... The XML schema simple types are mapped to Java classes as members. 
If we check our example, we have three simple types in the XML schema: <xsd:complexType name="PrincipalExchange"><xsd:sequence><xsd:element minOccurs="0" name="unadjustedPrincipalExchangeDate"type="xsd:date"/><xsd:element minOccurs="0" name="adjustedPrincipalExchangeDate"type="xsd:date"/><xsd:element minOccurs="0" name="principalExchangeAmount"type="xsd:decimal"/><xsd:element minOccurs="0" name="discountFactor"type="xsd:decimal"/></xsd:sequence><xsd:attribute name="id" type="xsd:int"/></xsd:complexType> The simple types defined in the XML schema are <xsd:date>, <xsd:decimal>, and <xsd:int>. Let us find the corresponding Java class member definitions. Open the PrincipalExchange.java file. The definition of members we can see is as follows: @XmlSchemaType(name = "date")protected XMLGregorianCalendar unadjustedPrincipalExchangeDate;@XmlSchemaType(name = "date")protected XMLGregorianCalendar adjustedPrincipalExchangeDate;protected BigDecimal principalExchangeAmount;protected BigDecimal discountFactor;@XmlAttributeprotected Integer id; We can see that the mapping between the XML content and the Java classes was performed as shown in the following table: XML schema simple type Java class member <xsd:date> javax.xml.datatype.XMLGregorianCalendar <xsd:decimal> java.math.BigDecimal <xsd:int> java.lang.Integer Also, we can identify that the XML simple type definitions as well as the XML attributes are always mapped as members in corresponding Java class representations. Summary In this article, we have learned how to set up an XML facade project, generate XML facade using ANT, create XML facade from XSD and WSDL, Package XML facade into a JAR file, generate Java documents for XML facade, Invoke XML facade from BPEL processes, and access complex and simple types through XML facade. Resources for Article: Further resources on this subject: BPEL Process Monitoring [Article] Human Interactions in BPEL [Article] Business Processes with BPEL [Article]
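The recipes above concentrate on un-marshalling XML into Java classes. Going the other way, from facade objects back to XML, uses the same JAXB machinery. The following is a minimal sketch, assuming the generated org.packt.cashflow.facade classes from this article; the ObjectFactory method names follow the usual xjc conventions and may differ slightly in your generated code.

// Sketch: marshalling a CashflowsType facade object back to XML.
// Assumes the generated classes described in this article; the factory
// method names (createCashflows, createCashflowsType) follow the usual
// xjc conventions and should be checked against your generated code.
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBElement;
import javax.xml.bind.Marshaller;

import org.packt.cashflow.facade.CashflowsType;
import org.packt.cashflow.facade.ObjectFactory;
import org.packt.cashflow.facade.PrincipalExchange;

public class MarshalSketch {
    public static String toXml() throws Exception {
        ObjectFactory factory = new ObjectFactory();

        PrincipalExchange pe = factory.createPrincipalExchange();
        pe.setId(1);
        pe.setPrincipalExchangeAmount(new java.math.BigDecimal("1000000.00"));

        CashflowsType cashflows = factory.createCashflowsType();
        cashflows.getPrincipalExchange().add(pe);

        // Wrap the type in the Cashflows root element and marshal it.
        JAXBElement<CashflowsType> root = factory.createCashflows(cashflows);
        JAXBContext context = JAXBContext.newInstance("org.packt.cashflow.facade");
        Marshaller marshaller = context.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);

        StringWriter writer = new StringWriter();
        marshaller.marshal(root, writer);
        return writer.toString();
    }
}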

Building Android (Must know)

Packt
13 Sep 2013
14 min read
(For more resources related to this topic, see here.) Getting ready You need Ubuntu 10.04 LTS or later (Mac OS X is also supported by the build system, but we will be using Ubuntu for this article). This is the supported build operating system, and the one for which you will get the most help from the online community. In my examples, I use Ubuntu 11.04, which is also reasonably well supported. You need approximately 6 GB of free space for the Android code files. For a complete build, you need 25 GB of free space. If you are using Linux in a virtual machine, make sure the RAM or the swap size is at least 16 GB, and you have 30 GB of disk space to complete the build. As of Android Versions 2.3 (Gingerbread) and later, building the system is only possible on 64-bit computers. Using 32-bit machines is still possible if you work with Froyo (Android 2.2). However, you can still build later versions on a 32-bit computer using a few "hacks" on the build scripts that I will describe later. The following steps outline the process needed to set up a build environment and compile the Android framework and kernel: Setting up a build environment Downloading the Android framework sources Building the Android framework Building a custom kernel In general, your (Ubuntu Linux) build computer needs the following: Git 1.7 or newer (GIT is a source code management tool), JDK 6 to build Gingerbread and later versions, or JDK 5 to build Froyo and older versions Python 2.5 – 2.7 GNU Make 3.81 – 3.82 How to do it... We will first set up the build environment with the help of the following steps: All of the following steps are targeted towards 64-bit Ubuntu. Install the required JDK by executing the following command: JDK6sudo add-apt-repository "deb http: //archive.canonical.com/ lucid partner" sudo apt-get update sudo apt-get install sun-java6-jdkJDK5sudo add-apt-repository "deb http: //archive.ubuntu.com/ubuntu hardy main multiverse" sudo add-apt-repository "deb http: //archive.ubuntu.com/ubuntu hardy-updates main multiverse" sudo apt-get update sudo apt-get install sun-java5-jdk Install the required library dependencies: sudo apt-get install git-core gnupg flex bison gperf build-essential zip curl zlib1g-dev libc6-dev lib32ncurses5-dev ia32-libs x11proto-core-dev libx11-dev lib32readline5-dev lib32z-dev libgl1-mesa-dev g++-multilib mingw32 tofrodos python-markdown libxml2-utils xsltproc [OPTIONAL]. On Ubuntu 10.10, a symlink is not created between libGL.so.1 and libGL.so, which sometimes causes the build process to fail: sudo ln -s /usr/lib32/mesa/libGL.so.1 /usr/lib32/mesa/libGL.so [OPTIONAL] On Ubuntu 11.10, an extra dependency is sudo apt-get install libx11-dev:i386 Now, we will download the Android sources from Google's repository. Install repo. Make sure you have a /bin directory and that it exists in your PATH variable: mkdir ~/bin PATH=~/bin:$PATH curl https: //dl-ssl.google.com/dl/googlesource/git-repo/repo > ~/bin/repo chmod a+x ~/bin/repo Repo is a python script used to download the Android sources, among other tasks. It is designed to work on top of GIT. Initialize repo. In this step, you need to decide the branch of the Android source you wish to download. If you wish to make use of Gerrit, which is the source code reviewing tool used, make sure you have a live Google mail address. You will be prompted to use this e-mail address when repo initializes. Create a working directory on your local machine. 
We will call this mkdir android_srccd android_src The following command will initialize repo to download the "master" branch: repo init -u https://android.googlesource.com/platform/manifest The following command will initialize repo to download the Gingerbread 2.3.4 branch: repo init -u https: //android.googlesource.com/platform/manifest -b android-2.3.4_r1 The -b switch is used to specify the branch you wish to download. Once repo is configured, we are ready to obtain the source files. The format of the command is as follows: repo sync -jX -jX is optional, and is used for parallel fetch. The following command will sync all the necessary source files for the Android framework. Note that these steps are only to download the Android framework files.Kernel download is a separate process. repo sync -j16 The source code access is anonymous, that is, you do not need to be registered with Google to be able to download the source code. The servers allocate a fixed quota to each IP address that accesses the source code. This is to protect the servers against excessive download traffic. If you happen to be behind a NAT and share an IP address with others, who also wish to download the code, you may encounter error messages from the source code servers warning about excessive usage. In this case, you can solve the problem with authenticated access. In this method, you get a separate quota based on your user ID, generated by the password generator system. The password generator and associated instructions are available at https://android.googlesource.com/new-password. Once you have obtained a user ID/password and set up your system appropriately, you can force authentication by using the following command: repo init -u https://android.googlesource.com/a/platform/manifest Notice the /a/ in the URI. This indicates authenticated access. Proxy issues If you are downloading from behind a proxy, set the following environment variables: export HTTP_PROXY=http://<proxy_user_id>:<proxy_password>@<proxy_server>:<proxy_port>export HTTPS_PROXY=http://<proxy_user_id>:<proxy_password>@<proxy_server>:<proxy_port> Next, we describe the steps needed to build the Android framework sources: Initialize the terminal environment. Certain build-time tools need to be included in your current terminal environment. So, navigate to your source directory: cd android_src/source build/envsetup.sh The sources can be built for various targets. Each target descriptor has the BUILD-BUILDTYPE format: BUILD: Refers to a specific combination of the source code for a certain device. For example, full_maguro targets Galaxy Nexus or generic targets the emulator. BUILDTYPE: This can be one of the following three values: user: Suitable for production builds userdebug: Similar to user, with with root access in ADB for easier debugging eng: Development build only We will be building for the emulator in our current example. Issue the following command to do so: lunch full-eng To actually build the code, we will use make. The format is as follows: make -jX Where X indicates the number of parallel builds. The usual rule is: X is the number of CPU cores + 2. This is an experimental formula, and the reader should feel free to test it with different values. To build the code: make -j6 Now, we must wait till the build is complete. Depending on your system's specifications, this can take anywhere between 20 minutes and 1 hour. 
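If you rebuild often, the sync and build commands above can be collected into a small helper script. This is only a sketch, assuming the ~/bin/repo setup and directory layout used in this recipe; adjust the branch name and the -j values for your own machine:

#!/bin/bash
# Sketch: one-shot framework sync and build using the commands from this
# recipe. Assumes repo is already installed in ~/bin and on the PATH.
set -e

mkdir -p ~/android_src
cd ~/android_src

repo init -u https://android.googlesource.com/platform/manifest -b android-2.3.4_r1
repo sync -j16

source build/envsetup.sh
lunch full-eng
make -j6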
At the end of a successful build, the output looks similar to the following (note that this may vary depending on your target): ...target Dex: SystemUI Copying: out/target/common/obj/APPS/SystemUI_intermediates/noproguard.classes.dex target Package: SystemUI (out/target/product/generic/obj/APPS/SystemUI_intermediates/package.apk) 'out/target/common/obj/APPS/SystemUI_intermediates//classes.dex' as 'classes.dex'... Install: out/target/product/generic/system/app/SystemUI.apk Finding NOTICE files: out/target/product/generic/obj/NOTICE_FILES/hash-timestamp Combining NOTICE files: out/target/product/generic/obj/NOTICE.html Target system fs image: out/target/product/generic/obj/PACKAGING/systemimage_intermediates/system.img Install system fs image: out/target/product/generic/system.img Installed file list: out/target/product/generic/installed-files.txt DroidDoc took 440 sec. to write docs to out/target/common/docs/doc-comment-check A better check for a successful build is to examine the newly created files inside the following directory. The build produces a few main files inside android_src/out/target/product/<DEVICE>/, which are as follows: system.img: The system image file boot.img: Contains the kernel recovery.img: Contains code for recovery partition of the device In the case of an emulator build, the preceding files will appear at android_src/out/target/product/generic/. Now, we can test our build simply by issuing the emulator command: emulator This launches an Android emulator, as shown in the following screenshot, running the code we've just built: The code we've downloaded contains prebuilt Linux kernels for each supported target. If you only wish to change the framework files, you can use the prebuilt kernels, which are automatically included in the build images. If you are making specific changes to the kernel, you will have to obtain a specific kernel and build it separately (shown here), which is explained later: Faster Builds – CCACHE The framework code contains C language and Java code. The majority of the C language code exists as shared objects that are built during the build process. If you issue the make clean command, which deletes all the built code (simply deleting the build output directory has the same effect as well) and then rebuild, it will take a significant amount of time. If no changes were made to these shared libraries, the build time can be sped up with CCACHE, which is a compiler cache. In the root of the source directory android_src/, use the following command: export USE_CCACHE=1export CCACHE_DIR=<PATH_TO_YOUR_CCACHE_DIR> To set a cache size: prebuilt/linux-x86/ccache/ccache -M 50G This reserves a cache size of 50 GB. To watch how the cache is used during the build process, use the following command (navigate to your source directory in another terminal): watch -n1 -d prebuilt/linux-x86/ccache/ccache -s In this part, we will obtain the sources and build the goldfish emulator kernel. Building kernels for devices is done in a similar way. goldfish is the name of the kernel modified for the Android QEMU-based emulator. Get the kernel sources: Create a subdirectory of android_src: mkdir kernel_codecd kernel_codegit clone https: //android.googlesource.com/kernel/goldfish.gitgit branch -r This will clone goldfish.git into a folder named goldfish (created automatically) and then list the remote branches available. 
The output should look like the following (this is seen after the execution of the git branch): origin/HEAD -> origin/master origin/android-goldfish-2.6.29 origin/linux-goldfish-3.0-wip origin/master Here, in the following command, we notice origin/android-goldfish-2.6.29, which is the kernel we wish to obtain: cd goldfishgit checkout --track -b android-goldfish-2.6.29 origin/android-goldfish-2.6.29 This will obtain the kernel code: Set up the build environment. We need to initialize the terminal environment by updating the system PATH variable to point to a cross compiler which will be used to compile the Linux kernel. This cross compiler is already available as a prebuilt binary distributed with the Android framework sources: export PATH=<PATH_TO_YOUR_ANDROID_SRC_DIR>/prebuilt/linux-x86/toolchain/arm-eabi-4.4.3/bin:$PATH Run an emulator (you may choose to run the emulator with the system image that we just built earlier. We need this to obtain the kernel configuration file. Instead of manually configuring it, we choose to pull the config file of a running kernel. Make sure ADB is still in your path. It will be in your PATH variable if you haven't closed the terminal window since building the framework code, otherwise execute the following steps sequentially. (Note that you have to change directory to ANDROID_SRC to execute the following command). source build/envsetup.shlunch full_engadb pull /proc/config.gzgunzip config.gz cp config .config The preceding command will copy the config file of the running kernel into our kernel build tree. Start the compilation process: export ARCH=armexport SUBARCH=arm make If the following comes up: Misc devices (MISC_DEVICES) [Y/n/?] y Android pmem allocator (ANDROID_PMEM) [Y/n] y Enclosure Services (ENCLOSURE_SERVICES) [N/y/?] n Kernel Debugger Core (KERNEL_DEBUGGER_CORE) [N/y/?] n UID based statistics tracking exported to /proc/uid_stat (UID_STAT) [N/y] n Virtual Device for QEMU tracing (QEMU_TRACE) [Y/n/?] y Virtual Device for QEMU pipes (QEMU_PIPE) [N/y/?] (NEW) Enter y as the answer. This is some additional Android-specific configuration needed for the build. Now we have to wait till the build is complete. The final lines of the build output should look like the following (note that this can change depending on your target): ... LD vmlinux SYSMAP System.map SYSMAP .tmp_System.map OBJCOPY arch/arm/boot/Image Kernel: arch/arm/boot/Image is ready AS arch/arm/boot/compressed/head.o GZIP arch/arm/boot/compressed/piggy.gz AS arch/arm/boot/compressed/piggy.o CC arch/arm/boot/compressed/misc.o LD arch/arm/boot/compressed/vmlinux OBJCOPY arch/arm/boot/zImage Kernel: arch/arm/boot/zImage is ready As the last line states, the new zImage is available inside arch/arm/ boot/. To test it, we boot the emulator with this newly built image. Copy zImage to an appropriate directory. I just copied it to android_src/: emulator -kernel zImage To verify that the emulator is indeed running our kernel, use the following command: adb shell # cat /proc/version The output will look like: Linux version 2.6.29-gef9c64a (earlence@earlence-Satellite-L650) (gcc version 4.4.3 (GCC) ) #1 Mon Jun 4 16:35:00 CEST 2012 This is our custom kernel, since we observe the custom build string (earlence@earlence-Satellite-L650) present as well as the time of the compilation. The build string will be the name of your computer. 
Once the emulator has booted up, you will see a window similar to the following: Following are the steps required to build the framework on a 32-bit system: Make the following simple changes to build Gingerbread on 32-bit Ubuntu. Note that these steps assume that you have set up the system for a Froyo build. Assuming a Froyo build computer setup, the following steps guide you on incrementally making changes such that Gingerbread and later builds are possible. To set up for Froyo, please follow the steps explained at http://source.android.com/source/initializing.html. In build/core/main.mk, change ifneq (64,$(findstring 64,$(build_arch))) to ifneq (i686,$(findstring i686,$(build_arch))). Note that there are two changes on that line. In the following files: external/clearsilver/cgi/Android.mk external/clearsilver/java-jni/Android.mk external/clearsilver/util/Android.mk external/clearsilver/cs/Android.mk change:LOCAL_CFLAGS += -m64 LOCAL_LDFLAGS += -m64 to:LOCAL_CFLAGS += -m32 LOCAL_LDFLAGS += -m32 Install the following packages (in addition to the packages you must have installed for the Froyo build): sudo apt-get install lib64z1-dev libc6-dev-amd64 g++-multilib lib64stdc++6 Install Java 1.6 using the following command: sudo apt-get install sun-java6-jdk Summary The Android build system is a combination of several standard tools and custom wrappers. Repo is one such wrapper script that takes care of GIT operations and makes it easier for us to work with the Android sources. The kernel trees are maintained separately from the framework source trees. Hence, if you need to make customizations to a particular kernel, you will have to download and build it separately. The keen reader may be wondering how we are able to run the emulator if we never built a kernel in when we just compiled the framework. Android framework sources include prebuilt binaries for certain targets. These binaries are located in the /prebuilt directory under the framework source root directory. The kernel build process is more or less the same as building kernels for desktop systems. There are only a few Android-specific compilation switches, which we have shown to be easily configurable given an existing configuration file for the intended target. The sources consist of C/C++ and Java code. The framework does not include the kernel sources, as these are maintained in a separate GIT tree. In the next recipe, we will explain the framework code organization. It is important to understand how and where to make changes while developing custom builds. Resources for Article: Further resources on this subject: Android Native Application API [Article] Animating Properties and Tweening Pages in Android 3-0 [Article] So, what is Spring for Android? [Article]  
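As a companion to the recipe above, the goldfish kernel steps can also be captured in a single script. Treat it as a sketch: it assumes the framework sources live in ~/android_src and that an emulator built from them is already running so its configuration can be pulled, and the paths are assumptions you may need to adjust.

#!/bin/bash
# Sketch: rebuild the goldfish emulator kernel with the steps from this
# recipe. Assumes the framework sources are in ~/android_src, adb is on the
# PATH, and an emulator is running so /proc/config.gz can be pulled.
set -e

ANDROID_SRC=~/android_src
export PATH=$ANDROID_SRC/prebuilt/linux-x86/toolchain/arm-eabi-4.4.3/bin:$PATH
export ARCH=arm
export SUBARCH=arm

mkdir -p $ANDROID_SRC/kernel_code
cd $ANDROID_SRC/kernel_code
git clone https://android.googlesource.com/kernel/goldfish.git
cd goldfish
git checkout --track -b android-goldfish-2.6.29 origin/android-goldfish-2.6.29

# Pull the running emulator's kernel configuration and use it as .config.
adb pull /proc/config.gz
gunzip -f config.gz
cp config .config

make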

Extracting data using DOM (Must know)

Packt
12 Sep 2013
5 min read
(For more resources related to this topic, see here.) Getting ready This section will parse the content of the page at http://jsoup.org. The index.html file in the project is provided if you want to have a file as input, instead of connecting to the URL. How to do it... The following screenshot shows the page that is going to be parsed: By viewing the source code for this HTML page, we know the site structure. The jsoup library is quite supportive of the DOM navigation method; it provides ways to find elements and extract their contents efficiently.
Create the Document class structure by connecting to the URL:
Document doc = Jsoup.connect("http://jsoup.org").get();
Navigate to the menu tag whose class is nav-sections:
Elements navDivTag = doc.getElementsByClass("nav-sections");
Get the list of all menu tags that are owned by <a>:
Elements list = navDivTag.get(0).getElementsByTag("a");
Extract content from each Element class in the previous menu list:
for (Element menu : list) { System.out.print(String.format("[%s]", menu.html())); }
The output should look like the following screenshot after running the code: The complete example source code for this section is placed at sourceSection02. The API reference for this section is available at: http://jsoup.org/apidocs/org/jsoup/nodes/Element.html
How it works... Let's have a look at the navigation structure:
html > body.n1-home > div.wrap > div.header > div.nav-sections > ul > li.n1-news > a
The div class="nav-sections" tag is the parent of the navigation section, so by using getElementsByClass("nav-sections"), we move to this tag. Since there is only one tag with this class value in this example, we only need to extract the first found element; we get it at index 0 (the first item of the results).
Elements navDivTag = doc.getElementsByClass("nav-sections");
The Elements object in jsoup represents a collection (Collection<>) or a list (List<>); therefore, you can easily iterate through this object to get each element, which is known as an Element object. When at a parent tag, there are several ways to get to the children. Navigate from the subtag <ul>, and deeper to each <li> tag, and then to the <a> tag. Or, you can directly make a query to find all the <a> tags. That's how we retrieved the list, as shown in the following code:
Elements list = navDivTag.get(0).getElementsByTag("a");
The final part is to print the extracted HTML content of each <a> tag. Beware of the list value; even if the navigation fails to find any element, it is never null, and therefore it is good practice to check the size of the list before doing any other task. Additionally, the Element.html() method is used to return the HTML content of a tag.
There's more... jsoup is quite a powerful library for DOM navigation. Besides the methods listed below, other ways to find and extract elements are also supported by the Element class. The following are the common methods for DOM navigation:
getElementById(String id): Finds an element by ID, including its children.
getElementsByTag(String c): Finds elements, including and recursively under the element that calls this method, with the specified tag name (in this case, c).
getElementsByClass(String className): Finds elements that have this class, including or under the element that calls this method. Case insensitive.
getElementsByAttribute(String key): Finds elements that have a named attribute set. Case insensitive. This method has several relatives, such as getElementsByAttributeStarting(String keyPrefix), getElementsByAttributeValue(String key, String value), and getElementsByAttributeValueNot(String key, String value).
getElementsMatchingText(Pattern pattern): Finds elements whose text matches the supplied regular expression.
getAllElements(): Finds all elements under the specified element (including itself and children of children).
The methods used to extract content from an HTML element also deserve a mention. The following are the common methods for extracting content from elements:
id(): Retrieves the ID value of an element.
className(): Retrieves the class name value of an element.
attr(String key): Gets the value of a specific attribute.
attributes(): Retrieves all the attributes.
html(): Retrieves the inner HTML value of an element.
data(): Retrieves the data content, usually applied for getting content from the <script> and <style> tags.
text(): Retrieves the text content. This method returns the combined text of all inner children and removes all HTML tags, while the html() method returns everything between the element's open and closed tags.
tag(): Retrieves the tag of the element.
Summary In this article, we saw how to extract data from an HTML page using DOM navigation, and that jsoup is quite a powerful library for this purpose. Resources for Article: Further resources on this subject: HTML5 Presentations - creating our initial presentation [Article] Building HTML5 Pages from Scratch [Article] JBoss Tools Palette [Article]
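To tie the navigation and extraction methods of this article together, here is a small self-contained sketch. It parses a local file rather than connecting to a live site; the index.html path is an assumption for illustration, and the size check reflects the advice above about empty result lists.

// Sketch combining the navigation and extraction methods listed above.
// Parses a local file instead of a live URL; the file path is an assumption.
import java.io.File;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class LinkExtractor {
    public static void main(String[] args) throws Exception {
        Document doc = Jsoup.parse(new File("index.html"), "UTF-8");

        // Navigate to the container by class, then collect its <a> tags.
        Elements sections = doc.getElementsByClass("nav-sections");
        if (sections.isEmpty()) {
            System.out.println("No nav-sections element found.");
            return;
        }
        Elements links = sections.get(0).getElementsByTag("a");

        // Extract content from each element.
        for (Element link : links) {
            System.out.println(link.text() + " -> " + link.attr("href"));
        }
    }
}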

Setting Up a Test Infrastructure

Packt
12 Sep 2013
9 min read
(For more resources related to this topic, see here.) Setting up and writing our first tests Now that we have the base test libraries, we can create a test driver web page that includes the application and test libraries, sets up and executes the tests, and displays a test report. The test driver page A single web page is typically used to include the test and application code and drive all frontend tests. Accordingly, we can create a web page named test.html in the chapters/01/test directory of our repository starting with just a bit of HTML boilerplate—a title and meta attributes: <html> <head> <title>Backbone.js Tests</title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> Then, we include the Mocha stylesheet for test reports and the Mocha, Chai, and Sinon.JS JavaScript libraries: <link rel="stylesheet" href="js/lib/mocha.css" /> <script src = "js/lib/mocha.js"></script> <script src = "js/lib/chai.js"></script > <script src = "js/lib/sinon.js"></script> Next, we prepare Mocha and Chai. Chai is configured to globally export the expect assertion function. Mocha is set up to use the bdd test interface and start tests on the window.onload event: <script> // Setup. var expect = chai.expect; mocha.setup("bdd"); // Run tests on window load event. window.onload = function () { mocha.run(); }; </script> After the library configurations, we add in the test specs. Here we include a single test file (that we will create later) for the initial test run: <script src = "js/spec/hello.spec.js"></script> </head> Finally, we include a div element that Mocha uses to generate the full HTML test report. Note that a common alternative practice is to place all the script include statements before the close body tag instead of within the head tag: <body> <div id="mocha"></div> </body> </html> And with that, we are ready to create some tests. Now, you could even open chapters/01/test/test.html in a browser to see what the test report looks like with an empty test suite. Adding some tests It is sufficient to say that test development generally entails writing JavaScript test files, each containing some organized collection of test functions. Let's start with a single test file to preview the testing technology stack and give us some tests to run. The test file chapters/01/test/js/spec/hello.spec.js creates a simple function (hello()) to test and implements a nested set of suites introducing a few Chai and Sinon.JS features. The function under test is about as simple as you can get: window.hello = function () { return "Hello World"; }; The hello function should be contained in its own library file (perhaps hello.js) for inclusion in applications and tests. The code samples simply include it in the spec file for convenience. The test code uses nested Mocha describe statements to create a test suite hierarchy. The test in the Chai suite uses expect to illustrate a simple assertion. 
The Sinon.JS suite's single test shows a test spy in action: describe("Trying out the test libraries", function () { describe("Chai", function () { it("should be equal using 'expect'", function () { expect(hello()).to.equal("Hello World"); }); }); describe("Sinon.JS", function () { it("should report spy called", function () { var helloSpy = sinon.spy(window, 'hello'); expect(helloSpy.called).to.be.false; hello(); expect(helloSpy.called).to.be.true; hello.restore(); }); }); }); Not to worry if you do not fully understand the specifics of these tests and assertions at this point, as we will shortly cover everything in detail. The takeaway is that we now have a small collection of test suites with a set of specifications ready to be run. Running and assessing test results Now that all the necessary pieces are in place, it is time to run the tests and review the test report. The first test report Opening up the chapters/01/test/test.html file in any web browser will cause Mocha to run all of the included tests and produce a test report: Test report This report provides a useful summary of the test run. The top-right column shows that two tests passed, none failed, and the tests collectively took 0.01 seconds to run. The test suites declared in our describe statements are present as nested headings. Each test specification has a green checkmark next to the specification text, indicating that the test has passed. Test report actions The report page also provides tools for analyzing subsets of the entire test collection. Clicking on a suite heading such as Trying out the test libraries or Chai will re-run only the specifications under that heading. Clicking on a specification text (for example, should be equal using 'expect') will show the JavaScript code of the test. A filter button designated by a right triangle is located to the right of the specification text (it is somewhat difficult to see). Clicking the button re-runs the single test specification. The test specification code and filter The previous figure illustrates a report in which the filter button has been clicked. The test specification text in the figure has also been clicked, showing the JavaScript specification code. Advanced test suite and specification filtering The report suite and specification filters rely on Mocha's grep feature, which is exposed as a URL parameter in the test web page. Assuming that the report web page URL ends with something such as chapters/01/test/test.html, we can manually add a grep filter parameter accompanied with the text to match suite or specification names. For example, if we want to filter on the term spy, we would navigate a browser to a comparable URL containing chapters/01/test/test.html?grep=spy, causing Mocha to run only the should report spy called specification from the Sinon.JS suite. It is worth playing around with various grep values to get the hang of matching just the suites or specifications that you want. Test timing and slow tests All of our tests so far have succeeded and run quickly, but real-world development necessarily involves a certain amount of failures and inefficiencies on the road to creating robust web applications. To this end, the Mocha reporter helps identify slow tests and analyze failures. Why is test speed important? Slow tests can indicate inefficient or even incorrect application code, which should be fixed to speed up the overall web application. 
Further, if a large collection of tests run too slow, developers will have implicit incentives to skip tests in development, leading to costly defect discovery later down the deployment pipeline. Accordingly, it is a good testing practice to routinely diagnose and speed up the execution time of the entire test collection. Slow application code may be left up to the developer to fix, but most slow tests can be readily fixed with a combination of tools such as stubs and mocks as well as better test planning and isolation. Let's explore some timing variations in action by creating chapters/01/test/js/spec/timing.spec.js with the following code: describe("Test timing", function () { it("should be a fast test", function (done) { expect("hi").to.equal("hi"); done(); }); it("should be a medium test", function (done) { setTimeout(function () { expect("hi").to.equal("hi"); done(); }, 40); }); it("should be a slow test", function (done) { setTimeout(function () { expect("hi").to.equal("hi"); done(); }, 100); }); it("should be a timeout failure", function (done) { setTimeout(function () { expect("hi").to.equal("hi"); done(); }, 2001); }); }); We use the native JavaScript setTimeout() function to simulate slow tests. To make the tests run asynchronously, we use the done test function parameter, which delays test completion until done() is called. The first test has no delay before the test assertion and done() callback, the second adds 40 milliseconds of latency, the third adds 100 milliseconds, and the final test adds 2001 milliseconds. These delays will expose different timing results under the Mocha default configuration that reports a slow test at 75 milliseconds, a medium test at one half the slow threshold, and a failure for tests taking longer than 2 seconds. Next, include the file in your test driver page (chapters/01/test/test-timing.html in the example code): <script src = "js/spec/timing.spec.js"></script> Now, on running the driver page, we get the following report: Test report timings and failures This figure illustrates timing annotation boxes for our medium (orange) and slow (red) tests and a test failure/stack trace for the 2001-millisecond test. With these report features, we can easily identify the slow parts of our test infrastructure and use more advanced test techniques and application refactoring to execute the test collection efficiently and correctly. Test failures A test timeout is one type of test failure we can encounter in Mocha. Two other failures that merit a quick demonstration are assertion and exception failures. Let's try out both in a new file named chapters/01/test/js/spec/failure.spec.js: // Configure Mocha to continue after first error to show // both failure examples. mocha.bail(false); describe("Test failures", function () { it("should fail on assertion", function () { expect("hi").to.equal("goodbye"); }); it("should fail on unexpected exception", function () { throw new Error(); }); }); The first test, should fail on assertion, is a Chai assertion failure, which Mocha neatly wraps up with the message expected 'hi' to equal 'goodbye'. The second test, should fail on unexpected exception, throws an unchecked exception that Mocha displays with a full stack trace. Stack traces on Chai assertion failures vary based on the browser. For example, in Chrome, no stack trace is displayed for the first assertion while one is shown in Safari. See the Chai documentation for configuration options that offer more control over stack traces. 
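As a concrete example of such an option, Chai exposes a flag for attaching stack traces to assertion errors; where the flag lives depends on the Chai version, so treat the following as a hedged sketch rather than a universal recipe.

<script>
  // Newer Chai releases expose a config object:
  chai.config.includeStack = true;
  // Older releases used a property on the Assertion constructor instead:
  // chai.Assertion.includeStack = true;
</script>

Similarly, if the default 75-millisecond slow threshold and 2-second timeout discussed above do not fit a project, Mocha's browser setup call also accepts an options object, so both values can be tuned from the test driver page. The numbers below are only illustrative.

<script>
  // Alternative setup block for the driver page: flag tests slower than
  // 150 ms and fail any test that takes longer than 5 seconds.
  var expect = chai.expect;
  mocha.setup({
    ui: "bdd",
    slow: 150,
    timeout: 5000
  });
  window.onload = function () {
    mocha.run();
  };
</script>

Individual suites and tests can also override these values from inside the test code with this.slow() and this.timeout().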
Test failures Mocha's failure reporting neatly illustrates what went wrong and where. Most importantly, Chai and Mocha report the most common case—a test assertion failure—in a very readable natural language format. Summary In this article, we introduced an application and test structure suitable for development, gathered the Mocha, Chai, and Sinon.JS libraries, and created some basic tests to get things started. Then, we reviewed some facets of the Mocha test reporter and watched various tests in action—passing, slow, timeouts, and failures. Resources for Article: Further resources on this subject: The Aliens Have Landed! [Article] Testing your App [Article] Quick Start into Selenium Tests [Article]

StyleCop analysis

Packt
12 Sep 2013
6 min read
(For more resources related to this topic, see here.) Integrating StyleCop analysis results in Jenkins/Hudson (Intermediate) In this article we will see how to build and display StyleCop errors in Jenkins/Hudson jobs. To do so, we will need to see how to configure the Jenkins job with a full analysis of the C# files in order to display the technical debt of the project. As we want it to diminish, we will also set in the job an automatic recording of the last number of violations. Finally, we will return an error if we add any violations when compared to the previous build. Getting ready For this article you will need to have: StyleCop 4.7 installed with the option MSBuild integration checked A Subversion server A working Jenkins server including: The MSBuild plug in for Jenkins The Violation plug in for Jenkins A C# project followed in a subversion repository. How to do it... The first step is to build a working build script for your project. All solutions have their advantages and drawbacks. I will use MSBuild in this article. The only difference here will be that I won't separate files on a project basis but take the "whole" solution: <?xml version="1.0" encoding="utf-8" ?> <Project DefaultTargets="StyleCop" > <UsingTask TaskName="StyleCopTask" AssemblyFile="$(MSBuildExtens ionsPath)..StyleCop 4.7StyleCop.dll" /> <PropertyGroup> <!-- Set a default value of 1000000 as maximum Stylecop violations found --> <StyleCopMaxViolationCount>1000000</StyleCopMaxViolationCount> </PropertyGroup> <Target Name="StyleCop"> <!-- Get last violation count from file if exists --> <ReadLinesFromFile Condition="Exists('violationCount.txt')" File="violationCount.txt"> <Output TaskParameter="Lines" PropertyName="StyleCopMaxViola tionCount" /> </ReadLinesFromFile> <!-- Create a collection of files to scan --> <CreateItem Include=".***.cs"> <Output TaskParameter="Include" ItemName="StyleCopFiles" /> </CreateItem> <!-- Launch Stylecop task itself --> <StyleCopTask ProjectFullPath="$(MSBuildProjectFile)" SourceFiles="@(StyleCopFiles)" ForceFullAnalysis="true" TreatErrorsAsWarnings="true" OutputFile="StyleCopReport.xml" CacheResults="true" OverrideSettingsFile= "StylecopCustomRuleSettings.Stylecop" MaxViolationCount="$(StyleCopMaxViolationCount)"> <!-- Set the returned number of violation --> <Output TaskParameter="ViolationCount" PropertyName="StyleCo pViolationCount" /> </StyleCopTask> <!-- Write number of violation founds in last build --> <WriteLinesToFile File="violationCount.txt" Lines="$(StyleCopV iolationCount)" Overwrite="true" /> </Target> </Project> After that, we prepare the files that will be scanned by the StyleCop engine and we launch the StyleCop task on it. We redirect the current number of violations to the StyleCopViolationCount property. Finally, we write the result in the violationsCount.txt file to find out the level of technical debt remaining. This is done with the WriteLinesToFile element. Now that we have our build script for our job, let's see how to use it with Jenkins. First, we have to create the Jenkins job itself. We will create a Build a free-style software project. After that, we have to set how the subversion repository will be accessed, as shown in the following screenshot: We also set it to check for changes on the subversion repository every 15 minutes. Then, we have to launch our MSBuild script using the MSBuild task. 
The task is quite simple to configure and lets you fill in three fields:

MSBuild Version: You need to select one of the MSBuild versions you configured in Jenkins (Jenkins | Manage Jenkins | Configure System)
MSBuild Build File: Here we will provide the Stylecop.proj file we previously made
Command Line Arguments: In our case, we don't have any to provide, but this field is useful when you have multiple targets in your MSBuild file

Finally, we have to configure the display of StyleCop errors. This is where we will use the violation plugin of Jenkins. It permits the display of multiple quality tools' results on the same graph. In order to make it work, you have to provide an XML file containing the violations. As you can see in the preceding screenshot, Jenkins is again quite simple to configure. After providing the XML filename for StyleCop, you have to set the thresholds for build health and the maximum number of violations you want to display in the detail screen of each file in violation.

How it works...
In the first part of the How to do it… section, we presented a build script. Let's explain what it does: First, as we don't use the premade MSBuild integration, we have to declare in which assembly the StyleCop task is defined and how we will call it. This is achieved through the use of the UsingTask element. Then we try to retrieve the previous count of violations and set the maximum number of violations that are acceptable at this stage of our project. This is the role of the ReadLinesFromFile element, which reads the content of a file. As we added a condition to check for the existence of the violationCount.txt file, it will only be executed if the file exists. We redirect the output to the property StyleCopMaxViolationCount.

After that, we have configured the Jenkins job to follow our project with StyleCop. We have configured some strict rules to ensure nobody will add new violations over time, and with the violation plugin and the way we addressed StyleCop, we are able to follow the technical debt of the project regarding StyleCop violations on the Violations page. A summary of each file is also present, and if we click on one of them, we will be able to follow the violations of that file.

How to address multiple projects with their own StyleCop settings
As far as I know, this is the limit of the MSBuild StyleCop task. When I need to address multiple projects with their own settings, I generally switch to StyleCopCmd using NAnt or a simple batch script and process the stylecop-report.violations.xml file with an XSLT to get the number of violations (a minimal sketch of such a stylesheet follows this article's resource list).

Summary
This article talked about integrating StyleCop analysis in Jenkins/Hudson and helped in building an analysis job for our project.

Resources for Article:
Further resources on this subject: Organizing, Clarifying and Communicating the R Data Analyses [Article] Generating Reports in Notebooks in RStudio [Article] Data Profiling with IBM Information Analyzer [Article]
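The following is a minimal sketch of the kind of stylesheet mentioned above. It assumes the report lists each finding as a Violation element, which matches the StyleCopReport.xml format produced by the MSBuild task shown earlier; if your StyleCopCmd output uses a different element name, adjust the XPath accordingly.

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Emit the total number of StyleCop violations as plain text -->
  <xsl:output method="text"/>
  <xsl:template match="/">
    <xsl:value-of select="count(//Violation)"/>
  </xsl:template>
</xsl:stylesheet>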

The architecture of JavaScriptMVC

Packt
10 Sep 2013
2 min read
(For more resources related to this topic, see here.)

DocumentJS
DocumentJS is an independent JavaScript documentation application and provides the following:
Inline demos with source code and HTML panels
Adds tags to the documentation
Adds documentation as a favorite
Auto-suggest search
Test result page
Comments
Extends the JSDoc syntax
Adds undocumented code because it understands JavaScript

FuncUnit
FuncUnit is an independent web testing framework and provides the following:
Tests clicking, typing, moving the mouse cursor, and drag and drop
Follows users between pages
Multi-browser and operating system support
Continuous integration solution
Write and debug tests in the web browser
Chainable API that parallels jQuery

jQueryMX
jQueryMX is the MVC part of JavaScriptMVC and provides the following:
Encourages logically separated, deterministic code
MVC layer
Uniform client-side template interface (supports jq-tmpl, EJS, JAML, Micro, and Mustache)
Ajax fixtures
Useful DOM utilities
Language helpers
JSON utilities
Class system
Custom events

StealJS
StealJS is an independent code manager and build tool and provides the following powerful features (a brief loading sketch follows this article's resource list):
Dependency management
  Loads JavaScript and CoffeeScript
  Loads CSS, Less, and Sass files
  Loads client-side templates such as TODO
  Loads individual files only once
  Loads files from a different domain
Concatenation and compression
  Google Closure compressor
  Makes multi-page builds
  Pre-processes TODO
  Can conditionally remove specified code from the production build
  Builds standalone jQuery plugins
Logger
  Logs messages in development mode
Code generator
  Generates an application skeleton
  Adds the possibility to create your own generator
Package management
  Downloads and installs plugins from SVN and Git repositories
  Installs the dependencies
  Runs install scripts
Code cleaner
  Runs a JavaScript beautifier against your codebase
  Runs JSLint against your codebase

Resources for Article:
Further resources on this subject: YUI Test [Article] From arrays to objects [Article] Writing Your First Lines of CoffeeScript [Article]
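To give a feel for the StealJS dependency management listed above, here is a hypothetical loading sketch based on the typical steal() call documented for JavaScriptMVC; the module paths and the CSS file name are placeholders rather than part of any real project.

// todos.js - hypothetical application entry point loaded through StealJS
steal(
    'jquery/controller',   // a jQueryMX dependency
    'jquery/view/ejs',     // client-side EJS templates
    './todos.css',         // StealJS can load stylesheets as well as scripts
    function () {
        // Every dependency above is loaded exactly once before this runs.
        // Application start-up code goes here.
    }
);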

A Different Kind of Database

Packt
06 Sep 2013
9 min read
(For more resources related to this topic, see here.) Explosive growth Relational databases worked well when systems were serving hundreds or even thousands of users, but the Internet has changed all of that. The number of users and volume of data is growing exponentially. A variety of social applications have proved that applications can quickly attract millions of users. Relational databases were never built to handle this level of concurrent access. Semi-structured data In addition to the staggering growth, data is no longer simple rows and columns. Semi-structured data is everywhere. Extensible Markup Language (XML) and JavaScript Object Notation (JSON) are the lingua franca of our distributed applications. These formats allow complex relationships to be modeled through hierarchy and nesting. Relational databases struggle to effectively represent these data patterns. Due to this impedance mismatch, our applications are littered with additional complexity. Object relational mapping (ORM) tools have helped but not solved this problem. With the growth of Software as a Service (SaaS) and cloud-based applications, the need for flexible schemas has increased. Each tenant is hosted on a unified infrastructure but they must retain the flexibility to customize their data model to meet their unique business needs. In these multi-tenant environments, a rigid schema structure imposed by a relational database does not work. Architecture changes While data is still king, how we architect our data-dependent systems has changed significantly over the past few decades. In many systems, the database acted as the integration point for different parts of the application. This required the data to be stored in a uniform way since the database was acting as a form of API. The following diagram shows the architectural transitions: With the move to Service Oriented Architectures (SOA), how data is stored for a given component has become less important. The application interfaces with the service, not the database. The application has a dependency on the service contract, not on the database schema. This shift has opened up the possibilities to store data based on the needs of the service. Rethinking the database The factors we have been discussing have led many in our industry to rethink the idea of a database. Engineers wrestled with the limitations of the relational database and set out to build modern web-scale databases. The term NoSQL was coined to label this category of databases. Originally, the term stood for No SQL but has evolved to mean Not Only SQL. To confuse matters further, some NoSQL databases support a form of the SQL dialect. However, in all cases they are not relational databases. While the NoSQL landscape continues to expand with more projects and companies getting in the action, there are four basic categories that databases fall into: Document (CouchDB, MongoDB, RavenDB) Graph (Neo4J, Sones) Key/Value (Cassandra, SimpleDB, Dynamo, Voldemort) Tabular/Wide Column (BigTable, Apache Hbase) Document databases Document databases are made up of semi-structure and schema free data structures known as documents. In this case, the term document is not speaking of a PDF or Word document. Rather, it refers to a rich data structure that can represent related data from the simple to the complex. In document databases, documents are usually represented in JavaScript Object Notation (JSON). A document can contain any number of fields of any length. Fields can also contain multiple pieces of data. 
Each document is independent and contains all of the data elements required by the entity. The following is an example of a simple document: {Name: "Alexander Graham Bell",BornIn: "Edinburgh, United Kingdom",Spouse: "Mabel Gardiner Hubbard"} And the following is an example of a more complex document: { Name: "Galileo Galilei", BornIn: "Pisa, Italy",YearBorn: "1564", Children: [ { Name: "Virginia", YearBorn: "1600" }, { Name: "Vincenzo", YearBorn: "1606" } ]} Since documents are JSON-based, the impedance mismatch that exists between the object-oriented and relational database worlds is gone. An object graph is simply serialized into JSON for storage. Now, the complexity of the entity has a small impact on the performance. Entire object graphs can be read and written in one database operation. There is no need to perform a series of select statements or create complex stored procedures to read the related objects. JSON documents also add flexibility due to their schema free design. This allows for evolving systems without forcing the existing data to be restructured. The schema free nature simplifies data structure evolution and customization. However, care must be given to the evolving data structure. If the evolution is a breaking change, documents must be migrated or additional intelligence needs to be built into the application. A document database for the .NET platform Prior to RavenDB, document databases such as CouchDB treated .NET as an afterthought. In 2010, Oren Eini from Hibernating Rhinos decided to bring a powerful document database to the .NET ecosystem. According to his blog: Raven is an OSS (with a commercial option) document database for the .NET/Windows platform. While there are other document databases around, such as CouchDB or MongoDB, there really isn't anything that a .NET developer can pick up and use without a significant amount of friction. Those projects are excellent in what they do, but they aren't targeting the .NET ecosystem. RavenDB is built to be a first-class citizen on the .NET platform offering developers the ability to easily extend and embed the database in their applications. A few of the key features that make RavenDB compelling to .NET developers are as follows: RavenDB comes with a fully functional .NET client API, which implements unit of work, change tracking, read and write optimizations, and much more. It also has a REST-based API, so you can access it via the JavaScript directly. It allows developers to define indexes using LINQ (Language Integrated Queries). Supports map/reduce operations on top of your documents using LINQ. It supports System.Transactions and can take part in distributed transactions. The server can be easily extended by adding a custom .NET assembly. RavenDB architecture RavenDB leverages existing storage infrastructure called ESENT that is known to scale to amazing sizes. ESENT is the storage engine utilized by Microsoft Exchange and Active Directory. The storage engine provides the transactional data store for the documents. RavenDB also utilizes another proven technology called Lucene.NET for its high-speed indexing engine. Lucene.NET is an open source Apache project used to power applications such as AutoDesk, StackOverflow, Orchard, Umbraco, and many more. The following diagram shows the primary components of the RavenDB architecture: Storing documents When a document is inserted or updated, RavenDB performs the following: A document change comes in and is stored in ESENT. 
Documents are immediately available to load by ID, but won't appear in searches until they are indexed. Asynchronous indexing task takes work from the queue and updates the Lucene index. The index can be created manually or dynamically based on the queries executed by the application. The document now appears in queries. Typically, index updates have an average latency of 20 milliseconds. RavenDB provides an API to wait for updates to be indexed if needed. Searching and retrieving documents When a document request comes in, the server is able to pull them directly from the RavenDB database when a document ID is provided. All searches and other inquiries hit the Lucene index. These methods provide near instant access, regardless of the database size. A key difference between RavenDB and a relational database is the way index consistency is handled. A relational database ties index updates to data modifications. The insert, update, or delete only completes once the indexes have been updated. This provides users a consistent view of the data but can quickly degrade when the system is under heavy load. RavenDB on the other hand uses a model for indexes known as eventual consistency. Indexes are updated asynchronously from the document storage. This means that the visibility of a change within an index is not always available immediately after the document is written. By queuing the indexing operation on a background thread, the system is able to continue servicing reads while the indexing operation catches up. Eventual consistency is a bit counter-intuitive. We do not want the user to view stale data. However, in a multiuser system our users view stale data all the time. Once the data is displayed on the screen, it becomes stale and may have been modified by another user. The following diagram illustrates stale data in a multiuser system: In many cases, this staleness does not matter. Consider a blog post. When you publish a new article, does it really matter if the article becomes visible to the entire world that nanosecond? Will users on the Internet really know if it wasn't? What typically matters is providing feedback to the user who made the change. Either let them know when the change becomes available or pausing briefly while the indexing catches up. If a user did not initiate the data change, then it is even easier. The change will simply become available when it enters the index. This provides a mechanism to give each user personal consistency. The user making the change can wait for their own changes to take affect while other users don't need to wait. Eventual consistency is a tradeoff between application responsiveness for all users and consistency between indexes and documents. When used appropriately, this tradeoff becomes a tool for increasing the scalability of a system. Summary As you can see, RavenDB is truly a different kind of database. It makes fundamental changes to what we expect from a database. It requires us to approach problems from a fresh perspective. It requires us to think differently. Resources for Article: Further resources on this subject: Amazon SimpleDB versus RDBMS [Article] So, what is MongoDB? [Article] Getting Started with CouchDB and Futon [Article]

Introduction to BlueStacks

Packt
05 Sep 2013
4 min read
(For more resources related to this topic, see here.) So, what is BlueStacks? BlueStacks is a suite of tools designed to allow you to run Android apps easily on a Windows or Mac computer. The following screenshot shows how it looks: At the time of writing, there are two elements to the BlueStacks suite, which are listed as follows: App Player: This is the engine, which runs the Android apps Cloud Connect: This is a synchronization tool As the BlueStacks tools can be freely downloaded, anyone with a PC running on Windows or Mac can download them and start experimenting with their capabilities. This article will walk you through the process of running BlueStacks on a computer and show you some of the ways in which you can make the most out of this emerging technology. There are other ways by which you can run an emulation of Android on your computer. You can, for instance, run a virtual machine or install the Android Software Development Kit (SDK). These assume a degree of technical understanding that isn't necessarily required with BlueStacks, making BlueStacks the quickest and easiest way of running apps on your computer. BlueStacks is particularly interesting for users of Windows 8 tablets, as it opens up a whole library of mature software designed for a touch interface. This is particularly useful for those wanting to use many, free, or cheap Android apps on their laptop or tablet. It is worth noting that, at the time of writing this article, these tools are beta releases, so it is important that you take time to report the bugs that you may find to the developers through their website. The ongoing development and success of the software depends upon this feedback and results in a better product. If you become reliant on a particular feature, it is a good idea to highlight your love to the developers too. This can help influence which features are to be kept and improved upon as the product matures. App Player BlueStacks App Player allows a Windows or Mac user to run Android apps on their desktop or laptop. It does this by running an emulated version of Android within a window that you can interact with using your keyboard and mouse. The App Player can be downloaded and installed for free from the BlueStacks website, http://www.bluestacks.com. Currently, there are two main versions available for different operating systems that are enlisted as follows: Mac OS X Windows XP, Vista, 7, and 8 Once you have installed the software, an Android emulator runs on your machine. This is a light version of Android that can access app stores so that you can download and run free and paid apps and content. Most apps are compatible with App Player; however, there are some which are not (for technical reasons) and some which have been prevented by the App developers from running. If you are running any another operating system on your computer, the more computing power you can make available to the App Player the better. Otherwise, you might experience slow loading apps or worse still ones that do not function properly. To increase your chances of success, first try running App Player without running any other applications (for example, Word). Cloud Connect Cloud Connect provides a means to synchronize the apps running on an existing phone or tablet with the App Player. This means that you do not have to manually install lots of apps. Instead, you install an app on your device and sign up so that your App Player has exactly the same apps as your device. 
Summary
Thus we learned the basics of BlueStacks and had a brief look at its App Player and Cloud Connect features.
Resources for Article:
Further resources on this subject: So, what is Spring for Android? [Article] Animating Properties and Tweening Pages in Android 3-0 [Article] New Connectivity APIs – Android Beam [Article]

Using Gerrit with GitHub

Packt
04 Sep 2013
14 min read
In this article by Luca Milanesio, author of the book Learning Gerrit Code review, we will learn about Gerrit Code revew. GitHub is the world's largest platform for the free hosting of Git Projects, with over 4.5 million registered developers. We will now provide a step-by-step example of how to connect Gerrit to an external GitHub server so as to share the same set of repositories. Additionally, we will provide guidance on how to use the Gerrit Code Review workflow and GitHub concurrently. By the end of this article we will have our Gerrit installation fully integrated and ready to be used for both open source public projects and private projects on GitHub. (For more resources related to this topic, see here.) GitHub workflow GitHub has become the most popular website for open source projects, thanks to the migration of some major projects to Git (for example, Eclipse) and new projects adopting it, along with the introduction of the social aspect of software projects that piggybacks on the Facebook hype. The following diagram shows the GitHub collaboration model: The key aspects of the GitHub workflow are as follows: Each developer pushes to their own repository and pulls from others Developers who want to make a change to another repository, create a fork on GitHub and work on their own clone When forked repositories are ready to be merged, pull requests are sent to the original repository maintainer The pull requests include all of the proposed changes and their associated discussion threads Whenever a pull request is accepted, the change is merged by the maintainer and pushed to their repository on GitHub   GitHub controversy The preceding workflow works very effectively for most open source projects; however, when the projects gets bigger and more complex, the tools provided by GitHub are too unstructured, and a more defined review process with proper tools, additional security, and governance is needed. In May 2012 Linus Torvalds , the inventor of Git version control, openly criticized GitHub as a commit editing tool directly on the pull request discussion thread: " I consider GitHub useless for these kinds of things. It's fine for hosting, but the pull requests and the online commit editing, are just pure garbage " and additionally, " the way you can clone a (code repository), make changes on the web, and write total crap commit messages, without GitHub in any way making sure that the end result looks good. " See https://github.com/torvalds/linux/pull/17#issuecomment-5654674. Gerrit provides the additional value that Linus Torvalds claimed was missing in the GitHub workflow: Gerrit and GitHub together allows the open source development community to reuse the extended hosting reach and social integration of GitHub with the power of governance of the Gerrit review engine. GitHub authentication The list of authentication backends supported by Gerrit does not include GitHub and it cannot be used out of the box, as it does not support OpenID authentication. However, a GitHub plugin for Gerrit has been recently released in order to fill the gaps and allow a seamless integration. GitHub implements OAuth 2.0 for allowing external applications, such as Gerrit, to integrate using a three-step browser-based authentication. Using this scheme, a user can leverage their existing GitHub account without the need to provision and manage a separate one in Gerrit. Additionally, the Gerrit instance will be able to self-provision the SSH public keys needed for pushing changes for review. 
In order for us to use GitHub OAuth authentication with Gerrit, we need to do the following: Build the Gerrit GitHub plugin Install the GitHub OAuth filter into the Gerrit libraries (/lib under the Gerrit site directory) Reconfigure Gerrit to use the HTTP authentication type   Building the GitHub plugin The Gerrit GitHub plugin can be found under the Gerrit plugins/github repository on https://gerrit-review.googlesource.com/#/admin/projects/plugins/github. It is open source under the Apache 2.0 license and can be cloned and built using the Java 6 JDK and Maven. Refer to the following example: $ git clone https://gerrit.googlesource.com/plugins/github $ cd github $ mvn install […] [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------- [INFO] Total time: 9.591s [INFO] Finished at: Wed Jun 19 18:38:44 BST 2013 [INFO] Final Memory: 12M/145M [INFO] ------------------------------------------------------- The Maven build should generate the following artifacts: github-oauth/target/github-oauth*.jar, the GitHub OAuth library for authenticating Gerrit users github-plugin/target/github-plugin*.jar, the Gerrit plugin for integrating with GitHub repositories and pull requests Installing GitHub OAuth library The GitHub OAuth JAR file needs to copied to the Gerrit /lib directory; this is required to allow Gerrit to use it for filtering all HTTP requests and enforcing the GitHub three-step authentication process: $ cp github-oauth/target/github-oauth-*.jar /opt/gerrit/lib/ Installing GitHub plugin The GitHub plugin includes the additional support for the overall configuration, the advanced GitHub repositories replication, and the integration of pull requests into the Code Review process. We now need to install the plugin before running the Gerrit init again so that we can benefit from the simplified automatic configuration steps: $ cp github-plugin/target/github-plugin-*.jar /opt/gerrit/plugins/github.jar Register Gerrit as a GitHub OAuth application Before going through the Gerrit init, we need to tell GitHub to trust Gerrit as a partner application. This is done through the generation of a ClientId/ClientSecret pair associated to the exact Gerrit URLs that will be used for initiating the 3-step OAuth authentication. We can register a new application in GitHub through the URL https://github.com/settings/applications/new, where the following three fields are requested: Application name : It is the logical name of the application authorized to access GitHub, for example, Gerrit. Main URL : The Gerrit canonical web URL used for redirecting to GitHub OAuth authentication, for example, https://myhost.mydomain:8443. Callback URL : The URL that GitHub should redirect to when the OAuth authentication is successfully completed, for example, https://myhost.mydomain:8443/oauth. GitHub will automatically generate a unique pair ClientId/ClientSecret that has to be provided to Gerrit identifying them as a trusted authentication partner. ClientId/ClientSecret are not GitHub credentials and cannot be used by an interactive user to access any GitHub data or information. They are only used for authorizing the integration between a Gerrit instance and GitHub. Running Gerrit init to configure GitHub OAuth We now need to stop Gerrit and go through the init steps again in order to reconfigure the Gerrit authentication. 
We need to enable HTTP authentication by choosing an HTTP header to be used to verify the user's credentials, and to go through the GitHub settings wizard to configure the OAuth authentication. $ /opt/gerrit/bin/gerrit.sh stop Stopping Gerrit Code Review: OK $ cd /opt/gerrit $ java -jar gerrit.war init [...] *** User Authentication *** Authentication method []: HTTP RETURN Get username from custom HTTP header [Y/n]? Y RETURN Username HTTP header []: GITHUB_USER RETURN SSO logout URL : /oauth/reset RETURN *** GitHub Integration *** GitHub URL [https://github.com]: RETURN Use GitHub for Gerrit login ? [Y/n]? Y RETURN ClientId []: 384cbe2e8d98192f9799 RETURN ClientSecret []: f82c3f9b3802666f2adcc4 RETURN Initialized /opt/gerrit $ /opt/gerrit/bin/gerrit.sh start Starting Gerrit Code Review: OK   Using GitHub login for Gerrit Gerrit is now fully configured to register and authenticate users through GitHub OAuth. When opening the browser to access any Gerrit web pages, we are automatically redirected to the GitHub for login. If we have already visited and authenticated with GitHub previously, the browser cookie will be automatically recognized and used for the authentication, instead of presenting the GitHub login page. Alternatively, if we do not yet have a GitHub account, we create a new GitHub profile by clicking on the SignUp button. Once the authentication process is successfully completed, GitHub requests the user's authorization to grant access to their public profile information. The following screenshot shows GitHub OAuth authorization for Gerrit: The authorization status is then stored under the user's GitHub applications preferences on https://github.com/settings/applications. Finally, GitHub redirects back to Gerrit propagating the user's profile securely using a one-time code which is used to retrieve the full data profile including username, full name, e-mail, and associated SSH public keys. Replication to GitHub The next steps in the Gerrit to GitHub integration is to share the same Git repositories and then keep them up-to-date; this can easily be achieved by using the Gerrit replication plugin. The standard Gerrit replication is a master-slave, where Gerrit always plays the role of the master node and pushes to remote slaves. We will refer to this scheme as push replication because the actual control of the action is given to Gerrit through a git push operation of new commits and branches. Configure Gerrit replication plugin In order to configure push replication we need to enable the Gerrit replication plugin through Gerrit init: $ /opt/gerrit/bin/gerrit.sh stop Stopping Gerrit Code Review: OK $ cd /opt/gerrit $ java -jar gerrit.war init [...] *** Plugins *** Prompt to install core plugins [y/N]? y RETURN Install plugin reviewnotes version 2.7-rc4 [y/N]? RETURN Install plugin commit-message-length-validator version 2.7-rc4 [y/N]? RETURN Install plugin replication version 2.6-rc3 [y/N]? y RETURN Initialized /opt/gerrit $ /opt/gerrit/bin/gerrit.sh start Starting Gerrit Code Review: OK The Gerrit replication plugin relies on the replication.config file under the /opt/gerrit/etc directory to identify the list of target Git repositories to push to. The configuration syntax is a standard .ini format where each group section represents a target replica slave. 
See the following simplest replication.config script for replicating to GitHub: [remote "github"] url = [email protected]:myorganisation/${name}.git The preceding configuration enables all of the repositories in Gerrit to be replicated to GitHub under the myorganisa tion GitHub Team account. Authorizing Gerrit to push to GitHub Now, that Gerrit knows where to push, we need GitHub to authorize the write operations to its repositories. To do so, we need to upload the SSH public key of the underlying OS user where Gerrit is running to one of the accounts in the GitHub myorganisation team, with the permissions to push to any of the GitHub repositories. Assuming that Gerrit runs under the OS user gerrit, we can copy and paste the SSH public key values from the ~gerrit/.ssh/id_rsa.pub (or ~gerrit/.ssh/id_dsa.pub) to the Add an SSH Key section of the GitHub account under target URL to be set to: https://github.com/settings/ssh Start working with Gerrit replication Everything is now ready to start playing with Gerrit to GitHub replication. Whenever a change to a repository is made on Gerrit, it will be automatically replicated to the corresponding GitHub repository. In reality there is one additional operation that is needed on the GitHub side: the actual creation of the empty repositories using https://github.com/new associated to the ones created in Gerrit. We need to make sure that we select the organization name and repository name, consistent with the ones defined in Gerrit and in the replication.config file. Never initialize the repository from GitHub with an empty commit or readme file; otherwise the first replication attempt from Gerrit will result in a conflict and will then fail. Now GitHub and Gerrit are fully connected and whenever a repository in GitHub matches one of the repositories in Gerrit, it will be linked and synchronized with the latest set of commits pushed in Gerrit. Thanks to the Gerrit-GitHub authentication previously configured, Gerrit and GitHub share the same set of users and the commits authors will be automatically recognized and formatted by GitHub. The following screenshot shows Gerrit commits replicated to GitHub: Reviewing and merging to GitHub branches The final goal of the Code Review process is to agree and merge changes to their branches. The merging strategies need to be aligned with real-life scenarios that may arise when using Gerrit and GitHub concurrently. During the Code Review process the alignment between Gerrit and GitHub was at the change level, not influenced by the evolution of their target branches. Gerrit changes and GitHub pull requests are isolated branches managed by their review lifecycle. When a change is merged, it needs to align with the latest status of its target branch using a fast-forward, merge, rebase, or cherry-pick strategy. Using the standard Gerrit merge functionality, we can apply the configured project merge strategy to the current status of the target branch on Gerrit. The situation on GitHub may have changed as well, so even if the Gerrit merge has succeeded there is no guarantee that the actual subsequent synchronization to GitHub will do the same! The GitHub plugin mitigates this risk by implementing a two-phase submit + merge operation for merging opened changes as follows: Phase-1 : The change target branch is checked against its remote peer on GitHub and fast forwarded if needed. If two branches diverge, the submit + merge is aborted and manual merge intervention is requested. 
Phase-2 : The change is merged on its target branch in Gerrit and an additional ad hoc replication is triggered. If the merge succeeds then the GitHub pull request is marked as completed. At the end of Phase-2 the Gerrit and GitHub statuses will be completely aligned. The pull request author will then receive the notification that his/her commit has been merged. Using Gerrit and GitHub on http://gerrithub.io When using Gerrit and GitHub on the web with public or private repositories, all of the commits are replicated from Gerrit to GitHub, and each one of them has a complete copy of the data. If we are using a Git and collaboration server on GitHub over the Internet, why can't we do the same for its Gerrit counterpart? Can we avoid installing a standalone instance of Gerrit just for the purpose of going through a formal Code Review? One hassle-free solution is to use the GerritHub service (http://gerrithub.io), which offers a free Gerrit instance on the cloud already configured and connected with GitHub through the github-plugin and github-oauth authentication library. All of the flows that we have covered in this article are completely automated, including the replication and automatic pull request to change automation. As accounts are shared with GitHub, we do not need to register or create another account to use GerritHub; we can just visit http://gerrithub.io and start using Gerrit Code Review with our existing GitHub projects without having to teach our existing community about a new tool. GerritHub also includes an initial setup Wizard for the configuration and automation of the Gerrit projects and the option to configure the Gerrit groups using the existing GitHub. Once Gerrit is configured, the Code Review and GitHub can be used seamlessly for achieving maximum control and social reach within your developer community. Summary We have now integrated our Gerrit installation with GitHub authentication for a seamless Single-Sign-On experience. Using an existing GitHub account we started using Gerrit replication to automatically mirror all the commits to GitHub repositories, allowing our projects to have an extended reach to external users, free to fork our repositories, and to contribute changes as pull requests. Finally, we have completed our Code Review in Gerrit and managed the merge to GitHub with a two-phase change submit + merge process to ensure that the target branches on both Gerrit and GitHub have been merged and aligned accordingly. Similarly to GitHub, this Gerrit setup can be leveraged for free on the web without having to manage a separate private instance, thanks to the free set target URL to http://gerrithub.io service available on the cloud. Resources for Article : Further resources on this subject: Getting Dynamics NAV 2013 on Your Computer – For (Almost) Free [Article] Building Your First Zend Framework Application [Article] Quick start - your first Sinatra application [Article]

Important features of Mockito

Packt
04 Sep 2013
4 min read
Reducing boilerplate code with annotations Mockito allows the use of annotations to reduce the lines of test code in order to increase the readability of tests. Let's take into consideration some of the tests that we have seen in previous examples. Removing boilerplate code by using the MockitoJUnitRunner The shouldCalculateTotalWaitingTimeAndAssertTheArgumentsOnMockUsingArgumentCaptor from Verifying behavior (including argument capturing, verifying call order and working with asynchronous code) section, can be rewritten as follows, using Mockito annotations, and the @RunWith(MockitoJUnitRunner.class) JUnit runner: @RunWith(MockitoJUnitRunner.class) public class _07ReduceBoilerplateCodeWithAnnotationsWithRunner { @Mock KitchenService kitchenServiceMock; @Captor ArgumentCaptor mealArgumentCaptor; @InjectMocks WaiterImpl objectUnderTest; @Test public void shouldCalculateTotalWaitingTimeAndAssert TheArgumentsOnMockUsingArgumentCaptor() throws Exception { //given final int mealPreparationTime = 10; when(kitchenServiceMock.calculate PreparationTime(any(Meal.class))).thenReturn(mealPreparationTime); //when int waitingTime = objectUnderTest.calculate TotalWaitingTime(createSampleMeals ContainingVegetarianFirstCourse()); //then assertThat(waitingTime, is(mealPreparationTime)); verify(kitchenServiceMock).calculatePreparation Time(mealArgumentCaptor.capture()); assertThat(mealArgumentCaptor.getValue(), is (VegetarianFirstCourse.class)); assertThat(mealArgumentCaptor.getAllValues().size(), is(1)); } private List createSampleMeals ContainingVegetarianFirstCourse() { List meals = new ArrayList(); meals.add(new VegetarianFirstCourse()); return meals; } } What happened here is that: All of the boilerplate code can be removed due to the fact that you are using the @RunWith(MockitoJUnitRunner.class) JUnit runner Mockito.mock(…) has been replaced with @Mock annotation You can provide additional parameters to the annotation, such as name, answer or extraInterfaces. The fieldname related to the annotated mock is referred to in any verification so it's easier to identify the mock ArgumentCaptor.forClass(…) is replaced with @Captor annotation. When using the @Captor annotation you avoid warnings related to complex generic types Thanks to the @InjectMocks annotation your object under test is initialized, proper constructor/setters are found and Mockito injects the appropriate mocks for you There is no explicit creation of the object under test You don't need to provide the mocks as arguments of the constructor Mockito @InjectMocksuses either constructor injection, property injection or setter injection Taking advantage of advanced mocks configuration Mockito gives you a possibility of providing different answers for your mocks. Let's focus more on that. Getting more information on NullPointerException Remember the Waiter's askTheCleaningServiceToCleanTheRestaurantMethod(): @Override public boolean askTheCleaningServiceToCleanTheRestaurant (TypeOfCleaningService typeOfCleaningService) { CleaningService cleaningService = cleaningServiceFactory.getMe ACleaningService(typeOfCleaningService); try{ cleaningService.cleanTheTables(); cleaningService.sendInformationAfterCleaning(); return SUCCESSFULLY_CLEANED_THE_RESTAURANT; }catch(CommunicationException communicationException){ System.err.println("An exception took place while trying to send info about cleaning the restaurant"); return FAILED_TO_CLEAN_THE_RESTAURANT; } } Let's assume that we want to test this function. 
We inject the CleaningServiceFactory as a mock but we forgot to stub the getMeACleaningService(…) method. Normally we would get a NullPointerException since, if the method is not stubbed, it will return null. But what will happen, if as an answer we would provide a RETURNS_SMART_NULLS answer? Let's take a look at the body of the following test: @Mock(answer = Answers.RETURNS_SMART_NULLS) CleaningServiceFactory cleaningServiceFactory; @InjectMocks WaiterImpl objectUnderTest; @Test public void shouldThrowSmartNullPointerExceptionWhenUsingUnstubbedMethod() { //given // Oops forgotten to stub the CleaningServiceFactory.getMeACle aningService(TypeOfCleaningService) method try { //when objectUnderTest.askTheCleaningServiceToCleanTheRestaurant( TypeOfCleaningService.VERY_FAST); fail(); } catch (SmartNullPointerException smartNullPointerException) { System.err.println("A SmartNullPointerException will be thrown here with a very precise information about the error [" + smartNullPointerException + "]"); } } What happened in the test is that: We create a mock with an answer RETURNS_SMART_NULLS of the CleaningServiceFactory The mock is injected to the WaiterImpl We do not stub the getMeACleaningService(…) of the CleaningServiceFactory The SmartNullPointerException will be thrown at the line containing the cleaningService.cleanTheTables() It will contain very detailed information about the reason for the exception to happen and where it happened In order to have the RETURNS_SMART_NULLS as the default answer (you wouldn't have to explicitly define the answer for your mock), you would have to create the class named MockitoConfiguration in a package org.mockito.configuration that either extends the DefaultMockitoConfiguration or implements the IMockitoConfiguration interface: package org.mockito.configuration; import org.mockito.internal.stubbing.defaultanswers.ReturnsSmartNulls; import org.mockito.stubbing.Answer; public class MockitoConfiguration extends DefaultMockitoConfiguration { public Answer<Object> getDefaultAnswer() { return new ReturnsSmartNulls(); } } Summary In this article we learned in detail about reducing the boilerplate code with annotations, and taking advantage of advanced mocks configuration, along with their code implementation. Resources for Article : Further resources on this subject: Testing your App [Article] Drools JBoss Rules 5.0 Flow (Part 2) [Article] Easily Writing SQL Queries with Spring Python [Article]