How-To Tutorials - Programming

Patterns of Traversing

Packt
25 Sep 2015
15 min read
In this article by Ryan Lemmer, author of the book Haskell Design Patterns, we will focus on two fundamental patterns of recursion: fold and map. The more primitive forms of these patterns are found in the Prelude, the "old part" of Haskell. With the introduction of Applicative came more powerful mapping (traversal), which opened the door to type-level folding and mapping in Haskell. First, we will look at how Prelude's list fold is generalized to all Foldable containers. Then, we will follow the generalization of list map to all Traversable containers. Our exploration of fold and map culminates with the Lens library, which raises Foldable and Traversable to an even higher level of abstraction and power.

In this article, we will cover the following:

    Traversable
    Modernizing Haskell
    Lenses

Traversable

As with Prelude.foldM, mapM fails us beyond lists; for example, we cannot mapM over the Tree from earlier:

    main = mapM doF aTree >>= print -- INVALID

The Traversable type-class is to map what Foldable is to fold:

    -- required: traverse or sequenceA
    class (Functor t, Foldable t) => Traversable (t :: * -> *) where
      -- APPLICATIVE form
      traverse  :: Applicative f => (a -> f b) -> t a -> f (t b)
      sequenceA :: Applicative f => t (f a) -> f (t a)
      -- MONADIC form (redundant)
      mapM      :: Monad m => (a -> m b) -> t a -> m (t b)
      sequence  :: Monad m => t (m a) -> m (t a)

The traverse function generalizes our mapA function, which was written for lists, to all Traversable containers. Similarly, Traversable.mapM is a more general version of Prelude.mapM for lists:

    mapM :: Monad m => (a -> m b) -> [a] -> m [b]
    mapM :: Monad m => (a -> m b) -> t a -> m (t b)

The Traversable type-class was introduced along with Applicative:

    "we introduce the type class Traversable, capturing functorial data structures through which we can thread an applicative computation"
                        Applicative Programming with Effects - McBride and Paterson
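As a quick taste of what this buys us, consider traversing a plain list with the Maybe Applicative: the whole traversal fails if the effect fails for any element. (This small example is ours, not from the original text; halve is a hypothetical helper.)

    halve :: Int -> Maybe Int
    halve n = if even n then Just (n `div` 2) else Nothing

    -- traverse halve [2,4,6]  =>  Just [1,2,3]
    -- traverse halve [2,3,6]  =>  Nothing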
A Traversable Tree

Let's make our Traversable Tree. First, we'll do it the hard way: a Traversable must also be a Functor and Foldable:

    instance Functor Tree where
      fmap f (Leaf x) = Leaf (f x)
      fmap f (Node x lTree rTree)
        = Node (f x) (fmap f lTree) (fmap f rTree)

    instance Foldable Tree where
      foldMap f (Leaf x) = f x
      foldMap f (Node x lTree rTree)
        = (foldMap f lTree) `mappend` (f x) `mappend` (foldMap f rTree)

    --traverse :: Applicative ma => (a -> ma b) -> mt a -> ma (mt b)
    instance Traversable Tree where
      traverse g (Leaf x) = Leaf <$> (g x)
      traverse g (Node x ltree rtree)
        = Node <$> (g x) <*> (traverse g ltree) <*> (traverse g rtree)

    data Tree a = Node a (Tree a) (Tree a) | Leaf a
      deriving (Show)

    aTree = Node 2 (Leaf 3) (Node 5 (Leaf 7) (Leaf 11))

    -- import Data.Traversable
    main = traverse doF aTree
      where doF n = do print n; return (n * 2)

The easier way to do this is to auto-implement Functor, Foldable, and Traversable:

    {-# LANGUAGE DeriveFunctor #-}
    {-# LANGUAGE DeriveFoldable #-}
    {-# LANGUAGE DeriveTraversable #-}

    import Data.Traversable

    data Tree a = Node a (Tree a) (Tree a) | Leaf a
      deriving (Show, Functor, Foldable, Traversable)

    aTree = Node 2 (Leaf 3) (Node 5 (Leaf 7) (Leaf 11))

    main = traverse doF aTree
      where doF n = do print n; return (n * 2)

Traversal and the Iterator pattern

The Gang of Four Iterator pattern is concerned with providing a way

    "...to access the elements of an aggregate object sequentially without exposing its underlying representation"
                        "Gang of Four" Design Patterns, Gamma et al, 1995

In The Essence of the Iterator Pattern, Jeremy Gibbons shows precisely how Applicative traversal captures the Iterator pattern.

The Traversable.traverse function is the Applicative version of Traversable.mapM, which means it is more general than mapM (because Applicative is more general than Monad). Moreover, because mapM does not rely on the Monadic bind chain to communicate between iteration steps, Monad is a superfluous type for mapping with effects (Applicative is sufficient). In other words, Applicative traverse is superior to Monadic traversal (mapM):

    "In addition to being parametrically polymorphic in the collection elements, the generic traverse operation is parametrised along two further dimensions: the datatype being traversed, and the applicative functor in which the traversal is interpreted"

    "The improved compositionality of applicative functors over monads provides better glue for fusion of traversals, and hence better support for modular programming of iterations"
                        The Essence of the Iterator Pattern - Jeremy Gibbons

Modernizing Haskell 98

The introduction of Applicative, along with Foldable and Traversable, had a big impact on Haskell. Foldable and Traversable lift Prelude fold and map to a much higher level of abstraction. Moreover, Foldable and Traversable also bring a clean separation between processes that preserve or discard the shape of the structure being processed. Traversable describes processes that preserve the shape of the data structure being traversed over; Foldable processes, in turn, discard or transform the shape of the structure being folded over. Since Traversable is a specialization of Foldable, we can say that shape preservation is a special case of shape transformation.
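To make the shape distinction concrete, here is a small illustration of ours (not from the original text) using the derived Tree and aTree from above:

    import Data.Monoid (Sum(..))

    -- Foldable discards the shape: the tree collapses into a summary value
    --   getSum (foldMap Sum aTree)  =>  28

    -- Traversable preserves the shape: we get the same Tree back, inside the effect
    --   traverse Just aTree  =>  Just (Node 2 (Leaf 3) (Node 5 (Leaf 7) (Leaf 11)))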
This line between shape preservation and transformation is clearly visible from the fact that functions that discard their results (for example, mapM_, forM_, sequence_, and so on) are in Foldable, while their shape-preserving counterparts are in Traversable.

Due to the relatively late introduction of Applicative, the benefits of Applicative, Foldable, and Traversable have not yet found their way into the core of the language. This is set to change with the Foldable Traversable In Prelude proposal (planned for inclusion in the core libraries from GHC 7.10). For more information, visit https://wiki.haskell.org/Foldable_Traversable_In_Prelude. This will involve replacing less generic functions in Prelude, Control.Monad, and Data.List with their more polymorphic counterparts in Foldable and Traversable.

There have been objections to the movement to modernize, the main concern being that more generic types are harder to understand, which may compromise Haskell as a learning language. These valid concerns will indeed have to be addressed, but it seems certain that the Haskell community will not resist climbing to new abstract heights.

Lenses

A Lens is a type that provides access to a particular part of a data structure. Lenses express a high-level pattern for composition. However, Lens is also deeply entwined with Traversable, and so we describe it in this article instead. Lenses relate to the getter and setter functions, which also describe access to parts of data structures. To find our way to the Lens abstraction (as per Edward Kmett's Lens library), we'll start by writing a getter and setter to access the root node of a Tree.

Deriving Lens

Returning to our Tree from earlier:

    data Tree a = Node a (Tree a) (Tree a) | Leaf a
      deriving (Show)

    intTree   = Node 2 (Leaf 3) (Node 5 (Leaf 7) (Leaf 11))
    listTree  = Node [1,1] (Leaf [2,1]) (Node [3,2] (Leaf [5,2]) (Leaf [7,4]))
    tupleTree = Node (1,1) (Leaf (2,1)) (Node (3,2) (Leaf (5,2)) (Leaf (7,4)))

Let's start by writing generic getter and setter functions:

    getRoot :: Tree a -> a
    getRoot (Leaf z)     = z
    getRoot (Node z _ _) = z

    setRoot :: Tree a -> a -> Tree a
    setRoot (Leaf z) x     = Leaf x
    setRoot (Node z l r) x = Node x l r

    main = do
      print $ getRoot intTree
      print $ setRoot intTree 11
      print $ getRoot (setRoot intTree 11)

If we want to pass in a setter function instead of setting a value, we use the following:

    fmapRoot :: (a -> a) -> Tree a -> Tree a
    fmapRoot f tree = setRoot tree newRoot
      where newRoot = f (getRoot tree)

We have to do a get, apply the function, and then set the result. This double work is akin to the double traversal we saw when writing traverse in terms of sequenceA. In that case, we resolved the issue by defining traverse first (and then sequenceA in terms of traverse). We can do the same thing here by writing fmapRoot' to work in a single step (and then rewriting setRoot' in terms of fmapRoot'):
    fmapRoot' :: (a -> a) -> Tree a -> Tree a
    fmapRoot' f (Leaf z)     = Leaf (f z)
    fmapRoot' f (Node z l r) = Node (f z) l r

    setRoot' :: Tree a -> a -> Tree a
    setRoot' tree x = fmapRoot' (\_ -> x) tree

    main = do
      print $ setRoot' intTree 11
      print $ fmapRoot' (*2) intTree

The fmapRoot' function delivers a function to a particular part of the structure and returns the same structure:

    fmapRoot' :: (a -> a) -> Tree a -> Tree a

To allow for I/O, we need a new function:

    fmapRootIO :: (a -> IO a) -> Tree a -> IO (Tree a)

We can generalize this beyond I/O to all Monads:

    fmapM :: (a -> m a) -> Tree a -> m (Tree a)

It turns out that if we relax the requirement for Monad and generalize f' to all Functor container types, then we get a simple van Laarhoven Lens!

    type Lens' s a = Functor f' => (a -> f' a) -> s -> f' s

The remarkable thing about a van Laarhoven Lens is that, given the preceding function type, we also gain "get", "set", "fmap", "mapM", and many other functions and operators. The Lens function type signature is all it takes to make something a Lens that can be used with the Lens library. It is unusual to use a type signature as the "primary interface" for a library. The immediate benefit is that we can define a lens without referring to the Lens library. We'll explore more benefits and costs of this approach, but first let's write a few lenses for our Tree.

The derivation of the Lens abstraction used here is based on Jakub Arnold's Lens tutorial, which is available at http://blog.jakubarnold.cz/2014/07/14/lens-tutorial-introduction-part-1.html.

Writing a Lens

A Lens is said to provide focus on an element in a data structure. Our first lens will focus on the root node of a Tree. Using the lens type signature as our guide, we arrive at:

    lens' :: Functor f' => (a -> f' a) -> s -> f' s
    root  :: Functor f' => (a -> f' a) -> Tree a -> f' (Tree a)

Still, this is not very tangible; fmapRootIO is easier to understand, with the Functor f' being IO:

    fmapRootIO :: (a -> IO a) -> Tree a -> IO (Tree a)
    fmapRootIO g (Leaf z)     = (g z) >>= return . Leaf
    fmapRootIO g (Node z l r) = (g z) >>= return . (\x -> Node x l r)

    displayM x = print x >> return x

    main = fmapRootIO displayM intTree

If we drop down from Monad into Functor, we have a Lens for the root of a Tree:

    root :: Functor f' => (a -> f' a) -> Tree a -> f' (Tree a)
    root g (Node z l r) = fmap (\x -> Node x l r) (g z)
    root g (Leaf z)     = fmap Leaf (g z)

As Monad is a Functor, this function also works with Monadic functions:

    main = root displayM intTree

As root is a lens, the Lens library gives us the following:

    -- import Control.Lens
    main = do
      -- GET
      print $ view root listTree
      print $ view root intTree
      -- SET
      print $ set root [42] listTree
      print $ set root 42 intTree
      -- FMAP
      print $ over root (+11) intTree

The over function is the lens way of fmap'ing a function into a Functor.

Composable getters and setters

Another Lens on Tree might be to focus on the rightmost leaf:

    rightMost :: Functor f' => (a -> f' a) -> Tree a -> f' (Tree a)
    rightMost g (Node z l r) = fmap (\r' -> Node z l r') (rightMost g r)
    rightMost g (Leaf z)     = fmap (\x -> Leaf x) (g z)

The Lens library provides several lenses for Tuple (for example, _1, which brings focus to the first Tuple element).
We can compose our rightMost lens with the Tuple lenses:

    main = do
      print $ view rightMost tupleTree
      print $ set  rightMost (0,0) tupleTree
      -- Compose Getters and Setters
      print $ view (rightMost._1) tupleTree
      print $ set  (rightMost._1) 0 tupleTree
      print $ over (rightMost._1) (*100) tupleTree

A Lens can serve as a getter, setter, or "function setter". We are composing lenses using regular function composition (.)! Note that the order of composition is reversed: in (rightMost._1), the rightMost lens is applied before the _1 lens.

Lens Traversal

A Lens focuses on one part of a data structure, not several. For example, a lens cannot focus on all the leaves of a Tree:

    set  leaves 0 intTree
    over leaves (+1) intTree

To focus on more than one part of a structure, we need a Traversal, the Lens generalization of Traversable. Whereas Lens relies on Functor, Traversal relies on Applicative. Other than this, the signatures are exactly the same:

    traversal :: Applicative f' => (a -> f' a) -> Tree a -> f' (Tree a)
    lens      :: Functor f'     => (a -> f' a) -> Tree a -> f' (Tree a)

A leaves Traversal delivers the setter function to all the leaves of the Tree:

    leaves :: Applicative f' => (a -> f' a) -> Tree a -> f' (Tree a)
    leaves g (Node z l r) = Node z <$> leaves g l <*> leaves g r
    leaves g (Leaf z)     = Leaf <$> (g z)

We can use the set and over functions with our new Traversal:

    set  leaves 0 intTree
    over leaves (+1) intTree

Traversals compose seamlessly with Lenses:

    main = do
      -- Compose Traversal + Lens
      print $ over (leaves._1) (*100) tupleTree
      -- Compose Traversal + Traversal
      print $ over (leaves.both) (*100) tupleTree
      -- map over each elem in target container (e.g. list)
      print $ over (leaves.mapped) (*(-1)) listTree
      -- Traversal with effects
      mapMOf leaves displayM tupleTree

(both is a Tuple Traversal that focuses on both elements.)

Lens.Fold

Lens.Fold lifts Foldable into the realm of lenses:

    main = do
      print $ sumOf leaves intTree
      print $ anyOf leaves (>0) intTree

The Lens Library

So far we have used only "simple" Lenses. A fully parametrized Lens allows for replacing parts of a data structure with different types:

    type Lens s t a b = Functor f' => (a -> f' b) -> s -> f' t

    -- vs simple Lens
    type Lens' s a = Lens s s a a

Lens library function names do their best not to clash with existing names, for example, by postfixing idiomatic function names with "Of" (sumOf, mapMOf, and so on), or by using different verb forms such as droppingWhile instead of dropWhile. While this creates a burden, in that one has to learn new variations, it does have a big plus point: it allows for easy unqualified import of the Lens library.

By leaving the Lens function type transparent (and not obfuscating it with a new type), we get Traversals by simply swapping out Functor for Applicative. We also get to define lenses without having to reference the Lens library. On the downside, Lens type signatures can be bewildering at first sight. They form a language of their own that requires effort to get used to, for example:

    mapMOf :: Profunctor p =>
      Over p (WrappedMonad m) s t a b -> p a (m b) -> s -> m t

    foldMapOf :: Profunctor p =>
      Accessing p r s a -> p a r -> s -> r

On the surface, the Lens library gives us composable getters and setters, but there is much more to Lenses than that. By generalizing Foldable and Traversable into Lens abstractions, the Lens library lifts Getters, Setters, Lenses, and Traversals into a unified framework in which they all compose together.
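As a small illustration of that extra type parameter, the tuple lens _1 from Control.Lens can change the type of the focused element. (This example is ours, not from the original text.)

    -- over _1 show (1, "x")   =>  ("1", "x")   -- (Int, String) becomes (String, String)
    -- set  _1 'a' (1, True)   =>  ('a', True)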
Edward Kmett's Lens library is a sprawling masterpiece that is sure to leave a lasting impact on idiomatic Haskell.

Summary

We started with lists (Haskell 98), then generalized to all Traversable containers (introduced in the mid-2000s). Following that, we saw how the Lens library (2012) places traversing in an even broader context. Lenses give us a unified vocabulary to navigate data structures, which explains why the library has been described as a "query language for data structures".

Further resources on this subject:

    Plotting in Haskell
    The Hunt for Data
    Getting started with Haskell


Creating a JEE Application with EJB

Packt
24 Sep 2015
11 min read
In this article by Ram Kulkarni, author of Java EE Development with Eclipse (Second Edition), we will be using EJBs (Enterprise JavaBeans) to implement business logic. This is ideal in scenarios where you want the components that process business logic to be distributed across different servers. But that is just one of the advantages of EJB. Even if you use EJBs on the same server as the web application, you may gain from a number of services that the EJB container provides to applications through EJBs. You can specify security constraints for calling EJB methods declaratively (using annotations), and you can also easily specify transaction boundaries (specify which method calls form a part of one transaction) using annotations. In addition to this, the container handles the life cycle of EJBs, including pooling of certain types of EJB objects so that more objects can be created when the load on the application increases.

In this article, we will create the same application using EJBs and deploy it in a GlassFish 4 server. But before that, you need to understand some basic concepts of EJBs.

Types of EJB

As per the EJB 3 specifications, EJBs can be of the following types:

    Session bean:
        Stateful session bean
        Stateless session bean
        Singleton session bean
    Message-driven bean

In this article, we will focus on session beans.

Session beans

In general, session beans are meant for containing methods used to execute the main business logic of enterprise applications. Any Plain Old Java Object (POJO) can be annotated with the appropriate EJB-3-specific annotations to make it a session bean. Session beans come in three types, as follows.

Stateful session bean

A stateful session bean serves requests for one client only. There is a one-to-one mapping between the stateful session bean and the client. Therefore, stateful beans can hold state data for the client between multiple method calls. In our CourseManagement application, we could use a stateful bean to hold the Student data (the student profile and the courses taken by him/her) after a student logs in. The state maintained by the stateful bean is lost when the server restarts or when the session times out. Since there is one stateful bean per client, using a stateful bean might impact the scalability of the application. We use the @Stateful annotation to create a stateful session bean.

Stateless session bean

A stateless session bean does not hold any state information for any client. Therefore, one session bean can be shared across multiple clients. The EJB container maintains a pool of stateless beans, and when a client request comes in, it takes a bean out of the pool, executes methods, and returns the bean to the pool. Stateless session beans provide excellent scalability because they can be shared and need not be created for each client. We use the @Stateless annotation to create a stateless session bean.

Singleton session bean

As the name suggests, there is only one instance of a singleton bean class in the EJB container (this is true in a clustered environment too; each EJB container will have one instance of the singleton bean). This means that singleton beans are shared by multiple clients, and they are not pooled by EJB containers (because there can be only one instance). Since a singleton session bean is a shared resource, we need to manage concurrency in it. Java EE provides two concurrency management options for singleton session beans: container-managed concurrency and bean-managed concurrency. Container-managed concurrency can easily be specified by annotations. See https://docs.oracle.com/javaee/7/tutorial/ejb-basicexamples002.htm#GIPSZ for more information on managing concurrency in a singleton session bean. Using a singleton bean could have an impact on the scalability of the application if there are resource contentions in the code. We use the @Singleton annotation to create a singleton session bean.
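A minimal sketch of container-managed concurrency might look like the following. The CourseCatalogCache bean is hypothetical (it is not part of the article's application); the javax.ejb annotations are standard, and container-managed concurrency is in fact the default, so the @ConcurrencyManagement annotation is shown only to make the choice explicit.

    import java.util.HashMap;
    import java.util.Map;
    import javax.ejb.ConcurrencyManagement;
    import javax.ejb.ConcurrencyManagementType;
    import javax.ejb.Lock;
    import javax.ejb.LockType;
    import javax.ejb.Singleton;

    @Singleton
    @ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
    public class CourseCatalogCache {

        private Map<String, String> catalog = new HashMap<>();

        @Lock(LockType.READ)   // many clients may read concurrently
        public String findCourse(String id) {
            return catalog.get(id);
        }

        @Lock(LockType.WRITE)  // writers get exclusive access
        public void putCourse(String id, String name) {
            catalog.put(id, name);
        }
    }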
Accessing a session bean from the client

Session beans can be designed to be accessed locally (within the same application as the session bean), remotely (from a client running in a different application or JVM), or both. In the case of remote access, session beans are required to implement a remote interface. For local access, session beans can implement a local interface or no interface (the no-interface view of a session bean). The remote and local interfaces that session beans implement are sometimes also called business interfaces, because they typically expose the primary business functionality.

Creating a no-interface session bean

To create a session bean with a no-interface view, create a POJO and annotate it with the appropriate EJB annotation type and @LocalBean. For example, we can create a local stateful Student bean as follows:

    import javax.ejb.LocalBean;
    import javax.ejb.Singleton;

    @Singleton
    @LocalBean
    public class Student {
        ...
    }

Accessing a session bean using dependency injection

You can access session beans either by using the @EJB annotation (for dependency injection) or by performing a Java Naming and Directory Interface (JNDI) lookup. EJB containers are required to make the JNDI URLs of EJBs available to clients.

Dependency injection of session beans using @EJB works only for managed components, that is, components of the application whose life cycle is managed by the EJB container. When a component is managed by the container, it is created (instantiated) by the container and also destroyed by the container. You do not create managed components using the new operator. The JEE-managed components that support direct injection of EJBs are servlets, managed beans of JSF pages, and EJBs themselves (one EJB can have other EJBs injected into it). Unfortunately, you cannot have the web container inject EJBs into JSPs or JSP beans. Also, you cannot have EJBs injected into any custom classes that you create and instantiate using the new operator.

We can use the Student bean (created previously) from a managed bean of JSF, as follows:

    import javax.ejb.EJB;
    import javax.faces.bean.ManagedBean;

    @ManagedBean
    public class StudentJSFBean {
        @EJB
        private Student studentEJB;
    }

Note that if you create an EJB with a no-interface view, then all the public methods in that EJB will be exposed to the clients. If you want to control which methods can be called by clients, then you should implement a business interface.

Creating a session bean using a local business interface

A business interface for an EJB is a simple Java interface with either the @Remote or @Local annotation.
So we can create a local interface for the Student bean as follows:

    import java.util.List;
    import javax.ejb.Local;

    @Local
    public interface StudentLocal {
        public List<Course> getCourses();
    }

We implement the session bean like this:

    import java.util.List;
    import javax.ejb.Local;
    import javax.ejb.Stateful;

    @Stateful
    @Local
    public class Student implements StudentLocal {
        @Override
        public List<CourseDTO> getCourses() {
            //get courses and return
            …
        }
    }

Clients can access the Student EJB only through the local interface:

    import javax.ejb.EJB;
    import javax.faces.bean.ManagedBean;

    @ManagedBean
    public class StudentJSFBean {
        @EJB
        private StudentLocal student;
    }

A session bean can implement multiple business interfaces.

Accessing a session bean using a JNDI lookup

Though accessing an EJB using dependency injection is the easiest way, it works only if the container manages the class that accesses the EJB. If you want to access an EJB from a POJO that is not a managed bean, then dependency injection will not work. Another scenario where dependency injection does not work is when the EJB is deployed in a separate JVM (this could be on a remote server). In such cases, you will have to access the EJB using a JNDI lookup (visit https://docs.oracle.com/javase/tutorial/jndi/ for more information on JNDI).

JEE applications can be packaged in an Enterprise Application Archive (EAR), which contains a .jar file for EJBs and a WAR file for web applications (and a lib folder containing the libraries required for both). If, for example, the name of the EAR file is CourseManagement.ear and the name of the EJB JAR file in it is CourseManagementEJBs.jar, then the name of the application is CourseManagement (the name of the EAR file) and the module name is CourseManagementEJBs. The EJB container uses these names to create a JNDI URL for looking up EJBs. A global JNDI URL for an EJB is created as follows:

    "java:global/<application_name>/<module_name>/<bean_name>![<bean_interface>]"

    java:global: Indicates that it is a global JNDI URL.
    <application_name>: The application name is typically the name of the EAR file.
    <module_name>: This is the name of the EJB JAR.
    <bean_name>: This is the name of the EJB bean class.
    <bean_interface>: This is optional if the EJB has a no-interface view, or if it implements only one business interface. Otherwise, it is the fully qualified name of a business interface.

EJB containers are also required to publish two more variations of JNDI URLs for each EJB. These are not global URLs, which means that they can't be used to access EJBs from clients that are not in the same JEE application (in the same EAR):

    "java:app/[<module_name>]/<bean_name>![<bean_interface>]"
    "java:module/<bean_name>![<bean_interface>]"

The first URL can be used if the EJB client is in the same application, and the second URL can be used if the client is in the same module (the same JAR file as the EJB).

Before you look up any URL in a JNDI server, you need to create an InitialContext that includes information such as the hostname of the JNDI server and the port on which it is running.
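For a client running in a different JVM, the InitialContext is typically configured with vendor-specific properties. The sketch below assumes a GlassFish server; the property names are GlassFish-specific (other servers use different keys), and the host name is a placeholder.

    import java.util.Properties;
    import javax.naming.InitialContext;

    // Hypothetical remote client configuration for GlassFish's IIOP listener
    Properties props = new Properties();
    props.setProperty("org.omg.CORBA.ORBInitialHost", "ejb-server.example.com");
    props.setProperty("org.omg.CORBA.ORBInitialPort", "3700"); // GlassFish default IIOP port
    InitialContext remoteCtx = new InitialContext(props);
    Object ref = remoteCtx.lookup(
        "java:global/CourseManagement/CourseManagementEJBs/Student");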
If you are creating the InitialContext in the same server, then there is no need to specify these attributes:

    InitialContext initCtx = new InitialContext();
    Object obj = initCtx.lookup("jndi_url");

We can use the following JNDI URLs to access a no-interface (LocalBean) Student EJB (assuming that the name of the EAR file is CourseManagement and the name of the JAR file for EJBs is CourseManagementEJBs):

    java:global/CourseManagement/CourseManagementEJBs/Student
        The client can be anywhere in the EAR file, because we are using a global URL. Note that we haven't specified the interface name, because we are assuming that the Student bean provides a no-interface view in this example.

    java:app/CourseManagementEJBs/Student
        The client can be anywhere in the EAR. We skipped the application name because the client is expected to be in the same application. This is because the namespace of the URL is java:app.

    java:module/Student
        The client must be in the same JAR file as the EJB.

We can use the following JNDI URLs to access the Student EJB that implements a local interface, StudentLocal:

    java:global/CourseManagement/CourseManagementEJBs/Student!packt.jee.book.ch6.StudentLocal
        The client can be anywhere in the EAR file, because we are using a global URL.

    java:global/CourseManagement/CourseManagementEJBs/Student
        The client can be anywhere in the EAR. We skipped the interface name because the bean implements only one business interface. Note that the object returned from this call will be of the StudentLocal type, and not Student.

    java:app/CourseManagementEJBs/Student
    or
    java:app/CourseManagementEJBs/Student!packt.jee.book.ch6.StudentLocal
        The client can be anywhere in the EAR. We skipped the application name because the JNDI namespace is java:app.

    java:module/Student
    or
    java:module/Student!packt.jee.book.ch6.StudentLocal
        The client must be in the same module (the same JAR file) as the EJB.

Here is an example of how we can call the Student bean with the local business interface from one of the objects (that is not managed by the web container) in our web application:

    InitialContext ctx = new InitialContext();
    StudentLocal student = (StudentLocal) ctx.lookup(
        "java:app/CourseManagementEJBs/Student");
    return student.getCourses(id); //get courses from Student EJB

Summary

EJBs are ideal for writing business logic in web applications. They can act as the perfect bridge between web interface components, such as a JSF, servlet, or JSP, and data access objects, such as JDO. EJBs can be distributed across multiple JEE application servers (this could improve application scalability), and their life cycle is managed by the container. EJBs can easily be injected into managed objects or can be looked up using JNDI. Eclipse JEE makes creating and consuming EJBs very easy. The GlassFish JEE application server can also be managed, and applications deployed to it, from within Eclipse.

Further resources on this subject:

    Contexts and Dependency Injection in NetBeans
    WebSockets in Wildfly
    Creating Java EE Applications


Cassandra Design Patterns

Packt
22 Sep 2015
18 min read
In this article by Rajanarayanan Thottuvaikkatumana, author of the book Cassandra Design Patterns, Second Edition, the author discusses how Apache Cassandra is one of the most popular NoSQL data stores. He bases this on the research papers Dynamo: Amazon's Highly Available Key-Value Store and Bigtable: A Distributed Storage System for Structured Data; Cassandra is implemented with the best features from both of these research papers. In general, NoSQL data stores can be classified into the following groups:

    Key-value data store
    Column family data store
    Document data store
    Graph data store

Cassandra belongs to the column family data store group. Cassandra's peer-to-peer architecture avoids single points of failure in the cluster of Cassandra nodes and gives the ability to distribute the nodes across racks or data centres. This makes Cassandra a linearly scalable data store; in other words, the more processing you need, the more Cassandra nodes you can add to your cluster. Cassandra's multi data centre support makes it a perfect choice to replicate the data stores across data centres for disaster recovery, high availability, separating transaction processing from analytical environments, and building resiliency into the data store infrastructure.

Design patterns in Cassandra

The term "design patterns" is highly misinterpreted in the software development community. In an extremely general sense, it is a set of solutions for some known problems in quite a specific context. It is used in this book to describe a pattern of using certain features of Cassandra to solve some real-world problems. This book is a collection of such design patterns with real-world examples.

Coexistence patterns

Cassandra is one of the highly successful NoSQL data stores that is greatly similar to the traditional RDBMS. Cassandra column families (also known as Cassandra tables), from a logical perspective, have a similarity with RDBMS-based tables in the view of the users, even though the underlying structures of these tables are totally different. Because of this, Cassandra is a good fit to be deployed along with a traditional RDBMS to solve some of the problems that the RDBMS is not able to handle. The caveat here is that, because of the similarity of RDBMS tables and Cassandra column families in the view of the end users, many users and data modelers try to use Cassandra in exactly the same way as the RDBMS schema is modeled and used, and they run into serious deployment issues. How do you prevent such pitfalls? The key here is to understand the differences from a theoretical perspective as well as a practical perspective, and to follow the best practices prescribed by the creators of Cassandra.

Where do you start with Cassandra? The best place to look is at new application development requirements, and take it from there. Look at the cases where there is a need to denormalize the RDBMS tables and keep all the data items together, which would have got distributed if you were to design the same solution in an RDBMS. Instead of thinking from a pure data model perspective, start thinking in terms of the application's perspective: how the data is generated by the application, what the read requirements are, what the write requirements are, what response time is expected out of some of the use cases, and so on. Depending on these aspects, design the data model.
In the big data world, the application becomes the first-class citizen and the data model leaves the driving seat in the application design. Design the data model to serve the needs of the applications.

In any organization, new reporting requirements come up all the time. The major challenge in generating reports is the underlying data store. In the RDBMS world, reporting is always a challenge. You may have to join multiple tables to generate even simple reports. Even though RDBMS objects such as views, stored procedures, and indexes may be used to get the desired data for the reports, when a report is being generated, the query plan is going to be very complex most of the time. The consumption of processing power is another thing to consider when generating such reports on the fly. Because of these complexities, it is common to keep separate tables for reporting requirements, containing data exported from the transactional tables. This is a great opportunity to start with a NoSQL store like Cassandra as a reporting data store.

Data aggregation and summarization are common requirements in any organization. They help to control data growth by storing only the summary statistics and moving the transactional data into archives. Many times, this aggregated and summarized data is used for statistical analysis. Making the summary accurate and easily accessible is a big challenge. Most of the time, data aggregation and reporting go hand in hand: the aggregated data is heavily used in reports, and the aggregation process speeds up the queries to a great extent. This is another place where you can start with a NoSQL store like Cassandra.

The coexistence of RDBMS and NoSQL data stores like Cassandra is very much possible, feasible, and sensible, and this is the only way to get started with the NoSQL movement, unless you embark on a totally new product development from scratch. In summary, this section of the book discusses some design patterns related to denormalization, reporting, and aggregation of data using Cassandra as the preferred NoSQL data store.

RDBMS migration patterns

A big bang approach to any kind of technology migration is not advisable. A series of deliberations has to happen before the eventual and complete changeover. Migration from RDBMS to Cassandra is no different. Any new technology replacing an old one must coexist harmoniously with it, at least for a short period of time. This gives the stakeholders a lot of confidence in the new technology. Many technology pundits give various approaches to RDBMS-to-NoSQL migration strategies. Many such guidelines are specific to particular NoSQL data stores, giving attention to specific areas, and most of the time they end up focusing on the process rather than the technology. The migration from RDBMS to Cassandra is not an easy task, mainly because RDBMS-based systems are really time tested and trustworthy in most organizations. So, migrating from such a robust RDBMS-based system to Cassandra is not going to be easy for anyone. One of the best approaches to achieve this goal is to exploit some of the new or unique features in Cassandra that many traditional RDBMSes don't have. This also prevents Cassandra being used just like any other RDBMS. Cassandra is unique. Cassandra is not an RDBMS. The approach of banking on the unique features is not only applicable to the RDBMS-to-Cassandra migration, but also to any migration from one paradigm to another.
Some of the design patterns discussed in this section of the book revolve around very simple yet important features of Cassandra, which have profound application potential when designing the next generation of NoSQL data stores using Cassandra. A wise usage of these unique features in Cassandra will give a head start on the eventual and complete migration from RDBMS.

The modeling of collection objects in an RDBMS is a real pain, because multiple tables have to be defined and a join is required to access the data. Many RDBMSes offer this by providing the capability to define user-defined data types, but there is absolutely no standardization in this space. Collection objects are very commonly seen in real-world applications: a list of actions, a tuple of related values, a set of objects, dictionaries, and things like that come up quite often. Cassandra has elegant ways to model this, because collections are data types in column families.

Counting is a very commonly required process in many business processes and applications. In an RDBMS, this has to be modeled as integers or long numbers, but many times applications make big mistakes in using them in wrong ways. Cassandra has a counter data type in the column family that alleviates this problem.

Getting rid of unwanted records from an RDBMS table is not an automatic process. When certain application events occur, the records have to be removed by application programs or through some other means. But in many situations, many data items have a preallocated time to live; they should go away without the intervention of any external event. Cassandra has a way to assign a time-to-live (TTL) attribute to data items. By making use of TTL, data items get removed without any external event's intervention.

All the design patterns covered in this section of the book revolve around these new features of Cassandra, which will make the migration from RDBMS to Cassandra an easier task.
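The three features just mentioned can be sketched in a few lines of CQL. The schema below is illustrative only (the table and column names are ours, not from the book):

    -- A list collection replaces the extra join table an RDBMS would need
    CREATE TABLE student (
        id      text PRIMARY KEY,
        name    text,
        courses list<text>
    );

    -- A counter column family for safe, concurrent counting
    CREATE TABLE page_views (
        page_id text PRIMARY KEY,
        views   counter
    );
    UPDATE page_views SET views = views + 1 WHERE page_id = 'home';

    -- TTL: the row disappears automatically after 24 hours
    INSERT INTO student (id, name) VALUES ('s1', 'visitor') USING TTL 86400;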
Cache migration pattern

Database access, whether from an RDBMS or from other highly distributed NoSQL data stores, is always an input/output (I/O) intensive operation. It makes perfect sense to cache the frequently used, but reasonably static, data for fast access by the applications consuming it. In such situations, an in-memory cache is preferred to repeated database access for each request. Using a cache is not always a pleasant experience, though: getting into really weird problems such as data loss, data getting out of sync with its source, and other data integrity problems is very common.

It is very common to see the wrong components coming into the enterprise solution stack for various reasons. Overlooking some of the features and adopting a technology without much background work is a very common pitfall. Many times, a cache comes into the solution stack to reduce the latency of the responses. Once the initial results are favorable, more and more data gets tossed into the cache, and slowly this becomes standard practice. That is when problems start popping up one by one. Pure in-memory cache solutions are favored by everybody, by virtue of their ability to serve data quickly, until you start losing data because of faults in the system, along with application and node crashes. A cache serves data much faster than other data stores. But if the caching solution in use is giving data integrity problems, it is better to migrate to NoSQL data stores like Cassandra.

Is Cassandra faster than the in-memory caching solutions? The obvious answer is no. But it is not as bad as many think. Cassandra can be configured to serve fast reads, and a bonus comes in the form of high data integrity with strong replication capabilities. A cache is good as long as it serves its purpose without any data loss or other data integrity issues. This section of the book emphasizes the use case of the key/value type cache and various methods of cache-to-NoSQL migration. Cassandra cannot be used as a replacement for a cache in terms of the speed of data access. But when it comes to data integrity, Cassandra shines all the time with its tunable consistency feature. With continual tuning, and by manipulating data with clean and well-written application code, data access can be improved to a great level, and it will be much better than many other data stores. The design pattern covered in this section of the book gives some guidance on migrating from caching solutions to Cassandra, if this is a must.

CAP patterns

When it comes to large-scale Internet applications or web services, popularly known as Internet of Things (IoT) applications, the number of components is huge and the way they are distributed is beyond imagination. There will be hundreds of application servers, hundreds of data store nodes, and many other components in the whole ecosystem. In such a scenario, performing an atomic transaction by getting an agreement from all the components involved is, for all practical purposes, impossible. Consistency, availability, and partition tolerance are three important guarantees, popularly known as the CAP guarantees, that any distributed computing system should offer, even though all three are not possible simultaneously.

In IoT applications, the distribution of the application nodes is unavoidable, which means that the possibility of network partition is very much there. So, it is mandatory to give the P guarantee. Now, the question is whether to forfeit the C guarantee or the A guarantee. At this stage, the situation is not as grave as portrayed in the CAP theorem conjectured by Eric Brewer. For all the use cases in a given IoT application, there is no need for a 100% C guarantee and a 100% A guarantee. So, depending on the level of A guarantee needed, the C guarantee can be tuned; in other words, this is called tunable consistency. Depending on the way data is ingested into Cassandra, and the way it is consumed from Cassandra, tuning is possible to give the best results for the read and write requirements of the applications.

In some applications, the speed at which the data is written will be very high; in other words, the velocity of data ingestion into Cassandra is very high. These fall into the write-heavy applications. In other applications, the need to read data quickly will be an important requirement. This is mainly needed in applications where there is a lot of data processing required; data analytics applications, batch processing applications, and so on fall under this category. These fall into the read-heavy applications. Now, there is a third category of applications where fast writes and fast reads are equally important.
These are the kind of applications where there is a constant inflow of data, and at the same time there is a need for clients to read the data for various purposes. These fall into the read-write balanced applications.

The consistency level requirements of these three types of applications are totally different. There is no single tuning that is optimal for all three; the consistency levels have to be tuned differently from use case to use case. In this section of the book, various design patterns are discussed relating to applications that need fast writes, fast reads, and a balance of the two. All these design patterns revolve around using the tunable consistency parameters of Cassandra. Whether for writes or reads, if the consistency levels are set high, the availability levels will be low, and vice versa. So, by making use of the consistency level knob, the Cassandra data store can be used for various types of writing and reading use cases.

Temporal patterns

In any application, data whose usage varies over a period of time is called temporal data, and it is very important. Temporal data is needed wherever there is a need to maintain chronology. There are many applications in which there is a huge need for storage, retrieval, and processing of data that is tied to time. The biggest challenge in dealing with temporal data stored in a data store is that it is heavily used for analytical purposes, and the data has to be retrieved based on various sort orders in terms of time. So, the data stores used to capture temporal data should be capable of storing the data in strict adherence to the chronology.

Many usage patterns seen in the real world exhibit temporal behavior. For classification purposes in this book, they are bucketed into three categories. The first is the general time series category. The second is the log category, such as an audit log, a transaction log, and so on. The third is the conversation category, such as the conversation messages of a chat application. There is relevance to this classification, because these categories are commonly used across many applications. In many applications, these are really cross-cutting concerns; designers underestimate this aspect, and in the end many applications have different data stores capturing this temporal data. There is a need for a common strategy for dealing with temporal data that falls into these three commonly seen categories in an enterprise-wide solution architecture. In other words, there should be a uniform way of capturing temporal data, a uniform way of processing temporal data, and a commonly used set of tools and libraries to manage it.

Out of the three design patterns discussed in this section of the book, the first, the Time Series pattern, is a general design pattern that covers the most general behavior of any kind of temporal data. The next two design patterns, namely the Log pattern and the Conversation pattern, are two special cases of the first. This section of the book covers the general nature of temporal data, some specific instances of such data items in real-world applications, and why Cassandra is the best fit as a NoSQL data store to persist temporal data. Temporal data comes up quite often in the use cases of many applications.
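A minimal time-series table might look like the sketch below (the table and columns are illustrative, not from the book): the partition key groups all readings of one source, and the clustering column keeps them in chronological order on disk.

    CREATE TABLE sensor_readings (
        sensor_id    text,
        reading_time timestamp,
        reading      double,
        PRIMARY KEY (sensor_id, reading_time)
    ) WITH CLUSTERING ORDER BY (reading_time DESC);

    -- Latest ten readings of one sensor, already sorted by time
    SELECT * FROM sensor_readings WHERE sensor_id = 'S1' LIMIT 10;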
Data modeling of temporal data is very important from the Cassandra perspective for optimal storage and quick access of the data. Some common design patterns for modeling temporal data are covered in this section of the book. By focusing on a very few aspects, such as the partition key, primary key, clustering column, and the number of records that get stored in a wide row of Cassandra, very effective and high-performing temporal data models can be built.

Analytical patterns

The 3Vs of big data, namely Volume, Variety, and Velocity, pose another big challenge: the analysis of the data stored in NoSQL data stores such as Cassandra. What are the analytics use cases? How can the distributed data be processed? What are the data transformations typically seen in applications? These are the topics covered in this section of the book. Unlike the other sections of this book, the focus shifts from Cassandra to other technologies, such as Apache Hadoop, Hadoop MapReduce, and Apache Spark, to introduce the big data analytics tool space. Design patterns such as the Map/Reduce pattern and the Transformation pattern are very commonly seen in the data analytics world. Cassandra has good compatibility with Apache Spark, and the combination is an ideal tool set for data analysis use cases. This section of the book covers some data analysis aspects and mainly discusses data processing. Data transformation is one of the major activities in data processing. Out of the many data processing patterns, the Map/Reduce pattern deserves a special mention, because it is used in so many batch processing and analysis use cases dealing with big data. Spark has been chosen as the tool of choice to explain the data processing activities. This section explains how a Map/Reduce kind of data processing task can be done using Cassandra. Spark, which is very powerful for performing online data analysis, has also been discussed. This section of the book also covers some of the commonly seen data transformations used in data processing applications.

Summary

Many Cassandra design patterns have been covered in this book. If design patterns are not used in any real-world applications, they have only theoretical value. To give a practical approach to the applicability of these design patterns, an end-to-end application is taken as a case point and described in the last chapter of the book, which is used as a vehicle to explain the applicability of the Cassandra design patterns discussed in the earlier sections.

Users love Cassandra because of its SQL-like interface, CQL. Also, its features are very closely related to those of an RDBMS, even though the paradigm is totally new. Application developers love Cassandra because of the plethora of drivers available in the market, so that they can write applications in their preferred programming language. Architects love Cassandra because they can store structured, semi-structured, and unstructured data in it. Database administrators love Cassandra because it comes with almost no maintenance overhead. Service managers love Cassandra because of the wonderful monitoring tools available in the market. CIOs love Cassandra because it gives value for their money. And Cassandra works! An application based on Cassandra will be perfect only if its features are used in the right way, and this book is an attempt to guide the Cassandra community in this direction.
Further resources on this subject:

    Cassandra Architecture
    Getting Up and Running with Cassandra
    Getting Started with Apache Cassandra


Putting the Function in Functional Programming

Packt
22 Sep 2015
27 min read
In this article by Richard Reese, the author of the book Learning Java Functional Programming, we will cover lambda expressions in more depth. We will explain how they satisfy the mathematical definition of a function and how we can use them in supporting Java applications. In this article, you will cover several topics, including:

    Lambda expression syntax and type inference
    High-order, pure, and first-class functions
    Referential transparency
    Closure and currying

Our discussions cover high-order functions, first-class functions, and pure functions. Also examined are the concepts of referential transparency, closure, and currying. Examples of nonfunctional approaches are followed by their functional equivalents where practical.

Lambda expressions usage

A lambda expression can be used in many different situations, including:

    Assigned to a variable
    Passed as a parameter
    Returned from a function or method

We will demonstrate how each of these is accomplished and then elaborate on the use of functional interfaces. Consider the forEach method supported by several classes and interfaces, including the List interface. In the following example, a List is created and the forEach method is executed against it. The forEach method expects an object that implements the Consumer interface. This will display the three cartoon character names:

    List<String> list = Arrays.asList("Huey", "Duey", "Luey");
    list.forEach(/* Implementation of Consumer Interface*/);

More specifically, the forEach method expects an object that implements the accept method, the interface's single abstract method. This method's signature is as follows:

    void accept(T t)

The interface also has a default method, andThen, which is passed and returns an instance of the Consumer interface. We can use any of three different approaches to implement the functionality of the accept method:

    Use an instance of a class that implements the Consumer interface
    Use an anonymous inner class
    Use a lambda expression

We will demonstrate each method so that it will be clear how each technique works and why lambda expressions often result in a better solution. We will start with the declaration of a class that implements the Consumer interface, as shown next:

    public class ConsumerImpl<T> implements Consumer<T> {
        @Override
        public void accept(T t) {
            System.out.println(t);
        }
    }

We can then use it as the argument of the forEach method:

    list.forEach(new ConsumerImpl<>());

Using an explicit class allows us to reuse the class or its objects whenever an instance is needed. The second approach uses an anonymous inner class, as shown here:

    list.forEach(new Consumer<String>() {
        @Override
        public void accept(String t) {
            System.out.println(t);
        }
    });

This was a fairly common approach used prior to Java 8. It avoids having to explicitly declare and instantiate a class that implements the Consumer interface. A simple statement that uses a lambda expression is shown next:

    list.forEach(t -> System.out.println(t));

The lambda expression accepts a single argument and returns void. This matches the signature of the Consumer interface's accept method. Java 8 is able to automatically perform this matching process. This latter technique obviously uses less code, making it more succinct than the other solutions. If we desire to reuse this lambda expression elsewhere, we can assign it to a variable first and then use it in the forEach method, as shown here:

    Consumer<String> consumer = t -> System.out.println(t);
    list.forEach(consumer);

Anywhere a functional interface is expected, we can use a lambda expression.
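For a taste of the variety available, the snippet below (an illustration of ours, not taken from the book) binds lambdas to several interfaces from the java.util.function package, which is introduced next:

    // import java.util.function.*;
    Predicate<String> isLong = s -> s.length() > 4;                // boolean test(T t)
    Supplier<String> greet = () -> "Hello";                        // T get()
    Function<String, Integer> length = s -> s.length();            // R apply(T t)
    BiFunction<Integer, Integer, Integer> add = (a, b) -> a + b;   // R apply(T t, U u)

    System.out.println(isLong.test("Huey"));        // false
    System.out.println(length.apply(greet.get()));  // 5
    System.out.println(add.apply(2, 3));            // 5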
Thus, the availability of a large number of functional interfaces enables the frequent use of lambda expressions and programs that exhibit a functional style of programming. While developers can define their own functional interfaces, which we will do shortly, Java 8 has added a large number of functional interfaces designed to support common operations. Most of these are found in the java.util.function package. We will use several of these throughout the book and will elaborate on their purpose, definition, and use as we encounter them.

Functional programming concepts in Java

In this section, we will examine the underlying concept of functions and how they are implemented in Java 8. This includes high-order, first-class, and pure functions.

A first-class function is a function that can be used where other first-class entities can be used. These types of entities include primitive data types and objects. Typically, they can be passed to and returned from functions and methods. In addition, they can be assigned to variables. A high-order function either takes another function as an argument or returns a function as its return value. Languages that support this type of function are more flexible; they allow a more natural flow and composition of operations. Pure functions have no side effects: a pure function does not modify nonlocal variables and does not perform I/O.

High-order functions

We will demonstrate the creation and use of a high-order function using an imperative and a functional approach to convert the letters of a string to lowercase. The next code sequence reuses the list variable, developed in the previous section, to illustrate the imperative approach. The for-each statement iterates through each element of the list, using the String class's toLowerCase method to perform the conversion:

    for (String element : list) {
        System.out.println(element.toLowerCase());
    }

The output will be each name in the list displayed in lowercase, each on a separate line. To demonstrate the use of a high-order function, we will create a function called processString, which is passed a function as its first parameter and applies that function to its second parameter, as shown next:

    public String processString(Function<String, String> operation, String target) {
        return operation.apply(target);
    }

The function passed will be an instance of the java.util.function package's Function interface. This interface possesses an apply method that is passed one data type and returns a potentially different data type. With our definition, it is passed a String and returns a String. In the next code sequence, a lambda expression using the toLowerCase method is passed to the processString method. As you may remember, the forEach method accepts a lambda expression that matches the Consumer interface's accept method. The lambda expression passed to the processString method matches the Function interface's apply method. The output is the same as that produced by the equivalent imperative implementation:
list.forEach(s ->System.out.println( processString(t->t.toLowerCase(), s))); We could have also used a method reference as show next: list.forEach(s ->System.out.println( processString(String::toLowerCase, s))); The use of the high-order function may initially seem to be a bit convoluted. We needed to create the processString function and then pass either a lambda expression or a method reference to perform the conversion. While this is true, the benefit of this approach is flexibility. If we needed to perform a different string operation other than converting the target string to lowercase, we will need to essentially duplicate the imperative code and replace toLowerCase with a new method such as toUpperCase. However, with the functional approach, all we need to do is replace the method used as shown next: list.forEach(s ->System.out.println(processString(t- >t.toUpperCase(), s))); This is simpler and more flexible. A lambda expression can also be passed to another lambda expression. Let's consider another example where high-order functions can be useful. Suppose we need to convert a list of one type into a list of a different type. We might have a list of strings that we wish to convert to their integer equivalents. We might want to perform a simple conversion or perhaps we might want to double the integer value. We will use the following lists:   List<String> numberString = Arrays.asList("12", "34", "82"); List<Integer> numbers = new ArrayList<>(); List<Integer> doubleNumbers = new ArrayList<>(); The following code sequence uses an iterative approach to convert the string list into an integer list:   for (String num : numberString) { numbers.add(Integer.parseInt(num)); } The next sequence uses a stream to perform the same conversion: numbers.clear(); numberString .stream() .forEach(s -> numbers.add(Integer.parseInt(s))); There is not a lot of difference between these two approaches, at least from a number of lines perspective. However, the iterative solution will only work for the two lists: numberString and numbers. To avoid this, we could have written the conversion routine as a method. We could also use lambda expression to perform the same conversion. The following two lambda expression will convert a string list to an integer list and from a string list to an integer list where the integer has been doubled:   Function<List<String>, List<Integer>> singleFunction = s -> { s.stream() .forEach(t -> numbers.add(Integer.parseInt(t))); return numbers; }; Function<List<String>, List<Integer>> doubleFunction = s -> { s.stream() .forEach(t -> doubleNumbers.add( Integer.parseInt(t) * 2)); return doubleNumbers; }; We can apply these two functions as shown here: numbers.clear(); System.out.println(singleFunction.apply(numberString)); System.out.println(doubleFunction.apply(numberString)); The output follows: [12, 34, 82] [24, 68, 164] However, the real power comes from passing these functions to other functions. In the next code sequence, a stream is created consisting of a single element, a list. This list contains a single element, the numberString list. The map method expects a Function interface instance. Here, we use the doubleFunction function. The list of strings is converted to integers and then doubled. The resulting list is displayed: Arrays.asList(numberString).stream() .map(doubleFunction) .forEach(s -> System.out.println(s)); The output follows: [24, 68, 164] We passed a function to a method. We could easily pass other functions to achieve different outputs. 
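As a further illustration of this flexibility, the following sketch (our own code, using hypothetical names such as convert) wraps the stream pipeline in a small generic method so that the conversion logic is supplied entirely by the function passed in, and no shared list needs to be modified:
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ListConversionDemo {
    // Applies the supplied mapping function to every element and collects the results
    public static <T, R> List<R> convert(List<T> input, Function<T, R> mapper) {
        return input.stream()
                    .map(mapper)
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> numberString = Arrays.asList("12", "34", "82");

        System.out.println(convert(numberString, Integer::parseInt));            // [12, 34, 82]
        System.out.println(convert(numberString, s -> Integer.parseInt(s) * 2)); // [24, 68, 164]
    }
}
Because convert builds and returns a new list instead of adding to the numbers or doubleNumbers variables, it also avoids the side effects that the pure function discussion later in this article warns against.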
Returning a function
When a value is returned from a function or method, it is intended to be used elsewhere in the application. Sometimes, the return value is used to determine how subsequent computations should proceed. To illustrate how returning a function can be useful, let's consider a problem where we need to calculate the pay of an employee based on the number of hours worked, the pay rate, and the employee type. To facilitate the example, start with an enumeration representing the employee type:
enum EmployeeType {Hourly, Salary, Sales};
The next method illustrates one way of calculating the pay using an imperative approach. A more complex set of computations could be used, but these will suffice for our needs:
public float calculatePay(int hoursWorked, float payRate, EmployeeType type) {
    switch (type) {
        case Hourly:
            return hoursWorked * payRate;
        case Salary:
            return 40 * payRate;
        case Sales:
            return 500.0f + 0.15f * payRate;
        default:
            return 0.0f;
    }
}
If we assume a 7-day workweek, then the next code sequence shows an imperative way of calculating the total number of hours worked:
int[] hoursWorked = {8, 12, 8, 6, 6, 5, 6};
int totalHoursWorked = 0;
for (int hour : hoursWorked) {
    totalHoursWorked += hour;
}
Alternatively, we could have used a stream to perform the same operation, as shown next. The Arrays class's stream method accepts an array of integers and converts it into an IntStream. The sum method is applied fluently, returning the total number of hours worked:
totalHoursWorked = Arrays.stream(hoursWorked).sum();
The latter approach is simpler and easier to read. To calculate and display the pay, we can use the following statement which, when executed, will return 803.25:
System.out.println(calculatePay(totalHoursWorked, 15.75f, EmployeeType.Hourly));
The functional approach is shown next. A calculatePayFunction method is created that is passed the employee type and returns a lambda expression. This will compute the pay based on the number of hours worked and the pay rate. This lambda expression is based on the BiFunction interface. It has an apply method that takes two arguments and returns a value. Each of the parameters and the return type can be of different data types. It is similar to the Function interface's apply method, except that it is passed two arguments instead of one. The calculatePayFunction method is shown next. It is similar to the imperative calculatePay method, but returns a lambda expression:
public BiFunction<Integer, Float, Float> calculatePayFunction(EmployeeType type) {
    switch (type) {
        case Hourly:
            return (hours, payRate) -> hours * payRate;
        case Salary:
            return (hours, payRate) -> 40 * payRate;
        case Sales:
            return (hours, payRate) -> 500f + 0.15f * payRate;
        default:
            return null;
    }
}
It can be invoked as shown next:
System.out.println(calculatePayFunction(EmployeeType.Hourly).apply(totalHoursWorked, 15.75f));
When executed, it will produce the same output as the imperative solution. The advantage of this approach is that the lambda expression can be passed around and executed in different contexts.
First-class functions
To demonstrate first-class functions, we use lambda expressions. Assigning a lambda expression, or method reference, to a variable can be done in Java 8. Simply declare a variable of the appropriate function type and use the assignment operator to do the assignment.
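As a quick aside before the detailed example that follows, here is a minimal self-contained sketch (our own, using hypothetical names such as PayTableDemo) showing that once lambda expressions are values, they can not only be assigned to variables but also stored in and retrieved from a data structure such as a map:
import java.util.EnumMap;
import java.util.Map;
import java.util.function.BiFunction;

public class PayTableDemo {
    enum EmployeeType { Hourly, Salary, Sales }

    public static void main(String[] args) {
        // Lambda expressions stored as ordinary map values
        Map<EmployeeType, BiFunction<Integer, Float, Float>> payTable =
                new EnumMap<>(EmployeeType.class);
        payTable.put(EmployeeType.Hourly, (hours, rate) -> hours * rate);
        payTable.put(EmployeeType.Salary, (hours, rate) -> 40 * rate);
        payTable.put(EmployeeType.Sales,  (hours, rate) -> 500f + 0.15f * rate);

        // Retrieved at runtime like any other value, then applied
        System.out.println(payTable.get(EmployeeType.Hourly).apply(51, 15.75f));
    }
}
This prints 803.25, the same hourly result computed earlier, but the choice of computation is now data driven rather than hardcoded in a switch statement.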
In the following statement, a reference variable for the previously defined BiFunction-based lambda expression is declared along with the number of hours worked:
BiFunction<Integer, Float, Float> calculateFunction;
int hoursWorked = 51;
We can easily assign a lambda expression to this variable. Here, we use the lambda expression returned from the calculatePayFunction method:
calculateFunction = calculatePayFunction(EmployeeType.Hourly);
The reference variable can then be used as shown in this statement:
System.out.println(calculateFunction.apply(hoursWorked, 15.75f));
It produces the same output as before. One shortcoming of the way an hourly employee's pay is computed is that overtime pay is not handled. We could add this functionality to the calculatePayFunction method. However, to further illustrate the use of reference variables, we will assign one of two lambda expressions to the calculateFunction variable based on the number of hours worked, as shown here:
if(hoursWorked <= 40) {
    calculateFunction = (hours, payRate) -> hours * payRate;
} else {
    calculateFunction = (hours, payRate) -> hours*payRate + (hours-40)*1.5f*payRate;
}
When the expression is evaluated as shown next, it returns a value of 1063.125:
System.out.println(calculateFunction.apply(hoursWorked, 15.75f));
Let's rework the example developed in the High-order functions section, where we used lambda expressions to display the lowercase values of a list of strings. Part of the code has been duplicated here for your convenience:
list.forEach(s -> System.out.println(processString(t -> t.toLowerCase(), s)));
Instead, we will use variables to hold the lambda expressions for the Function and Consumer interfaces, as shown here:
Function<String,String> toLowerFunction;
toLowerFunction = t -> t.toLowerCase();
Consumer<String> consumer;
consumer = s -> System.out.println(toLowerFunction.apply(s));
The declaration and initialization could have been done with one statement for each variable. Note that toLowerFunction must be declared and assigned before it is referenced in the consumer lambda expression. To display all of the names, we simply use the consumer variable as the argument of the forEach method:
list.forEach(consumer);
This will display the names as before. However, this is much easier to read and follow. The ability to use lambda expressions as first-class entities makes this possible. We can also assign method references to variables. Here, we replaced the initialization of the toLowerFunction variable with a method reference:
toLowerFunction = String::toLowerCase;
The output of the code will not change.
The pure function
The pure function is a function that has no side effects. By side effects, we mean that the function does not modify nonlocal variables and does not perform I/O. A method that squares a number is an example of a pure method with no side effects, as shown here:
public class SimpleMath {
    public static int square(int x) {
        return x * x;
    }
}
Its use is shown here and will display the result, 25:
System.out.println(SimpleMath.square(5));
An equivalent lambda expression is shown here:
Function<Integer,Integer> squareFunction = x -> x*x;
System.out.println(squareFunction.apply(5));
The advantages of pure functions include the following:
They can be invoked repeatedly, producing the same results
There are no dependencies between functions that impact the order in which they can be executed
They support lazy evaluation
They support referential transparency
We will examine each of these advantages in more depth.
Support repeated execution
Using the same arguments will produce the same results. The previous square operation is an example of this.
Since the operation does not depend on other external values, re-executing the code with the same arguments will return the same results. This supports the optimization technique called memoization. This is the process of caching the results of an expensive execution sequence and retrieving them when they are used again. An imperative technique for implementing this approach involves using a hash map to store values that have already been computed and retrieving them when they are used again. Let's demonstrate this using the square function. The technique should be used for functions that are compute intensive. However, using the square function will allow us to focus on the technique. Declare a cache to hold the previously computed values as shown here:
private final Map<Integer, Integer> memoizationCache = new HashMap<>();
We need to declare two methods. The first method, called doComputeExpensiveSquare, does the actual computation as shown here. A display statement is included only to verify the correct operation of the technique. Otherwise, it is not needed. The method should only be called once for each unique value passed to it.
private Integer doComputeExpensiveSquare(Integer input) {
    System.out.println("Computing square");
    return input * input;
}
A second method is used to detect when a value is used a subsequent time and to return the previously computed value instead of calling the square method. This is shown next. The containsKey method checks to see whether the input value has already been used. If it hasn't, then the doComputeExpensiveSquare method is called. Otherwise, the cached value is returned.
public Integer computeExpensiveSquare(Integer input) {
    if (!memoizationCache.containsKey(input)) {
        memoizationCache.put(input, doComputeExpensiveSquare(input));
    }
    return memoizationCache.get(input);
}
The use of the technique is demonstrated with the next code sequence:
System.out.println(computeExpensiveSquare(4));
System.out.println(computeExpensiveSquare(4));
The output follows, which demonstrates that the square method was only called once:
Computing square
16
16
The problem with this approach is the declaration of a hash map. This object may be inadvertently used by other elements of the program, and a new hash map has to be declared explicitly for each memoized function. In addition, it does not offer flexibility in handling multiple memoized functions. A better approach is available in Java 8. This new approach wraps the hash map in a class and allows easier creation and use of memoization. Let's examine a memoization class as adapted from http://java.dzone.com/articles/java-8-automatic-memoization. It is called Memoizer. It uses ConcurrentHashMap to cache values and supports concurrent access from multiple threads. Two methods are defined. The doMemoize method returns a lambda expression that does all of the work. The memoize method creates an instance of the Memoizer class and passes the lambda expression implementing the expensive operation to the doMemoize method. The doMemoize method uses the ConcurrentHashMap class's computeIfAbsent method to determine if the computation has already been performed.
If the value has not been computed, it executes the Function interface's apply method against the function argument: public class Memoizer<T, U> { private final Map<T, U> memoizationCache = new ConcurrentHashMap<>(); private Function<T, U> doMemoize(final Function<T, U> function) { return input -> memoizationCache.computeIfAbsent(input, function::apply); } public static <T, U> Function<T, U> memoize(final Function<T, U> function) { return new Memoizer<T, U>().doMemoize(function); } } A lambda expression is created for the square operation: Function<Integer, Integer> squareFunction = x -> { System.out.println("In function"); return x * x; }; The memoizationFunction variable will hold the lambda expression that is subsequently used to invoke the square operations: Function<Integer, Integer> memoizationFunction = Memoizer.memoize(squareFunction); System.out.println(memoizationFunction.apply(2)); System.out.println(memoizationFunction.apply(2)); System.out.println(memoizationFunction.apply(2)); The output of this sequence follows where the square operation is performed only once: In function 4 4 4 We can easily use the Memoizer class for a different function as shown here: Function<Double, Double> memoizationFunction2 = Memoizer.memoize(x -> x * x); System.out.println(memoizationFunction2.apply(4.0)); This will square the number as expected. Functions that are recursive present additional problems. Eliminating dependencies between functions When dependencies between functions are eliminated, then more flexibility in the order of execution is possible. Consider these Function and BiFunction declarations, which define simple expressions for computing hourly, salaried, and sales type pay, respectively: BiFunction<Integer, Double, Double> computeHourly = (hours, rate) -> hours * rate; Function<Double, Double> computeSalary = rate -> rate * 40.0; BiFunction<Double, Double, Double> computeSales = (rate, commission) -> rate * 40.0 + commission; These functions can be executed, and their results are assigned to variables as shown here: double hourlyPay = computeHourly.apply(35, 12.75); double salaryPay = computeSalary.apply(25.35); double salesPay = computeSales.apply(8.75, 2500.0); These are pure functions as they do not use external values to perform their computations. In the following code sequence, the sum of all three pays are totaled and displayed: System.out.println(computeHourly.apply(35, 12.75) + computeSalary.apply(25.35) + computeSales.apply(8.75, 2500.0)); We can easily reorder their execution sequence or even execute them concurrently, and the results will be the same. There are no dependencies between the functions that restrict them to a specific execution ordering. Supporting lazy evaluation Continuing with this example, let's add an additional sequence, which computes the total pay based on the type of employee. The variable, hourly, is set to true if we want to know the total of the hourly employee pay type. It will be set to false if we are interested in salary and sales-type employees: double total = 0.0; boolean hourly = ...; if(hourly) { total = hourlyPay; } else { total = salaryPay + salesPay; } System.out.println(total); When this code sequence is executed with an hourly value of false, there is no need to execute the computeHourly function since it is not used. The runtime system could conceivably choose not to execute any of the lambda expressions until it knows which one is actually used. 
While all three functions are actually executed in this example, it illustrates the potential for lazy evaluation. Functions are not executed until needed. Referential transparency Referential transparency is the idea that a given expression is made up of subexpressions. The value of the subexpression is important. We are not concerned about how it is written or other details. We can replace the subexpression with its value and be perfectly happy. With regards to pure functions, they are said to be referentially transparent since they have same effect. In the next declaration, we declare a pure function called pureFunction: Function<Double,Double> pureFunction = t -> 3*t; It supports referential transparency. Consider if we declare a variable as shown here: int num = 5; Later, in a method we can assign a different value to the variable: num = 6; If we define a lambda expression that uses this variable, the function is no longer pure: Function<Double,Double> impureFunction = t -> 3*t+num; The function no longer supports referential transparency. Closure in Java The use of external variables in a lambda expression raises several interesting questions. One of these involves the concept of closures. A closure is a function that uses the context within which it was defined. By context, we mean the variables within its scope. This sometimes is referred to as variable capture. We will use a class called ClosureExample to illustrate closures in Java. The class possesses a getStringOperation method that returns a Function lambda expression. This expression takes a string argument and returns an augmented version of it. The argument is converted to lowercase, and then its length is appended to it twice. In the process, both an instance variable and a local variable are used. In the implementation that follows, the instance variable and two local variables are used. One local variable is a member of the getStringOperation method and the second one is a member of the lambda expression. They are used to hold the length of the target string and for a separator string: public class ClosureExample { int instanceLength; public Function<String,String> getStringOperation() { final String seperator = ":"; return target -> { int localLength = target.length(); instanceLength = target.length(); return target.toLowerCase() + seperator + instanceLength + seperator + localLength; }; } } The lambda expression is created and used as shown here: ClosureExample ce = new ClosureExample(); final Function<String,String> function = ce.getStringOperation(); System.out.println(function.apply("Closure")); Its output follows: closure:7:7 Variables used by the lambda expression are restricted in their use. Local variables or parameters cannot be redefined or modified. These variables need to be effectively final. That is, they must be declared as final or not be modified. If the local variable and separator, had not been declared as final, the program would still be executed properly. 
However, if we tried to modify the variable later, then the following syntax error would be generated, indicating such variable was not permitted within a lambda expression: local variables referenced from a lambda expression must be final or effectively final If we add the following statements to the previous example and remove the final keyword, we will get the same syntax error message: function = String::toLowerCase; Consumer<String> consumer = s -> System.out.println(function.apply(s)); This is because the function variable is used in the Consumer lambda expression. It also needs to be effectively final, but we tried to assign a second value to it, the method reference for the toLowerCase method. Closure refers to functions that enclose variable external to the function. This permits the function to be passed around and used in different contexts. Currying Some functions can have multiple arguments. It is possible to evaluate these arguments one-by-one. This process is called currying and normally involves creating new functions, which have one fewer arguments than the previous one. The advantage of this process is the ability to subdivide the execution sequence and work with intermediate results. This means that it can be used in a more flexible manner. Consider a simple function such as: f(x,y) = x + y The evaluation of f(2,3) will produce a 5. We could use the following, where the 2 is "hardcoded": f(2,y) = 2 + y If we define: g(y) = 2 + y Then the following are equivalent: f(2,y) = g(y) = 2 + y Substituting 3 for y we get: f(2,3) = g(3) = 2 + 3 = 5 This is the process of currying. An intermediate function, g(y), was introduced which we can pass around. Let's see, how something similar to this can be done in Java 8. Start with a BiFunction method designed for concatenation of strings. A BiFunction method takes two parameters and returns a single value: BiFunction<String, String, String> biFunctionConcat = (a, b) -> a + b; The use of the function is demonstrated with the following statement: System.out.println(biFunctionConcat.apply("Cat", "Dog")); The output will be the CatDog string. Next, let's define a reference variable called curryConcat. This variable is a Function interface variable. This interface is based on two data types. The first one is String and represents the value passed to the Function interface's accept method. The second data type represents the accept method's return type. This return type is defined as a Function instance that is passed a string and returns a string. In other words, the curryConcat function is passed a string and returns an instance of a function that is passed and returns a string. Function<String, Function<String, String>> curryConcat; We then assign an appropriate lambda expression to the variable: curryConcat = (a) -> (b) -> biFunctionConcat.apply(a, b); This may seem to be a bit confusing initially, so let's take it one piece at a time. First of all, the lambda expression needs to return a function. The lambda expression assigned to curryConcat follows where the ellipses represent the body of the function. The parameter, a, is passed to the body: (a) ->...; The actual body follows: (b) -> biFunctionConcat.apply(a, b); This is the lambda expression or function that is returned. This function takes two parameters, a and b. When this function is created, the a parameter will be known and specified. This function can be evaluated later when the value for b is specified. 
The function returned is an instance of a Function interface, which is passed two parameters and returns a single value. To illustrate this, define an intermediate variable to hold this returned function: Function<String,String> intermediateFunction; We can assign the result of executing the curryConcat lambda expression using it's apply method as shown here where a value of Cat is specified for the a parameter: intermediateFunction = curryConcat.apply("Cat"); The next two statements will display the returned function: System.out.println(intermediateFunction); System.out.println(curryConcat.apply("Cat")); The output will look something similar to the following: packt.Chapter2$$Lambda$3/798154996@5305068a packt.Chapter2$$Lambda$3/798154996@1f32e575 Note that these are the values representing this functions as returned by the implied toString method. They are both different, indicating that two different functions were returned and can be passed around. Now that we have confirmed a function has been returned, we can supply a value for the b parameter as shown here: System.out.println(intermediateFunction.apply("Dog")); The output will be CatDog. This illustrates how we can split a two parameter function into two distinct functions, which can be evaluated when desired. They can be used together as shown with these statements: System.out.println(curryConcat.apply("Cat").apply("Dog")); System.out.println(curryConcat.apply("Flying ").apply("Monkeys")); The output of these statements is as follows: CatDog Flying Monkeys We can define a similar operation for doubles as shown here: Function<Double, Function<Double, Double>> curryAdd = (a) -> (b) -> a * b; System.out.println(curryAdd.apply(3.0).apply(4.0)); This will display 12.0 as the returned value. Currying is a valuable approach useful when the arguments of a function need to be evaluated at different times. Summary In this article, we investigated the use of lambda expressions and how they support the functional style of programming in Java 8. When possible, we used examples to contrast the use of classes and methods against the use of functions. This frequently led to simpler and more maintainable functional implementations. We illustrated how lambda expressions support the functional concepts of high-order, first-class, and pure functions. Examples were used to help clarify the concept of referential transparency. The concepts of closure and currying are found in most functional programming languages. We provide examples of how they are supported in Java 8. Lambda expressions have a specific syntax, which we examined in more detail. Also, there are several variations of the function that can be used to support the expression in the form, which we illustrated. Lambda expressions are based on functional interfaces using type inference. It is important to understand how to create functional interfaces and to know what standard functional interfaces are available in Java 8. Resources for Article: Further resources on this subject: An Introduction to Mastering JavaScript Promises and Its Implementation in Angular.js[article] Finding Peace in REST[article] Introducing JAX-RS API [article]
Embedded Linux and Its Elements

Packt
22 Sep 2015
6 min read
 In this article by Chris Simmonds, author of the book, Mastering Embedded Linux Programming, we'll cover the introduction of embedded Linux and its elements. (For more resources related to this topic, see here.) Why is embedded Linux popular? Linux first became a viable choice for embedded devices around 1999. That was when Axis (www.axis.com) released their first Linux-powered network camera and Tivo (www.tivo.com) their first DVR (Digital video recorder). Since 1999, Linux has become ever more popular, to the point that today it is the operating system of choice for many classes of product. As of this writing, in 2015, there are about 2 billion devices running Linux. That includes a large number of smart phones running Android, set top boxes and smart TVs and WiFi routers. Not to mention a very diverse range of devices such as vehicle diagnostics, weighing scales, industrial devices and medical monitoring units that ship in smaller volumes. So, why does your TV run Linux? At first glance, the function of a TV is simple: it has to display a stream of video on a screen. Why is a complex Unix-based operating system like Linux necessary? The simple answer is Moore's Law: Gordon Moore, co-founder of Intel stated in 1965 that the density of components on a chip will double every 2 years. That applies to the devices that we design and use in our everyday lives just as much as it does to desktops, laptops and servers. A typical SoC (System on Chip) at the heart of current devices contains many function block and has a technical reference manual that stretches to thousands of pages. Your TV is not simply displaying a video stream as the old analog sets used to. The stream is digital, possibly encrypted, and it needs processing to create an image. Your TV is (or soon will be) connected to the Internet. It can receive content from smart phones, tablets and home media servers. It can be (or soon will) used to play games. And so on and so on. You need a full operating system to manage all that hardware. Here are some points that drive the adoption of Linux: Linux has the functionality required. It has a good scheduler, a good network stack, support for many kinds of storage media, good support for multimedia devices, and so on. It ticks all the boxes. Linux has been ported to a wide range of processor architectures, including those important for embedded use: ARM, MIPS, x86 and PowerPC. Linux is open source. So you have the freedom to get the source code and modify it to meet your needs. You, or someone in the community, can create a board support package for your particular SoC, board or device. You can add protocols, features, technologies that may be missing from the mainline source code. Or, you can remove features that you don't need in order to reduce memory and storage requirements. Linux is flexible. Linux has an active community. In the case of the Linux kernel, very active. There is a new release of the kernel every 10 to 12 weeks, and each release contains code from around 1000 developers. An active community means that Linux is up to date and supports current hardware, protocols and standards. Open source licenses guarantee that you have access to the source code. There is no vendor tie-in. There is no vendor, no license fees, no restrictive NDAs, EULAs, and so on. Open source software is free in both senses: it gives you the freedom to adapt it for our own use and there is nothing to pay. For these reasons, Linux is an ideal choice for complex devices. 
But there are a few caveats I should mention here. Complexity makes it harder to understand. Coupled with the fast moving development process and the decentralized structures of open source, you have to put some effort into learning how to use it and to keep on re-learning as it changes. I hope that this article will help in the process. Elements of embedded Linux Every project begins by obtaining, customizing and deploying these four elements: Toolchain, Bootloader, Kernel, and Root filesystem. Toolchain The toolchain is the first element of embedded Linux and the starting point of your project. It should be constant throughout the project, in other words, once you have chosen your toolchain it is important to stick with it. Changing compilers and development libraries in an inconsistent way during a project will lead to subtle bugs. Obtaining a toolchain can be as simple as downloading and installing a package. But, the toolchain itself is a complex thing. Linux toolchains are almost always based on components from the GNU project (http://www.gnu.org). It is becoming possible to create toolchains based on LLVM/Clang (http://llvm.org). Bootloader The bootloader is the second element of Embedded Linux. It is the part that starts the system up and loads the operating system kernel. When considering which bootloader to focus on, there is one that stands out: U-Boot. In an embedded Linux system the bootloader has two main jobs: to start the system running and to load a kernel. In fact the first job is in somewhat subsidiary to the second in that it is only necessary to get as much of the system working as is necessary to load the kernel. Kernel The kernel is the third element of Embedded Linux. It is the component that is responsible for managing resources and interfacing with hardware, and so affects almost every aspect of your final software build. Usually it is tailored to your particular hardware configuration. The kernel has three main jobs to do: to manage resources, to interface to hardware, and to provide an API that offers a useful level of abstraction to user space programs, as summarized in the following diagram: Root filesystem The root filesystem is the fourth and final element of embedded Linux. The first objective is to create a minimal root filesystem that can give us a shell prompt. Then using that as a base we will add scripts to start other programs up, and to configure a network interface and user permissions. Knowing how to build the root filesystem from scratch is a useful skill. Summary In this article we briefly saw the introduction for embedded Linux and its elements. Resources for Article: Further resources on this subject: Virtualization[article] An Introduction to WEP [article] Raspberry Pi LED Blueprints [article]
Configuring and Securing a Virtual Private Cloud

Packt
16 Sep 2015
7 min read
In this article by Aurobindo Sarkar and Sekhar Reddy, author of the book Amazon EC2 Cookbook, we will cover recipes for: Configuring VPC DHCP options Configuring networking connections between two VPCs (VPC peering) (For more resources related to this topic, see here.) In this article, we will focus on recipes to configure AWS VPC (Virtual Private Cloud) against typical network infrastructure requirements. VPCs help you isolate AWS EC2 resources, and this feature is available in all AWS regions. A VPC can span multiple availability zones in a region. AWS VPC also helps you run hybrid applications on AWS by extending your existing data center into the public cloud. Disaster recovery is another common use case for using AWS VPC. You can create subnets, routing tables, and internet gateways in VPC. By creating public and private subnets, you can put your web and frontend services in public subnet, your application databases and backed services in a private subnet. Using VPN, you can extend your on-premise data center. Another option to extend your on-premise data center is AWS Direct Connect, which is a private network connection between AWS and you're on-premise data center. In VPC, EC2 resources get static private IP addresses that persist across reboots, which works in the same way as DHCP reservation. You can also assign multiple IP addresses and Elastic Network Interfaces. You can have a private ELB accessible only within your VPC. You can use CloudFormation to automate the VPC creation process. Defining appropriate tags can help you manage your VPC resources more efficiently. Configuring VPC DHCP options DHCP options sets are associated with your AWS account, so they can be used across all your VPCs. You can assign your own domain name to your instances by specifying a set of DHCP options for your VPC. However, only one DHCP Option set can be associated with a VPC. Also, you can't modify the DHCP option set after it is created. In case your want to use a different set of DHCP options, then you will need to create a new DHCP option set and associate it with your VPC. There is no need to restart or relaunch the instances in the VPC after associating the new DHCP option set as they can automatically pick up the changes. How to Do It… In this section, we will create a DHCP option set and then associate it with your VPC. Create a DHCP option set with a specific domain name and domain name servers. In our example, we execute commands to create a DHCP options set and associate it with our VPC. We specify domain name testdomain.com and DNS servers (10.2.5.1 and 10.2.5.2) as our DHCP options. $ aws ec2 create-dhcp-options --dhcp-configuration Key=domain-name,Values=testdomain.com Key=domain-name-servers,Values=10.2.5.1,10.2.5.2 Associate the DHCP option and set your VPC (vpc-bb936ede). $ aws ec2 associate-dhcp-options --dhcp-options-id dopt-dc7d65be --vpc-id vpc-bb936ede How it works… DHCP provides a standard for passing configuration information to hosts in a network. The DHCP message contains an options field in which parameters such as the domain name and the domain name servers can be specified. By default, instances in AWS are assigned an unresolvable host name, hence we need to assign our own domain name and use our own DNS servers. The DHCP options sets are associated with the AWS account and can be used across our VPCs. First, we create a DHCP option set. 
In this step, we specify the DHCP configuration parameters as key value pairs where commas separate the values and multiple pairs are separated by spaces. In our example, we specify two domain name servers and a domain name. We can use up to four DNS servers. Next, we associate the DHCP option set with our VPC to ensure that all existing and new instances launched in our VPC will use this DHCP options set. Note that if you want to use a different set of DHCP options, then you will need to create a new set and again associate them with your VPC as modifications to a set of DHCP options is not allowed. In addition, you can let the instances pick up the changes automatically or explicitly renew the DHCP lease. However, in all cases, only one set of DHCP options can be associated with a VPC at any given time. As a practice, delete the DHCP options set when none of your VPCs are using it and you don't need it any longer. Configuring networking connections between two VPCs (VPC peering) In this recipe, we will configure VPC peering. VPC peering helps you connect instances in two different VPCs using their private IP addresses. VPC peering is limited to within a region. However, you can create VPC peering connection between VPCs that belong to different AWS accounts. The two VPCs that participate in VPC peering must not have matching or overlapping CIDR addresses. To create a VPC connection, the owner of the local VPC has to send the request to the owner of the peer VPC located in the same account or a different account. Once the owner of peer VPC accepts the request, the VPC peering connection is activated. You will need to update the routes in your route table to send traffic to the peer VPC and vice versa. You will also need to update your instance security groups to allow traffic from–to the peer VPC. How to Do It… Here, we present the commands to creating a VPC peering connection, accepting a peering request, and adding the appropriate route in your routing table. Create a VPC peering connection between two VPCs with IDs vpc-9c19a3f4 and vpc-0214e967. Record VpcPeeringConnectionId for further use $ aws ec2 create-vpc-peering-connection --vpc-id vpc-9c19a3f4 --peer-vpc-id vpc-0214e967 Accept VPC peering connection. Here, we will accept the VPC peering connection request with ID pcx-cf6aa4a6. $ aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-cf6aa4a6 Add a route in the route table for the VPC peering connection. The following command create route with destination CIDR (172.31.16.0/20) and VPC peer connection ID (pcx-0e6ba567) in route table rtb-7f1bda1a. $ aws ec2 create-route --route-table-id rtb-7f1bda1a --destination-cidr-block 172.31.16.0/20 --vpc-peering-connection-id pcx-0e6ba567 How it works… First, we request a VPC peering connection between two VPCs: a requester VPC that we own (i.e., vpc-9c19a3f4) and a peer VPC with that we want to create a connection (vpc-0214e967). Note that the peering connection request expires after 7 days. In order tot activate the VPC peering connection, the owner of the peer VPC must accept the request. In our recipe, as the owner of the peer VPC, we accept the VPC peering connection request. However, note that the owner of the peer VPC may be a person other than you. You can use the describe-vpc-peering-connections to view your outstanding peering connection requests. The VPC peering connection should be in the pending-acceptance state for you to accept the request. 
After creating the VPC peering connection, we created a route in our local VPC subnet's route table to direct traffic to the peer VPC. You can also create peering connections between two or more VPCs to provide full access to resources or peer one VPC to access centralized resources. In addition, peering can be implemented between a VPC and specific subnets or instances in one VPC with instances in another VPC. Refer to Amazon VPC documentation to set up the most appropriate peering connections for your specific requirements. Summary In this article, you learned configuring VPC DHCP options as well as configuring networking connections between two VPCs. The book Amazon EC2 Cookbook will cover recipes that relate to designing, developing, and deploying scalable, highly available, and secure applications on the AWS platform. By following the steps in our recipes, you will be able to effectively and systematically resolve issues related to development, deployment, and infrastructure for enterprise-grade cloud applications or products. Resources for Article: Further resources on this subject: Hands-on Tutorial for Getting Started with Amazon SimpleDB [article] Amazon SimpleDB versus RDBMS [article] Amazon DynamoDB - Modelling relationships, Error handling [article]
Identifying the Best Places

Packt
16 Sep 2015
9 min read
In this article by Ben Mearns, author of the book QGIS Blueprints, we will take a look at how the raster data can be analyzed, enhanced, and used for map production. Specifically, you will learn to produce a grid of the suitable locations based on the criteria values in other grids using raster analysis and map algebra. Then, using the grid, we will produce a simple click-based map. The end result will be a site suitability web application with click-based discovery capabilities. We'll be looking at the suitability for the farmland preservation selection. In this article, we will cover the following topics: Vector data ETL for raster analysis Batch processing Leaflet map application publication with QGIS2Leaf (For more resources related to this topic, see here.) Vector data Extract, Transform, and Load Our suitability analysis uses map algebra and criteria grids to give us a single value for the suitability for some activity in every place. This requires that the data be expressed in the raster (grid) format. So, let's perform the other necessary ETL steps and then convert our vector data to raster. We will perform the following actions: Ensure that our data has identical spatial reference systems. For example, we may be using a layer of the roads maintained by the state department of transportation and a layer of land use maintained by the department of natural resources. These layers must have identical spatial reference systems or be transformed to have identical systems. Extract geographic objects according to their classes as defined in some attribute table field if we want to operate on them while they're still in the vector form. If no further analysis is necessary, convert to raster. Loading data and establishing the CRS conformity It is important for the layers in this project to be transformed or projected into the same geographic or projected coordinate system. This is necessary for an accurate analysis and for publication to the web formats. Perform the following steps for this: Disable 'on the fly' projection if it is turned on. Otherwise, 'on the fly' will automatically project your data again to display it with the layers that are already in the Canvas. Navigate to Setting | Options and perform the settings shown in the following screenshot: Add the project layers: Navigate to Layer | Add Layer | Vector Layer. Add the following layers from within c2/data. ApplicantsCountyEasementsLanduseRoads You can select multiple layers to add by pressing Shift and clicking on the contiguous files or pressing Ctrl and clicking on the noncontiguous files. Import the Digital Elevation Model from c2/data/dem/dem.tif. Navigate to Layer | Add Layer | Raster Layer. From the dem directory, select dem.tif and then click on Open. Even though the layers are in a different CRS, QGIS does not warn us in this case. You must discover the issue by checking each layer individually. Check the CRS of the county layer and one other layer: Highlight the county layer in the Layers panel. Navigate to Layer | Properties. The CRS is displayed under the General tab in the Coordinate reference system section: Note that the county layer is in EPSG: 26957, while the others are in EPSG: 2776. We will transform the county layer from EPSG:26957 to EPSG:2776. Navigate to Layer | Save As | Select CRS. We will save all the output from this article in c2/output. To prepare the layers for conversion to raster, we will add a new generic column to all the layers populated with the number 1. 
This will be translated to a Boolean type raster, where the presence of the object that the raster represents (for example, roads) is indicated by a cell of 1 and all others with a zero. Follow these steps for the applicants, easements, and roads: Navigate to Layer | Toggle Editing. Then, navigate to Layer | Open Attribute Table. Add a column with the button at the top of the Attribute table dialog. Use value as the name for the new column and the following data format options: Select the new column from the dropdown in the Attribute table and enter 1 into the value box: Click on Update All. Navigate to Layer | Toggle Editing. Finally, save. The extracting (filtering) features Let's suppose that our criteria includes only a subset of the features in our roads layer—major unlimited access roads (but not freeways), a subset of the features as determined by a classification code (CFCC). To temporarily extract this subset, we will do a layer query by performing the following steps: Filter the major roads from the roads layer. Highlight the roads layer. Navigate to Layer | Query. Double-click on CFCC to add it to the expression. Click on the = operator to add to the expression Under the Values section, click on All to view all the unique values in the CFCC field. Double-click on A21 to add this to the expression. Do this for all the codes less than A36. Include A63 for highway on-ramps. You selection code will look similar to this: "CFCC" = 'A21' OR "CFCC" = 'A25' OR "CFCC" = 'A31' OR "CFCC" = 'A35' OR "CFCC" = 'A63' Click on OK, as shown in the following screenshot: Create a new c2/output directory. Save the roads layer as a new layer with only the selected features (major_roads) in this directory. To clear a layer filter, return to the query dialog on the applied layer (highlight it in the Layers pane; navigate to Layer | Query and click on Clear). Repeat these steps for the developed (LULC1 = 1) and agriculture (LULC1 = 2) landuses (separately) from the landuse layer. Converting to raster In this section, we will convert all the needed vector layers to raster. We will be doing this in batch, which will allow us to repeat the same operation many times over multiple layers. Doing more at once—working in batch The QGIS Processing Framework provides capabilities to run the same operation many times on different data. This is called batch processing. A batch process is invoked from an operation's context menu in the Processing Toolbox. The batch dialog requires that the parameters for each layer be populated for every iteration. Convert the vector layers to raster. Navigate to Processing Toolbox. Select Advanced Interface from the dropdown at the bottom of Processing Toolbox (if it is not selected, it will show as Simple Interface). Type rasterize to search for the Rasterize tool. Right-click on the Rasterize tool and select Execute as batch process: Fill in the Batch Processing dialog, making sure to specify the parameters as follows: Parameter Value Input layer (For example, roads) Attribute field value Output raster size Output resolution in map units per pixel Horizontal 30 Vertical 30 Raster type Int16 Output layer (For example, roads) The following images show how this will look in QGIS: Scroll to the right to complete the entry of parameter values.   Organize the new layers (optional step).    Batch sometimes gives unfriendly names based on some bug in the dialog box.    Change the layer names by doing the following for each layer created by batch:    Highlight the layer.    
Navigate to Layer | Properties.    Change the layer name to the name of the vector layer from which this was created (for example, applicants). You should be able to find a hint for this value in the layer properties in the layer source (name of the .tif file).    Group the layers.    Press Shift + click on all the layers created by batch and the previous roads raster.    Navigate to Right click | Group selected. Publishing the results as a web application Now that we have completed our modeling for the site selection of a farmland for conservation, let's take steps to publish this for the Web. QGIS2leaf QGIS2leaf allows us to export our QGIS map to web map formats (JavaScript, HTML, and CSS) using the Leaflet map API. Leaflet is a very lightweight, extensible, and responsive (and trendy) web mapping interface. QGIS2Leaf converts all our vector layers to GeoJSON, which is the most common textual way to express the geographic JavaScript objects. As our operational layer is in GeoJSON, Leaflet's click interaction is supported, and we can access the information in the layers by clicking. It is a fully editable HTML and JavaScript file. You can customize and upload it to an accessible web location. QGIS2leaf is very simple to use as long as the layers are prepared properly (for example, with respect to CRS) up to this point. It is also very powerful in creating a good starting application including GeoJSON, HTML, and JavaScript for our Leaflet web map. Make sure to install the QGIS2Leaf plugin if you haven't already. Navigate to Web | QGIS2leaf | Exports a QGIS Project to a working Leaflet webmap. Click on the Get Layers button to add the currently displayed layers to the set that QGIS2leaf will export. Choose a basemap and enter the additional details if so desired. Select Encode to JSON. These steps will produce a map application similar to the following one. We'll take a look at how to restore the labels: Summary In this article, using the site selection example, we covered basic vector data ETL, raster analysis, and web map creation. We started with vector data, and after unifying CRS, we prepared the attribute tables. We then filtered and converted it to raster grids using batch processing. Finally, we published the prepared vector output with QGIS2Leaf as a simple Leaflet web map application with a strong foundation for extension. Resources for Article:   Further resources on this subject: Style Management in QGIS [article] Preparing to Build Your Own GIS Application [article] Geocoding Address-based Data [article]
Groovy Closures

Packt
16 Sep 2015
9 min read
In this article by Fergal Dearle, the author of the book Groovy for Domain-Specific Languages - Second Edition, we will focus exclusively on closures. We will take a close look at them from every angle. Closures are the single most important feature of the Groovy language. Closures are the special seasoning that helps Groovy stand out from Java. They are also the single most powerful feature that we will use when implementing DSLs. In the article, we will discuss the following topics: We will start by explaining just what a closure is and how we can define some simple closures in our Groovy code We will look at how many of the built-in collection methods make use of closures for applying iteration logic, and see how this is implemented by passing a closure as a method parameter We will look at the various mechanisms for calling closures A handy reference that you might want to consider having at hand while you read this article is GDK Javadocs, which will give you full class descriptions of all of the Groovy built-in classes, but of particular interest here is groovy.lang.Closure. (For more resources related to this topic, see here.) What is a closure Closures are such an unfamiliar concept to begin with that it can be hard to grasp initially. Closures have characteristics that make them look like a method in so far as we can pass parameters to them and they can return a value. However, unlike methods, closures are anonymous. A closure is just a snippet of code that can be assigned to a variable and executed later: def flintstones = ["Fred","Barney"] def greeter = { println "Hello, ${it}" } flintstones.each( greeter ) greeter "Wilma" greeter = { } flintstones.each( greeter ) greeter "Wilma" Because closures are anonymous, they can easily be lost or overwritten. In the preceding example, we defined a variable greeter to contain a closure that prints a greeting. After greeter is overwritten with an empty closure, any reference to the original closure is lost. It's important to remember that greeter is not the closure. It is a variable that contains a closure, so it can be supplanted at any time. Because greeter has a dynamic type, we could have assigned any other object to it. All closures are a subclass of the type groovy.lang.Closure. Because groovy.lang is automatically imported, we can refer to Closure as a type within our code. By declaring our closures explicitly as Closure, we cannot accidentally assign a non-closure to them: Closure greeter = { println it } For each closure that is declared in our code, Groovy generates a Closure class for us, which is a subclass of groovy.lang.Closure. Our closure object is an instance of this class. Although we cannot predict what exact type of closure is generated, we can rely on it being a subtype of groovy.lang.Closure. Closures and collection methods We will encounter Groovy lists and see some of the iteration functions, such as the each method: def flintstones = ["Fred","Barney"] flintstones.each { println "Hello, ${it}" } This looks like it could be a specialized control loop similar to a while loop. In fact, it is a call to the each method of Object. The each method takes a closure as one of its parameters, and everything between the curly braces {} defines another anonymous closure. Closures defined in this way can look quite similar to code blocks, but they are not the same. Code defined in a regular Java or Groovy style code block is executed as soon as it is encountered. 
With closures, the block of code defined in the curly braces is not executed until the call() method of the closure is made: println "one" def two = { println "two" } println "three" two.call() println "four" Will print the following: one three two four Let's dig a bit deeper into the structure of the each of the calls shown in the preceding code. I refer to each as a call because that's what it is—a method call. Groovy augments the standard JDK with numerous helper methods. This new and improved JDK is referred to as the Groovy JDK, or GDK for short. In the GDK, Groovy adds the each method to the java.lang.Object class. The signature of the each method is as follows: Object each(Closure closure) The java.lang.Object class has a number of similar methods such as each, find, every, any, and so on. Because these methods are defined as part of Object, you can call them on any Groovy or Java object. They make little sense on most objects, but they do something sensible if not very useful: given: "an Integer" def number = 1 when: "we call the each method on it" number.each { println it } then: "just the object itself gets passed into the Closure" "1" == output() These methods all have specific implementations for all of the collection types, including arrays, lists, ranges, and maps. So, what is actually happening when we see the call to flintstones.each is that we are calling the list's implementation of the each method. Because each takes a Closure as its last and only parameter, the following code block is interpreted by Groovy as an anonymous Closure object to be passed to the method. The actual call to the closure passed to each is deferred until the body of the each method itself is called. The closure may be called multiple times—once for every element in the collection. Closures as method parameters We already know that parentheses around method parameters are optional, so the previous call to each can also be considered equivalent to: flintstones.each ({ println "Hello, ${it}") Groovy has a special handling for methods whose last parameter is a closure. When invoking these methods, the closure can be defined anonymously after the method call parenthesis. So, yet another legitimate way to call the preceding line is: flintstones.each() { println "hello, ${it}" } The general convention is not to use parentheses unless there are parameters in addition to the closure: given: def flintstones = ["Fred", "Barney", "Wilma"] when: "we call findIndexOf passing int and a Closure" def result = flintstones.findIndexOf(0) { it == 'Wilma'} then: result == 2 The signature of the GDK findIndexOf method is: int findIndexOf(int, Closure) We can define our own methods that accept closures as parameters. The simplest case is a method that accepts only a single closure as a parameter: def closureMethod(Closure c) { c.call() } when: "we invoke a method that accepts a closure" closureMethod { println "Closure called" } then: "the Closure passed in was executed" "Closure called" == output() Method parameters as DSL This is an extremely useful construct when we want to wrap a closure in some other code. Suppose we have some locking and unlocking that needs to occur around the execution of a closure. 
Rather than requiring the writer of the code to call a locking API directly, we can implement the locking within a locked method that accepts the closure:

def locked(Closure c) {
    callToLockingMethod()
    c.call()
    callToUnLockingMethod()
}

The effect of this is that whenever we need to execute a locked segment of code, we simply wrap the segment in a locked closure block, as follows:

locked {
    println "Closure called"
}

In a small way, we are already writing a mini DSL when we use these types of constructs. This call to the locked method looks, to all intents and purposes, like a new language construct, that is, a block of code defining the scope of a locking operation. When writing methods that take other parameters in addition to a closure, we generally leave the Closure argument until last. As already mentioned in the previous section, Groovy has special syntax handling for these methods, and allows the closure to be defined as a block after the parameter list when calling the method:

def closureMethodInteger(Integer i, Closure c) {
    println "Line $i"
    c.call()
}

when: "we invoke a method that accepts an Integer and a Closure"
closureMethodInteger(1) {
    println "Line 2"
}

then: "the Closure passed in was executed with the parameter"
"""Line 1
Line 2""" == output()

Forwarding parameters

Parameters passed to the method may have no impact on the closure itself, or they may be passed to the closure as a parameter. Methods can accept multiple parameters in addition to the closure. Some may be passed to the closure, while others may not:

def closureMethodString(String s, Closure c) {
    println "Greet someone"
    c.call(s)
}

when: "we invoke a method that accepts a String and a Closure"
closureMethodString("Dolly") { name ->
    println "Hello, $name"
}

then: "the Closure passed in was executed with the parameter"
"""Greet someone
Hello, Dolly""" == output()

This construct can be used in circumstances where we have look-up code that needs to be executed before we have access to an object. Say we have customer records that need to be retrieved from a database before we can use them:

def withCustomer (id, Closure c) {
    def cust = getCustomerRecord(id)
    c.call(cust)
}

withCustomer(12345) { customer ->
    println "Found customer ${customer.name}"
}

We can write an update method that reports the state of the customer record before and after the closure is invoked, and a locked method that simulates wrapping the whole operation in a transaction, as follows:

class Customer {
    String name
}

def locked (Closure c) {
    println "Transaction lock"
    c.call()
    println "Transaction release"
}

def update (customer, Closure c) {
    println "Customer name was ${customer.name}"
    c.call(customer)
    println "Customer name is now ${customer.name}"
}

def customer = new Customer(name: "Fred")

At this point, we can write code that nests the two method calls by calling update as follows:

locked {
    update(customer) { cust ->
        cust.name = "Barney"
    }
}

This outputs the following result, showing how the closure that changes the customer's name is wrapped by update, which reports the name before and after the change, while the whole operation is wrapped by locked, which encloses everything within a simulated transaction:

Transaction lock
Customer name was Fred
Customer name is now Barney
Transaction release

Summary

In this article, we covered closures in some depth. We explored the various ways to call a closure and the means of passing parameters. We saw how we can pass closures as parameters to methods, and how this construct can allow us to appear to add mini DSL syntax to our code.
Closures are the real "power" feature of Groovy, and they form the basis of most of the DSLs that we will implement. Resources for Article: Further resources on this subject: Using Groovy Closures Instead of Template Method [article] Metaprogramming and the Groovy MOP [article] Clojure for Domain-specific Languages - Design Concepts with Clojure [article]
Slideshow Presentations

 In this article by David Mitchell, author of the book Dart By Example you will be introduced to the basics of how to build a presentation application using Dart. It usually takes me more than three weeks to prepare a good impromptu speech. Mark Twain Presentations make some people shudder with fear, yet they are an undeniably useful tool for information sharing when used properly. The content has to be great and some visual flourish can make it stand out from the crowd. Too many slides can make the most receptive audience yawn, so focusing the presenter on the content and automatically taking care of the visuals (saving the creator from fiddling with different animations and fonts sizes!) can help improve presentations. Compelling content still requires the human touch. (For more resources related to this topic, see here.) Building a presentation application Web browsers are already a type of multimedia presentation application so it is feasible to write a quality presentation program as we explore more of the Dart language. Hopefully it will help us pitch another Dart application to our next customer. Building on our first application, we will use a text based editor for creating the presentation content. I was very surprised how much faster a text based editor is for producing a presentation, and more enjoyable. I hope you experience such a productivity boost! Laying out the application The application will have two modes, editing and presentation. In the editing mode, the screen will be split into two panes. The top pane will display the slides and the lower will contain the editor, and other interface elements. This article will focus on the core creation side of the presentation. The application will be a single Dart project. Defining the presentation format The presentations will be written in a tiny subset of the Markdown format which is a powerful yet simple to read text file based format (much easier to read, type and understand than HTML). In 2004, John Gruber and the late Aaron Swartz created the Markdown language in 2004 with the goal of enabling people to write using an easy-to-read, easy-to-write plain text format. It is used on major websites, such as GitHub.com and StackOverflow.com. Being plain text, Markdown files can be kept and compared in version control. For more detail and background on Markdown see https://en.wikipedia.org/wiki/Markdown A simple titled slide with bullet points would be defined as: #Dart Language +Created By Google +Modern language with a familiar syntax +Structured Web Applications +It is Awesomely productive! I am positive you only had to read that once! This will translate into the following HTML. <h1>Dart Language</h1> <li>Created By Google</li>s <li>Modern language with a familiar syntax</li> <li>Structured Web Applications</li> <li>It is Awesomely productive!</li> Markdown is very easy and fast to parse, which probably explains its growing popularity on the web. It can be transformed into many other formats. Parsing the presentation The content of the TextAreaHtml element is split into a list of individual lines, and processed in a similar manner to some of the features in the Text Editor application using forEach to iterate over the list. Any lines that are blank once any whitespace has been removed via the trim method are ignored. #A New Slide Title +The first bullet point +The second bullet point #The Second Slide Title +More bullet points !http://localhost/img/logo.png #Final Slide +Any questions? 
For each line starting with a # symbol, a new Slide object is created. For each line starting with a + symbol, they are added to this slides bullet point list. For each line is discovered using a ! symbol the slide's image is set (a limit of one per slide). This continues until the end of the presentation source is reached. A sample presentation To get a new user going quickly, there will be an example presentation which can be used as a demonstration and testing the various areas of the application. I chose the last topic that came up round the family dinner table—the coconut! #Coconut +Member of Arecaceae family. +A drupe - not a nut. +Part of daily diets. #Tree +Fibrous root system. +Mostly surface level. +A few deep roots for stability. #Yield +75 fruits on fertile land +30 typically +Fibre has traditional uses #Finally !coconut.png #Any Questions? Presenter project structures The project is a standard Dart web application with index.html as the entry point. The application is kicked off by main.dart which is linked to in index.html, and the application functionality is stored in the lib folder. Source File Description sampleshows.dart    The text for the slideshow application.  lifecyclemixin.dart  The class for the mixin.  slideshow.dart  Data structures for storing the presentation.  slideshowapp.dart  The application object. Launching the application The main function has a very short implementation. void main() { new SlideShowApp(); } Note that the new class instance does not need to be stored in a variable and that the object does not disappear after that line is executed. As we will see later, the object will attach itself to events and streams, keeping the object alive for the lifetime that the page is loaded. Building bullet point slides The presentation is build up using two classes—Slide and SlideShow. The Slide object creates the DivElement used to display the content and the SlideShow contains a list of Slide objects. The SlideShow object is updated as the text source is updated. It also keeps track of which slide is currently being displayed in the preview pane. Once the number of Dart files grows in a project, the DartAnalyzer will recommend naming the library. It is good habit to name every .dart file in a regular project with its own library name. The slideshow.dart file has the keyword library and a name next to it. In Dart, every file is a library, whether it is explicitly declared or not. If you are looking at Dart code online you may stumble across projects with imports that look a bit strange. #import("dart:html"); This is the old syntax for Dart's import mechanism. If you see this it is a sign that other aspects of the code may be out of date too. If you are writing an application in a single project, source files can be arranged in a folder structure appropriate for the project, though keeping the relatives paths manageable is advisable. Creating too many folders is probably means it is time to create a package! Accessing private fields In Dart, as discussed when we covered packages, the privacy is at the library level but it is still possible to have private fields in a class even though Dart does not have the keywords public, protected, and private. A simple return of a private field's value can be performed with a one line function. String getFirstName() => _name; To retrieve this value, a function call is required, for example, Person.getFirstName() however it may be preferred to have a property syntax such as Person.firstName. 
Having private fields and retaining the property syntax in this manner, is possible using the get and set keywords. Using true getters and setters The syntax of Dart also supports get and set via keywords: int get score =>score + bonus; set score(int increase) =>score += increase * level; Using either get/set or simple fields is down to preference. It is perfectly possible to start with simple fields and scale up to getters and setters if more validation or processing is required. The advantage of the get and set keywords in a library, is the intended interface for consumers of the package is very clear. Further it clarifies which methods may change the state of the object and which merely report current values. Mixin it up In object oriented languages, it is useful to build on one class to create a more specialized related class. For example, in the text editor the base dialog class was extended to create alert and confirm pop ups. What if we want to share some functionality but do not want inheritance occurring between the classes? Aggregation can solve this problem to some extent: class A{ classb usefulObject; } The downside is that this requires a longer reference to use: new A().usefulObject.handyMethod(); This problem has been solved in Dart (and other languages) by a mixin class to do this job, allowing the sharing of functionality without forced inheritance or clunky aggregation. In Dart, a mixin must meet the requirements: No constructors in the class declaration. The base class of the mixin must be Object. No calls to a super class are made. mixins are really just classes that are malleable enough to fit into the class hierarchy at any point. A use case for a mixin may be serialization fields and methods that may be required on several classes in an application that are not part of any inheritance chain. abstract class Serialisation { void save() { //Implementation here. } void load(String filename) { //Implementation here. } } The with keyword is used to declare that a class is using a mixin. class ImageRecord extends Record with Serialisation If the class does not have an explicit base class, it is required to specify Object. class StorageReports extends Object with Serialization In Dart, everything is an object, even basic types such as num are objects and not primitive types. The classes int and double are subtypes of num. This is important to know, as other languages have different behaviors. Let's consider a real example of this. main() { int i; print("$i"); } In a language such as Java the expected output would be 0 however the output in Dart is null. If a value is expected from a variable, it is always good practice to initialize it! For the classes Slide and SlideShow, we will use a mixin from the source file lifecyclemixin.dart to record a creation and an editing timestamp. abstract class LifecycleTracker { DateTime _created; DateTime _edited; recordCreateTimestamp() => _created = new DateTime.now(); updateEditTimestamp() => _edited = new DateTime.now(); DateTime get created => _created; DateTime get lastEdited => _edited; } To use the mixin, the recordCreateTimestamp method can be called from the constructor and the updateEditTimestamp from the main edit method. For slides, it makes sense just to record the creation. For the SlideShow class, both the creation and update will be tracked. Defining the core classes The SlideShow class is largely a container objects for a list of Slide objects and uses the mixin LifecycleTracker. 
class SlideShow extends Object with LifecycleTracker { List<Slide> _slides; List<Slide> get slides => _slides; ... The Slide class stores the string for the title and a list of strings for the bullet points. The URL for any image is also stored as a string: class Slide extends Object with LifecycleTracker { String titleText = ""; List<String> bulletPoints; String imageUrl = ""; ... A simple constructor takes the titleText as a parameter and initializes the bulletPoints list. If you want to focus on just-the-code when in WebStorm , double-click on filename title of the tab to expand the source code to the entire window. Double-click again to return to the original layout. For even more focus on the code, go to the View menu and click on Enter Distraction Free Mode. Transforming data into HTML To add the Slide object instance into a HTML document, the strings need to be converted into instances of HTML elements to be added to the DOM (Document Object Model). The getSlideContents() method constructs and returns the entire slide as a single object. DivElement getSlideContents() { DivElement slide = new DivElement(); DivElement title = new DivElement(); DivElement bullets = new DivElement(); title.appendHtml("<h1>$titleText</h1>"); slide.append(title); if (imageUrl.length > 0) { slide.appendHtml("<img src="$imageUrl" /><br/>"); } bulletPoints.forEach((bp) { if (bp.trim().length > 0) { bullets.appendHtml("<li>$bp</li>"); } }); slide.append(bullets); return slide; } The Div elements are constructed as objects (instances of DivElement), while the content is added as literal HTML statements. The method appendHtml is used for this particular task as it renders HTML tags in the text. The regular method appendText puts the entire literal text string (including plain unformatted text of the HTML tags) into the element. So what exactly is the difference? The method appendHtml evaluates the supplied ,HTML, and adds the resultant object node to the nodes of the parent element which is rendered in the browser as usual. The method appendText is useful, for example, to prevent user supplied content affecting the format of the page and preventing malicious code being injected into a web page. Editing the presentation When the source is updated the presentation is updated via the onKeyUp event. This was used in the text editor project to trigger a save to local storage. This is carried out in the build method of the SlideShow class, and follows the pattern we discussed parsing the presentation. build(String src) { updateEditTimestamp(); _slides = new List<Slide>(); Slide nextSlide; src.split("n").forEach((String line) { if (line.trim().length > 0) { // Title - also marks start of the next slide. if (line.startsWith("#")) { nextSlide = new Slide(line.substring(1)); _slides.add(nextSlide); } if (nextSlide != null) { if (line.startsWith("+")) { nextSlide.bulletPoints.add(line.substring(1)); } else if (line.startsWith("!")) { nextSlide.imageUrl = line.substring(1); } } } }); } As an alternative to the startsWith method, the square bracket [] operator could be used for line [0] to retrieve the first character. The startsWith can also take a regular expression or a string to match and a starting index, refer to the dart:core documentation for more information. For the purposes of parsing the presentation, the startsWith method is more readable. Displaying the current slide The slide is displayed via the showSlide method in slideShowApp.dart. 
To preview the current slide, the current index, stored in the field currentSlideIndex, is used to retrieve the desired slide object and the Div rendering method called. showSlide(int slideNumber) { if (currentSlideShow.slides.length == 0) return; slideScreen.style.visibility = "hidden"; slideScreen ..nodes.clear() ..nodes.add(currentSlideShow.slides[slideNumber].getSlideContents ()); rangeSlidePos.value = slideNumber.toString(); slideScreen.style.visibility = "visible"; } The slideScreen is a DivElement which is then updated off screen by setting the visibility style property to hidden The existing content of the DivElement is emptied out by calling nodes.clear() and the slide content is added with nodes.add. The range slider position is set and finally the DivElement is set to visible again. Navigating the presentation A button set with familiar first, previous, next and last slide allow the user to jump around the preview of the presentation. This is carried out by having an index into the list of slides stored in the field slide in the SlideShowApp class. Handling the button key presses The navigation buttons require being set up in an identical pattern in the constructor of the SlideShowApp object. First get an object reference using id, which is the id attribute of the element, and then attaching a handler to the click event. Rather than repeat this code, a simple function can handle the process. setButton(String id, Function clickHandler) { ButtonInputElement btn = querySelector(id); btn.onClick.listen(clickHandler); } As function is a type in Dart, functions can be passed around easily as a parameter. Let us take a look at the button that takes us to the first slide. setButton("#btnFirst", startSlideShow); void startSlideShow(MouseEvent event) { showFirstSlide(); } void showFirstSlide() { showSlide(0); } The event handlers do not directly change the slide, these are carried out by other methods, which may be triggered by other inputs such as the keyboard. Using the function type The SlideShowApp constructor makes use of this feature. Function qs = querySelector; var controls = qs("#controls"); I find the querySelector method a little long to type (though it is a good descriptive of what it does). With Function being types, we can easily create a shorthand version. The constructor spends much of its time selecting and assigning the HTML elements to member fields of the class. One of the advantages of this approach is that the DOM of the page is queried only once, and the reference stored and reused. This is good for performance of the application as, once the application is running, querying the DOM may take much longer. Staying within the bounds Using min and max function from the dart:math package, the index can be kept in range of the current list. void showLastSlide() { currentSlideIndex = max(0, currentSlideShow.slides.length - 1); showSlide(currentSlideIndex); } void showNextSlide() { currentSlideIndex = min(currentSlideShow.slides.length - 1, ++currentSlideIndex); showSlide(currentSlideIndex); } These convenience functions can save a great deal if and else if comparisons and help make code a good degree more readable. Using the slider control The slider control is another new control in the HTML5 standard. This will allow the user to scroll though the slides in the presentation. This control is a personal favorite of mine, as it is so visual and can be used to give very interactive feedback to the user. 
It seemed to be a huge omission from the original form controls in the early generation of web browsers. Even with clear widely accepted features, HTML specifications can take a long time to clear committees and make it into everyday browsers! <input type="range" id="rngSlides" value="0"/> The control has an onChange event which is given a listener in the SlideShowApp constructor. rangeSlidepos.onChange.listen(moveToSlide);rangeSlidepos.onChange .listen(moveToSlide); The control provides its data via a simple string value, which can be converted to an integer via the int.parse method to be used as an index to the presentation's slide list. void moveToSlide(Event event) { currentSlideIndex = int.parse(rangeSlidePos.value); showSlide(currentSlideIndex); } The slider control must be kept in synchronization with any other change in slide display, use of navigation or change in number of slides. For example, the user may use the slider to reach the general area of the presentation, and then adjust with the previous and next buttons. void updateRangeControl() { rangeSlidepos ..min = "0" ..max = (currentSlideShow.slides.length - 1).toString(); } This method is called when the number of slides is changed, and as with working with most HTML elements, the values to be set need converted to strings. Responding to keyboard events Using the keyboard, particularly the arrow (cursor) keys, is a natural way to look through the slides in a presentation even in the preview mode. This is carried out in the SlideShowApp constructor. In Dart web applications, the dart:html package allows direct access to the globalwindow object from any class or function. The Textarea used to input the presentation source will also respond to the arrow keys so there will need to be a check to see if it is currently being used. The property activeElement on the document will give a reference to the control with focus. This reference can be compared to the Textarea, which is stored in the presEditor field, so a decision can be taken on whether to act on the keypress or not. Key Event Code Action Left Arrow  37  Go back a slide. Up Arrow  38  Go to first slide.   Right Arrow  39  Go to next slide.  Down Arrow  40  Go to last slide. Keyboard events, like other events, can be listened to by using a stream event listener. The listener function is an anonymous function (the definition omits a name) that takes the KeyboardEvent as its only parameter. window.onKeyUp.listen((KeyboardEvent e) { if (presEditor != document.activeElement){ if (e.keyCode == 39) showNextSlide(); else if (e.keyCode == 37) showPrevSlide(); else if (e.keyCode == 38) showFirstSlide(); else if (e.keyCode == 40) showLastSlide(); } }); It is a reasonable question to ask how to get the keyboard key codes required to write the switching code. One good tool is the W3C's Key and Character Codes page at http://www.w3.org/2002/09/tests/keys.html, to help with this but it can often be faster to write the handler and print out the event that is passed in! Showing the key help Rather than testing the user's memory, there will be a handy reference to the keyboard shortcuts. This is a simple Div element which is shown and then hidden when the key (remember to press Shift too!) is pressed again by toggling the visibility style from visible to hidden. Listening twice to event streams The event system in Dart is implemented as a stream. One of the advantages of this is that an event can easily have more than one entity listening to the class. 
This is useful, for example in a web application where some keyboard presses are valid in one context but not in another. The listen method is an add operation (accumulative) so the key press for help can be implemented separately. This allows a modular approach which helps reuse as the handlers can be specialized and added as required. window.onKeyUp.listen((KeyboardEvent e) { print(e); //Check the editor does not have focus. if (presEditor != document.activeElement) { DivElement helpBox = qs("#helpKeyboardShortcuts"); if (e.keyCode == 191) { if (helpBox.style.visibility == "visible") { helpBox.style.visibility = "hidden"; } else { helpBox.style.visibility = "visible"; } } } }); In, for example, a game, a common set of event handling may apply to title and introduction screen and the actual in game screen contains additional event handling as a superset. This could be implemented by adding and removing handlers to the relevant event stream. Changing the colors HTML5 provides browsers with full featured color picker (typically browsers use the native OS's color chooser). This will be used to allow the user to set the background color of the editor application itself. The color picker is added to the index.html page with the following HTML: <input id="pckBackColor" type="color"> The implementation is straightforward as the color picker control provides: InputElement cp = qs("#pckBackColor"); cp.onChange.listen( (e) => document.body.style.backgroundColor = cp.value); As the event and property (onChange and value) are common to the input controls the basic InputElement class can be used. Adding a date Most presentations are usually dated, or at least some of the jokes are! We will add a convenient button for the user to add a date to the presentation using the HTML5 input type date which provides a graphical date picker. <input type="date" id="selDate" value="2000-01-01"/> The default value is set in the index.html page as follows: The valueAsDate property of the DateInputElement class provides the Date object which can be added to the text area: void insertDate(Event event) { DateInputElement datePicker = querySelector("#selDate"); if (datePicker.valueAsDate != null) presEditor.value = presEditor.value + datePicker.valueAsDate.toLocal().toString(); } In this case, the toLocal method is used to obtain a string formatted to the month, day, year format. Timing the presentation The presenter will want to keep to their allotted time slot. We will include a timer in the editor to aid in rehearsal. Introducing the stopwatch class The Stopwatch class (from dart:core) provides much of the functionality needed for this feature, as shown in this small command line application: main() { Stopwatch sw = new Stopwatch(); sw.start(); print(sw.elapsed); sw.stop(); print(sw.elapsed); } The elapsed property can be checked at any time to give the current duration. This is very useful class, for example, it can be used to compare different functions to see which is the fastest. Implementing the presentation timer The clock will be stopped and started with a single button handled by the toggleTimer method. A recurring timer will update the duration text on the screen as follows: If the timer is running, the update Timer and the Stopwatch in field slidesTime is stopped. 
No update to the display is required as the user will need to see the final time: void toggleTimer(Event event) { if (slidesTime.isRunning) { slidesTime.stop(); updateTimer.cancel(); } else { updateTimer = new Timer.periodic(new Duration(seconds: 1), (timer) { String seconds = (slidesTime.elapsed.inSeconds % 60).toString(); seconds = seconds.padLeft(2, "0"); timerDisplay.text = "${slidesTime.elapsed.inMinutes}:$seconds"; }); slidesTime ..reset() ..start(); } } The Stopwatch class provides properties for retrieving the elapsed time in minutes and seconds. To format this to minutes and seconds, the seconds portion is determined with the modular division operator % and padded with the string function padLeft. Dart's string interpolation feature is used to build the final string, and as the elapsed and inMinutes properties are being accessed, the {} brackets are required so that the single value is returned. Overview of slides This provides the user with a visual overview of the slides as shown in the following screenshot: The presentation slides will be recreated in a new full screen Div element. This is styled using the fullScreen class in the CSS stylesheet in the SlideShowApp constructor: overviewScreen = new DivElement(); overviewScreen.classes.toggle("fullScreen"); overviewScreen.onClick.listen((e) => overviewScreen.remove()); The HTML for the slides will be identical. To shrink the slides, the list of slides is iterated over, the HTML element object obtained and the CSS class for the slide is set: currentSlideShow.slides.forEach((s) { aSlide = s.getSlideContents(); aSlide.classes.toggle("slideOverview"); aSlide.classes.toggle("shrink"); ... The CSS hover class is set to scale the slide when the mouse enters so a slide can be focused on for review. The classes are set with the toggle method which either adds if not present or removes if they are. The method has an optional parameter: aSlide.classes.toggle('className', condition); The second parameter is named shouldAdd is true if the class is always to be added and false if the class is always to be removed. Handout notes There is nothing like a tangible handout to give attendees to your presentation. This can be achieved with a variation of the overview display: Instead of duplicating the overview code, the function can be parameterized with an optional parameter in the method declaration. This is declared with square brackets [] around the declaration and a default value that is used if no parameter is specified. void buildOverview([bool addNotes = false]) This is called by the presentation overview display without requiring any parameters. buildOverview(); This is called by the handouts display without requiring any parameters. buildOverview(true); If this parameter is set, an additional Div element is added for the Notes area and the CSS is adjust for the benefit of the print layout. Comparing optional positional and named parameters The addNotes parameter is declared as an optional positional parameter, so an optional value can be specified without naming the parameter. The first parameter is matched to the supplied value. To give more flexibility, Dart allows optional parameters to be named. Consider two functions, the first will take named optional parameters and the second positional optional parameters. 
getRecords1(String query,{int limit: 25, int timeOut: 30}) { } getRecords2(String query,[int limit = 80, int timeOut = 99]) { } The first function can be called in more ways: getRecords1(""); getRecords1("", limit:50, timeOut:40); getRecords1("", timeOut:40, limit:65); getRecords1("", limit:50); getRecords1("", timeOut:40); getRecords2(""); getRecords2("", 90); getRecords2("", 90, 50); With named optional parameters, the order they are supplied is not important and has the advantage that the calling code is clearer as to the use that will be made of the parameters being passed. With positional optional parameters, we can omit the later parameters but it works in a strict left to right order so to set the timeOut parameter to a non-default value, limit must also be supplied. It is also easier to confuse which parameter is for which particular purpose. Summary The presentation editor is looking rather powerful with a range of advanced HTML controls moving far beyond text boxes to date pickers and color selectors. The preview and overview help the presenter visualize the entire presentation as they work, thanks to the strong class structure built using Dart mixins and data structures using generics. We have spent time looking at the object basis of Dart, how to pass parameters in different ways and, closer to the end user, how to handle keyboard input. This will assist in the creation of many different types of application and we have seen how optional parameters and true properties can help document code for ourselves and other developers. Hopefully you learned a little about coconuts too. The next step for this application is to improve the output with full screen display, animation and a little sound to capture the audiences' attention. The presentation editor could be improved as well—currently it is only in the English language. Dart's internationalization features can help with this. Resources for Article: Further resources on this subject: Practical Dart[article] Handling the DOM in Dart[article] Dart with JavaScript [article]
Performance by Design

In this article by Shantanu Kumar, author of the book, Clojure High Performance Programming - Second Edition, we learn how Clojure is a safe, functional programming language that brings great power and simplicity to the user. Clojure is also dynamically and strongly typed, and has very good performance characteristics. Naturally, every activity performed on a computer has an associated cost. What constitutes acceptable performance varies from one use-case and workload to another. In today's world, performance is even the determining factor for several kinds of applications. We will discuss Clojure (which runs on the JVM (Java Virtual Machine)), and its runtime environment in the light of performance, which is the goal of the book. In this article, we will study the basics of performance analysis, including the following: A whirlwind tour of how the application stack impacts performance Classifying the performance anticipations by the use cases types (For more resources related to this topic, see here.) Use case classification The performance requirements and priority vary across the different kinds of use cases. We need to determine what constitutes acceptable performance for the various kinds of use cases. Hence, we classify them to identify their performance model. When it comes to details, there is no sure shot performance recipe of any kind of use case, but it certainly helps to study their general nature. Note that in real life, the use cases listed in this section may overlap with each other. The user-facing software The performance of user facing applications is strongly linked to the user's anticipation. Having a difference of a good number of milliseconds may not be perceptible for the user but at the same time, a wait for more than a few seconds may not be taken kindly. One important element to normalize the anticipation is to engage the user by providing a duration-based feedback. A good idea to deal with such a scenario would be to start the task asynchronously in the background, and poll it from the UI layer to generate duration-based feedback for the user. Another way could be to incrementally render the results to the user to even out the anticipation. Anticipation is not the only factor in user facing performance. Common techniques like staging or precomputation of data, and other general optimization techniques can go a long way to improve the user experience with respect to performance. Bear in mind that all kinds of user facing interfaces fall into this use case category—the Web, mobile web, GUI, command line, touch, voice-operated, gesture...you name it. Computational and data-processing tasks Non-trivial compute intensive tasks demand a proportional amount of computational resources. All of the CPU, cache, memory, efficiency and the parallelizability of the computation algorithms would be involved in determining the performance. When the computation is combined with distribution over a network or reading from/staging to disk, I/O bound factors come into play. This class of workloads can be further subclassified into more specific use cases. A CPU bound computation A CPU bound computation is limited by the CPU cycles spent on executing it. Arithmetic processing in a loop, small matrix multiplication, determining whether a number is a Mersenne prime, and so on, would be considered CPU bound jobs. 
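Although the book's examples are in Clojure, the JVM cost model is the same for any language that compiles to bytecode, so a plain Java sketch (not taken from the book; the class and method names here are made up for illustration) shows what a purely CPU bound job looks like: it loops over arithmetic on a couple of local variables, touches almost no memory, and never waits on I/O.

// Illustrative only: a tight arithmetic loop limited almost entirely by CPU cycles.
public class CpuBoundDemo {

    // Naive trial-division primality test; hypothetical helper for this sketch.
    static boolean isPrime(long n) {
        if (n < 2) return false;
        for (long d = 2; d * d <= n; d++) {
            if (n % d == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        long candidate = (1L << 31) - 1;   // 2^31 - 1, a Mersenne prime
        long start = System.nanoTime();
        boolean prime = isPrime(candidate);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(candidate + " prime? " + prime + " (" + elapsedMs + " ms)");
    }
}

Timing such a loop mostly measures raw cycles per iteration, which is exactly the quantity that the discussion of algorithmic complexity below relates to the size of the input.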
If the algorithm complexity is linked to the number of iterations/operations N, such as O(N), O(N2) and more, then the performance depends on how big N is, and how many CPU cycles each step takes. For parallelizable algorithms, performance of such tasks may be enhanced by assigning multiple CPU cores to the task. On virtual hardware, the performance may be impacted if the CPU cycles are available in bursts. A memory bound task A memory bound task is limited by the availability and bandwidth of the memory. Examples include large text processing, list processing, and more. For example, specifically in Clojure, the (reduce f (pmap g coll)) operation would be memory bound if coll is a large sequence of big maps, even though we parallelize the operation using pmap here. Note that higher CPU resources cannot help when memory is the bottleneck, and vice versa. Lack of availability of memory may force you to process smaller chunks of data at a time, even if you have enough CPU resources at your disposal. If the maximum speed of your memory is X and your algorithm on single the core accesses the memory at speed X/3, the multicore performance of your algorithm cannot exceed three times the current performance, no matter how many CPU cores you assign to it. The memory architecture (for example, SMP and NUMA) contributes to the memory bandwidth in multicore computers. Performance with respect to memory is also subject to page faults. A cache bound task A task is cache bound when its speed is constrained by the amount of cache available. When a task retrieves values from a small number of repeated memory locations, for example, a small matrix multiplication, the values may be cached and fetched from there. Note that CPUs (typically) have multiple layers of cache, and the performance will be at its best when the processed data fits in the cache, but the processing will still happen, more slowly, when the data does not fit into the cache. It is possible to make the most of the cache using cache-oblivious algorithms. A higher number of concurrent cache/memory bound threads than CPU cores is likely to flush the instruction pipeline, as well as the cache at the time of context switch, likely leading to a severely degraded performance. An input/output bound task An input/output (I/O) bound task would go faster if the I/O subsystem, that it depends on, goes faster. Disk/storage and network are the most commonly used I/O subsystems in data processing, but it can be serial port, a USB-connected card reader, or any I/O device. An I/O bound task may consume very few CPU cycles. Depending on the speed of the device, connection pooling, data compression, asynchronous handling, application caching, and more, may help in performance. One notable aspect of I/O bound tasks is that performance is usually dependent on the time spent waiting for connection/seek, and the amount of serialization that we do, and hardly on the other resources. In practice, many data processing workloads are usually a combination of CPU bound, memory bound, cache bound, and I/O bound tasks. The performance of such mixed workloads effectively depends on the even distribution of CPU, cache, memory, and I/O resources over the duration of the operation. A bottleneck situation arises only when one resource gets too busy to make way for another. Online transaction processing The online transaction processing (OLTP) systems process the business transactions on demand. 
It can sit behind systems such as a user-facing ATM machine, point-of-sale terminal, a network-connected ticket counter, ERP systems, and more. The OLTP systems are characterized by low latency, availability, and data integrity. They run day-to-day business transactions. Any interruption or outage is likely to have a direct and immediate impact on the sales or service. Such systems are expected to be designed for resiliency rather than the delayed recovery from failures. When the performance objective is unspecified, you may like to consider graceful degradation as a strategy. It is a common mistake to ask the OLTP systems to answer analytical queries; something that they are not optimized for. It is desirable of an informed programmer to know the capability of the system, and suggest design changes as per the requirements. Online analytical processing The online analytical processing (OLAP) systems are designed to answer analytical queries in short time. They typically get data from the OLTP operations, and their data model is optimized for querying. They basically provide for consolidation (roll-up), drill-down and slicing, and dicing of data for analytical purposes. They often use specialized data stores that can optimize the ad-hoc analytical queries on the fly. It is important for such databases to provide pivot-table like capability. Often, the OLAP cube is used to get fast access to the analytical data. Feeding the OLTP data into the OLAP systems may entail workflows and multistage batch processing. The performance concern of such systems is to efficiently deal with large quantities of data, while also dealing with inevitable failures and recovery. Batch processing Batch processing is automated execution of predefined jobs. These are typically bulk jobs that are executed during off-peak hours. Batch processing may involve one or more stages of job processing. Often batch processing is clubbed with work-flow automation, where some workflow steps are executed offline. Many of the batch processing jobs work on staging of data, and on preparing data for the next stage of processing to pick up. Batch jobs are generally optimized for the best utilization of the computing resources. Since there is little to moderate the demand to lower the latencies of some particular subtasks, these systems tend to optimize for throughput. A lot of batch jobs involve largely I/O processing and are often distributed over a cluster. Due to distribution, the data locality is preferred when processing the jobs; that is, the data and processing should be local in order to avoid network latency in reading/writing data. Summary We learned about the basics of what it is like to think more deeply about performance. The performance of Clojure applications depend on various factors. For a given application, understanding its use cases, design and implementation, algorithms, resource requirements and alignment with the hardware, and the underlying software capabilities, is essential. Resources for Article: Further resources on this subject: Big Data [article] The Observer Pattern [article] Working with Incanter Datasets [article]
Java Hibernate Collections, Associations, and Advanced Concepts

In this article, Yogesh Prajapati and Vishal Ranapariya, the authors of the book Java Hibernate Cookbook, provide a complete guide to the following recipes:

Working with a first-level cache
One-to-one mapping using a common join table
Persisting Map

(For more resources related to this topic, see here.)

Working with a first-level cache

Once we execute a particular query using hibernate, it always hits the database. As this process may be very expensive, hibernate provides the facility to cache objects within a certain boundary. The basic actions performed in each database transaction are as follows:

The request reaches the database server via the network.
The database server processes the query in the query plan.
Now the database server executes the processed query.
Again, the database server returns the result to the querying application through the network.
At last, the application processes the results.

This process is repeated every time we request a database operation, even if it is for a simple or small query. It is always a costly transaction to hit the database for the same records multiple times. Sometimes, we also face some delay in receiving the results because of network routing issues. There may be other parameters that contribute to the delay, but network routing issues play a major role in this cycle.

To overcome this issue, the database uses a mechanism that stores the result of a query that is executed repeatedly, and uses this result again when the data is requested using the same query. These operations are done on the database side. Hibernate provides an in-built caching mechanism known as the first-level cache (L1 cache). Following are some properties of the first-level cache:

It is enabled by default. We cannot disable it even if we want to.
The scope of the first-level cache is limited to a particular Session object only; the other Session objects cannot access it.
All cached objects are destroyed once the session is closed.
If we request an object, hibernate returns the object from the cache only if the requested object is found in the cache; otherwise, a database call is initiated.
We can use Session.evict(Object object) to remove single objects from the session cache.
The Session.clear() method is used to clear all the cached objects from the session.

Getting ready

Let's take a look at how the L1 cache works.

Creating the classes

For this recipe, we will create an Employee class and also insert some records into the table:

Source file: Employee.java

@Entity
@Table
public class Employee {

    @Id
    @GeneratedValue
    private long id;

    @Column(name = "name")
    private String name;

    // getters and setters

    @Override
    public String toString() {
        return "Employee: " + "\n\t Id: " + this.id + "\n\t Name: " + this.name;
    }
}

Creating the tables

Use the following table script if the hibernate.hbm2ddl.auto configuration property is not set to create.

Use the following script to create the employee table:

CREATE TABLE `employee` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
);

We will assume that two records are already inserted, as shown in the following employee table:

id    name
1     Yogesh
2     Aarush

Now, let's take a look at some scenarios that show how the first-level cache works.

How to do it…

Here is the code to see how caching works.
In the code, we will load employee#1 and employee#2 once; after that, we will try to load the same employees again and see what happens: Code System.out.println("nLoading employee#1..."); /* Line 2 */ Employee employee1 = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1.toString()); System.out.println("nLoading employee#2..."); /* Line 6 */ Employee employee2 = (Employee) session.load(Employee.class, new Long(2)); System.out.println(employee2.toString()); System.out.println("nLoading employee#1 again..."); /* Line 10 */ Employee employee1_dummy = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1_dummy.toString()); System.out.println("nLoading employee#2 again..."); /* Line 15 */ Employee employee2_dummy = (Employee) session.load(Employee.class, new Long(2)); System.out.println(employee2_dummy.toString()); Output Loading employee#1... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 1 Name: Yogesh Loading employee#2... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 2 Name: Aarush Loading employee#1 again... Employee: Id: 1 Name: Yogesh Loading employee#2 again... Employee: Id: 2 Name: Aarush How it works… Here, we loaded Employee#1 and Employee#2 as shown in Line 2 and 6 respectively and also the print output for both. It's clear from the output that hibernate will hit the database to load Employee#1 and Employee#2 because at startup, no object is cached in hibernate. Now, in Line 10, we tried to load Employee#1 again. At this time, hibernate did not hit the database but simply use the cached object because Employee#1 is already loaded and this object is still in the session. The same thing happened with Employee#2. Hibernate stores an object in the cache only if one of the following operations is completed: Save Update Get Load List There's more… In the previous section, we took a look at how caching works. Now, we will discuss some other methods used to remove a cached object from the session. There are two more methods that are used to remove a cached object: evict(Object object): This method removes a particular object from the session clear(): This method removes all the objects from the session evict (Object object) This method is used to remove a particular object from the session. It is very useful. The object is no longer available in the session once this method is invoked and the request for the object hits the database: Code System.out.println("nLoading employee#1..."); /* Line 2 */ Employee employee1 = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1.toString()); /* Line 5 */ session.evict(employee1); System.out.println("nEmployee#1 removed using evict(…)..."); System.out.println("nLoading employee#1 again..."); /* Line 9*/ Employee employee1_dummy = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1_dummy.toString()); Output Loading employee#1... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 1 Name: Yogesh Employee#1 removed using evict(…)... Loading employee#1 again... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 1 Name: Yogesh Here, we loaded an Employee#1, as shown in Line 2. 
This object was then cached in the session, but we explicitly removed it from the session cache in Line 5. So, the loading of Employee#1 will again hit the database. clear() This method is used to remove all the cached objects from the session cache. They will no longer be available in the session once this method is invoked and the request for the objects hits the database: Code System.out.println("nLoading employee#1..."); /* Line 2 */ Employee employee1 = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1.toString()); System.out.println("nLoading employee#2..."); /* Line 6 */ Employee employee2 = (Employee) session.load(Employee.class, new Long(2)); System.out.println(employee2.toString()); /* Line 9 */ session.clear(); System.out.println("nAll objects removed from session cache using clear()..."); System.out.println("nLoading employee#1 again..."); /* Line 13 */ Employee employee1_dummy = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1_dummy.toString()); System.out.println("nLoading employee#2 again..."); /* Line 17 */ Employee employee2_dummy = (Employee) session.load(Employee.class, new Long(2)); System.out.println(employee2_dummy.toString()); Output Loading employee#1... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 1 Name: Yogesh Loading employee#2... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 2 Name: Aarush All objects removed from session cache using clear()... Loading employee#1 again... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 1 Name: Yogesh Loading employee#2 again... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 2 Name: Aarush Here, Line 2 and 6 show how to load Employee#1 and Employee#2 respectively. Now, we removed all the objects from the session cache using the clear() method. As a result, the loading of both Employee#1 and Employee#2 will again result in a database hit, as shown in Line 13 and 17. One-to-one mapping using a common join table In this method, we will use a third table that contains the relationship between the employee and detail tables. In other words, the third table will hold a primary key value of both tables to represent a relationship between them. Getting ready Use the following script to create the tables and classes. 
Here, we use Employee and EmployeeDetail to show a one-to-one mapping using a common join table: Creating the tables Use the following script to create the tables if you are not using hbm2dll=create|update: Use the following script to create the detail table: CREATE TABLE `detail` ( `detail_id` bigint(20) NOT NULL AUTO_INCREMENT, `city` varchar(255) DEFAULT NULL, PRIMARY KEY (`detail_id`) ); Use the following script to create the employee table: CREATE TABLE `employee` ( `employee_id` BIGINT(20) NOT NULL AUTO_INCREMENT, `name` VARCHAR(255) DEFAULT NULL, PRIMARY KEY (`employee_id`) ); Use the following script to create the employee_detail table: CREATE TABLE `employee_detail` ( `detail_id` BIGINT(20) DEFAULT NULL, `employee_id` BIGINT(20) NOT NULL, PRIMARY KEY (`employee_id`), KEY `FK_DETAIL_ID` (`detail_id`), KEY `FK_EMPLOYEE_ID` (`employee_id`), CONSTRAINT `FK_EMPLOYEE_ID` FOREIGN KEY (`employee_id`) REFERENCES `employee` (`employee_id`), CONSTRAINT `FK_DETAIL_ID` FOREIGN KEY (`detail_id`) REFERENCES `detail` (`detail_id`) ); Creating the classes Use the following code to create the classes: Source file: Employee.java @Entity @Table(name = "employee") public class Employee { @Id @GeneratedValue @Column(name = "employee_id") private long id; @Column(name = "name") private String name; @OneToOne(cascade = CascadeType.ALL) @JoinTable( name="employee_detail" , joinColumns=@JoinColumn(name="employee_id") , inverseJoinColumns=@JoinColumn(name="detail_id") ) private Detail employeeDetail; public long getId() { return id; } public void setId(long id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public Detail getEmployeeDetail() { return employeeDetail; } public void setEmployeeDetail(Detail employeeDetail) { this.employeeDetail = employeeDetail; } @Override public String toString() { return "Employee" +"n Id: " + this.id +"n Name: " + this.name +"n Employee Detail " + "nt Id: " + this.employeeDetail.getId() + "nt City: " + this.employeeDetail.getCity(); } } Source file: Detail.java @Entity @Table(name = "detail") public class Detail { @Id @GeneratedValue @Column(name = "detail_id") private long id; @Column(name = "city") private String city; @OneToOne(cascade = CascadeType.ALL) @JoinTable( name="employee_detail" , joinColumns=@JoinColumn(name="detail_id") , inverseJoinColumns=@JoinColumn(name="employee_id") ) private Employee employee; public Employee getEmployee() { return employee; } public void setEmployee(Employee employee) { this.employee = employee; } public String getCity() { return city; } public void setCity(String city) { this.city = city; } public long getId() { return id; } public void setId(long id) { this.id = id; } @Override public String toString() { return "Employee Detail" +"n Id: " + this.id +"n City: " + this.city +"n Employee " + "nt Id: " + this.employee.getId() + "nt Name: " + this.employee.getName(); } } How to do it… In this section, we will take a look at how to insert a record step by step. Inserting a record Using the following code, we will insert an Employee record with a Detail object: Code Detail detail = new Detail(); detail.setCity("AHM"); Employee employee = new Employee(); employee.setName("vishal"); employee.setEmployeeDetail(detail); Transaction transaction = session.getTransaction(); transaction.begin(); session.save(employee); transaction.commit(); Output Hibernate: insert into detail (city) values (?) Hibernate: insert into employee (name) values (?) 
Hibernate: insert into employee_detail (detail_id, employee_id) values (?,?) Hibernate saves one record in the detail table and one in the employee table and then inserts a record in to the third table, employee_detail, using the primary key column value of the detail and employee tables. How it works… From the output, it's clear how this method works. The code is the same as in the other methods of configuring a one-to-one relationship, but here, hibernate reacts differently. Here, the first two statements of output insert the records in to the detail and employee tables respectively, and the third statement inserts the mapping record in to the third table, employee_detail, using the primary key column value of both the tables. Let's take a look at an option used in the previous code in detail: @JoinTable: This annotation, written on the Employee class, contains the name="employee_detail" attribute and shows that a new intermediate table is created with the name "employee_detail" joinColumns=@JoinColumn(name="employee_id"): This shows that a reference column is created in employee_detail with the name "employee_id", which is the primary key of the employee table inverseJoinColumns=@JoinColumn(name="detail_id"): This shows that a reference column is created in the employee_detail table with the name "detail_id", which is the primary key of the detail table Ultimately, the third table, employee_detail, is created with two columns: one is "employee_id" and the other is "detail_id". Persisting Map Map is used when we want to persist a collection of key/value pairs where the key is always unique. Some common implementations of java.util.Map are java.util.HashMap, java.util.LinkedHashMap, and so on. For this recipe, we will use java.util.HashMap. Getting ready Now, let's assume that we have a scenario where we are going to implement Map<String, String>; here, the String key is the e-mail address label, and the value String is the e-mail address. For example, we will try to construct a data structure similar to <"Personal e-mail", "[email protected]">, <"Business e-mail", "[email protected]">. This means that we will create an alias of the actual e-mail address so that we can easily get the e-mail address using the alias and can document it in a more readable form. This type of implementation depends on the custom requirement; here, we can easily get a business e-mail using the Business email key. Use the following code to create the required tables and classes. Creating tables Use the following script to create the tables if you are not using hbm2dll=create|update. 
This script is for the tables that are generated by hibernate: Use the following code to create the email table: CREATE TABLE `email` ( `Employee_id` BIGINT(20) NOT NULL, `emails` VARCHAR(255) DEFAULT NULL, `emails_KEY` VARCHAR(255) NOT NULL DEFAULT '', PRIMARY KEY (`Employee_id`,`emails_KEY`), KEY `FK5C24B9C38F47B40` (`Employee_id`), CONSTRAINT `FK5C24B9C38F47B40` FOREIGN KEY (`Employee_id`) REFERENCES `employee` (`id`) ); Use the following code to create the employee table: CREATE TABLE `employee` ( `id` BIGINT(20) NOT NULL AUTO_INCREMENT, `name` VARCHAR(255) DEFAULT NULL, PRIMARY KEY (`id`) ); Creating a class Source file: Employee.java @Entity @Table(name = "employee") public class Employee { @Id @GeneratedValue @Column(name = "id") private long id; @Column(name = "name") private String name; @ElementCollection @CollectionTable(name = "email") private Map<String, String> emails; public long getId() { return id; } public void setId(long id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public Map<String, String> getEmails() { return emails; } public void setEmails(Map<String, String> emails) { this.emails = emails; } @Override public String toString() { return "Employee" + "ntId: " + this.id + "ntName: " + this.name + "ntEmails: " + this.emails; } } How to do it… Here, we will consider how to work with Map and its manipulation operations, such as inserting, retrieving, deleting, and updating. Inserting a record Here, we will create one employee record with two e-mail addresses: Code Employee employee = new Employee(); employee.setName("yogesh"); Map<String, String> emails = new HashMap<String, String>(); emails.put("Business email", "[email protected]"); emails.put("Personal email", "[email protected]"); employee.setEmails(emails); session.getTransaction().begin(); session.save(employee); session.getTransaction().commit(); Output Hibernate: insert into employee (name) values (?) Hibernate: insert into email (Employee_id, emails_KEY, emails) values (?,?,?) Hibernate: insert into email (Employee_id, emails_KEY, emails) values (?,?,?) When the code is executed, it inserts one record into the employee table and two records into the email table and also sets a primary key value for the employee record in each record of the email table as a reference. Retrieving a record Here, we know that our record is inserted with id 1. So, we will try to get only that record and understand how Map works in our case. Code Employee employee = (Employee) session.get(Employee.class, 1l); System.out.println(employee.toString()); System.out.println("Business email: " + employee.getEmails().get("Business email")); Output Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from employee employee0_ where employee0_.id=? Hibernate: select emails0_.Employee_id as Employee1_0_0_, emails0_.emails as emails0_, emails0_.emails_KEY as emails3_0_ from email emails0_ where emails0_.Employee_id=? Employee Id: 1 Name: yogesh Emails: {Personal [email protected], Business [email protected]} Business email: [email protected] Here, we can easily get a business e-mail address using the Business email key from the map of e-mail addresses. This is just a simple scenario created to demonstrate how to persist Map in hibernate. 
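Beyond loading an employee by its identifier, the map key itself can be used as a query criterion. The following is a minimal sketch of such a query; it assumes the same open session as in the previous snippets and a Hibernate version that supports the JPA 2.0 KEY() function in HQL, and the alias names and label value are purely illustrative:

Code

// Fetch the employees that have an e-mail stored under a given map key.
// KEY(m) addresses the map key (the emails_KEY column), while m itself
// refers to the map value. Requires org.hibernate.Query and java.util.List imports.
Query query = session.createQuery(
    "select e from Employee e join e.emails m where KEY(m) = :label");
query.setParameter("label", "Business email");

List<Employee> employees = (List<Employee>) query.list();
for (Employee e : employees) {
    System.out.println(e.getName() + " -> " + e.getEmails().get("Business email"));
}

With this kind of query, Hibernate joins the email collection table and compares its key column against the bound parameter, so only employees that actually carry an entry under that label are returned.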
Updating a record Here, we will try to add one more e-mail address to Employee#1: Code Employee employee = (Employee) session.get(Employee.class, 1l); Map<String, String> emails = employee.getEmails(); emails.put("Personal email 1", "[email protected]"); session.getTransaction().begin(); session.saveOrUpdate(employee); session.getTransaction().commit(); System.out.println(employee.toString()); Output Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from employee employee0_ where employee0_.id=? Hibernate: select emails0_.Employee_id as Employee1_0_0_, emails0_.emails as emails0_, emails0_.emails_KEY as emails3_0_ from email emails0_ where emails0_.Employee_id=? Hibernate: insert into email (Employee_id, emails_KEY, emails) values (?, ?, ?) Employee Id: 2 Name: yogesh Emails: {Personal email 1= [email protected], Personal [email protected], Business [email protected]} Here, we added a new e-mail address with the Personal email 1 key and the value is [email protected]. Deleting a record Here again, we will try to delete the records of Employee#1 using the following code: Code Employee employee = new Employee(); employee.setId(1); session.getTransaction().begin(); session.delete(employee); session.getTransaction().commit(); Output Hibernate: delete from email where Employee_id=? Hibernate: delete from employee where id=? While deleting the object, hibernate will delete the child records (here, e-mail addresses) as well. How it works… Here again, we need to understand the table structures created by hibernate: Hibernate creates a composite primary key in the email table using two fields: employee_id and emails_KEY. Summary In this article you familiarized yourself with recipes such as working with a first-level cache, one-to-one mapping using a common join table, and persisting map. Resources for Article: Further resources on this subject: PostgreSQL in Action[article] OpenShift for Java Developers[article] Oracle 12c SQL and PL/SQL New Features [article]


Introducing the Boost C++ Libraries

Packt
14 Sep 2015
22 min read
 In this article written by John Torjo and Wisnu Anggoro, authors of the book Boost.Asio C++ Network Programming - Second Edition, the authors state that "Many programmers have used libraries since this simplifies the programming process. Because they do not need to write the function from scratch anymore, using a library can save much code development time". In this article, we are going to get acquainted with Boost C++ libraries. Let us prepare our own compiler and text editor to prove the power of Boost libraries. As we do so, we will discuss the following topics: Introducing the C++ standard template library Introducing Boost C++ libraries Setting up Boost C++ libraries in MinGW compiler Building Boost C++ libraries Compiling code that contains Boost C++ libraries (For more resources related to this topic, see here.) Introducing the C++ standard template library The C++ Standard Template Library (STL) is a generic template-based library that offers generic containers among other things. Instead of dealing with dynamic arrays, linked lists, binary trees, or hash tables, programmers can easily use an algorithm that is provided by STL. The STL is structured by containers, iterators, and algorithms, and their roles are as follows: Containers: Their main role is to manage the collection of objects of certain kinds, such as arrays of integers or linked lists of strings. Iterators: Their main role is to step through the element of the collections. The working of an iterator is similar to that of a pointer. We can increment the iterator by using the ++ operator and access the value by using the * operator. Algorithms: Their main role is to process the element of collections. An algorithm uses an iterator to step through all elements. After it iterates the elements, it processes each element, for example, modifying the element. It can also search and sort the element after it finishes iterating all the elements. Let us examine the three elements that structure STL by creating the following code: /* stl.cpp */ #include <vector> #include <iostream> #include <algorithm> int main(void) { int temp; std::vector<int> collection; std::cout << "Please input the collection of integer numbers, input 0 to STOP!n"; while(std::cin >> temp != 0) { if(temp == 0) break; collection.push_back(temp); } std::sort(collection.begin(), collection.end()); std::cout << "nThe sort collection of your integer numbers:n"; for(int i: collection) { std::cout << i << std::endl; } } Name the preceding code stl.cpp, and run the following command to compile it: g++ -Wall -ansi -std=c++11 stl.cpp -o stl Before we dissect this code, let us run it to see what happens. This program will ask users to enter as many as integer, and then it will sort the numbers. To stop the input and ask the program to start sorting, the user has to input 0. This means that 0 will not be included in the sorting process. Since we do not prevent users from entering non-integer numbers such as 3.14, or even the string, the program will soon stop waiting for the next number after the user enters a non-integer number. The code yields the following output: We have entered six integer: 43, 7, 568, 91, 2240, and 56. The last entry is 0 to stop the input process. Then the program starts to sort the numbers and we get the numbers sorted in sequential order: 7, 43, 56, 91, 568, and 2240. Now, let us examine our code to identify the containers, iterators, and algorithms that are contained in the STL. 
std::vector<int> collection; The preceding code snippet has containers from STL. There are several containers, and we use a vector in the code. A vector manages its elements in a dynamic array, and they can be accessed randomly and directly with the corresponding index. In our code, the container is prepared to hold integer numbers so we have to define the type of the value inside the angle brackets <int>. These angle brackets are also called generics in STL. collection.push_back(temp); std::sort(collection.begin(), collection.end()); The begin() and end() functions in the preceding code are algorithms in STL. They play the role of processing the data in the containers that are used to get the first and the last elements in the container. Before that, we can see the push_back() function, which is used to append an element to the container. for(int i: collection) { std::cout << i << std::endl; } The preceding for block will iterate each element of the integer which is called as collection. Each time the element is iterated, we can process the element separately. In the preceding example, we showed the number to the user. That is how the iterators in STL play their role. #include <vector> #include <algorithm> We include vector definition to define all vector functions and algorithm definition to invoke the sort() function. Introducing the Boost C++ libraries The Boost C++ libraries is a set of libraries to complement the C++ standard libraries. The set contains more than a hundred libraries that we can use to increase our productivity in C++ programming. It is also used when our requirements go beyond what is available in the STL. It provides source code under Boost Licence, which means that it allows us to use, modify, and distribute the libraries for free, even for commercial use. The development of Boost is handled by the Boost community, which consists of C++ developers from around the world. The mission of the community is to develop high-quality libraries as a complement to STL. Only proven libraries will be added to the Boost libraries. For detailed information about Boost libraries go to www.boost.org. And if you want to contribute developing libraries to Boost, you can join the developer mailing list at lists.boost.org/mailman/listinfo.cgi/boost. The entire source code of the libraries is available on the official GitHub page at github.com/boostorg. Advantages of Boost libraries As we know, using Boost libraries will increase programmer productivity. Moreover, by using Boost libraries, we will get advantages such as these: It is open source, so we can inspect the source code and modify it if needed. Its license allows us to develop both open source and close source projects. It also allows us to commercialize our software freely. It is well documented and we can find it libraries all explained along with sample code from the official site. It supports almost any modern operating system, such as Windows and Linux. It also supports many popular compilers. It is a complement to STL instead of a replacement. It means using Boost libraries will ease those programming processes which are not handled by STL yet. In fact, many parts of Boost are included in the standard C++ library. Preparing Boost libraries for MinGW compiler Before we go through to program our C++ application by using Boost libraries, the libraries need to be configured in order to be recognized by MinGW compiler. Here we are going to prepare our programming environment so that our compiler is able use Boost libraries. 
Downloading Boost libraries The best source from which to download Boost is the official download page. We can go there by pointing our internet browser to www.boost.org/users/download. Find the Download link in Current Release section. At the time of writing, the current version of Boost libraries is 1.58.0, but when you read this article, the version may have changed. If so, you can still choose the current release because the higher version must be compatible with the lower. However, you have to adjust as we're goning to talk about the setting later. Otherwise, choosing the same version will make it easy for you to follow all the instructions in this article. There are four file formats to be choose from for download; they are .zip, .tar.gz, .tar.bz2, and .7z. There is no difference among the four files but their file size. The largest file size is of the ZIP format and the lowest is that of the 7Z format. Because of the file size, Boost recommends that we download the 7Z format. See the following image for comparison: We can see, from the preceding image, the size of ZIP version is 123.1 MB while the size of the 7Z version is 65.2 MB. It means that the size of the ZIP version is almost twice that of the 7Z version. Therefore they suggest that you choose the 7Z format to reduce download and decompression time. Let us choose boost_1_58_0.7z to be downloaded and save it to our local storage. Deploying Boost libraries After we have got boost_1_58_0.7z in our local storage, decompress it using the 7ZIP application and save the decompression files to C:boost_1_58_0. The 7ZIP application can be grabbed from www.7-zip.org/download.html. The directory then should contain file structures as follows: Instead of browsing to the Boost download page and searching for the Boost version manually, we can go directly to sourceforge.net/projects/boost/files/boost/1.58.0. It will be useful when the 1.58.0 version is not the current release anymore. Using Boost libraries Most libraries in Boost are header-only; this means that all declarations and definitions of functions, including namespaces and macros, are visible to the compiler and there is no need to compile them separately. We can now try to use Boost with the program to convert the string into int value as follows: /* lexical.cpp */ #include <boost/lexical_cast.hpp> #include <string> #include <iostream> int main(void) { try { std::string str; std::cout << "Please input first number: "; std::cin >> str; int n1 = boost::lexical_cast<int>(str); std::cout << "Please input second number: "; std::cin >> str; int n2 = boost::lexical_cast<int>(str); std::cout << "The sum of the two numbers is "; std::cout << n1 + n2 << "n"; return 0; } catch (const boost::bad_lexical_cast &e) { std::cerr << e.what() << "n"; return 1; } } Open the Notepad++ application, type the preceding code, and save it as lexical.cpp in C:CPP. Now open the command prompt, point the active directory to C:CPP, and then type the following command: g++ -Wall -ansi lexical.cpp –Ic:boost_1_58_0 -o lexical We have a new option here, which is –I (the "include" option). This option is used along with the full path of the directory to inform the compiler that we have another header directory that we want to include to our code. Since we store our Boost libraries in c: boost_1_58_0, we can use –Ic:boost_1_58_0 as an additional parameter. In lexical.cpp, we apply boost::lexical_cast to convert string type data into int type data. 
The program will ask the user to input two numbers and will then automatically find the sum of both numbers. If a user inputs an inappropriate number, it will inform that an error has occurred. The Boost.LexicalCast library is provided by Boost for casting data type purpose (converting numeric types such as int, double, or floats into string types, and vice versa). Now let us dissect lexical.cpp to for a more detailed understanding of what it does: #include <boost/lexical_cast.hpp> #include <string> #include <iostream> We include boost/lexical_cast.hpp because the boost::lexical_cast function is declared lexical_cast.hpp header file whilst string header is included to apply std::string function and iostream header is included to apply std::cin, std::cout and std::cerr function. Other functions, such as std::cin and std::cout, and we saw what their functions are so we can skip those lines. #include <boost/lexical_cast.hpp> #include <string> #include <iostream> We used the preceding two separate lines to convert the user-provided input string into the int data type. Then, after converting the data type, we summed up both of the int values. We can also see the try-catch block in the preceding code. It is used to catch the error if user inputs an inappropriate number, except 0 to 9. catch (const boost::bad_lexical_cast &e) { std::cerr << e.what() << "n"; return 1; } The preceding code snippet will catch errors and inform the user what the error message exactly is by using boost::bad_lexical_cast. We call the e.what() function to obtain the string of the error message. Now let us run the application by typing lexical at the command prompt. We will get output like the following: I put 10 for first input and 20 for the second input. The result is 30 because it just sums up both input. But what will happen if I put in a non-numerical value, for instance Packt. Here is the output to try that condition: Once the application found the error, it will ignore the next statement and go directly to the catch block. By using the e.what() function, the application can get the error message and show it to the user. In our example, we obtain bad lexical cast: source type value could not be interpreted as target as the error message because we try to assign the string data to int type variable. Building Boost libraries As we discussed previously, most libraries in Boost are header-only, but not all of them. There are some libraries that have to be built separately. They are: Boost.Chrono: This is used to show the variety of clocks, such as current time, the range between two times, or calculating the time passed in the process. Boost.Context: This is used to create higher-level abstractions, such as coroutines and cooperative threads. Boost.Filesystem: This is used to deal with files and directories, such as obtaining the file path or checking whether a file or directory exists. Boost.GraphParallel: This is an extension to the Boost Graph Library (BGL) for parallel and distributed computing. Boost.IOStreams: This is used to write and read data using stream. For instance, it loads the content of a file to memory or writes compressed data in GZIP format. Boost.Locale: This is used to localize the application, in other words, translate the application interface to user's language. Boost.MPI: This is used to develop a program that executes tasks concurrently. MPI itself stands for Message Passing Interface. Boost.ProgramOptions: This is used to parse command-line options. 
Instead of using the argv variable in the main parameter, it uses double minus (--) to separate each command-line option. Boost.Python: This is used to parse Python language in C++ code. Boost.Regex: This is used to apply regular expression in our code. But if our development supports C++11, we do not depend on the Boost.Regex library anymore since it is available in the regex header file. Boost.Serialization: This is used to convert objects into a series of bytes that can be saved and then restored again into the same object. Boost.Signals: This is used to create signals. The signal will trigger an event to run a function on it. Boost.System: This is used to define errors. It contains four classes: system::error_code, system::error_category, system::error_condition, and system::system_error. All of these classes are inside the boost namespace. It is also supported in the C++11 environment, but because many Boost libraries use Boost.System, it is necessary to keep including Boost.System. Boost.Thread: This is used to apply threading programming. It provides classes to synchronize access on multiple-thread data. It is also supported in C++11 environments, but it offers extensions, such as we can interrupt thread in Boost.Thread. Boost.Timer: This is used to measure the code performance by using clocks. It measures time passed based on usual clock and CPU time, which states how much time has been spent to execute the code. Boost.Wave: This provides a reusable C preprocessor that we can use in our C++ code. There are also a few libraries that have optional, separately compiled binaries. They are as follows: Boost.DateTime: It is used to process time data; for instance, calendar dates and time. It has a binary component that is only needed if we use to_string, from_string, or serialization features. It is also needed if we target our application in Visual C++ 6.x or Borland. Boost.Graph: It is used to create two-dimensional graphics. It has a binary component that is only needed if we intend to parse GraphViz files. Boost.Math: It is used to deal with mathematical formulas. It has binary components for cmath functions. Boost.Random: It is used to generate random numbers. It has a binary component which is only needed if we want to use random_device. Boost.Test: It is used to write and organize test programs and their runtime execution. It can be used in header-only or separately compiled mode, but separate compilation is recommended for serious use. Boost.Exception: It is used to add data to an exception after it has been thrown. It provides non-intrusive implementation of exception_ptr for 32-bit _MSC_VER==1310 and _MSC_VER==1400, which requires a separately compiled binary. This is enabled by #define BOOST_ENABLE_NON_INTRUSIVE_EXCEPTION_PTR. Let us try to recreate the random number generator. But now we will use the Boost.Random library instead of std::rand() from the C++ standard function. 
Let us take a look at the following code: /* rangen_boost.cpp */ #include <boost/random/mersenne_twister.hpp> #include <boost/random/uniform_int_distribution.hpp> #include <iostream> int main(void) { int guessNumber; std::cout << "Select number among 0 to 10: "; std::cin >> guessNumber; if(guessNumber < 0 || guessNumber > 10) { return 1; } boost::random::mt19937 rng; boost::random::uniform_int_distribution<> ten(0,10); int randomNumber = ten(rng); if(guessNumber == randomNumber) { std::cout << "Congratulation, " << guessNumber << " is your lucky number.n"; } else { std::cout << "Sorry, I'm thinking about number " << randomNumber << "n"; } return 0; } We can compile the preceding source code by using the following command: g++ -Wall -ansi -Ic:/boost_1_58_0 rangen_boost.cpp -o rangen_boost Now, let us run the program. Unfortunately, for the three times that I ran the program, I always obtained the same random number as follows: As we can see from this example, we always get number 8. This is because we apply Mersenne Twister, a Pseudorandom Number Generator (PRNG), which uses the default seed as a source of randomness so it will generate the same number every time the program is run. And of course it is not the program that we expect. Now, we will rework the program once again, just in two lines. First, find the following line: #include <boost/random/mersenne_twister.hpp> Change it as follows: #include <boost/random/random_device.hpp> Next, find the following line: boost::random::mt19937 rng; Change it as follows: boost::random::random_device rng; Then, save the file as rangen2_boost.cpp and compile the rangen2_boost.cpp file by using the command like we compiled rangen_boost.cpp. The command will look like this: g++ -Wall -ansi -Ic:/boost_1_58_0 rangen2_boost.cpp -o rangen2_boost Sadly, there will be something wrong and the compiler will show the following error message: cc8KWVvX.o:rangen2_boost.cpp:(.text$_ZN5boost6random6detail20generate _uniform_intINS0_13random_deviceEjEET0_RT_S4_S4_N4mpl_5bool_ILb1EEE[_ ZN5boost6random6detail20generate_uniform_intINS0_13random_deviceEjEET 0_RT_S4_S4_N4mpl_5bool_ILb1EEE]+0x24f): more undefined references to boost::random::random_device::operator()()' follow collect2.exe: error: ld returned 1 exit status This is because, as we have discussed earlier, the Boost.Random library needs to be compiled separately if we want to use the random_device attribute. Boost libraries have a system to compile or build Boost itself, called Boost.Build library. There are two steps we have to achieve to install the Boost.Build library. First, run Bootstrap by pointing the active directory at the command prompt to C:boost_1_58_0 and typing the following command: bootstrap.bat mingw We use our MinGW compiler, as our toolset in compiling the Boost library. Wait a second and then we will get the following output if the process is a success: Building Boost.Build engine Bootstrapping is done. To build, run: .b2 To adjust configuration, edit 'project-config.jam'. Further information: - Command line help: .b2 --help - Getting started guide: http://boost.org/more/getting_started/windows.html - Boost.Build documentation: http://www.boost.org/build/doc/html/index.html In this step, we will find four new files in the Boost library's root directory. They are: b2.exe: This is an executable file to build Boost libraries. bjam.exe: This is exactly the same as b2.exe but it is a legacy version. 
bootstrap.log: This contains logs from the bootstrap process project-config.jam: This contains a setting that will be used in the building process when we run b2.exe. We also find that this step creates a new directory in C:boost_1_58_0toolsbuildsrcenginebin.ntx86 , which contains a bunch of .obj files associated with Boost libraries that needed to be compiled. After that, run the second step by typing the following command at the command prompt: b2 install toolset=gcc Grab yourself a cup of coffee after running that command because it will take about twenty to fifty minutes to finish the process, depending on your system specifications. The last output we will get will be like this: ...updated 12562 targets... This means that the process is complete and we have now built the Boost libraries. If we check in our explorer, the Boost.Build library adds C:boost_1_58_0stagelib, which contains a collection of static and dynamic libraries that we can use directly in our program. bootstrap.bat and b2.exe use msvc (Microsoft Visual C++ compiler) as the default toolset, and many Windows developers already have msvc installed on their machines. Since we have installed GCC compiler, we set the mingw and gcc toolset options in Boost's build. If you also have mvsc installed and want to use it in Boost's build, the toolset options can be omitted. Now, let us try to compile the rangen2_boost.cpp file again, but now with the following command: c:CPP>g++ -Wall -ansi -Ic:/boost_1_58_0 rangen2_boost.cpp - Lc:boost_1_58_0stagelib -lboost_random-mgw49-mt-1_58 - lboost_system-mgw49-mt-1_58 -o rangen2_boost We have two new options here, they are –L and –l. The -L option is used to define the path that contains the library file if it is not in the active directory. The –l option is used to define the name of library file but omitting the first lib word in front of the file name. In this case, the original library file name is libboost_random-mgw49-mt-1_58.a, and we omit the lib phrase and the file extension for option -l. The new file called rangen2_boost.exe will be created in C:CPP. But before we can run the program, we have to ensure that the directory which the program installed has contained the two dependencies library file. These are libboost_random-mgw49-mt-1_58.dll and libboost_system-mgw49-mt-1_58.dll, and we can get them from the library directory c:boost_1_58_0_1stagelib. Just to make it easy for us to run that program, run the following copy command to copy the two library files to C:CPP: copy c:boost_1_58_0_1stageliblibboost_random-mgw49-mt-1_58.dll c:cpp copy c:boost_1_58_0_1stageliblibboost_system-mgw49-mt-1_58.dll c:cpp And now the program should run smoothly. In order to create a network application, we are going to use the Boost.Asio library. We do not find Boost.Asio—the library we are going to use to create a network application—in the non-header-only library. It seems that we do not need to build the boost library since Boost.Asio is header-only library. This is true, but since Boost.Asio depends on Boost.System and Boost.System needs to be built before being used, it is important to build Boost first before we can use it to create our network application. For option –I and –L, the compiler does not care if we use backslash () or slash (/) to separate each directory name in the path because the compiler can handle both Windows and Unix path styles. 
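Before moving on, it is worth seeing why Boost.System has to be built even though Boost.Asio itself is header-only. The following is a minimal sketch of a program that uses Boost.Asio; the file name is arbitrary, and the -mgw49-mt-1_58 library suffix shown in the compile command has to match the toolset and Boost version on your machine:

/* timer_boost.cpp */
#include <boost/asio.hpp>
#include <iostream>
int main(void) {
    // The io_service object is the core I/O object of Boost.Asio
    boost::asio::io_service io;
    // Create a timer that expires three seconds from now
    boost::asio::deadline_timer timer(io, boost::posix_time::seconds(3));
    // Block until the timer expires
    timer.wait();
    std::cout << "Three seconds have passed.\n";
    return 0;
}

A command along these lines should compile it (adjust the paths and the suffix as needed):

g++ -Wall -Ic:/boost_1_58_0 timer_boost.cpp -Lc:/boost_1_58_0/stage/lib -lboost_system-mgw49-mt-1_58 -o timer_boost

The timer logic comes entirely from headers, but the error-handling classes it relies on live in the separately compiled Boost.System binary, which is why the -lboost_system option is needed; on Windows, the Winsock libraries (-lws2_32 and -lmswsock) may also have to be added.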
Summary

We saw that the Boost C++ libraries were developed to complement the standard C++ library. We have also been able to set up our MinGW compiler in order to compile code that uses the Boost libraries, and to build the binaries of the libraries that have to be compiled separately. Please remember that although we can use the Boost.Asio library as a header-only library, it is better to build all the Boost libraries by using the Boost.Build library. This makes it easy for us to use all the libraries without worrying about compilation failures.

Resources for Article:

Further resources on this subject: Actors and Pawns[article] What is Quantitative Finance?[article] Program structure, execution flow, and runtime objects [article]


Continuous Delivery and Continuous Deployment

Packt
14 Sep 2015
13 min read
 In this article by Jonathan McAllister, author of the book Mastering Jenkins, we will discover all things continuous namely Continuous Delivery, and Continuous Deployment practices. Continuous Delivery represents a logical extension to the continuous integration practices. It expands the automation defined in continuous integration beyond simply building a software project and executing unit tests. Continuous Delivery adds automated deployments and acceptance test verification automation to the solution. To better describe this process, let's take a look at some basic characteristics of Continuous Delivery: The development resources commit changes to the mainline of the source control solution multiple times per day, and the automation system initiates a complete build, deploy, and test validation of the software project Automated tests execute against every change deployed, and help ensure that the software remains in an always-releasable state Every committed change is treated as potentially releasable, and extra care is taken to ensure that incomplete development work is hidden and does not impact readiness of the software Feedback loops are developed to facilitate notifications of failures. This includes build results, test execution reports, delivery status, and user acceptance verification Iterations are short, and feedback is rapid, allowing the business interests to weigh in on software development efforts and propose alterations along the way Business interests, instead of engineering, will decide when to physically release the software project, and as such, the software automation should facilitate this goal (For more resources related to this topic, see here.) As described previously, Continuous Delivery (CD) represents the expansion of Continuous Integration Practices. At the time of writing of this book, Continuous Delivery approaches have been successfully implemented at scale in organizations like Amazon, Wells Fargo, and others. The value of CD derives from the ability to tie software releases to business interests, collect feedback rapidly, and course correct efficiently. The following diagram illustrates the basic automation flow for Continuous Delivery: Figure 8-10: Continuous Delivery workflow As we can see in the preceding diagram, this practice allows businesses to rapidly develop, strategically market, and release software based on pivoting market demands instead of engineering time frames. When implementing a continuous delivery solution, there are a few key points that we should keep in mind: Keep the build fast Illuminate the failures, and recover immediately Make deployments push-button, for any version to any environment Automate the testing and validation operations with defined buckets for each logical test group (unit, smoke, functional and regression) Use feature toggles to avoid branching Get feedback early and often (automation feedback, test feedback, build feedback, UAT feedback) Principles of Continuous Delivery Continuous Delivery was founded on the premise of standardized and defined release processes, automation-based build pipelines, and logical quality gates with rapid feedback loops. In a continuous delivery paradigm, builds flow from development to QA and beyond like water in a pipe. Builds can be promoted from one logical group to another, and the risk of the proposed change is exposed incrementally to a wider audience. 
The practical application of the Continuous Delivery principles lies in frequent commits to the mainline, which, in turn, execute the build pipeline automation suite, pass through automated quality gates for verification, and are individually signed off by business interests (in a best case scenario). The idea of incrementally exposing the risk can be better illustrated through a circle of trust diagram, as follows: Figure 8-11: Circle of Trust for code changes As illustrated in the preceding trust diagram, the number of people exposed to a build expands incrementally as the build passes from one logical development and business group to another. This model places emphasis on verification and attempts to remove waste (time) by exposing the build output only to groups that have a vested interest in the build at that phase. Continuous Delivery in Jenkins Applying the Continuous Delivery principles in Jenkins can be accomplished in a number of ways. That said, there are some definite tips and tricks which can be leveraged to make the implementation a bit less painful. In this section, we will discuss and illustrate some of the more advanced Continuous Delivery tactics and learn how to apply them in Jenkins. Your specific implementation of Continuous Delivery will most definitely be unique to your organization; so, take what is useful, research anything that is missing, and disregard what is useless. Let's get started. Rapid feedback loops Rapid feedback loops are the baseline implementation requirement for Continuous Delivery. Applying this with Jenkins can be accomplished in a pretty slick manner using a combination of the Email-Ext plugin and some HTML template magic. In large-scale Jenkins implementations, it is not wise to manage many e-mail templates, and creating a single transformable one will help save time and effort. Let's take a look how to do this in Jenkins. The Email-Ext plugin provides Jenkins with the capabilities of completely customizable e-mail notifications. It allows the Jenkins system to customize just about every aspect of notifications and can be leveraged as an easy-to-implement, template-based e-mail solution. To begin with, we will need to install the plugin into our Jenkins system. The details for this plugin can be found at the following web address: https://wiki.jenkins-ci.org/display/JENKINS/Email-ext+plugin Once the plug-in has been installed into our Jenkins system, we will need to configure the basic connection details and optional settings. To begin, navigate to the Jenkins administration area and locate the Extended Email Notification section. Jenkins->Manage Jenkins->Configure System On this page, we will need to specify, at a minimum, the following details: SMTP Server SMTP Authentication details (User Name + Password) Reply-to List ([email protected]) System Admin Email Address (located further up on the page) The completed form may look something like the following screenshot: Figure 8-12: Completed form Once the basic SMTP configuration details have been specified, we can then add the Editable Email Notification post build step to our jobs, and configure the e-mail contents appropriately. The following screenshot illustrates the basic configuration options required for the build step to operate: Figure 8-13: Basic configuration options As we can see from the preceding screenshot, environment variables are piped into the plugin via the job's automation to define the e-mail contents, recipient list, and other related details. 
This solution makes for a highly effective feedback loop implementation. Quality gates and approvals Two of the key aspects of Continuous Delivery include the adoption of quality gates and stakeholder approvals. This requires individuals to signoff on a given change or release as it flows through the pipeline. Back in the day, this used to be managed through a Release Signoff sheet, which would often times be maintained manually on paper. In the modern digital age, this is managed through the Promoted builds plugin in Jenkins, whereby we can add LDAP or Active Directory integration to ensure that only authentic users have the access rights required to promote builds. However, there is room to expand this concept and learn some additional tips and tricks, which will ensure that we have a solid and secure implementation. Integrating Jenkins with Lightweight Directory Access Protocol (LDAP) is generally a straightforward exercise. This solution allows a corporate authentication system to be tied directly into Jenkins. This means that once the security integration is configured in Jenkins, we will be able to login to the Jenkins system (UI) by using our corporate account credentials. To connect Jenkins to a corporate authentication engine, we will first need to configure Jenkins to talk to the corporate security servers. This is configured in the Global Security administration area of the Jenkins user interface as shown in the following screenshot: Figure 8-14: Global Security configuration options The global security area of Jenkins allows us to specify the type of authentication that Jenkins will use for users who wish to access the Jenkins system. By default, Jenkins provides a built-in internal database for managing users; we will have to alter this to support LDAP. To configure this system to utilize LDAP, click the LDAP radio button, and enter your LDAP server details as illustrated in the following screenshot: Figure 8-15: LDAP server details Fill out the form with your company's LDAP specifics, and click save. If you happen to get stuck on this configuration, the Jenkins community has graciously provided an additional in-depth documentation. This documentation can be found at the following URL: https://wiki.jenkins-ci.org/display/JENKINS/LDAP+Plugin For users who wish to leverage Active Directory, there is a Jenkins plugin which can facilitate this type of integrated security solution. For more details on this plugin, please consult the plugin page at the following URL: https://wiki.jenkins-ci.org/display/JENKINS/Active+Directory+plugin Once the authentication solution has successfully been configured, we can utilize it to set approvers in the promoted builds plugin. To configure a promotion approver, we will need to edit the desired Jenkins project, and specify the users who should have the promote permissions. The following screenshot shows an example of this configuration: Figure 8-16: Configuration example As we can see, the promoted builds plugin provides an excellent signoff sheet solution. It is complete with access security controls, promotion criteria, and a robust build step implementation solution. Build pipeline(s) workflow and visualization When build pipelines are created initially, the most common practice is to simply daisy chain the jobs together. This is a perfectly reasonable initial-implementation approach, but in the long term, this may get confusing and it may become difficult to track the workflow of daisy-chained jobs. 
To assist with this issue, Jenkins offers a plugin to help visualize the build pipelines, and is appropriately named the Build Pipelines plugin. The details surrounding this plugin can be found at the following web URL: https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin This plugin provides an additional view option, which is populated by specifying an entry point to the pipeline, detecting upstream and downstream jobs, and creating a visual representation of the pipeline. Upon the initial installation of the plugin, we can see an additional option available to us when we create a new dashboard view. This is illustrated in the following screenshot: Figure 8-17: Dashboard view Upon creating a pipeline view using the build pipeline plugin, Jenkins will present us with a number of configuration options. The most important configuration options are the name of the view and the initial job dropdown selection option, as seen in the following screenshot: Figure 8-18: Pipeline view configuration options Once the basic configuration has been defined, click the OK button to save the view. This will trigger the plugin to perform an initial scan of the linked jobs and generate the pipeline view. An example of a completely developed pipeline is illustrated in the following image: Figure 8-19: Completely developed pipeline This completes the basic configuration of a build pipeline view, which gives us a good visual representation of our build pipelines. There are a number of features and customizations that we could apply to the pipeline view, but we will let you explore those and tweak the solution to your own specific needs. Continuous Deployment Just as Continuous Delivery represents a logical extension of Continuous Integration, Continuous Deployment represents a logical expansion upon the Continuous Delivery practices. Continuous Deployment is very similar to Continuous Delivery in a lot of ways, but it has one key fundamental variance: there are no approval gates. Without approval gates, code commits to the mainline have the potential to end up in the production environment in short order. This type of an automation solution requires a high-level of discipline, strict standards, and reliable automation. It is a practice that has proven valuable for the likes of Etsy, Flickr, and many others. This is because Continuous Deployment dramatically increases the deployment velocity. The following diagram describes both, Continuous Delivery and Continuous Deployment, to better showcase the fundamental difference between, them: Figure 8-20: Differentiation between Continuous Delivery and Continuous Deployment It is important to understand that Continuous Deployment is not for everyone, and is a solution that may not be feasible for some organizations, or product types. For example, in embedded software or Desktop application software, Continuous Deployment may not be a wise solution without properly architected background upgrade mechanisms, as it will most likely alienate the users due to the frequency of the upgrades. On the other hand, it's something that could be applied, fairly easily, to a simple API web service or a SaaS-based web application. If the business unit indeed desires to migrate towards a continuous deployment solution, tight controls on quality will be required to facilitate stability and avoid outages. 
These controls may include any of the following: The required unit testing with code coverage metrics The required a/b testing or experiment-driven development Paired programming Automated rollbacks Code reviews and static code analysis implementations Behavior-driven development (BDD) Test-driven development (TDD) Automated smoke tests in production Additionally, it is important to note that since a Continuous Deployment solution is a significant leap forward, in general, the implementation of the Continuous Delivery practices would most likely be a pre-requisite. This solution would need to be proven stable and trusted prior to the removal of the approval gates. Once removed though, the deployment velocity should significantly increase as a result. The quantifiable value of continuous deployment is well advertised by companies such as Amazon who realized a 78 percent reduction in production outages, and a 60% reduction in downtime minutes due to catastrophic defects. That said, implementing continuous deployment will require a buy-in from the stakeholders and business interests alike. Continuous Deployment in Jenkins Applying the Continuous Deployment practices in Jenkins is actually a simple exercise once Continuous Integration and Continuous Delivery have been completed. It's simply a matter of removing the approve criteria and allowing builds to flow freely through the environments, and eventually to production. The following screenshot shows how to implement this using the Promoted builds plugin: Figure 8-21: Promoted builds plugin implementation Once removed, the build automation solutions will continuously deploy for every commit to the mainline (given that all the automated tests have been passed). Summary With this article of Mastering Jenkins, you should now have a solid understanding of how to advocate for and develop Continuous Delivery, and Continuous Deployment practices at an organization. Resources for Article: Further resources on this subject: Exploring Jenkins [article] Jenkins Continuous Integration [article] What is continuous delivery and DevOps? [article]

Getting Started with Meteor

Packt
14 Sep 2015
6 min read
In this article, based on Marcelo Reyna's book Meteor Design Patterns, we will see that when you want to develop an application of any kind, you want to develop it fast. Why? Because the faster you develop, the better your return on investment will be (your investment is time, and the real cost is the money you could have produced with that time). There are two key ingredients ofrapid web development: compilers and patterns. Compilers will help you so that youdon’t have to type much, while patterns will increase the paceat which you solve common programming issues. Here, we will quick-start compilers and explain how they relate withMeteor, a vast but simple topic. The compiler we will be looking at isCoffeeScript. (For more resources related to this topic, see here.) CoffeeScriptfor Meteor CoffeeScript effectively replaces JavaScript. It is much faster to develop in CoffeeScript, because it simplifies the way you write functions, objects, arrays, logical statements, binding, and much more.All CoffeeScript files are saved with a .coffee extension. We will cover functions, objects, logical statements, and binding, since thisis what we will use the most. Objects and arrays CoffeeScriptgets rid of curly braces ({}), semicolons (;), and commas (,). This alone saves your fingers from repeating unnecessary strokes on the keyboard. CoffeeScript instead emphasizes on the proper use of tabbing. Tabbing will not only make your code more readable (you are probably doing it already), but also be a key factor inmaking it work. Let’s look at some examples: #COFFEESCRIPT toolbox = hammer:true flashlight:false Here, we are creating an object named toolbox that contains two keys: hammer and flashlight. The equivalent in JavaScript would be this: //JAVASCRIPT - OUTPUT var toolbox = { hammer:true, flashlight:false }; Much easier! As you can see, we have to tab to express that both the hammer and the flashlight properties are a part of toolbox. The word var is not allowed in CoffeeScript because CoffeeScript automatically applies it for you. Let’stakea look at how we would createan array: #COFFEESCRIPT drill_bits = [ “1/16 in” “5/64 in” “3/32 in” “7/64 in” ] //JAVASCRIPT – OUTPUT vardrill_bits; drill_bits = [“1/16 in”,”5/64 in”,”3/32 in”,”7/64 in”]; Here, we can see we don’t need any commas, but we do need brackets to determine that this is an array. Logical statements and operators CoffeeScript also removes a lot ofparentheses (()) in logical statements and functions. This makes the logic of the code much easier to understand at the first glance. Let’s look at an example: #COFFEESCRIPT rating = “excellent” if five_star_rating //JAVASCRIPT – OUTPUT var rating; if(five_star_rating){ rating = “excellent”; } In this example, we can clearly see thatCoffeeScript is easier to read and write.Iteffectively replaces all impliedparentheses in any logical statement. Operators such as &&, ||, and !== are replaced by words. Here is a list of the operators that you will be using the most: CoffeeScript JavaScript is === isnt !== not ! 
and && or || true, yes, on true false, no, off false @, this this Let's look at a slightly more complex logical statement and see how it compiles: #COFFEESCRIPT # Suppose that “this” is an object that represents a person and their physical properties if@eye_coloris “green” retina_scan = “passed” else retina_scan = “failed” //JAVASCRIPT - OUTPUT if(this.eye_color === “green”){ retina_scan = “passed”; } else { retina_scan = “failed”; } When using @eye_color to substitute for this.eye_color, notice that we do not need . Functions JavaScript has a couple of ways of creating functions. They look like this: //JAVASCRIPT //Save an anonymous function onto a variable varhello_world = function(){ console.log(“Hello World!”); } //Declare a function functionhello_world(){ console.log(“Hello World!”); } CoffeeScript uses ->instead of the function()keyword.The following example outputs a hello_world function: #COFFEESCRIPT #Create a function hello_world = -> console.log “Hello World!” //JAVASCRIPT - OUTPUT varhello_world; hello_world = function(){ returnconsole.log(“Hello World!”); } Once again, we use a tab to express the content of the function, so there is no need ofcurly braces ({}). This means that you have to make sure you have all of the logic of the function tabbed under its namespace. But what about our parameters? We can use (p1,p2) -> instead, where p1 and p2 are parameters. Let’s make our hello_world function say our name: #COFFEESCRIPT hello_world = (name) -> console.log “Hello #{name}” //JAVSCRIPT – OUTPUT varhello_world; hello_world = function(name) { returnconsole.log(“Hello “ + name); } In this example, we see how the special word function disappears and string interpolation. CoffeeScript allows the programmer to easily add logic to a string by escaping the string with #{}. Unlike JavaScript, you can also add returns and reshape the way astring looks without breaking the code. Binding In Meteor, we will often find ourselves using the properties of bindingwithin nested functions and callbacks.Function binding is very useful for these types of cases and helps avoid having to save data in additional variables. Let’s look at an example: #COFFEESCRIPT # Let’s make the context of this equal to our toolbox object # this = # hammer:true # flashlight:false # Run a method with a callback Meteor.call “use_hammer”, -> console.log this In this case, the thisobjectwill return a top-level object, such as the browser window. That's not useful at all. Let’s bind it now: #COFFEESCRIPT # Let’s make the context of this equal to our toolbox object # this = # hammer:true # flashlight:false # Run a method with a callback Meteor.call “use_hammer”, => console.log this The key difference is the use of =>instead of the expected ->sign fordefining the function. This will ensure that the callback'sthis object contains the context of the executing function. The resulting compiled script is as follows: //JAVASCRIPT Meteor.call(“use_hammer”, (function(_this) { return function() { returnConsole.log(_this); }; })(this)); CoffeeScript will improve your code and help you write codefaster. Still, itis not flawless. When you start combining functions with nested arrays, things can get complex and difficult to read, especially when functions are constructed with multiple parameters. 
Let's look at an ugly query:

#COFFEESCRIPT People.update sibling: $in:["bob","bill"] , limit:1 -> console.log "success!"

There are a few ways of expressing the difference between two different parameters of a function, but this is by far the easiest to understand: we place a comma one indentation level before the next object. Go to coffeescript.org and play around with the language by clicking on the try coffeescript link.

Summary

We can now program faster because we have tools such as CoffeeScript, Jade, and Stylus to help us. We also saw how to use templates, helpers, and events to make our frontend work with Meteor.

Resources for Article:

Further resources on this subject: Why Meteor Rocks! [article] Function passing [article] Meteor.js JavaScript Framework: Why Meteor Rocks! [article]


The NetBeans Developer's Life Cycle

Packt
08 Sep 2015
30 min read
In this article by David Salter, the author of Mastering NetBeans, we'll cover the following topics: Running applications Debugging applications Profiling applications Testing applications On a day-to-day basis, developers spend much of their time writing and running applications. While writing applications, they typically debug, test, and profile them to ensure that they provide the best possible application to customers. Running, debugging, profiling, and testing are all integral parts of the development life cycle, and NetBeans provides excellent tooling to help us in all these areas. (For more resources related to this topic, see here.) Running applications Executing applications from within NetBeans is as simple as either pressing the F6 button on the keyboard or selecting the Run menu item or Project Context menu item. Choosing either of these options will launch your application without specifying any additional Java command-line parameters using the default platform JDK that NetBeans is currently using. Sometimes we want to change the options that are used for launching applications. NetBeans allows these options to be easily specified by a project's properties. Right-clicking on a project in the Projects window and selecting the Properties menu option opens the Project Properties dialog. Selecting the Run category allows the configuration options to be defined for launching an application. From this dialog, we can define and select multiple run configurations for the project via the Configuration dropdown. Selecting the New… button to the right of the Configuration dropdown allows us to enter a name for a new configuration. Once a new configuration is created, it is automatically selected as the active configuration. The Delete button can be used for removing any unwanted configurations. The preceding screenshot shows the Project Properties dialog for a standard Java project. Different project types (for example, web or mobile projects) have different options in the Project Properties window. As can be seen from the preceding Project Properties dialog, several pieces of information can be defined for a standard Java project, which together make up the launch configuration for a project: Runtime Platform: This option allows us to define which Java platform we will use when launching the application. From here, we can select from all the Java platforms that are configured within NetBeans. Selecting the Manage Platforms… button opens the Java Platform Manager dialog, allowing full configuration of the different Java platforms available (both Java Standard Edition and Remote Java Standard Edition). Selecting this button has the same effect as selecting the Tools and then Java Platforms menu options. Main Class: This option defines the main class that is used to launch the application. If the project has more than one main class, selecting the Browse… button will cause the Browse Main Classes dialog to be displayed, listing all the main classes defined in the project. Arguments: Different command-line arguments can be passed to the main class as defined in this option. Working Directory: This option allows the working directory for the application to be specified. VM Options: If different VM options (such as heap size) require setting, they can be specified by this option. Selecting the Customize button displays a dialog listing the different standard VM options available which can be selected (ticked) as required. Custom VM properties can also be defined in the dialog. 
For more information on the different VM properties for Java, check out http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html. From here, the VM properties for Java 7 (and earlier versions) and Java 8 for Windows, Solaris, Linux, and Mac OS X can be referenced.

- Run with Java Web Start: Selecting this option allows the application to be executed using Java Web Start technologies. This option is only available if Web Start is enabled in the Application | Web Start category.

When running a web application, the project properties are different from those of a standalone Java application. In fact, the project properties for a Maven web application are different from those of a standard NetBeans web application. The following screenshot shows the properties for a Maven-based web application; as discussed previously, Maven is the standard project management tool for Java applications, and the recommended tool for creating and managing web applications:

Debugging applications

In the previous section, we saw how NetBeans provides easy-to-use features for launching applications, together with more powerful configuration options. The same is true for debugging applications. For simple debugging, NetBeans provides the standard facilities you would expect, such as stepping into or over methods, setting line breakpoints, and monitoring the values of variables.

When debugging applications, NetBeans provides several different windows, enabling different types of information to be displayed and manipulated by the developer:

- Breakpoints
- Variables
- Call stack
- Loaded classes
- Sessions
- Threads
- Sources
- Debugging
- Analyze stack

All of these windows are accessible from the Window and then Debugging main menu within NetBeans.

Breakpoints

NetBeans provides both a simple approach to setting breakpoints and a more comprehensive approach that offers many additional features. Breakpoints can be easily added into Java source code by clicking on the gutter on the left-hand side of a line of Java source code. When a breakpoint is set, a small pink square is shown in the gutter and the entire line of source code is also highlighted in the same color. Clicking on the breakpoint square in the gutter toggles the breakpoint on and off.

Once a breakpoint has been created, instead of removing it altogether, it can be disabled by right-clicking on the breakpoint in the gutter and selecting the Breakpoint and then Enabled menu options. This has the effect of keeping the breakpoint within your codebase, but execution of the application does not stop when the breakpoint is hit.

Creating a simple breakpoint like this can be a very powerful way of debugging applications. It allows you to stop the execution of an application when a line of code is hit. If we want a bit more control over a simple breakpoint, we can edit the breakpoint's properties by right-clicking on the breakpoint in the gutter and selecting the Breakpoint and then Properties menu options. This causes the Breakpoint Properties dialog to be displayed.

In this dialog, we can see the line number and the file that the breakpoint belongs to. The line number can be edited to move the breakpoint if it has been created on the wrong line. However, what is more interesting are the conditions that we can apply to the breakpoint. The Condition entry allows us to define a condition that has to be met for the breakpoint to stop the code execution.
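As a concrete, hypothetical example, consider a simple loop such as the following, where we might only be interested in stopping at one particular iteration:

public class ConditionalBreakpointDemo {
    public static void main(String[] args) {
        int total = 0;
        for (int i = 0; i < 100; i++) {
            // Set a breakpoint on the following line and add the condition i==20
            total += i;
        }
        System.out.println("Total: " + total);
    }
}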
For example, we can stop the code when the variable i is equal to 20 by adding the condition i==20. When we add conditions to a breakpoint, the breakpoint becomes known as a conditional breakpoint, and the icon in the gutter changes to a square with the lower-right quadrant removed.

We can also cause the execution of the application to halt at a breakpoint when the breakpoint has been hit a certain number of times. The Break when hit count is option can be set to Equal to, Greater than, or Multiple of to halt the execution of the application when the breakpoint has been hit the requisite number of times.

Finally, we can specify what actions occur when a breakpoint is hit. The Suspend dropdown allows us to define which threads are suspended when a breakpoint is hit. NetBeans can suspend All threads, Breakpoint thread, or no threads at all. The text that is displayed in the Output window can be defined via the Print Text edit box, and different breakpoint groups can be enabled or disabled via the Enable Group and Disable Group drop-down boxes. But what exactly is a breakpoint group?

Simply put, a breakpoint group is a collection of breakpoints that can all be set or unset at the same time. It is a way of categorizing breakpoints into similar collections, for example, all the breakpoints in a particular file, or all the breakpoints relating to exceptions or unit tests.

Breakpoint groups are created in the Breakpoints window. This is accessible by selecting the Debugging and then Breakpoints menu options from within the main NetBeans Window menu. To create a new breakpoint group, simply right-click on an existing breakpoint in the Breakpoints window and select the Move Into Group… and then New… menu options. The Set the Name of Breakpoints Group dialog is displayed, in which the name of the new breakpoint group can be entered. After creating a breakpoint group and assigning one or more breakpoints into it, the entire group of breakpoints can be enabled or disabled, or even deleted, by right-clicking on the group in the Breakpoints window and selecting the appropriate option. Any newly created breakpoint groups will also be available in the Breakpoint Properties window.

So far, we've seen how to create breakpoints that stop on a single line of code, and also how to create conditional breakpoints so that we can cause an application to stop when certain conditions occur for a breakpoint. These are excellent techniques to help debug applications. NetBeans, however, also provides the ability to create more advanced breakpoints so that we can get even more control over when the execution of applications is halted.

So, how do we create these breakpoints? These different types of breakpoints are all created from the Breakpoints window by right-clicking and selecting the New Breakpoint… menu option. In the New Breakpoint dialog, we can create different types of breakpoints by selecting the appropriate entry from the Breakpoint Type drop-down list. The preceding screenshot shows an example of creating a Class breakpoint. The following types of breakpoints can be created:

- Class: This creates a breakpoint that halts execution when a class is loaded, unloaded, or either event occurs.
- Exception: This stops execution when the specified exception is caught, uncaught, or either event occurs.
- Field: This creates a breakpoint that halts execution when a field on a class is accessed, modified, or either event occurs.
- Line: This stops execution when the specified line of code is executed.
  It acts in the same way as creating a breakpoint by clicking on the gutter of the Java source code editor window.
- Method: This creates a breakpoint that halts execution when a method is entered, exited, or when either event occurs. Optionally, the breakpoint can be created for all methods inside a specified class rather than a single method.
- Thread: This creates a breakpoint that stops execution when a thread is started, finished, or either event occurs.
- AWT/Swing Component: This creates a breakpoint that stops execution when a GUI component is accessed.

For each of these different types of breakpoints, conditions and actions can be specified in the same way as on simple line-based breakpoints.

The Variables debug window

The Variables debug window lists all the variables that are currently within the scope of execution of the application. This is therefore thread-specific, so if multiple threads are running at one time, the Variables window will only display variables in scope for the currently selected thread. In the Variables window, we can see the variables currently in scope for the selected thread, their type, and their value. To display variables for a thread other than the currently selected one, we must select an alternative thread via the Debugging window.

Using the triangle button to the left of each variable, we can expand variables and drill down into the properties within them. When a variable is a simple value (for example, an integer or a string), we can modify it, or any property within it, by altering the value in the Value column of the Variables window. The variable's value will then be changed within the running application to the newly entered value.

By default, the Variables window shows three columns (Name, Type, and Value). We can modify which columns are visible by pressing the column selection icon at the top-right of the window. Selecting this displays the Change Visible Columns dialog, from which we can select from the Name, String value, Type, and Value columns.

The Watches window

The Watches window allows us to see the contents of variables and expressions during a debugging session, as can be seen in the following screenshot. In this screenshot, we can see that the variable i is being displayed along with the expressions 10+10 and i+20. New expressions can be watched by clicking on the <Enter new watch> option or by right-clicking on the Java source code editor and selecting the New Watch… menu option.

Evaluating expressions

In addition to watching variables in a debugging session, NetBeans also provides the facility to evaluate expressions. Expressions can contain any Java code that is valid for the running scope of the application. So, for example, local variables, class variables, or new instances of classes can be evaluated.

To evaluate an expression, open the Evaluate Expression window by selecting the Debug and then Evaluate Expression menu options. Enter an expression to be evaluated in this window and press the Evaluate Code Fragment button at the bottom-right corner of the window. As a shortcut, pressing the Ctrl + Enter keys will also evaluate the code fragment.

Once an expression has been evaluated, it is shown in the Evaluation Result window. The Evaluation Result window shows a history of each expression that has previously been evaluated. Expressions can be added to the list of watched variables by right-clicking on the expression and selecting the Create Fixed Watch option.
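A small program such as the following hypothetical example provides some local state to experiment with. While stepping through the loop, the variable i can be added as a watch, and expressions such as 10+10 or i+20 can be watched or evaluated:

public class WatchDemo {
    public static void main(String[] args) {
        int accumulator = 0;
        for (int i = 0; i < 5; i++) {
            // Step through this line and watch i, i+20, and accumulator
            accumulator += i * 2;
        }
        System.out.println("Result: " + accumulator);
    }
}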
The Call Stack window

The Call Stack window displays the call stack for the currently executing thread. The call stack is displayed from top to bottom, with the currently executing frame at the top of the list. Double-clicking on any entry in the call stack opens up the corresponding source code in the Java editor within NetBeans.

Right-clicking on an entry in the call stack displays a pop-up menu with the choice to:

- Make Current: This makes the selected thread the current thread
- Pop To Here: This pops the execution of the call stack to the selected location
- Go To Source: This displays the selected code within the Java source editor
- Copy Stack: This copies the stack trace to the clipboard for use elsewhere

When debugging, it can be useful to change the stack frame of the currently executing thread by selecting the Pop To Here option from within the Call Stack window. Imagine the following code:

// Get some magic
int magic = getSomeMagicNumber();

// Perform calculation
performCalculation(magic);

During a debugging session, if after stepping over the getSomeMagicNumber() method we decided that the method has not worked as expected, our course of action would probably be to debug into the getSomeMagicNumber() method. But we've just stepped over the method, so what can we do? Well, we can stop the debugging session and start again, or repeat the operation that called this section of code and hope there are no changes to the application state that affect the method we want to debug.

A better solution, however, would be to select the line of code that calls the getSomeMagicNumber() method and pop the stack frame using the Pop To Here option. This has the effect of rewinding the code execution so that we can then step into the method and see what is happening inside it.

As well as the Pop To Here functionality, NetBeans also offers several menu options for manipulating the stack frame, namely:

- Make Callee Current: This makes the callee of the current method the currently executing stack frame
- Make Caller Current: This makes the caller of the current method the currently executing stack frame
- Pop Topmost Call: This pops one stack frame, making the calling method the currently executing stack frame

When moving around the call stack using these techniques, any operations performed by the currently executing method are not undone. So, for example, strange results may be seen if global or class-based variables are altered within a method and an entry is then popped from the call stack. Popping entries in the call stack is safest when no state changes are made within a method.

The call stack displayed in the Debugging window for each thread behaves in the same way as in the Call Stack window itself.

The Loaded Classes window

The Loaded Classes window displays a list of all the classes that are currently loaded, showing how many instances there are of each class, both as a number and as a percentage of the total number of classes loaded. Depending upon the number of external libraries (including the standard Java runtime libraries) being used, you may find it difficult to locate instances of your own classes in this window. Fortunately, the filter at the bottom of the window allows the list of classes to be filtered based upon an entered string. So, for example, entering the filter String will show all the classes with String in the fully qualified class name that are currently loaded, including java.lang.String and java.lang.StringBuffer.
Since the filter works on the fully qualified name of a class, entering a package name will show all the classes listed in that package and its subpackages. So, for example, entering a filter value of com.davidsalter.multithread would show only the classes listed in that package and its subpackages.

The Sessions window

Within NetBeans, it is possible to perform multiple debugging sessions, where either one project is being debugged multiple times, or, more commonly, multiple projects are being debugged at the same time, with one acting as a client application and the other acting as a server application. The Sessions window displays a list of the currently running debug sessions, allowing the developer control over which one is the current session. Right-clicking on any of the sessions listed in the window provides the following options:

- Make Current: This makes the selected session the currently active debugging session
- Scope: This debugs the current thread or all the threads in the selected session
- Language: This option shows the language of the application being debugged (Java)
- Finish: This finishes the selected debugging session
- Finish All: This finishes all the debugging sessions

The Sessions window shows the name of the debug session (for example, the main class being executed), its state (whether the application is Stopped or Running), and the language being debugged. Clicking the column selection icon at the top-right of the window allows the user to choose which columns are displayed in the window. The default choice is to display all columns except for the Host Name column, which displays the name of the computer the session is running on.

The Threads window

The Threads window displays a hierarchical list of the threads in use by the application currently being debugged. The current thread is displayed in bold. Double-clicking on any of the threads in the hierarchy makes that thread current. Similar to the Debugging window, threads can be made current, suspended, or interrupted by right-clicking on the thread and selecting the appropriate option. The default display for the Threads window is to show the thread's name and its state (Running, Waiting, or Sleeping). Clicking the column selection icon at the top-right of the window allows the user to choose which columns are displayed in the window.

The Sources window

The Sources window simply lists all of the source roots that NetBeans considers for the selected project. These are the only locations that NetBeans will search when looking for source code while debugging an application. If you find that you are debugging an application and you cannot step into code, the most likely scenario is that the source root for the code you wish to debug is not included in the Sources window. To add a new source root, right-click in the Sources window and select the Add Source Root option.

The Debugging window

The Debugging window allows us to see which threads are running while debugging our application. This window is, therefore, particularly useful when debugging multithreaded applications. In this window, we can see the different threads that are running within our application. For each thread, we can see the name of the thread and the call stack leading to the breakpoint. The current thread is highlighted with a green band along the left-hand edge of the window. Other threads created within our application are denoted with a yellow band, and system threads with a gray band.
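To explore the Debugging window with more than one thread, a small hypothetical program such as the following can be used. Note that each worker thread is given a descriptive name, which makes it much easier to identify in the window:

public class ThreadsDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            for (int i = 0; i < 5; i++) {
                // Set a breakpoint here to suspend the worker threads
                System.out.println(Thread.currentThread().getName() + ": iteration " + i);
            }
        };

        // Give each thread a descriptive name rather than the default Thread-0, Thread-1, ...
        Thread loader = new Thread(worker, "data-loader");
        Thread renderer = new Thread(worker, "renderer");

        loader.start();
        renderer.start();

        loader.join();
        renderer.join();
    }
}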
We can make any of the threads the current thread by right-clicking on it and selecting the Make Current menu option. When we do this, the Variables and Call Stack windows are updated to show new information for the selected thread. The current thread can also be selected by clicking on the Debug and then Set Current Thread… menu options. Upon selecting this, a list of running threads is shown from which the current thread can be selected.

Right-clicking on a thread and selecting the Resume option will cause the selected thread to continue execution until it hits another breakpoint. For each thread that is running, we can also Suspend, Interrupt, and Resume the thread by right-clicking on the thread and choosing the appropriate action. In each thread listing, the current method call stack is displayed for each thread. This can be manipulated in the same way as from the Call Stack window.

When debugging multithreaded applications, new breakpoints can be hit within different threads at any time. NetBeans helps us with multithreaded debugging by not automatically switching the user interface to a different thread when a breakpoint is hit on a non-current thread. When a breakpoint is hit on any thread other than the current thread, an indication is displayed at the bottom of the Debugging window, stating New Breakpoint Hit (an example of this can be seen in the previous window). Clicking on the icon to the right of the message shows all the breakpoints that have been hit, together with the thread name in which they occur. Selecting the alternate thread will cause the relevant breakpoint to be opened within NetBeans and highlighted in the appropriate Java source code file.

NetBeans provides several filters on the Debugging window so that we can show more or less information as appropriate. From left to right, these filter buttons allow us to:

- Show less (suspended and current threads only)
- Show thread groups
- Show suspend/resume table
- Show system threads
- Show monitors
- Show qualified names
- Sort by suspended/resumed state
- Sort by name
- Sort by default

Debugging multithreaded applications can be a lot easier if you give your threads names. The thread's name is displayed in the Debugging window, and it's a lot easier to understand what a thread with a proper name is doing as opposed to a thread called Thread-1.

Deadlock detection

When debugging multithreaded applications, one of the problems we can encounter is a deadlock between executing threads. A deadlock occurs when two or more threads become blocked forever because each is waiting for a lock or resource held by one of the others. In Java, this typically occurs when the synchronized keyword is used.

NetBeans allows us to easily check for deadlocks using the Check for Deadlock tool available on the Debug menu. When a deadlock is detected using this tool, the state of the deadlocked threads is set to On Monitor in the Threads window. Additionally, the threads are marked as deadlocked in the Debugging window. Each deadlocked thread is displayed with a red band on the left-hand border and the Deadlock detected warning message is displayed.
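The following hypothetical program deadlocks almost immediately and makes a convenient test case for the Check for Deadlock tool: the two threads acquire the same pair of locks in opposite orders, so each ends up waiting for a monitor held by the other.

public class DeadlockDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) {
        Thread first = new Thread(() -> {
            synchronized (LOCK_A) {
                pause();                    // give the other thread time to take LOCK_B
                synchronized (LOCK_B) {     // blocks forever: LOCK_B is held by "second"
                    System.out.println("first acquired both locks");
                }
            }
        }, "first");

        Thread second = new Thread(() -> {
            synchronized (LOCK_B) {
                pause();                    // give the other thread time to take LOCK_A
                synchronized (LOCK_A) {     // blocks forever: LOCK_A is held by "first"
                    System.out.println("second acquired both locks");
                }
            }
        }, "second");

        first.start();
        second.start();
    }

    private static void pause() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}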
Analyze Stack Window

When running an application within NetBeans, if an exception is thrown and not caught, the stack trace will be displayed in the Output window, allowing the developer to see exactly where errors have occurred. From the following screenshot, we can easily see that a NullPointerException was thrown from within the FaultyImplementation class in the doUntestedOperation() method at line 16. Looking before this in the stack trace (that is, the entry underneath), we can see that the doUntestedOperation() method was called from within the main() method of the Main class at line 21.

In the preceding example, the FaultyImplementation class is defined as follows:

public class FaultyImplementation {
    public void doUntestedOperation() {
        throw new NullPointerException();
    }
}

Java provides an invaluable feature to developers, allowing us to easily see where exceptions are thrown and the sequence of events that led to the exception being thrown. NetBeans, however, enhances the display of the stack traces by making the class and line numbers clickable hyperlinks which, when clicked on, navigate to the appropriate line in the code. This allows us to easily delve into a stack trace and view the code at all the levels of the stack trace. In the previous screenshot, we can click on the hyperlinks FaultyImplementation.java:16 and Main.java:21 to take us to the appropriate line in the appropriate Java file.

This is an excellent time-saving feature when developing applications, but what do we do when someone e-mails us a stack trace to look at an error in a production system? How do we manage stack traces that are stored in log files? Fortunately, NetBeans provides an easy way to turn a stack trace into clickable hyperlinks so that we can browse through the stack trace without running the application.

To load and manage stack traces in NetBeans, the first step is to copy the stack trace onto the system clipboard. Once the stack trace has been copied onto the clipboard, Analyze Stack Window can be opened within NetBeans by selecting the Window and then Debugging and then Analyze Stack menu options (the default installation of NetBeans has no keyboard shortcut assigned to this operation).

Analyze Stack Window will default to showing the stack trace that is currently in the system clipboard. If no stack trace is in the clipboard, or any other data is in the system's clipboard, Analyze Stack Window will be displayed with no contents. To populate the window, copy a stack trace into the system's clipboard and select the Insert StackTrace From Clipboard button. Once a stack trace has been displayed in Analyze Stack Window, clicking on the hyperlinks in it will navigate to the appropriate location in the Java source files, just as it does from the Output window when an exception is thrown from a running application.

You can only navigate to source code from a stack trace if the project containing the relevant source code is open in the selected project group.

Variable formatters

When debugging an application, the NetBeans debugger can display the values of simple primitives in the Variables window. As we saw previously, we can also display the toString() representation of a variable if we select the appropriate columns to display in the window. Sometimes when debugging, however, the toString() representation is not the best way to display formatted information in the Variables window. Consider, for example, displaying the value of a complex number class of the kind used in high school math.
The ComplexNumber class being debugged in this example is defined as:

public class ComplexNumber {

    private double realPart;
    private double imaginaryPart;

    public ComplexNumber(double realPart, double imaginaryPart) {
        this.realPart = realPart;
        this.imaginaryPart = imaginaryPart;
    }

    @Override
    public String toString() {
        return "ComplexNumber{" + "realPart=" + realPart + ", imaginaryPart=" + imaginaryPart + '}';
    }

    // Getters and Setters omitted for brevity…
}

Looking at this class, we can see that it essentially holds two members: realPart and imaginaryPart. The toString() method outputs a string detailing the name of the object and its members, which would be very useful when writing ComplexNumbers to log files, for example:

ComplexNumber{realPart=1.0, imaginaryPart=2.0}

When debugging, however, this is a fairly complicated string to look at and comprehend, particularly when there is a lot of debugging information being displayed. NetBeans, however, allows us to define custom formatters for classes that detail how an object will be displayed in the Variables window when being debugged.

To define a custom formatter, select the Java option from the NetBeans Options dialog and then select the Java Debugger tab. From this tab, select the Variable Formatters category. On this screen, all the variable formatters that are defined within NetBeans are shown. To create a new variable formatter, select the Add… button to display the Add Variable Formatter dialog.

In the Add Variable Formatter dialog, we need to enter a Formatter Name and a list of Class types that NetBeans will apply the formatting to when displaying values in the debugger. To apply the formatter to multiple classes, enter the different classes separated by commas.

The value that is to be formatted is entered in the Value formatted as a result of code snippet field. This field takes the scope of the object being debugged. So, for example, to output the ComplexNumber class, we can enter the custom formatter as:

"("+realPart+", "+imaginaryPart+"i)"

We can see that the formatter is built up by concatenating static strings and the values of the members realPart and imaginaryPart. We can see the results of debugging variables using custom formatters in the following screenshot:

Debugging remote applications

The NetBeans debugger provides rapid access for debugging local applications, that is, applications launched from within NetBeans on the developer's machine. What happens, though, when we want to debug a remote application? A remote application isn't necessarily hosted on a separate server to your development machine; it is any application running in a JVM that was not launched by NetBeans.

To debug a remote application, the NetBeans debugger can be "attached" to the remote application. Then, to all intents and purposes, the application can be debugged in exactly the same way as a local application, as described in the previous sections of this article.

To attach to a remote application, select the Debug and then Attach Debugger… menu options. On the Attach dialog, the connector (SocketAttach, ProcessAttach, or SocketListen) must be specified to connect to the remote application. The appropriate connection details must then be entered to attach the debugger. For example, the process ID must be entered for the ProcessAttach connector, and the host and port must be specified for the SocketAttach connector.
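For the SocketAttach connector to work, the remote JVM must have been started with the JDWP debugging agent enabled and listening on a socket. A typical command line (for Java 7 or 8) looks like the following; myapplication.jar is a placeholder for your own application, and port 5005 is just a common convention:

java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -jar myapplication.jar

The host name of the machine running this JVM, together with port 5005, can then be entered in the Attach dialog for the SocketAttach connector.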
Profiling applications

Learning how to debug applications is an essential technique in software development. Another essential technique that is often overlooked is profiling applications. Profiling applications involves measuring various metrics, such as the amount of heap memory used or the number of loaded classes or running threads. By profiling applications, we can gain an understanding of what our applications are actually doing, and as such we can optimize them and make them function better.

NetBeans provides first-class profiling tools that are easy to use and provide results that are easy to interpret. The NetBeans profiler allows us to profile three specific areas:

- Application monitoring
- Performance monitoring
- Memory monitoring

Each of these monitoring tools is accessible from the Profile menu within NetBeans. To commence profiling, select the Profile and then Profile Project menu options. After instructing NetBeans to profile a project, the profiler starts by providing a choice of the type of profiling to perform.

Testing applications

Writing tests for applications is probably one of the most important aspects of modern software development. NetBeans provides the facility to write and run both JUnit and TestNG tests and test suites. In this section, we'll provide details on how NetBeans allows us to write and run these types of tests, but we'll assume that you have some knowledge of either JUnit or TestNG.

TestNG support is provided by default with NetBeans; however, due to license concerns, JUnit may not have been installed when you installed NetBeans. If JUnit support is not installed, it can easily be added through the NetBeans Plugins system.

In a project, NetBeans creates two separate source roots: one for application sources and the other for test sources. This allows us to keep tests separate from application source code so that when we ship applications, we do not need to ship tests with them. This separation of application source code and test source code enables us to write better tests and have less coupling between tests and applications. The best situation is for the test source root to have a dependency on application classes and for the application classes to have no dependency on the tests that we have written.

To write a test, we must first have a project. Any type of Java project can have tests added into it. To add tests into a project, we can use the New File wizard. In the Unit Tests category, there are templates for:

- JUnit Tests
- Tests for Existing Class (this is for JUnit tests)
- Test Suite (this is for JUnit tests)
- TestNG Test Case
- TestNG Test Suite

When creating classes for these types of tests, NetBeans provides the option to automatically generate code; this is usually a good starting point for writing test classes. When executing tests, NetBeans iterates through the test packages in a project, looking for classes that are suffixed with the word Test. It is therefore essential to name tests properly to ensure they are executed correctly.

Once tests have been created, NetBeans provides several methods for running them. The first method is to run all the tests that we have defined for an application. Selecting the Run and then Test Project menu options runs all of the tests defined for a project. The type of the project doesn't matter (Java SE or Java EE), nor does it matter whether the project uses Maven or the NetBeans project build system (Ant-based projects are also supported if they have a valid test target); all tests for the project will be run when selecting this option.
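As a minimal sketch of the kind of test that NetBeans discovers and runs, consider the following hypothetical JUnit 4 test for the ComplexNumber class shown earlier. Note that the class name ends with the word Test, so it is picked up automatically:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ComplexNumberTest {

    @Test
    public void toStringContainsBothParts() {
        // Expected output matches the toString() format shown earlier in this article
        ComplexNumber number = new ComplexNumber(1.0, 2.0);
        assertEquals("ComplexNumber{realPart=1.0, imaginaryPart=2.0}", number.toString());
    }
}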
After running the tests, the Test Results window will be displayed, highlighting successful tests in green and failed tests in red. In the Test Results window, we have several options to help categorize and manage the tests:

- Rerun all of the tests
- Rerun the failed tests
- Show only the passed tests
- Show only the failed tests
- Show errors
- Show aborted tests
- Show skipped tests
- Locate previous failure
- Locate next failure
- Always open test result window
- Always open test results in a new tab

The second option within NetBeans for running tests is to run all the tests in a package or class. To perform these operations, simply right-click on a package in the Projects window and select Test Package, or right-click on a Java class in the Projects window and select Test File.

The final option for running tests is to execute a single test in a class. To perform this operation, right-click on a test in the Java source code editor and select the Run Focussed Test Method menu option.

After creating tests, how do we keep them up to date when we add new methods to application code? We can keep test suites up to date by manually editing them and adding new methods corresponding to new application code, or we can use the Create/Update Tests menu. Selecting the Tools and then Create/Update Tests menu options displays the Create Tests dialog, which allows us to edit the existing test classes and add new methods into them, based upon the existing application classes.

Summary

In this article, we looked at the typical tasks that a developer does on a day-to-day basis when writing applications. We saw how NetBeans can help us to run and debug applications and how to profile applications and write tests for them. Finally, we took a brief look at TDD, and saw how the Red-Green-Refactor cycle can be used to help us develop more stable applications.

Resources for Article:

Further resources on this subject:
- Contexts and Dependency Injection in NetBeans [article]
- Creating a JSF composite component [article]
- Getting to know NetBeans [article]