
How-To Tutorials - Programming

1081 Articles

Top two features of GSON

Packt
03 Sep 2013
5 min read
(For more resources related to this topic, see here.)

Java objects support

Objects in GSON are referred to as types of JsonElement. The GSON library can convert any user-defined class object to and from its JSON representation. The Student class is a user-defined class, and GSON can serialize any Student object to JSON. The Student.java class is as follows:

public class Student {
    private String name;
    private String subject;
    private int mark;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getSubject() { return subject; }
    public void setSubject(String subject) { this.subject = subject; }
    public int getMark() { return mark; }
    public void setMark(int mark) { this.mark = mark; }
}

The code for JavaObjectFeaturesUse.java is as follows:

import com.google.gson.Gson;
import com.packt.chapter.vo.Student;

public class JavaObjectFeaturesUse {
    public static void main(String[] args) {
        Gson gson = new Gson();
        Student aStudent = new Student();
        aStudent.setName("Sandeep");
        aStudent.setMark(128);
        aStudent.setSubject("Computer Science");
        String studentJson = gson.toJson(aStudent);
        System.out.println(studentJson);
        Student anotherStudent = gson.fromJson(studentJson, Student.class);
        System.out.println(anotherStudent instanceof Student);
    }
}

The output of the preceding code is as follows:

{"name":"Sandeep","subject":"Computer Science","mark":128}
true

The preceding code creates a Student object with the name Sandeep, the subject Computer Science, and a mark of 128. A Gson object is then instantiated and the Student object is passed as a parameter to the toJson() method, which returns a string containing the JSON representation of the Java object. This string is printed as the first line in the console. The JSON representation of the Student object is a collection of key/value pairs; each Java property of the Student class becomes a key in the JSON string. In the last part of the code, the fromJson() method takes the generated JSON string as the first input parameter and Student.class as the second parameter to convert the JSON string back into a Student Java object. The last line of the code uses the instanceof operator to verify that the object produced by the fromJson() method is of type Student. The console prints true, and if we print the field values, we get the same values as in the JSON.

Serialization and deserialization

GSON has built-in serializers for some classes, such as the Java wrapper types (Integer, Long, Double, and so on), java.net.URL, java.net.URI, java.util.Date, and so on. Let's see an example:

import java.util.Date;
import com.google.gson.Gson;

public class InbuiltSerializerFeature {
    public static void main(String[] args) {
        Date aDateJson = new Date();
        Gson gson = new Gson();
        String jsonDate = gson.toJson(aDateJson);
        System.out.println(jsonDate);
    }
}

The output of the preceding code is as follows:

May 29, 2013 8:55:07 PM

The preceding code serializes a java.util.Date object to its JSON representation using GSON's built-in Date serializer. So far, you have seen how GSON serializes and deserializes Java objects using its built-in support.
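The default Date output shown above depends on the JVM's locale, which makes the JSON harder to exchange between systems. GsonBuilder lets you pin the format with its setDateFormat() method. The following is a minimal sketch added for illustration; the class name and the date pattern are assumptions, not code from the article:

import java.util.Date;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

public class DateFormatFeature {
    public static void main(String[] args) {
        // Build a Gson instance that writes dates in a fixed, locale-independent pattern.
        // The pattern chosen here is only an example; use whatever your system requires.
        Gson gson = new GsonBuilder().setDateFormat("yyyy-MM-dd'T'HH:mm:ss").create();
        String jsonDate = gson.toJson(new Date());
        System.out.println(jsonDate);           // e.g. "2013-05-29T20:55:07"
        Date parsed = gson.fromJson(jsonDate, Date.class);
        System.out.println(parsed != null);     // true: the same pattern is used for parsing
    }
}

Because the same Gson instance applies the pattern on both toJson() and fromJson(), the Date value round-trips without a custom serializer.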
GSON also provides a custom serialization feature for user-defined Java class objects, so developers can control exactly how an object is converted. Let's see how it works. The following code is an example of a custom serializer:

class StudentTypeSerializer implements JsonSerializer<Student> {
    @Override
    public JsonElement serialize(Student student, Type type, JsonSerializationContext context) {
        JsonObject obj = new JsonObject();
        obj.addProperty("studentname", student.getName());
        obj.addProperty("subjecttaken", student.getSubject());
        obj.addProperty("marksecured", student.getMark());
        return obj;
    }
}

The following code is an example of a custom deserializer:

class StudentTypeDeserializer implements JsonDeserializer<Student> {
    @Override
    public Student deserialize(JsonElement jsonElement, Type type, JsonDeserializationContext context) throws JsonParseException {
        JsonObject jsonObject = jsonElement.getAsJsonObject();
        Student aStudent = new Student();
        aStudent.setName(jsonObject.get("studentname").getAsString());
        aStudent.setSubject(jsonObject.get("subjecttaken").getAsString());
        aStudent.setMark(jsonObject.get("marksecured").getAsInt());
        return aStudent;
    }
}

The following code tests the custom serializer and deserializer:

import java.lang.reflect.Type;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.JsonDeserializationContext;
import com.google.gson.JsonDeserializer;
import com.google.gson.JsonElement;
import com.google.gson.JsonObject;
import com.google.gson.JsonParseException;
import com.google.gson.JsonSerializationContext;
import com.google.gson.JsonSerializer;

public class CustomSerializerFeature {
    public static void main(String[] args) {
        GsonBuilder gsonBuilder = new GsonBuilder();
        gsonBuilder.registerTypeAdapter(Student.class, new StudentTypeSerializer());
        // The deserializer must be registered as well; otherwise fromJson() cannot map
        // the custom keys (studentname, subjecttaken, marksecured) back onto Student.
        gsonBuilder.registerTypeAdapter(Student.class, new StudentTypeDeserializer());
        Gson gson = gsonBuilder.create();
        Student aStudent = new Student();
        aStudent.setName("Sandeep");
        aStudent.setMark(150);
        aStudent.setSubject("Arithmetic");
        String studentJson = gson.toJson(aStudent);
        System.out.println("Custom Serializer : Json String Representation ");
        System.out.println(studentJson);
        Student anotherStudent = gson.fromJson(studentJson, Student.class);
        System.out.println("Custom DeSerializer : Java Object Creation");
        System.out.println("Student Name " + anotherStudent.getName());
        System.out.println("Student Mark " + anotherStudent.getMark());
        System.out.println("Student Subject " + anotherStudent.getSubject());
        System.out.println("is anotherStudent of type Student " + (anotherStudent instanceof Student));
    }
}

The output of the preceding code is as follows:

Custom Serializer : Json String Representation 
{"studentname":"Sandeep","subjecttaken":"Arithmetic","marksecured":150}
Custom DeSerializer : Java Object Creation
Student Name Sandeep
Student Mark 150
Student Subject Arithmetic
is anotherStudent of type Student true

Summary

This section explained GSON's support for Java objects and how to implement serialization and deserialization, including custom serializers and deserializers for user-defined classes.

Resources for Article:

Further resources on this subject:

Play Framework: Binding and Validating Objects and Rendering JSON Output [Article]
Trapping Errors by Using Built-In Objects in JavaScript Testing [Article]
Class-less Objects in JavaScript [Article]
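When all the custom adapters in this article are doing is renaming keys, a lighter-weight option is Gson's @SerializedName annotation from com.google.gson.annotations, which maps a field to a different JSON key without any custom serializer code. The sketch below is an illustration only; the AnnotatedStudent variant of the Student class is made up for this example and is not code from the article:

import com.google.gson.Gson;
import com.google.gson.annotations.SerializedName;

public class AnnotatedStudentDemo {
    // A hypothetical variant of Student whose fields are renamed in the JSON output.
    static class AnnotatedStudent {
        @SerializedName("studentname") String name;
        @SerializedName("subjecttaken") String subject;
        @SerializedName("marksecured") int mark;
    }

    public static void main(String[] args) {
        AnnotatedStudent s = new AnnotatedStudent();
        s.name = "Sandeep";
        s.subject = "Arithmetic";
        s.mark = 150;
        Gson gson = new Gson();
        String json = gson.toJson(s);
        System.out.println(json); // {"studentname":"Sandeep","subjecttaken":"Arithmetic","marksecured":150}
        AnnotatedStudent back = gson.fromJson(json, AnnotatedStudent.class);
        System.out.println(back.name); // Sandeep
    }
}

Custom serializers remain the right tool when the JSON structure differs from the object structure; for simple renames, the annotation keeps both directions in sync with far less code.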


Scratching the Tip of the Iceberg

Packt
03 Sep 2013
15 min read
Boost is a huge collection of libraries. Some of those libraries are small and meant for everyday use and others require a separate article to describe all of their features. This article is devoted to some of those big libraries and gives you some basics to start with.

The first two recipes will explain the usage of Boost.Graph. It is a big library with an insane number of algorithms. We'll see some basics and probably the most important part of it: visualization of graphs.

We'll also see a very useful recipe for generating true random numbers. This is a very important requirement for writing secure cryptography systems.

Some C++ standard libraries lack math functions. We'll see how that can be fixed using Boost. But the format of this article leaves no space to describe all of the functions.

Writing test cases is described in the Writing test cases and Combining multiple test cases in one test module recipes. This is important for any production-quality system.

The last recipe is about a library that helped me in many courses during my university days. Images can be created and modified using it. I personally used it to visualize different algorithms, hide data in images, sign images, and generate textures.

Unfortunately, even this article cannot tell you about all of the Boost libraries. Maybe someday I'll write another book... and then a few more.

Working with graphs

Some tasks require a graphical representation of data. Boost.Graph is a library that was designed to provide a flexible way of constructing and representing graphs in memory. It also contains a lot of algorithms to work with graphs, such as topological sort, breadth first search, depth first search, and Dijkstra shortest paths. Well, let's perform some basic tasks with Boost.Graph!

Getting ready

Only basic knowledge of C++ and templates is required for this recipe.

How to do it...

In this recipe, we'll describe a graph type, create a graph of that type, add some vertexes and edges to the graph, and search for a specific vertex. That should be enough to start using Boost.Graph.
1. We start with describing the graph type:

#include <boost/graph/adjacency_list.hpp>
#include <string>

typedef std::string vertex_t;
typedef boost::adjacency_list<
    boost::vecS
    , boost::vecS
    , boost::bidirectionalS
    , vertex_t
> graph_type;

2. Now we construct it:

graph_type graph;

3. Let's use a non-portable trick that speeds up graph construction:

static const std::size_t vertex_count = 5;
graph.m_vertices.reserve(vertex_count);

4. Now we are ready to add vertexes to the graph:

typedef boost::graph_traits<graph_type>::vertex_descriptor descriptor_t;
descriptor_t cpp = boost::add_vertex(vertex_t("C++"), graph);
descriptor_t stl = boost::add_vertex(vertex_t("STL"), graph);
descriptor_t boost = boost::add_vertex(vertex_t("Boost"), graph);
descriptor_t guru = boost::add_vertex(vertex_t("C++ guru"), graph);
descriptor_t ansic = boost::add_vertex(vertex_t("C"), graph);

5. It is time to connect vertexes with edges:

boost::add_edge(cpp, stl, graph);
boost::add_edge(stl, boost, graph);
boost::add_edge(boost, guru, graph);
boost::add_edge(ansic, guru, graph);

6. We make a function that searches for a vertex:

template <class GraphT>
void find_and_print(const GraphT& g, boost::string_ref name) {

7. Now we will write code that gets iterators to all vertexes:

    typedef typename boost::graph_traits<graph_type>::vertex_iterator vert_it_t;
    vert_it_t it, end;
    boost::tie(it, end) = boost::vertices(g);

8. It's time to run a search for the required vertex:

    typedef boost::graph_traits<graph_type>::vertex_descriptor desc_t;
    for (; it != end; ++it) {
        desc_t desc = *it;
        if (boost::get(boost::vertex_bundle, g)[desc] == name.data()) {
            break;
        }
    }
    assert(it != end);
    std::cout << name << '\n';
} /* find_and_print */

How it works...

In step 1, we are describing what our graph must look like and upon what types it must be based. boost::adjacency_list is a class that represents graphs as a two-dimensional structure, where the first dimension contains vertexes and the second dimension contains edges for that vertex. boost::adjacency_list should be the default choice for representing a graph; it suits most cases.

The first template parameter of boost::adjacency_list describes the structure used to represent the edge list for each of the vertexes; the second one describes the structure used to store vertexes. We can choose different STL containers for those structures using specific selectors, as listed in the following table:

Selector            STL container
boost::vecS         std::vector
boost::listS        std::list
boost::slistS       std::slist
boost::setS         std::set
boost::multisetS    std::multiset
boost::hash_setS    std::hash_set

The third template parameter is used to make an undirected, directed, or bidirectional graph. Use the boost::undirectedS, boost::directedS, and boost::bidirectionalS selectors respectively. The fourth template parameter describes the datatype that will be used as the vertex; in our example, we chose std::string. We can also provide a datatype for edges as an additional template parameter.

Step 2 is trivial, but at step 3 you will see a non-portable way to speed up graph construction. In our example, we use std::vector as the container for storing vertexes, so we can force it to reserve memory for the required number of vertexes. This leads to fewer memory allocations/deallocations and copy operations during insertion of vertexes into the graph. This step is non-portable because it is highly dependent on the current implementation of boost::adjacency_list and on the chosen container type for storing vertexes.
At step 4, we see how vertexes can be added to the graph. Note how boost::graph_traits<graph_type> has been used. The boost::graph_traits class is used to get types that are specific to a graph type. We'll see its usage and the description of some graph-specific types later in this article.

Step 5 shows what we need to do to connect vertexes with edges. If we had provided a datatype for the edges, adding an edge would look as follows:

boost::add_edge(ansic, guru, edge_t(initialization_parameters), graph)

Note that at step 6 the graph type is a template parameter. This is recommended to achieve better code reusability and make this function work with other graph types.

At step 7, we see how to iterate over all of the vertexes of the graph. The type of vertex iterator is received from boost::graph_traits. The function boost::tie is a part of Boost.Tuple and is used for getting values from tuples into variables. So calling boost::tie(it, end) = boost::vertices(g) will put the begin iterator into the it variable and the end iterator into the end variable.

It may come as a surprise to you, but dereferencing a vertex iterator does not return vertex data. Instead, it returns the vertex descriptor desc, which can be used in boost::get(boost::vertex_bundle, g)[desc] to get vertex data, just as we have done in step 8. The vertex descriptor type is used in many of the Boost.Graph functions; we saw its use in the edge construction function in step 5.

As already mentioned, the Boost.Graph library contains the implementation of many algorithms. You will find many search policies implemented, but we won't discuss them in this article. We will limit this recipe to only the basics of the graph library.

There's more...

The Boost.Graph library is not a part of C++11 and it won't be a part of C++1y. The current implementation does not support C++11 features. If we are using vertexes that are heavy to copy, we may gain speed using the following trick:

vertex_descriptor desc = boost::add_vertex(graph);
boost::get(boost::vertex_bundle, g_)[desc] = std::move(vertex_data);

It avoids the copy construction of boost::add_vertex(vertex_data, graph) and uses default construction with move assignment instead. The efficiency of Boost.Graph depends on multiple factors, such as the underlying container types, graph representation, and edge and vertex datatypes.

Visualizing graphs

Making programs that manipulate graphs was never easy because of issues with visualization. When we work with STL containers such as std::map and std::vector, we can always print the container's contents and see what is going on inside. But when we work with complex graphs, it is hard to visualize the content in a clear way: too many vertexes and too many edges.

In this recipe, we'll take a look at the visualization of Boost.Graph using the Graphviz tool.

Getting ready

To visualize graphs, you will need the Graphviz visualization tool. Knowledge of the preceding recipe is also required.

How to do it...

Visualization is done in two phases. In the first phase, we make our program output the graph's description in a text format; in the second phase, we import the output from the first step into some visualization tool. The numbered steps in this recipe are all about the first phase.
1. Let's write the std::ostream operator for graph_type as done in the preceding recipe:

#include <boost/graph/graphviz.hpp>

std::ostream& operator<<(std::ostream& out, const graph_type& g) {
    detail::vertex_writer<graph_type> vw(g);
    boost::write_graphviz(out, g, vw);
    return out;
}

2. The detail::vertex_writer structure, used in the preceding step, must be defined as follows:

namespace detail {
    template <class GraphT>
    class vertex_writer {
        const GraphT& g_;
    public:
        explicit vertex_writer(const GraphT& g) : g_(g) {}

        template <class VertexDescriptorT>
        void operator()(std::ostream& out, const VertexDescriptorT& d) const {
            out << " [label=\"" << boost::get(boost::vertex_bundle, g_)[d] << "\"]";
        }
    }; // vertex_writer
} // namespace detail

That's all. Now, if we output the graph from the previous recipe using the std::cout << graph; command, the output can be used to create graphical pictures using the dot command-line utility:

$ dot -Tpng -o dot.png

digraph G {
0 [label="C++"];
1 [label="STL"];
2 [label="Boost"];
3 [label="C++ guru"];
4 [label="C"];
0->1 ;
1->2 ;
2->3 ;
4->3 ;
}

The output of the preceding command is depicted in the following figure. We can also use the Gvedit or XDot programs for visualization if the command line frightens you.

How it works...

The Boost.Graph library contains functions to output graphs in Graphviz (DOT) format. If we write boost::write_graphviz(out, g) with two parameters in step 1, the function will output a graph picture with vertexes numbered from 0. That's not very useful, so we provide an instance of the vertex_writer class that outputs vertex names.

As we can see in step 2, the format of the output must be DOT, which is understood by the Graphviz tool. You may need to read the Graphviz documentation for more info about the DOT format. If you wish to add some data to the edges during visualization, provide an instance of an edge visualizer as the fourth parameter to boost::write_graphviz.

There's more...

C++11 does not contain Boost.Graph or the tools for graph visualization. But you do not need to worry—there are a lot of other graph formats and visualization tools, and Boost.Graph can work with plenty of them.

Using a true random number generator

I know of many examples of commercial products that use incorrect methods for getting random numbers. It's a shame that some companies still use rand() in cryptography and banking software. Let's see how to get a fully random uniform distribution using Boost.Random that is suitable for banking software.

Getting ready

Basic knowledge of C++ is required for this recipe. Knowledge of different types of distributions will also be helpful. The code in this recipe requires linking against the boost_random library.

How to do it...

To create a true random number, we need some help from the operating system or processor. This is how it can be done using Boost:

1. We'll need to include the following headers:

#include <boost/config.hpp>
#include <boost/random/random_device.hpp>
#include <boost/random/uniform_int_distribution.hpp>

2. Advanced random number providers have different names under different platforms:

static const std::string provider =
#ifdef BOOST_WINDOWS
    "Microsoft Strong Cryptographic Provider"
#else
    "/dev/urandom"
#endif
;

3. Now we are ready to initialize the generator with Boost.Random:

boost::random_device device(provider);

4. Let's get a uniform distribution that returns a value between 1000 and 65535:

boost::random::uniform_int_distribution<unsigned short> random(1000);

That's it.
Now we can get true random numbers using the random(device) call.

How it works...

Why does the rand() function not suit banking? Because it generates pseudo-random numbers, which means that a hacker could predict the next generated number. This is an issue with all pseudo-random number algorithms. Some algorithms are easier to predict and some harder, but it's still possible.

That's why we are using boost::random_device in this example (see step 3). That device gathers information about random events from all around the operating system to construct an unpredictable hardware-generated number. Examples of such events are delays between pressed keys, delays between some of the hardware interrupts, and the internal CPU random number generator.

Operating systems may have more than one such random number generator. In our example for POSIX systems, we used /dev/urandom instead of the more secure /dev/random because the latter remains in a blocked state until enough random events have been captured by the OS. Waiting for entropy could take seconds, which is usually unsuitable for applications. Use /dev/random to create long-lifetime GPG/SSL/SSH keys.

Now that we are done with generators, it's time to move to step 4 and talk about distribution classes. While the generator just generates numbers (usually uniformly distributed), a distribution class maps one distribution onto another. In step 4, we made a uniform distribution that returns a random number of unsigned short type. The parameter 1000 means that the distribution must return numbers greater than or equal to 1000. We can also provide the maximum number as a second parameter, which by default is equal to the maximum value storable in the return type.

There's more...

Boost.Random has a huge number of true/pseudo-random generators and distributions for different needs. Avoid copying distributions and generators; this could turn out to be an expensive operation.

C++11 has support for different distribution classes and generators. You will find all of the classes from this example in the <random> header in the std:: namespace. The Boost.Random libraries do not use C++11 features, and they are not really required for that library either. Should you use the Boost implementation or the STL? Boost provides better portability across systems; however, some STL implementations may have assembly-optimized implementations and might provide some useful extensions.

Using portable math functions

Some projects require specific trigonometric functions, a library for numerically solving ordinary differential equations, and working with distributions and constants. All of those parts of Boost.Math would be hard to fit into even a separate book. A single recipe definitely won't be enough. So let's focus on very basic everyday-use functions to work with float types. We'll write a portable function that checks an input value for infinity and not-a-number (NaN) values and changes the sign if the value is negative.

Getting ready

Basic knowledge of C++ is required for this recipe. Those who know the C99 standard will find a lot in common in this recipe.

How to do it...
Perform the following steps to check the input value for infinity and NaN values and change the sign if the value is negative:

1. We'll need the following headers:

#include <boost/math/special_functions.hpp>
#include <cassert>

2. Asserting for infinity and NaN can be done like this:

template <class T>
void check_float_inputs(T value) {
    assert(!boost::math::isinf(value));
    assert(!boost::math::isnan(value));

3. Use the following code to change the sign:

    if (boost::math::signbit(value)) {
        value = boost::math::changesign(value);
    }
    // ...
} // check_float_inputs

That's it! Now we can check that check_float_inputs(std::sqrt(-1.0)) and check_float_inputs(std::numeric_limits<double>::max() * 2.0) will cause asserts.

How it works...

Real types have specific values that cannot be checked using equality operators. For example, if the variable v contains NaN, assert(v != v) may or may not pass depending on the compiler. For such cases, Boost.Math provides functions that can reliably check for infinity and NaN values.

Step 3 contains the boost::math::signbit function, which requires clarification. This function returns the sign bit, which is 1 when the number is negative and 0 when the number is positive. In other words, it returns true if the value is negative.

Looking at step 3, some readers might ask, "Why can't we just multiply by -1 instead of calling boost::math::changesign?" We can. But multiplication may work slower than boost::math::changesign and won't work for special values. For example, if your code can work with nan, the code in step 3 will be able to change the sign of -nan and write nan to the variable.

The Boost.Math library maintainers recommend wrapping math functions from this example in round parentheses to avoid collisions with C macros. It is better to write (boost::math::isinf)(value) instead of boost::math::isinf(value).

There's more...

C99 contains all of the functions described in this recipe. Why do we need them in Boost? Well, some compiler vendors think that programmers do not need them, so you won't find them in one very popular compiler. Another reason is that the Boost.Math functions can be used for classes that behave like numbers. Boost.Math is a very fast, portable, reliable library.


Introduction to Drools

Packt
03 Sep 2013
8 min read
(For more resources related to this topic, see here.)

So, what is Drools?

The techie answer guaranteed to get that glazed-over look from anyone hounding you for details on project design is that Drools, part of the JBoss Enterprise BRMS product since federating in 2005, is a Business Rule Management System (BRMS) and rules engine written in Java which implements and extends the Rete pattern-matching algorithm within a rules engine capable of both forward and backward chaining inference.

Now, how about an answer fit for someone new to rules engines? After all, you're here to learn the basics, right? Drools is a collection of tools which allow us to separate and reason over logic and data found within business processes. Ok, but what does that mean? Digging deeper, the keywords in that statement we need to consider are "logic" and "data". Logic, or rules in our case, are pieces of knowledge often expressed as, "When some conditions occur, then do some tasks". Simple enough, no? These pieces of knowledge could be about any process in your organization, such as how you go about approving TPS reports, calculate interest on a loan, or how you divide workload among employees. While these processes sound complex, in reality, they're made up of a collection of simple business rules.

Let's consider a daily ritual process for many workers: the morning coffee. The whole process is second nature to coffee drinkers. As they prepare for their work day, they probably don't consider the steps involved—they simply react to situations at hand. However, we can capture the process as a series of simple rules:

When your mug is dirty, then go clean it
When your mug is clean, then go check for coffee
When the pot is full, then pour yourself a cup and return to your desk
When the pot is empty, then mumble about co-workers and make some coffee

Alright, so that's logic, but what's data? Facts (our word for data) are the objects that drive the decision process for us. Given the rules from our coffee example, some facts used to drive our decisions would be the mug and the coffee pot. While we know from reading our rules what to do when the mug or pot are in a particular state, we need facts that reflect an actual state on a particular day to reason over.

In seeing how a BRMS allows us to define the business rules of a business process, we can now state some of the features of a rules engine. As stated before, we've separated logic from data—always a good thing! In our example, notice how we didn't see any detail about how to clean our mug or how to make a new batch of coffee, meaning we've also separated what to do from how to do it, thus allowing us to change procedure without altering logic. Lastly, by gathering all of our rules in one place, we've centralized our business process knowledge. This gives us an excellent facility when we need to explain a business process or transfer knowledge. It also helps to prevent tribal knowledge, or the ownership and understanding of an undocumented procedure by just one or a few users.

So when is a BRMS the right choice? Consider a rules engine when a problem is too complex for traditional coding approaches. Rules can abstract away the complexity and prevent usage of fragile implementations. Rules engines are also beneficial when a problem isn't fully known. More often than not, you'll find yourself iterating business methodology in order to fully understand small details involved that are second nature to users.
Rules are flexible and allow us to easily change what we know about a procedure to accommodate this iterative design. This same flexibility comes in handy if you find that your logic changes often over time. Lastly, in providing a straightforward approach to documenting business rules, rules engines are an excellent choice if you find domain knowledge readily available, but via non-technical people who may be incapable of contributing to code.

Sounds great, so let's get started, right? Well, I promised I'd also help you decide when a rules engine is not the right choice for you. In using a rules engine, someone must translate processes into actual rules, which can be a blessing in taking business logic away from developers, but also a curse in required training. Secondly, if your logic doesn't change very often, then rules might be overkill. Likewise, if your project is small in nature and likely to be used once and forgotten, then rules probably aren't for you. However, beware of the small system that will grow in complexity going forward!

So if rules are right for you, why should you choose Drools? First and foremost, Drools has the flexibility of an open source license with the support of JBoss available. Drools also boasts five modules (to be discussed in more detail later), making the system quite extensible with domain-specific languages, graphical editing tools, web-based tools, and more. If you're partial to Eclipse, you'll also likely come to appreciate their plugin. Still not convinced? Read on and give it a shot—after all, that's why you're here, right?

Installation

In just five easy steps, you can integrate Drools into a new or existing project.

Step 1 – what do I need?

For starters, you will need to check that you have all of the required elements, listed as follows (all versions are as of the time of writing):

Java 1.5 (or higher) SE JDK.
Apache Maven 3.0.4.
Eclipse 4.2 (Juno) and the Drools plugin.
Memory—512 MB (minimum), 1 GB or higher recommended. This will depend largely on the scale of your JVM and rule sessions, but the more the better!

Step 2 – installing Java

Java is the core language on which Drools is built, and is the language in which we'll be writing, so we'll definitely be needing that. The easiest way to get Java going is to download it and follow the installation instructions found at:

www.oracle.com/technetwork/java/javase/downloads/index.html

Step 3 – installing Maven

Maven is a build automation tool from Apache that lets us describe a configuration of the project we're building and leave dependency management (amongst other things) up to it to work out. Again, the easiest way to get Maven up and running is to download it and follow the documentation provided with the tool, found at:

maven.apache.org/download.cgi

Step 4 – installing Eclipse

If you happen to have some other IDE of choice, or maybe you're just the old school type, then it's perfectly acceptable to author and execute your Drools-integrated code in your usual fashion. However, if you're an Eclipse fan, or you'd like to take advantage of auto-complete, syntax highlighting, and debugging features, then I recommend you go ahead and install Eclipse and the Drools plugin.
The version of Eclipse that we're after is Eclipse IDE for Java Developers, which you can download and find installation instructions for on their site:

http://www.eclipse.org/downloads/

Step 5 – installing the Drools Eclipse plugin

In order to add the IDE plugin to Eclipse, the easiest method is to use Eclipse's built-in update manager. First, you'll need to add something the plugin depends on—the Graphical Editing Framework (GEF). In the Eclipse menu, click on Help, then on Install New Software..., enter the following URL in the Work with: field, and hit Add.

download.eclipse.org/tools/gef/updates/releases/

Give your repository a nifty name in the pop-up window, such as GEF, and continue on with the install as prompted. You'll be asked to verify what you're installing and accept the license. Now we can add the Drools plugin itself—you can find the URL you'll need by visiting:

http://www.jboss.org/drools/downloads.html

Then, search for the text Eclipse update site and you'll see the link you need. Copy the address of the link to your clipboard, head back into Eclipse, and follow the same process you did for installing GEF. Note that you'll be asked to confirm the install of unsigned content, and that this is expected.

Summary

By this point, you know what Drools is, and you should also be ready to integrate Drools into your applications. If you find yourself stuck, one of the good parts about an open source community is that there's nearly always someone who has faced your problem before and likely has a solution to recommend.

Resources for Article:

Further resources on this subject:

Drools Integration Modules: Spring Framework and Apache Camel [Article]
Human-readable Rules with Drools JBoss Rules 5.0 (Part 2) [Article]
Drools JBoss Rules 5.0 Flow (Part 2) [Article]
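To make the logic-versus-data idea from the coffee example concrete in code, the following is a minimal sketch of how rules are typically loaded and fired with the Drools 5.x knowledge API that this article's era of Drools uses. The rules/coffee.drl file and the Mug fact class are assumptions made up for illustration; they are not artifacts from the article.

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

public class CoffeeRulesRunner {

    // Hypothetical fact class representing the state of the mug (the "data").
    public static class Mug {
        private boolean dirty = true;
        public boolean isDirty() { return dirty; }
        public void setDirty(boolean dirty) { this.dirty = dirty; }
    }

    public static void main(String[] args) {
        // Compile the rules; "rules/coffee.drl" is an assumed classpath file holding
        // the "when the mug is dirty, then clean it"-style rules written in DRL (the "logic").
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("rules/coffee.drl"), ResourceType.DRL);
        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }

        // Assemble a knowledge base and open a session to reason over facts.
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
        StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
        try {
            ksession.insert(new Mug());   // insert the fact
            ksession.fireAllRules();      // let the rules react to it
        } finally {
            ksession.dispose();
        }
    }
}

The point of the sketch is the separation the article describes: the Java code only supplies facts and triggers evaluation, while what to do about a dirty mug lives entirely in the rule file.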


Cross-browser-distributed testing

Packt
02 Sep 2013
3 min read
(For more resources related to this topic, see here.)

Getting ready

In contrast to server-side software, JavaScript applications are executed on the client side and therefore depend on the user's browser. Normally, the project specification includes the list of browsers and platforms that the application must support. The longer the list, the harder cross-browser compatibility testing becomes. For example, jQuery supports 13 browsers on different platforms. The project is fully tested in every declared environment with every single commit. That is possible thanks to the distributed testing tool TestSwarm (swarm.jquery.org). You may also hear of other tools such as JsTestDriver (code.google.com/p/js-test-driver) or Karma (karma-runner.github.io). We will take Bunyip (https://github.com/ryanseddon/bunyip) as it has swiftly been gaining popularity recently.

How does it work? You launch the tool for a test runner HTML and it provides the connect end-point (IP:port) and launches a locally installed browser, if configured. As soon as you fire up the address in a browser, the client is captured by Bunyip and the connection is established. With your confirmation, Bunyip runs the tests in every connected browser to collect and report results. See the following figure:

Bunyip is built on top of the Yeti tool (www.yeti.cx), which works with YUI Test, QUnit, Mocha, Jasmine, or DOH. Bunyip can be used in conjunction with BrowserStack. So, with a paid account at BrowserStack (www.browserstack.com), you can make Bunyip run your tests on hundreds of remotely hosted browsers.

To install the tool, type in the console as follows:

npm install -g bunyip

Here, we use the Node.js package manager (npm) that ships with Node.js. So if you don't have Node.js installed, find the installation instructions on the following page:

https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager

Now, we are ready to start using Bunyip.

How to do it

Add the following configuration option to the QUnit test suite (test-suite.html) to prevent it from auto-starting before the plugin callback is set up:

if (QUnit && QUnit.config) { QUnit.config.autostart = false; }

Launch a Yeti hub on port 9000 (default configuration) and use test-suite.html:

bunyip -f test-suite.html

Copy the connector address (for example, http://127.0.0.1:9000) from the output and fire it up in diverse browsers. You can use Oracle VirtualBox (www.virtualbox.org) to launch browsers in virtual machines set up on every platform you need. Examine the results shown in the following screenshot:

Summary

In this article, we learnt about cross-browser-distributed testing and the automation of client-side cross-platform/browser testing.

Resources for Article:

Further resources on this subject:

Building a Custom Version of jQuery [Article]
Testing your App [Article]
Logging and Reports [Article]


Exploring the Top New Features of the CLR

Packt
30 Aug 2013
10 min read
(For more resources related to this topic, see here.)

One of its most important characteristics is that it is an in-place substitution of .NET 4.0 and only runs on Windows Vista SP2 or later systems. .NET 4.5 breathes asynchronous features and makes writing async code even easier. It also provides us with the Task Parallel Library (TPL) Dataflow Library to help us create parallel and concurrent applications. Another very important addition is portable libraries, which allow us to create managed assemblies that we can reference from different target applications and platforms, such as Windows 8, Windows Phone, Silverlight, and Xbox. We couldn't avoid mentioning Managed Extensibility Framework (MEF), which now has support for generic types, a convention-based programming model, and multiple scopes. Of course, this all comes together with brand-new tooling, Visual Studio 2012, which you can find at http://msdn.microsoft.com/en-us/vstudio. Just be careful if you have projects in .NET 4.0 since it is an in-place install.

For this article I'd like to give a special thanks to Layla Driscoll from the Microsoft .NET team who helped me summarize the topics, focus on what's essential, and showcase it to you, dear reader, in the most efficient way possible. Thanks, Layla.

There are some features that we will not be able to explore through this article as they are just there and are part of the CLR, but they are worth mentioning for better understanding:

Support for arrays larger than 2 GB on 64-bit platforms, which can be enabled by an option in the app config file.
Improved performance of the server's background garbage collection, which must be enabled in the <gcServer> element in the runtime configuration schema.
Multicore JIT: background JIT (Just In Time) compilation on multicore CPUs to improve app performance. This basically creates profiles and compiles methods that are likely to be executed on a separate thread.
Improved performance for retrieving resources.
Culture-sensitive string comparison (sorting, casing, normalization, and so on) is delegated to the operating system when running on Windows 8, which implements Unicode 6.0. On other platforms, the .NET Framework behaves as in previous versions, using its own string comparison data implementing Unicode 5.0.

Next we will explore, in practice, some of these features to get a solid grasp on what .NET 4.5 has to offer and, believe me, we will have our hands full!

Creating a portable library

Most of us have often struggled and hacked our code to implement an assembly that we could use on different .NET target platforms. Portable libraries are here to help us do exactly this. Now there is an easy way to develop a portable assembly that works without modification in .NET Framework, Windows Store apps, Silverlight, Windows Phone, and XBOX 360 applications.

The trick is that the Portable Class Library project supports a subset of assemblies from these platforms, providing us a Visual Studio template. This article will show you how to implement a basic application and help you get familiar with Visual Studio 2012.

Getting ready

In order to use this section you should have Visual Studio 2012 installed. Note that you will need a Visual Studio 2012 SKU higher than Visual Studio Express for it to fully support portable library projects.

How to do it...

Here we will create a portable library and see how it works:

First, open Visual Studio 2012 and create a new project.
We will select the Portable Class Library template from the Visual C# category.

Now open the Properties dialog box of our newly created portable application; in the library we will see a new section named Target frameworks. Note that, for this type of project, the dialog box will open as soon as the project is created, so opening it will only be necessary when modifying it afterwards.

If we click on the Change button, we will see all the multitargeting possibilities for our class. We will see that we can target different versions of a framework. There is also a link to install additional frameworks. The one that we could install right now is XNA, but we will click on Cancel and leave the dialog box as it is.

Next, we will click on the show all files icon at the top of the Solution Explorer window (the icon with two papers and some dots behind them), right-click on the References folder, and click on Add Reference. We will observe on doing so that we are left with a subset of .NET assemblies that are compatible with the chosen target frameworks.

We will add the following lines to test the portable assembly:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace pcl_myFirstPcl
{
    public static class MyPortableClass
    {
        public static string GetSomething()
        {
            return "I am a portable class library";
        }
    }
}

Build the project.

Next, to try this portable assembly we could add, for example, a Silverlight project to the solution, together with an ASP.NET Web application project to wrap the Silverlight. We just need to add a reference to the portable library project and add a button to the MainPage.xaml page that calls the portable library static method we created. The code behind it should look as follows. Remember to add a using reference to our portable library namespace.

using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;
using pcl_myFirstPcl;

namespace SilverlightApplication_testPCL
{
    public partial class MainPage : UserControl
    {
        public MainPage()
        {
            InitializeComponent();
        }

        private void Button_Click_1(object sender, RoutedEventArgs e)
        {
            String something = MyPortableClass.GetSomething();
            MessageBox.Show("Look! - I got this string from my portable class library: " + something);
        }
    }
}

We can execute the code and check that it works. In addition, we could add other types of projects, reference the Portable Class Library, and ensure that it works properly.

How it works...

We created a portable library from the Portable Class Library project template and selected the target frameworks. We saw the references; note that the project reinforces the visibility of the assemblies that break compatibility with the targeted platforms, helping us to avoid mistakes. Next we added some code and a target reference application that referenced the portable class and used it.

There's more...

We should be aware that when deploying a .NET app that references a Portable Class Library assembly, we must specify its dependency on the correct version of the .NET Framework, ensuring that the required version is installed.

A very common and interesting usage of the Portable Class Library would be to implement MVVM. For example, we could put the View Model and Model classes inside a portable library and share it with Windows Store apps, Silverlight, and Windows Phone applications.
The architecture is described in the following diagram, which has been taken from MSDN (http://msdn.microsoft.com/en-us/library/hh563947%28v=vs.110%29.aspx).

It is really interesting that the list of target frameworks is not limited and we even have a link to install additional frameworks, so I guess that the number of target frameworks will eventually grow.

Controlling the timeout in regular expressions

.NET 4.5 gives us improved control over the resolution of regular expressions so we can react when they don't resolve on time. This is extremely useful if we don't control the regular expressions/patterns, such as the ones provided by users. A badly formed pattern can have bad performance due to excessive backtracking, and this new feature is really a lifesaver.

How to do it...

Next we are going to control the timeout of a regular expression, reacting if the operation takes more than 1 millisecond:

Create a new Visual Studio project of type Console Application, named caRegexTimeout.

Open the Program.cs file and add a using clause for regular expressions:

using System.Text.RegularExpressions;

Add the following method and call it from the Main function:

private static void ExecuteRegexExpression()
{
    bool RegExIsMatch = false;
    string testString = "One Tile to rule them all, One Tile to find them… ";
    string RegExPattern = @"([a-z ]+)*!";
    TimeSpan tsRegexTimeout = TimeSpan.FromMilliseconds(1);
    try
    {
        RegExIsMatch = Regex.IsMatch(testString, RegExPattern, RegexOptions.None, tsRegexTimeout);
    }
    catch (RegexMatchTimeoutException ex)
    {
        Console.WriteLine("Timeout!!");
        Console.WriteLine("- Timeout specified: " + ex.MatchTimeout);
    }
    catch (ArgumentOutOfRangeException ex)
    {
        Console.WriteLine("ArgumentOutOfRangeException!!");
        Console.WriteLine(ex.Message);
    }
    Console.WriteLine("Finished successfully: " + RegExIsMatch.ToString());
    Console.ReadLine();
}

If we execute it, we will see that it doesn't finish successfully, showing us some details in the console window. Next, we will change testString and RegExPattern to:

String testString = "[email protected]";
String RegExPattern = @"^([\w-.]+)@([\w-.]+)\.[a-zA-Z]{2,4}$";

If we run it, we will now see that it runs and finishes successfully.

How it works...

The Regex.IsMatch() method now accepts a matchTimeout parameter of type TimeSpan, indicating the maximum time that we allow for the matching operation. If the execution time exceeds this amount, RegexMatchTimeoutException is thrown. In our code, we have captured it with a try-catch statement to provide a custom message and, of course, to react upon a badly formed regex pattern taking too much time.

We have tested it with an expression that takes more time to validate and we got the timeout. When we changed the expression to a good one with a better execution time, the timeout was not reached. Additionally, we also watched out for ArgumentOutOfRangeException, which is thrown when the TimeSpan is zero, negative, or greater than 24 days.

There's more...

We could also set a global matchTimeout for the application through the "REGEX_DEFAULT_MATCH_TIMEOUT" property with the AppDomain.SetData method:

AppDomain.CurrentDomain.SetData("REGEX_DEFAULT_MATCH_TIMEOUT", TimeSpan.FromMilliseconds(200));

Anyway, if we specify the matchTimeout parameter, we will override the global value.

Defining the culture for an application domain

With .NET 4.5, we have in our hands a way of specifying the default culture for all of our application threads in a quick and efficient way.

How to do it...
We will now define the default culture for our application domain as follows:

Create a new Visual Studio project of type Console Application named caCultureAppDomain.

Open the Program.cs file and add the using clause for globalization:

using System.Globalization;

Next, add the following methods:

static void DefineAppDomainCulture()
{
    String CultureString = "en-US";
    DisplayCulture();
    CultureInfo.DefaultThreadCurrentCulture = CultureInfo.CreateSpecificCulture(CultureString);
    CultureInfo.DefaultThreadCurrentUICulture = CultureInfo.CreateSpecificCulture(CultureString);
    DisplayCulture();
    Console.ReadLine();
}

static void DisplayCulture()
{
    Console.WriteLine("App Domain........: {0}", AppDomain.CurrentDomain.Id);
    Console.WriteLine("Default Culture...: {0}", CultureInfo.DefaultThreadCurrentCulture);
    Console.WriteLine("Default UI Culture: {0}", CultureInfo.DefaultThreadCurrentUICulture);
}

Then add a call to the DefineAppDomainCulture() method. If we execute it, we will observe that the initial default cultures are null and that we then set them to become the default for the application domain.

How it works...

We used the CultureInfo class to specify the culture and UI culture of the application domain and all its threads. This is easily done through the DefaultThreadCurrentCulture and DefaultThreadCurrentUICulture properties.

There's more...

We must be aware that these properties affect only the current application domain, and if that changes we should control them accordingly.


Defining alerts

Packt
30 Aug 2013
3 min read
(For more resources related to this topic, see here.)

Citrix EdgeSight alerting is a powerful rules- and actions-based system that instructs the EdgeSight agents to send an alert in real time when a predefined situation has occurred on a monitored object. Alerts are defined by rules. The action can be configured to send either an e-mail alert or an SNMP trap. The generated alerts are also listed and organized within the EdgeSight web console. After an alert rule has been created, it should be mapped to a department.

How to do it...

To create an alert, navigate to Configure | Company Configuration | Alerts | Rules | New Alert Rule.

We will create an alert rule based on an application, so select the Application Alerts radio button and click on Next.

Select Application Performance as the alert type and click on Next.

Give the alert rule a name, the name of the process we want to monitor, and the CPU time in percent. Click on Next.

Select departments to assign this alert rule to and click on Next.

Select the department you wish to edit alert actions in and click on Next.

We now need to assign an action to this alert rule; we will create a new action. So select the Create New Alert Action radio button and click on Next.

We will send an e-mail notification as the alert action, so select the Send an email notification radio button and click on Next.

Enter a name, subject, and one or more recipient e-mail addresses for this e-mail action. You can click the Test Action button to test whether EdgeSight was able to successfully queue the message or not. Click on Finish.

How it works...

Creating too many real-time alerts can affect XenApp server performance, as each rule that is created requires more work to be performed by the agent. We should only create alerts for those critical situations that require immediate action. If the situation is not critical, the delivery of alerts based on the normal upload cycle will probably be sufficient. By default, the alert data and other statistics are uploaded to the server daily.

There's more...

When a new alert rule is created or any existing rule is modified, this change is applied to all the devices in the department when those devices next upload data to the EdgeSight server; alternatively, you can manually upload the alert rule data by clicking on Run Remotely. Administrators can also force certain agent devices to perform a configuration check within the EdgeSight web console by navigating to Configure | Company Configuration | Agents and then selecting the device from the device picker.

To suppress an alert, navigate to Monitor | Alert List, click on the down arrow, and then select Suppress Alert. To clear an alert, navigate to Configure | Company Configuration | Alerts | Suppressions. This is on a per-user basis, and other EdgeSight administrators will still see the alerts suppressed by you.

Summary

In this article we learned about EdgeSight alerts and saw how to create alerts and define the action to take when the defined alert condition is met.

Resources for Article:

Further resources on this subject:

Publishing applications [Article]
Designing a XenApp 6 Farm [Article]
The XenDesktop architecture [Article]

Conceptualizing IT Service Management

Packt
30 Aug 2013
6 min read
(For more resources related to this topic, see here.)

Understanding IT Service Management (ITSM)

The success of ITSM lies in putting the customer first. ITSM suggests designing all processes to provide value to customers by facilitating the outcomes they want, without the ownership of specific costs and risks. This quality service is provided through a set of the organization's own resources and capabilities. The capabilities of an IT service organization generally lie with its people, processes, or technology. While people and technology can be found in the market, the organizational processes need to be defined, developed, and often customized within the organization. The processes mature with the organization, and hence need to be given extra focus.

Release Management, Incident Management, and so on, are some of the commonly heard ITSM processes. It's easy to confuse these with functions, which, as per ITIL, have a different meaning associated with them. Many of us do not associate different meanings with many similar terms. Here are some examples:

Incident Management versus Problem Management
Change Management versus Release Management
Service Level Agreement (SLA) versus Operational Level Agreement (OLA)
Service Portfolio versus Service Catalog

This book will strive to bring out the fine differences between such terms as and when we formally introduce them. This should make the concepts clear while avoiding any confusion. So, let us first see the difference between a process and a function.

Differentiating between process and function

A process is simply a structured set of activities designed to accomplish a specific objective. It takes one or more defined inputs and turns them into defined outputs.

Characteristics

A process is measurable, aimed at specific results, delivers primary results to a customer, and responds to specific triggers. A function, on the other hand, is a team or a group of people and the tools they use to carry out the processes. Hence, while Release Management, Incident Management, and so on are processes, the IT Service Desk is a function, which might be responsible for carrying out these processes. Luckily, ServiceDesk Plus provides features for managing both processes and functions.

Differentiating between Service Level Agreement (SLA) and Operational Level Agreement (OLA)

Service Level Agreement, or SLA, is a widely used term and often has some misconceptions attached to it. Contrary to popular belief, an SLA is not necessarily a legal contract, but should be written in simple language that can be understood by all parties without any ambiguity. An SLA is simply an agreement between a service provider and the customer(s) and documents the service targets and responsibilities of all parties. There are three types of SLAs defined in ITIL:

Service Based SLA: All customers get the same deal for a specific service
Customer Based SLA: A customer gets the same deal for all services
Multilevel SLA: This involves a combination of corporate level, service level, and customer level SLAs

An Operational Level Agreement, or OLA, on the other hand, is an agreement between the service provider and another part of the same organization. An OLA is generally a prerequisite to help meet the SLA. There might be legal contracts between the service provider and some external suppliers as well, to help meet the SLA(s). These third-party legal contracts are called Underpinning Contracts.
As must be evident, management and monitoring of these agreements is of utmost importance for the service organization. Here is how to create SLA records easily and track them in ServiceDesk Plus:

Agree the SLA with the customers.
Go to the Admin tab.
Click on Service Level Agreements in the Helpdesk block.
All SLA-based mail escalations are enabled by default. These can be disabled by clicking on the Disable Escalation button.
Four SLAs are set by default—High SLA, Medium SLA, Normal SLA, and Low SLA. More could be added, if needed.
Click on any SLA Name to view/edit its details.
SLAs for sites, if any, can be configured by the site admin from the Service Level Agreement for combo box.
The SLA Rules block, below the SLA details, is used for setting the rules and criteria for the SLA.

Once agreed with the customers, configuring SLAs in the tool is pretty easy and straightforward. Escalations are taken care of automatically, as per the defined rules. To monitor the SLAs for a continuous focus on customer satisfaction, several Flash Reports are available under the Reports tab, for use on the fly.

Differentiating between Service Portfolio and Service Catalog

This is another example of terms often used interchangeably. However, ITIL clarifies that the Service Catalog lists only live IT services, while the Service Portfolio is a bigger set that also includes services in the pipeline and retired services. The Service Catalog contains information about two types of IT services: customer-facing services (referred to as the Business Service Category) and supporting services, with the complexities hidden from the business (referred to as the IT Service Category).

ServiceDesk Plus plays a vital role in managing the ways in which these services are exposed to users. The software provides a simple and effective interface to browse through the services and monitor their status. Users can also request these services from within the module. The Service Catalog can also be accessed from the Admin tab, by clicking on Service Catalog under the Helpdesk block. The page lists the configured service categories and can be used to Add Service Category, Manage the service items, and Add Service under each category.

Deleting a Service Category

Deletion of an existing Service Category should be done with care. Here are the steps:

Select Service Categories from the Manage dropdown. A window with the Service Categories List will open.
Select the check box next to the Service Category to be deleted and then press the Delete button on the interface.
A confirmation box will appear and, on confirmation, the Service Category will be processed for deletion.
If the concerned Service Category is in use by a module, then it will be grayed out and the category will be unavailable for further usage. To bring it back into usage, click on the edit icon next to the category name and uncheck the box for Service not for further usage in the new window.

The following two options under the Manage dropdown provide additional features for the customization of service request forms:

Additional Fields: This can be used to capture additional details about the service apart from the predefined fields
Service Level Agreements: This can be used to configure Service Based SLAs

Summary

We now understand the ITSM concepts, the fine differences between some of the terms, and also why software like ServiceDesk Plus is modeled after the ITIL framework. We've also seen how SLAs and the Service Catalog can be configured and tracked using ServiceDesk Plus.
Resources for Article: Further resources on this subject: Introduction to vtiger CRM [Article] Overview of Microsoft Dynamics CRM 2011 [Article] Customizing PrestaShop Theme Part 1 [Article]

Working with Zend Framework 2.0

Packt
29 Aug 2013
5 min read
(For more resources related to this topic, see here.)

So, what is Zend Framework?

Throughout the years, PHP has become one of the most popular server-side scripting languages on the Internet. This is largely due to its gentle learning curve and ease of use. However, these two strengths have also contributed to many of its shortcomings. With minimal restrictions on how you write code with this language, you can employ any style or structure that you prefer, and thus it becomes much easier to write bad code. But there is a solution: use a framework!

A framework simplifies coding by providing a highly modular file organization with code libraries for the most common scripting tasks in everyday programming. It helps you develop faster by eliminating the monotonous details of coding and makes your code more reusable and easier to maintain. There are many popular PHP frameworks out there. A number of them have large, open source communities that provide a wide range of support and offer many solutions. This is probably the main reason why most beginner PHP developers get confused while choosing a framework. I will not discuss the pros and cons of other frameworks, but I will demonstrate briefly why Zend Framework is a great choice.

Zend Framework (ZF) is a modern, free, and open source framework that is maintained and developed by a large community of developers and backed by Zend Technologies Ltd, the company founded by the developers of PHP. Currently, Zend Framework is used by a large number of global companies, such as BBC, Discovery, Offers.com, and Cisco. Additionally, many widely used open source projects and recognized frameworks are powered by Zend Framework, such as Magento, Centurion, TomatoCMS, and PHProjekt. And lastly, its continued development is sponsored by highly recognizable firms such as Google and Microsoft. With all this in mind, we know one thing is certain—Zend Framework is here to stay.

Zend Framework has a rich set of components or libraries, and that is why it is also known as a component framework. You will find a library in it for almost anything that you need for your everyday project, from simple form validation to file upload. It gives you the flexibility to select a single component to develop your project or opt for all components, as you may need them. Moreover, with the release of Zend Framework 2, each component is available via Pyrus and Composer. Pyrus is a package management and distribution system, and Composer is a tool for dependency management in PHP that allows you to declare the dependent libraries your project needs and installs them in your project for you. Zend Framework 2 follows a 100 percent object-oriented design principle and makes use of the new PHP 5.3+ features such as namespaces, late static binding, lambda functions, and closures.

Now, let's get started on a quick-start project to learn the basics of Zend Framework 2, and be well on our way to building our first Zend Framework MVC application.

Installation

ZF2 requires PHP 5.3.3 or higher, so make sure you have an up-to-date version of PHP. We need a Windows-based PC, and we will be using XAMPP (http://www.apachefriends.org/en/xampp.html) for our development setup. I have installed XAMPP on my D: drive, so the web root path for my setup is d:\xampp\htdocs.

Step 1 – downloading Zend Framework

To create a ZF2 project, we will need two things: the framework itself and a skeleton application.
Download both Zend Framework and the skeleton application from http://framework.zend.com/downloads/latest and https://github.com/zendframework/ZendSkeletonApplication, respectively.

Step 2 – unzipping the skeleton application

Now put the skeleton application that you have just downloaded into the web root directory (d:\xampp\htdocs) and unzip it. Name the directory address-book, as we are going to create a very small address book application, or give it any name you want for your project. When you unzip the skeleton application, it looks similar to the following screenshot:

Step 3 – knowing the directories

Inside the module directory, there is a default module called Application. Inside the vendor directory, there is an empty directory called ZF2. This directory is for the Zend Framework library. Unzip the Zend Framework archive that you have downloaded, and copy the library folder from the unzipped folder to the vendor\ZF2 directory.

Step 4 – welcome to Zend Framework 2

Now, in your browser, type http://localhost/address-book/public. It should show a screen as shown in the following screenshot. If you see the same screen, it means you have created the project successfully.

And that's it

By this point, you should have a working Zend Framework installation, and you are free to play around and discover more about it.

Summary

In this article, we learned what Zend Framework is and how to install it on a Windows PC.

Resources for Article: Further resources on this subject: Authentication with Zend_Auth in Zend Framework 1.8 [Article] Building Your First Zend Framework Application [Article] Authorization with Zend_Acl in Zend Framework 1.8 [Article]

Writing Your First Lines of CoffeeScript

Packt
29 Aug 2013
9 min read
(For more resources related to this topic, see here.)

Following along with the examples

I implore you to open up a console as you read this article and try out the examples for yourself. You don't strictly have to; I'll show you any important output from the example code. However, following along will make you more comfortable with the command-line tools, give you a chance to write some CoffeeScript yourself, and most importantly, will give you an opportunity to experiment. Try changing the examples in small ways to see what happens. If you're confused about a piece of code, playing around and looking at the outcome will often help you understand what's really going on.

The easiest way to follow along is to simply open up a CoffeeScript console. Just run this from the command line to get an interactive console:

coffee

If you'd like to save all your code to return to later, or if you wish to work on something more complicated, you can create files instead and run those. Give your files the .coffee extension, and run them like this:

coffee my_sample_code.coffee

Seeing the compiled JavaScript

The golden rule of CoffeeScript, according to the CoffeeScript documentation, is: It's just JavaScript. This means that it is a language that compiles down to JavaScript in a simple fashion, without any complicated extra moving parts. This also means that it's easy, with a little practice, to understand how the CoffeeScript you are writing will compile into JavaScript. Your JavaScript expertise is still applicable, but you are freed from the tedious parts of the language. You should understand how the generated JavaScript will work, but you do not need to actually write the JavaScript.

To this end, we'll spend a fair amount of time, especially in this article, comparing CoffeeScript code to the compiled JavaScript results. It's like peeking behind the wizard's curtain! The new language features won't seem so intimidating once you know how they work, and you'll find you have more trust in CoffeeScript when you can check in on the code it's generating. After a while, you won't even need to check in at all.

I'll show you the corresponding JavaScript for most of the examples in this article, but if you write your own code, you may want to examine the output. This is a great way to experiment and learn more about the language! Unfortunately, if you're using the CoffeeScript console to follow along, there isn't a great way to see the compiled output (most of the time, it's nice to have all that out of sight—just not right now!). You can see the compiled JavaScript in several other easy ways, though. The first is to put your code in a file and compile it. The other is to use the Try CoffeeScript tool on http://coffeescript.org/. It brings up an editor right in the browser that updates the output as you type.

CoffeeScript basics

Let's get started! We'll begin with something simple:

x = 1 + 1

You can probably guess what JavaScript this will compile to:

var x;
x = 1 + 1;

Statements

One of the very first things you will notice about CoffeeScript is that there are no semicolons. Statements are ended by a new line. The parser usually knows if a statement should be continued on the next line.
You can explicitly tell it to continue to the next line by using a backslash at the end of the first line:

x = 1 \
+ 1

It's also possible to stretch function calls across multiple lines, as is common in "fluent" JavaScript interfaces:

"foo"
  .concat("barbaz")
  .replace("foobar", "fubar")

You may occasionally wish to place more than one statement on a single line (for purely stylistic purposes). This is the one time when you will use a semicolon in CoffeeScript:

x = 1; y = 2

Both of these situations are fairly rare. The vast majority of the time, you'll find that one statement per line works great. You might feel a pang of loss for your semicolons at first, but give it time. The calluses on your pinky finger will fall off, your eyes will adjust to the lack of clutter, and soon enough you won't remember what good you ever saw in semicolons.

Variables

CoffeeScript variables look a lot like JavaScript variables, with one big difference: no var! CoffeeScript puts all variables in the local scope by default.

x = 1
y = 2
z = x + y

compiles to:

var x, y, z;
x = 1;
y = 2;
z = x + y;

Believe it or not, this is one of my absolute top favorite things about CoffeeScript. It's so easy to accidentally introduce variables to the global scope in JavaScript and create subtle problems for yourself. You never need to worry about that again; from now on, it's handled automatically. Nothing is getting into the global scope unless you want it there.

If you really want to put a variable in the global scope and you're really sure it's a good idea, you can easily do this by attaching it to the top-level object. In the CoffeeScript console, or in Node.js programs, this is the global object:

global.myGlobalVariable = "I'm so worldly!"

In a browser, we use the window object instead:

window.myGlobalVariable = "I'm so worldly!"

Comments

Any line that begins with a # is a comment. Anything after a # in the middle of a line will also be a comment.

# This is a comment.
"Hello" # This is also a comment

Most of the time, CoffeeScripters use only this style, even for multiline comments.

# Most multiline comments simply wrap to the
# next line, each begun with a # and a space.

It is also possible (but rare in the CoffeeScript world) to use a block comment, which begins and ends with ###. The lines in between these characters do not need to begin with a #.

###
This is a block comment. You can get artistic in here.
<(^^)>
###

Regular comments are not included in the compiled JavaScript, but block comments are, delineated by /* */.

Calling functions

Function invocation can look very familiar in CoffeeScript:

console.log("Hello, planet!")

Other than the missing semicolon, that's exactly like JavaScript, right? But function invocation can also look different:

console.log "Hello, planet!"

Whoa! Now we're in unfamiliar ground. This will work exactly the same as the previous example, though. Any time you call a function with arguments, the parentheses are optional. This also works with more than one argument:

Math.pow 2, 3

While you might be a little nervous writing this way at first, I encourage you to try it and give yourself time to become comfortable with it. Idiomatic CoffeeScript style eliminates parentheses whenever it's sensible to do so. What do I mean by "sensible"? Well, imagine you're reading your code for the first time, and ask yourself which style makes it easiest to comprehend. Usually it's most readable without parentheses, but there are some occasions when your code is complex enough that judicious use of parentheses will help.
Use your best judgment, and everything will turn out fine. There is one exception to the optional parentheses rule. If you are invoking a function with no arguments, you must use parentheses:

Date.now()

Why? The reason is simple. CoffeeScript preserves JavaScript's treatment of functions as first-class citizens.

myFunc = Date.now   #=> myFunc holds a function object that hasn't been executed
myDate = Date.now() #=> myDate holds the result of the function's execution

CoffeeScript's syntax is looser, but it must still be unambiguous. When no arguments are present, it's not clear whether you want to access the function object or execute the function. Requiring parentheses makes it clear which one you want, and still allows both kinds of functionality. This is part of CoffeeScript's philosophy of not deviating from the fundamentals of the JavaScript language. If functions were always executed instead of returned, CoffeeScript would no longer act like JavaScript, and it would be hard for you, the seasoned JavaScripter, to know what to expect. This way, once you understand a few simple concepts, you will know exactly what your code is doing.

From this discussion, we can extract a more general principle: parentheses are optional, except when necessary to avoid ambiguity. Here's another situation in which you might encounter ambiguity: nested function calls.

Math.max 2, 3, Math.min 4, 5, 6

Yikes! What's happening there? Well, you can easily clear this up by adding parentheses. You may add parentheses to all the function calls, or you may add just enough to resolve the ambiguity:

# These two calls are equivalent
Math.max(2, 3, Math.min(4, 5, 6))
Math.max 2, 3, Math.min(4, 5, 6)

This makes it clear that you wish min to take 4, 5, and 6 as arguments. If you wished 6 to be an argument to max instead, you would place the parentheses differently.

# These two calls are equivalent
Math.max(2, 3, Math.min(4, 5), 6)
Math.max 2, 3, Math.min(4, 5), 6

Precedence

Actually, the original version I showed you is valid CoffeeScript too! You just need to understand the precedence rules that CoffeeScript uses for functions. Arguments are assigned to functions from the inside out. Another way to think of this is that an argument belongs to the function that it's nearest to. So our original example is equivalent to the first variation we used, in which 4, 5, and 6 are arguments to min:

# These two calls are equivalent
Math.max 2, 3, Math.min 4, 5, 6
Math.max 2, 3, Math.min(4, 5, 6)

The parentheses are only absolutely necessary if our desired behavior doesn't match CoffeeScript's precedence—in this case, if we wanted 6 to be an argument to max. This applies to an unlimited level of nesting:

threeSquared = Math.pow 3, Math.floor Math.min 4, Math.sqrt 5

Of course, at some point the elimination of parentheses turns from the question of if you can to if you should. You are now a master of the intricacies of CoffeeScript function-call parsing, but the other programmers reading your code might not be (and even if they are, they might prefer not to puzzle out what your code is doing). Avoid parentheses in simple cases, and use them judiciously in the more complicated situations.

Using OpenCL

Packt
28 Aug 2013
15 min read
(For more resources related to this topic, see here.)

Let's start the journey by looking back into the history of computing and at why OpenCL is important, in the sense that it aims to unify the software programming model for heterogeneous devices. The goal of OpenCL is to develop a royalty-free standard for cross-platform, parallel programming of modern processors found in personal computers, servers, and handheld/embedded devices. This effort is led by "The Khronos Group" with the participation of companies such as Intel, ARM, AMD, NVIDIA, QUALCOMM, Apple, and many others. OpenCL allows software to be written once and then executed on any device that supports it. In this way it is akin to Java. This has benefits because software development on these devices now has a uniform approach: OpenCL exposes the hardware via various data structures, and these structures interact with the hardware via Application Programmable Interfaces (APIs). Today, OpenCL supports CPUs including x86, ARM, and PowerPC, and GPUs by AMD, Intel, and NVIDIA.

Developers can appreciate the need to develop software that is cross-platform compatible, since it allows them to develop an application on whatever platform they are comfortable with, not to mention that it provides a coherent model in which we can express our thoughts in a program that can be executed on any device that supports the standard. However, cross-platform compatibility also means that heterogeneous environments exist, and for quite some time developers have had to learn and grapple with the issues that arise when writing software for those devices, ranging from the execution model to the memory systems. Another task that commonly arose when developing software on those heterogeneous devices is that developers were expected to express and extract parallelism from them as well. Before OpenCL, various programming languages and philosophies were invented to handle the aspect of expressing parallelism (for example, Fortran, OpenMP, MPI, VHDL, Verilog, Cilk, Intel TBB, Unified Parallel C, and Java, among others) on the devices they executed on. But these tools were designed for homogeneous environments, even though a developer may think that this is to his/her advantage, since it adds considerable expertise to their resume. Taking a step back and looking at it again reveals that there is no unified approach to express parallelism in heterogeneous environments. We need not mention the amount of time developers need to become productive in these technologies, since parallel decomposition is normally an involved process, as it's largely hardware dependent. To add salt to the wound, many developers only had to deal with homogeneous computing environments, but in the past few years the demand for heterogeneous computing environments has grown. The demand for heterogeneous devices grew partially due to the need for high performance and highly reactive systems, and with the "power wall" at play, one possible way to improve performance was to add specialized processing units in the hope of extracting every ounce of parallelism from them, since that's the only way to reach power efficiency. The primary motivation for this shift to hybrid computing can be traced to the research entitled Optimizing Power Using Transformations by Anantha P. Chandrakasan. It brought out a conclusion that basically says that many-core chips (which run at a slightly lower frequency than a contemporary CPU) are actually more power-efficient.

The problem with heterogeneous computing without a unified development methodology such as OpenCL is that developers need to grasp several types of ISA, and with that, the various levels of parallelism and their memory systems. CUDA, the GPGPU computing toolkit developed by NVIDIA, deserves a mention, not only because of its remarkable similarity to OpenCL, but also because the toolkit has wide adoption in academia as well as industry. Unfortunately, CUDA can only drive NVIDIA's GPUs. The ability to extract parallelism from a heterogeneous environment is an important one, simply because the computation should be parallel, otherwise it would defeat the entire purpose of OpenCL. Fortunately, major processor companies are part of the consortium led by The Khronos Group and are actively realizing the standard through their organizations. The story doesn't end there, but the good thing is that we, developers, have realized the need to understand parallelism and how it works in both homogeneous and heterogeneous environments.

OpenCL was designed with the intention of expressing parallelism in a heterogeneous environment. For a long time, developers largely ignored the fact that their software needs to take advantage of the multi-core machines available to them and continued to develop their software in a single-threaded manner, but that is changing (as discussed previously). In the many-core world, developers need to grapple with the concept of concurrency, and the advantage of concurrency is that, when used effectively, it maximizes the utilization of resources by allowing some work to progress while other work is stalled. When software is executed concurrently on multiple processing elements so that threads can run simultaneously, we have parallel computation. The challenge the developer has is to discover that concurrency and realize it. In OpenCL, we focus on two parallel programming models: task parallelism and data parallelism.

Task parallelism means that developers can create and manipulate concurrent tasks. When developers are devising a solution with OpenCL, they need to decompose a problem into different tasks; some of those tasks can be run concurrently, and it is these tasks that get mapped to processing elements (PEs) of a parallel environment for execution. On the other side of the story, there are tasks that cannot be run concurrently and may even be interdependent. An additional complexity is the fact that data can be shared between tasks. When attempting to realize data parallelism, the developer needs to readjust the way they think about data and how it can be read and updated concurrently. A common problem found in parallel computation is to compute the sum of all the elements of an arbitrary array of values while storing the intermediate summed values; one possible way to do this is illustrated in the following diagram, where the operator being applied can be any binary associative operator. Conceptually, the developer could use a task to perform the addition of two elements of the input to derive the summed value. Whether the developer chooses to employ task or data parallelism depends on the problem; an example where task parallelism would make sense is traversing a graph.
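To make the data-parallel sum just described concrete, here is a minimal sketch of how such a pairwise (tree) reduction is commonly written in OpenCL C. It is not taken from this article's recipes: the kernel name, the argument names, and the assumptions that each work group size is a power of two and that the global size exactly covers the input are mine. Each work group reduces its chunk in local memory and emits one partial sum, which the host (or a second kernel launch) then adds up.

// Each work group reduces its portion of 'in' into a single partial sum.
__kernel void reduce_sum(__global const int *in,
                         __global int *partial_sums,
                         __local  int *scratch)
{
    size_t gid        = get_global_id(0);
    size_t lid        = get_local_id(0);
    size_t group_size = get_local_size(0);

    // Stage one element per work item into local memory.
    scratch[lid] = in[gid];
    barrier(CLK_LOCAL_MEM_FENCE);

    // Pairwise reduction: halve the number of active work items each step.
    for (size_t stride = group_size / 2; stride > 0; stride /= 2) {
        if (lid < stride)
            scratch[lid] += scratch[lid + stride];
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    // Work item 0 of each group writes that group's partial sum.
    if (lid == 0)
        partial_sums[get_group_id(0)] = scratch[0];
}

The same shape works for any binary associative operator mentioned above; replacing the += with, say, a max() turns the kernel into a parallel maximum instead of a sum.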
Regardless of which model the developer is more inclined toward, each comes with its own set of problems when you start to map the program to the hardware via OpenCL. Before the advent of OpenCL, the developer needed to develop a module that would execute on the desired device and handle communication and I/O with the driver program. An example of this would be a graphics rendering program where the CPU initializes the data and sets everything up before offloading the rendering to the GPU. OpenCL was designed to take advantage of all the devices detected so that resource utilization is maximized, and hence in this respect it differs from the "traditional" way of software development.

Now that we have established a good understanding of OpenCL, we should spend some time understanding how a developer can learn it. And not to fret: for every project you embark on, OpenCL will need you to understand the following:

Discover the makeup of the heterogeneous system you are developing for
Understand the properties of those devices by probing them
Start the parallel program decomposition using task parallelism, data parallelism, or both, expressing the work as instructions, also known as kernels, that will run on the platform
Set up data structures for the computation
Manipulate memory objects for the computation
Execute the kernels in the desired order on the proper device
Collate the results and verify them for correctness

Next, we need to solidify the preceding points by taking a deeper look into the various components of OpenCL. The following components collectively make up the OpenCL architecture:

Platform Model: A platform is a host that is connected to one or more OpenCL devices. Each device comprises possibly multiple compute units (CUs), which can be decomposed into one or possibly multiple processing elements, and it is on the processing elements where computation will run.

Execution Model: Execution of an OpenCL program is such that the host program executes on the host, and it is the host program which sends kernels to execute on one or more OpenCL devices on that platform. When a kernel is submitted for execution, an index space is defined such that a work item is instantiated to execute each point in that space. A work item is identified by its global ID and it executes the same code as expressed in the kernel. Work items are grouped into work groups, each work group is given an ID commonly known as its work group ID, and it is a work group's work items that get executed concurrently on the PEs of a single CU. The index space we mentioned earlier is known as NDRange, describing an N-dimensional space, where N can range from one to three. Each work item has a global ID and, when grouped into a work group, a local ID that is distinct from the others and is derived from the NDRange. The same can be said about work group IDs.

Let's use a simple example to illustrate how they work. Given two arrays, A and B, of 1024 elements each, we would like to perform an element-wise vector multiplication, where each element of A is multiplied by the corresponding element of B.
The kernel code would look something like the following:

__kernel void vector_multiplication(__global int* a,
                                    __global int* b,
                                    __global int* c) {
    int i = get_global_id(0); // OpenCL work item function
    c[i] = a[i] * b[i];
}

In this scenario, let's assume we have 1024 processing elements and we assign one work item to perform exactly one multiplication; in this case our work group ID would be zero (since there's only one group) and the work item IDs would range from {0 … 1023}. Recall what we discussed earlier, that it is a work group's work items that get executed on the PEs. Hence, reflecting back, this would not be a good way of utilizing the device. In the same scenario, let's ditch the former assumption and go with this: we still have 1024 elements, but we group four work items into a group; hence we would have 256 work groups, with each work group having an ID ranging from {0 … 255}, but the work items' global IDs still range from {0 … 1023} simply because we have not increased the number of elements to be processed. This manner of grouping work items into work groups is used to achieve scalability on these devices, since it increases execution efficiency by ensuring all PEs have something to work on.

The NDRange can be conceptually mapped onto an N-dimensional grid, and the following diagram illustrates how a 2D range works, where WG-X denotes the length in rows for a particular work group and WG-Y denotes the length in columns for a work group, and how work items are grouped, including their respective IDs, in a work group.

Before the execution of the kernels on the device(s), the host program plays an important role, and that is to establish a context with the underlying devices and lay down the order of execution of the tasks. The host program creates the context by establishing the existence (creating if necessary) of the following:

All devices to be used by the host program
The OpenCL kernels, that is, the functions and their abstractions that will run on those devices
The memory objects that encapsulate the data to be used / shared by the OpenCL kernels

Once that is achieved, the host needs to create a data structure called a command queue that is used by the host to coordinate the execution of the kernels on the devices; commands are issued to this queue and scheduled onto the devices. A command queue can accept kernel execution commands, memory transfer commands, and synchronization commands. Additionally, command queues can execute the commands in-order, that is, in the order they've been given, or out-of-order. If the problem is decomposed into independent tasks, it is possible to create multiple command queues targeting different devices, schedule those tasks onto them, and then OpenCL will run them concurrently.

Memory Model: So far, we have understood the execution model and it's time to introduce the memory model that OpenCL stipulates. Recall that when a kernel executes, it is actually each work item that executes its instance of the kernel code. Hence the work item needs to read and write data from memory, and each work item has access to four types of memory: global, constant, local, and private. These memories vary in size as well as accessibility, where global memory is the largest and the most accessible to work items, whereas private memory is the most restrictive in the sense that it is private to a work item.
The constant memory is a read-only memory where immutable objects are stored and can be shared with all work items. The local memory is available only to the work items executing in a work group and is held by each compute unit, that is, it is CU-specific. The application running on the host uses the OpenCL API to create memory objects in global memory and enqueues memory commands to the command queue to operate on them. The host's responsibility is to ensure that data is available to the device when the kernel starts execution, and it does so by copying data or by mapping/unmapping regions of memory objects. During a typical data transfer from host memory to device memory, OpenCL commands are issued to queues and may be blocking or non-blocking. The primary difference between a blocking and a non-blocking memory transfer is that in the former the function call returns only once it is deemed safe (after being queued), and in the latter the call returns as soon as the command is enqueued. Memory mapping in OpenCL makes a region of memory space available for computation; the mapping can be blocking or non-blocking, and the developer can treat this space as readable, writeable, or both.

Henceforth, we are going to focus on the basics of OpenCL by getting our hands dirty developing small OpenCL programs, to understand a bit more, programmatically, how to use the platform and execution model of OpenCL. The OpenCL specification Version 1.2 is an open, royalty-free standard for general purpose programming across various devices, ranging from mobile devices to conventional CPUs, and lately GPUs, through an API. At the time of writing, the standard supports the following:

Data and task based parallel programming models
A subset of ISO C99 with extensions for parallelism, with some restrictions: recursion, variadic functions, and macros are not supported
Mathematical operations that comply with the IEEE 754 specification
Porting to handheld and embedded devices by establishing configuration profiles
Interoperability with OpenGL, OpenGL ES, and other graphics APIs

Throughout this article, we are going to show you how you can become proficient in programming OpenCL. As you go through the article, you'll discover not only how to use the API to perform all kinds of operations on your OpenCL devices, but also how to model a problem and transform it from a serial program into a parallel program. More often than not, the techniques you'll learn can be transferred to other programming toolsets. Among the toolsets I have worked with (OpenCL, CUDA, OpenMP, MPI, Intel Threading Building Blocks, Cilk, and Cilk Plus), which allow the developer to express parallelism, I find the entire process, from learning the tools to applying that knowledge, to fall into four parts. These four phases are rather common and I find it extremely helpful to remember them as I go along. I hope you will benefit from them as well.

Finding concurrency: The programmer works in the problem domain to identify the available concurrency and expose it for use in the algorithm design
Algorithm structure: The programmer works with high-level structures for organizing a parallel algorithm
Supporting structures: This refers to how the parallel program will be organized and the techniques used to manage shared data
Implementation mechanisms: The final step is to look at specific software constructs for implementing a parallel program

Don't worry about these concepts; they'll be explained as we move through the article. The next few recipes we are going to examine have to do with understanding the usage of the OpenCL APIs, focusing our efforts on understanding the platform model of the architecture.
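As a first, hands-on look at those APIs, the following is a minimal sketch in C of host-side platform and device discovery, covering the first two points listed earlier (discovering the system's makeup and probing device properties). It uses only standard OpenCL entry points (clGetPlatformIDs, clGetPlatformInfo, clGetDeviceIDs, and clGetDeviceInfo); the fixed-size arrays, the cap of eight platforms and devices, and the <CL/cl.h> include path are simplifications of mine, and production code should check every returned error code.

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    /* Discover the makeup of the heterogeneous system. */
    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS)
        return 1;

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char platform_name[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(platform_name), platform_name, NULL);
        printf("Platform %u: %s\n", p, platform_name);

        /* Probe the devices exposed by this platform. */
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8,
                           devices, &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char device_name[256];
            cl_uint compute_units = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(device_name), device_name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof(compute_units), &compute_units, NULL);
            printf("  Device %u: %s (%u compute units)\n",
                   d, device_name, compute_units);
        }
    }
    return 0;
}

Linking against the OpenCL library (for example, -lOpenCL on Linux) is all that is needed to build it, and the printed compute unit counts tie directly back to the platform model described earlier.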

Salesforce CRM Functions

Packt
27 Aug 2013
3 min read
(For more resources related to this topic, see here.) Functional overview of Salesforce CRM The Salesforce CRM functions are related to each other and, as mentioned previously, have cross-over areas which can be represented as shown in the following diagram: Marketing administration Marketing administration is available in Salesforce CRM under the application suite known as the Marketing Cloud. The core functionality enables organizations to manage marketing campaigns from initiation to lead development in conjunction with the sales team. The features in the marketing application can help measure the effectiveness of each campaign by analyzing the leads and opportunities generated as a result of specific marketing activities. Salesforce automation Salesforce automation is the core feature set within Salesforce CRM and is used to manage the sales process and activities. It enables salespeople to automate manual and repetitive tasks and provides them with information related to existing and prospective customers. In Salesforce CRM, Salesforce automation is known as the Sales Cloud, and helps the sales people manage sales activities, leads and contact records, opportunities, quotes, and forecasts. Customer service and support automation Customer service and support automation within Salesforce CRM is known as the Service Cloud, and allows support teams to automate and manage the requests for service and support by existing customers. Using the Service Cloud features, organizations can handle customer requests, such as the return of faulty goods or repairs, complaints, or provide advice about products and services. Associated with the functional areas, described previously, are features and mechanisms to help users and customers collaborate and share information known as enterprise social networking. Enterprise social networking Enterprise social network capabilities within Salesforce CRM enable organizations to connect with people and securely share business information in real time. Social networking within an enterprise serves to connect both employees and customers and enables business collaboration. In Salesforce CRM, the enterprise social network suite is known as Salesforce Chatter. Salesforce CRM record life cycle The capabilities of Salesforce CRM provides for the processing of campaigns through to customer acquisition and beyond as shown in the following diagram: At the start of the process, it is the responsibility of the marketing team to develop suitable campaigns in order to generate leads. Campaign management is carried out using the Marketing Administration tools and has links to the lead and also any opportunities that have been influenced by the campaign. When validated, leads are converted to accounts, contacts, and opportunities. This can be the responsibility of either the marketing or sales teams and requires a suitable sales process to have been agreed upon. In Salesforce CRM, an account is the company or organization and a contact is an individual associated with an account. Opportunities can either be generated from lead conversion or may be entered directly by the sales team. As described earlier in this article, the structure of Salesforce requires account ownership to be established which sees inherited ownership of the opportunity. Account ownership is usually the responsibility of the sales team. Opportunities are worked through a sales process using sales stages where the stage is advanced to the point where they are set as won/closed and become sales. 
Opportunity information should be logged in the organization's financial system. Upon financial completion and acceptance of the deal (and perhaps delivery of the goods or service), the post-customer acquisition process is then enabled where the account and contact can be recognized as a customer. Here the customer relationships concerning incidents and requests are managed by escalating cases within the customer services and support automation suite.

Getting Started with Mule

Packt
26 Aug 2013
10 min read
(For more resources related to this topic, see here.)

Mule ESB is a lightweight, Java-based enterprise service bus. Through an ESB, you can integrate and communicate with multiple applications. Mule ESB enables easy integration of existing systems, regardless of the different technologies that the applications use, including JMS, web services, JDBC, and HTTP.

Understanding Mule concepts and terminologies

An Enterprise Service Bus (ESB) is an application that gives access to other applications and services. Its main task is to be the messaging and integration backbone of an enterprise. An ESB is a distributed middleware system for integrating different applications. All these applications communicate through the ESB. It consists of a set of service containers that integrate various types of applications. The containers are interconnected with a reliable messaging bus.

Getting ready

An ESB is used for integration using a service-oriented approach. Its main features are as follows:

Polling JMS
Message transformation and routing services
Tomcat hot deployment
Web service security

We often use the abbreviation VETRO to summarize the ESB functionality:

V – validate (schema validation)
E – enrich
T – transform
R – route (either itinerary or content based)
O – operate (perform operations; they run at the backend)

Before introducing any ESB, developers and integrators must connect different applications in a point-to-point fashion.

How to do it...

After the introduction of an ESB, you just need to connect each application to the ESB so that every application can communicate with the others through the ESB. You can easily connect multiple applications through the ESB, as shown in the following diagram:

Need for the ESB

You can integrate different applications using an ESB. Each application can communicate through the ESB:

To integrate more than two or three services and/or applications
To integrate more applications, services, or technologies in the future
To use different communication protocols
To publish services for composition and consumption
For message transformation and routing

What is Mule ESB?

Mule ESB is a lightweight, Java-based enterprise service bus and integration platform that allows developers and integrators to connect applications together quickly and easily, enabling them to exchange data. There are two editions of Mule ESB: Community and Enterprise. Mule ESB Enterprise is the enterprise-class version of Mule ESB, with additional features and capabilities that are ideal for clustering and performance tuning, DataMapper, and the SAP connector. Mule ESB Community and Enterprise editions are built on a common code base, so it is easy to upgrade from Mule ESB Community to Mule ESB Enterprise. Mule ESB enables easy integration of existing systems, regardless of the different technologies that the applications use, including JMS, web services, JDBC, and HTTP. The key advantage of an ESB is that it allows different applications to communicate with each other by acting as a transit system for carrying data between applications within your enterprise or across the Internet.
Mule ESB includes powerful capabilities that include the following: Service creation and hosting: It exposes and hosts reusable services using Mule ESB as a lightweight service container Service mediation: It shields services from message formats and protocols, separate business logic from messaging, and enables location-independent service calls Message routing: It routes, filters, aggregates, and re-sequences messages based on content and rules Data transformation: It exchanges data across varying formats and transport protocols Mule ESB is lightweight but highly scalable, allowing you to start small and connect more applications over time. Mule provides a Java-based messaging framework. Mule manages all the interactions between applications and components transparently. Mule provides transformation, routing, filtering, Endpoint, and so on. How it works... When you examine how a message flows through Mule ESB, you can see that there are three layers in the architecture, which are listed as follows: Application Layer Integration Layer Transport Layer Likewise, there are three general types of tasks you can perform to configure and customize your Mule deployment. Refer to the following diagram: The following list talks about Mule and its configuration: Service component development: This involves developing or re-using the existing POJOs, which is a class with attributes and it generates the get and set methods, Cloud connectors, or Spring Beans that contain the business logic and will consume, process, or enrich messages. Service orchestration: This involves configuring message processors, routers, transformers, and filters that provide the service mediation and orchestration capabilities required to allow composition of loosely coupled services using a Mule flow. New orchestration elements can be created also and dropped into your deployment. Integration: A key requirement of service mediation is decoupling services from the underlying protocols. Mule provides transport methods to allow dispatching and receiving messages on different protocol connectors. These connectors are configured in the Mule configuration file and can be referenced from the orchestration layer. Mule supports many existing transport methods and all the popular communication protocols, but you may also develop a custom transport method if you need to extend Mule to support a particular legacy or proprietary system. Spring beans: You can construct service components from Spring beans and define these Spring components through a configuration file. If you don't have this file, you will need to define it manually in the Mule configuration file. Agents: An agent is a service that is created in Mule Studio. When you start the server, an agent is created. When you stop the server, this agent will be destroyed. Connectors: The Connector is a software component. Global configuration: Global configuration is used to set the global properties and settings. Global Endpoints: Global Endpoints can be used in the Global Elements tab. We can use the global properties' element as many times in a flow as we want. For that, we must pass the global properties' reference name. Global message processor: A global message processor observes a message or modifies either a message or the message flow; examples include transformers and filters. Transformers: A transformer converts data from one format to another. You can define them globally and use them in multiple flows. Filters: Filters decide which Mule messages should be processed. 
Filters specify the conditions that must be met for a message to be routed to a service or continue progressing through a flow. There are several standard filters that come with Mule ESB, which you can use, or you can create your own filters. Models: It is a logical grouping of services, which are created in Mule Studio. You can start and stop all the services inside a particular model. Services: You can define one or more services that wrap your components (business logic) and configure Routers, Endpoints, transformers, and filters specifically for that service. Services are connected using Endpoints. Endpoints: Services are connected using Endpoints. It is an object on which the services will receive (inbound) and send (outbound) messages. Flow: Flow is used for a message processor to define a message flow between a source and a target. Setting up the Mule IDE The developers who were using Mule ESB over other technologies such as Liferay Portal, Alfresco ECM, or Activiti BPM can use Mule IDE in Eclipse without configuring the standalone Mule Studio in the existing environment. In recent times, MuleSoft (http://www.mulesoft.org/) only provides Mule Studio from Version 3.3 onwards, but not Mule IDE. If you are using the older version of Mule ESB, you can get Mule IDE separately from http://dist.muleforge.org/mule-ide/releases/. Getting ready To set Mule IDE, we need Java to be installed on the machine and its execution path should be set in an environment variable. We will now see how to set up Java on our machine. Firstly, download JDK 1.6 or a higher version from the following URL: http://www.oracle.com/technetwork/java/javase/downloads/jdk6downloads-1902814.html. In your Windows system, go to Start | Control Panel | System | Advanced. Click on Environment Variables under System Variables, find Path, and click on it. In the Edit window, modify the path by adding the location of the class to its value. If you do not have the item Path, you may select the option of adding a new variable and adding Path as the name and the location of the class as its value. Close the window, reopen the command prompt window, and run your Java code. How to do it... If you go with Eclipse, you have to download Mule IDE Standalone 3.3. Download Mule ESB 3.3 Community edition from the following URL: http://www.mulesoft.org/extensions/mule-ide. Unzip the downloaded file and set MULE_HOME as the environment variable. Download the latest version of Eclipse from http://www.eclipse.org/downloads/. After installing Eclipse, you now have to integrate Mule IDE in the Eclipse. If you are using Eclipse Version 3.4 (Galileo), perform the following steps to install Mule IDE. If you are not using Version 3.4 (Galileo), the URL for downloading will be different. Open Eclipse IDE. Go to Help | Install New Software…. Write the URL in the Work with: textbox: http://dist.muleforge.org/muleide/updates/3.4/ and press Enter. Select the Mule IDE checkbox. Click on the Next button. Read and accept the license agreement terms. Click on the Finish button. This will take some time. When it prompts for a restart, shut it down and restart Eclipse. Mule configuration After installing Mule IDE, you will now have to configure Mule in Eclipse. Perform the following steps: Open Eclipse IDE. Go to Window | Preferences. Select Mule, add the distribution folder mule as standalone 3.3; click on the Apply button and then on the OK button. This way you can configure Mule with Eclipse. 
Installing Mule Studio Mule Studio is a powerful, user-friendly Eclipse-based tool. Mule Studio has three main components: a package tree, a palette, and a canvas. Mule ESB easily creates flows as well as edits and tests them in a few minutes. Mule Studio is currently in public beta. It is based on drag-and-drop elements and supports two-way editing. Getting ready To install Mule Studio, download Mule Studio from http://www.mulesoft.org/download-mule-esb-community-edition. How to do it... Unzip the Mule Studio folder. Set the environment variable for Mule Studio. While starting with Mule Studio, the config.xml file will be created automatically by Mule Studio. The three main components of Mule Studio are as follows: A package tree A palette A canvas A package tree A package tree contains the entire structure of your project. In the following screenshot, you can see the package explorer tree. In this package explorer tree, under src/main/java, you can store the custom Java class. You can create a graphical flow from src/main/resources. In the app folder you can store the mule-deploy.properties file. The folders src, main, and app contain the flow of XML files. The folders src, main, and test contain flow-related test files. The Mule-project.xml file contains the project's metadata. You can edit the name, description, and server runtime version used for a specific project. JRE System Library contains the Java runtime libraries. Mule Runtime contains the Mule runtime libraries. A palette The second component is palette. The palette is the source for accessing Endpoints, components, transformers, and Cloud connectors. You can drag them from the palette and drop them onto the canvas in order to create flows. The palette typically displays buttons indicating the different types of Mule elements. You can view the content of each button by clicking on them. If you do not want to expand elements, click on the button again to hide the content. A canvas The third component is canvas; canvas is a graphical editor. In canvas you can create flows. The canvas provides a space that facilitates the arrangement of Studio components into Mule flows. In the canvas area you can configure each and every component, and you can add or remove components on the canvas.
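To connect these ideas (flows, Endpoints, and message processors) to something tangible, here is a minimal sketch of what a Mule 3.x flow configuration might look like once a project exists in Studio or the IDE. The flow name, port, and path are arbitrary examples of mine, the schemaLocation pairs are trimmed to the two namespaces used, and the exact elements generated by your Studio version may differ, so treat it as an illustration rather than a drop-in file.

<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:http="http://www.mulesoft.org/schema/mule/http"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core
                          http://www.mulesoft.org/schema/mule/core/current/mule.xsd
                          http://www.mulesoft.org/schema/mule/http
                          http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd">

    <!-- A flow: an inbound HTTP Endpoint (message source) followed by one message processor -->
    <flow name="helloFlow">
        <http:inbound-endpoint exchange-pattern="request-response"
                               host="localhost" port="8081" path="hello"/>
        <set-payload value="Hello from Mule"/>
    </flow>
</mule>

Dragging an HTTP Endpoint and a Set Payload transformer onto the Studio canvas produces essentially this kind of XML in the flow file under src/main/app, which is a convenient way to see how the graphical view and the two-way XML editing relate.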

Networking Performance Design

Packt
23 Aug 2013
18 min read
(For more resources related to this topic, see here.) Device and I/O virtualization involves managing the routing of I/O requests between virtual devices and the shared physical hardware. Software-based I/O virtualization and management, in contrast to a direct pass through to the hardware, enables a rich set of features and simplified management. With networking, virtual NICs and virtual switches create virtual networks between virtual machines which are running on the same host without the network traffic consuming bandwidth on the physical network NIC teaming consists of multiple, physical NICs and provides failover and load balancing for virtual machines. Virtual machines can be seamlessly relocated to different systems by using VMware vMotion, while keeping their existing MAC addresses and the running state of the VM. The key to effective I/O virtualization is to preserve these virtualization benefits while keeping the added CPU overhead to a minimum. The hypervisor virtualizes the physical hardware and presents each virtual machine with a standardized set of virtual devices. These virtual devices effectively emulate well-known hardware and translate the virtual machine requests to the system hardware. This standardization on consistent device drivers also helps with virtual machine standardization and portability across platforms, because all virtual machines are configured to run on the same virtual hardware, regardless of the physical hardware in the system. In this article we will discuss the following: Describe various network performance problems Discuss the causes of network performance problems Propose solutions to correct network performance problems Designing a network for load balancing and failover for vSphere Standard Switch The load balancing and failover policies that are chosen for the infrastructure can have an impact on the overall design. Using NIC teaming we can group several physical network adapters attached to a vSwitch. This grouping enables load balancing between the different physical NICs and provides fault tolerance if a card or link failure occurs. Network adapter teaming offers a number of available load balancing and load distribution options. Load balancing is load distribution based on the number of connections, not on network traffic. In most cases, load is managed only for the outgoing traffic and balancing is based on three different policies: Route based on the originating virtual switch port ID (default) Route based on the source MAC hash Route based on IP hash Also, we have two network failure detection options and those are: Link status only Beacon probing Getting ready To step through this recipe, you will need one or more running ESXi hosts, a vCenter Server, and a working installation of vSphere Client. No other prerequisites are required. How to do it... To change the load balancing policy and to select the right one for your environment, and also select the appropriate failover policy, you need to follow the proceeding steps: Open up your VMware vSphere Client. Log in to the vCenter Server. On the left hand side, choose any ESXi Server and choose configuration from the right hand pane. Click on the Networking section and select the vSwitch for which you want to change the load balancing and failover settings. You may wish to override this per port group level as well. Click on Properties. Select the vSwitch and click on Edit. Go to the NIC Teaming tab. Select one of the available policies from the Load Balancing drop-down menu. 
Select one of the available policies on the Network Failover Detection drop-down menu. Click on OK to make it effective. How it works... Route based on the originating virtual switch port ID (default) In this configuration, load balancing is based on the number of physical network cards and the number of virtual ports used. With this configuration policy, a virtual network card connected to a vSwitch port will always use the same physical network card. If a physical network card fails, the virtual network card is redirected to another physical network card. You typically do not see the individual ports on a vSwitch. However, each vNIC that gets connected to a vSwitch is implicitly using a particular port on the vSwitch. (It's just that there's no reason to ever configure which port, because that is always done automatically.) It does a reasonable job of balancing your egress uplinks for the traffic leaving an ESXi host as long as all the virtual machines using these uplinks have similar usage patterns. It is important to note that port allocation occurs only when a VM is started or when a failover occurs. Balancing is done based on a port's occupation rate at the time the VM starts up. This means that which pNIC is selected for use by this VM is determined at the time the VM powers on based on which ports in the vSwitch are occupied at the time. For example, if you started 20 VMs in a row on a vSwitch with two pNICs, the odd-numbered VMs would use the left pNIC and the even-numbered VMs would use the right pNIC and that would persist even if you shut down all the even-numbered VMs; the left pNIC, would have all the VMs and the right pNIC would have none. It might happen that two heavily-loaded VMs are connected to the same pNIC, thus load is not balanced. This policy is the easiest one and we always call for the simplest one to map it to a best operational simplification. Now when speaking of this policy, it is important to understand that if, for example, teaming is created with two 1 GB cards, and if one VM consumes more than one card's capacity, a performance problem will arise because traffic greater than 1 Gbps will not go through the other card, and there will be an impact on the VMs sharing the same port as the VM consuming all resources. Likewise, if two VMs each wish to use 600 Mbps and they happen to go to the first pNIC, the first pNIC cannot meet the 1.2 Gbps demand no matter how idle the second pNIC is. Route based on source MAC hash This principle is the same as the default policy but is based on the number of MAC addresses. This policy may put those VM vNICs on the same physical uplink depending on how the MAC hash is resolved. For MAC hash, VMware has a different way of assigning ports. It's not based on the dynamically changing port (after a power off and power on the VM usually gets a different vSwitch port assigned), but is instead based on fixed MAC address. As a result one VM is always assigned to the same physical NIC unless the configuration is not changed. With the port ID, the VM could get different pNICs after a reboot or VMotion. If you have two ESXi Servers with the same configuration, the VM will stay on the same pNIC number even after a vMotion. But again, one pNIC may be congested while others are bored. So there is no real load balancing. Route based on IP hash The limitation of the two previously-discussed policies is that a given virtual NIC will always use the same physical network card for all its traffic. 
IP hash-based load balancing uses the source and destination IP addresses to determine which physical network card to use. Using this algorithm, a VM can communicate through several different physical network cards based on its destination. This option requires the physical switch's ports to be configured for EtherChannel. Because the physical switch is configured accordingly, this option is the only one that also provides inbound load distribution, although that distribution is not necessarily balanced.

There are some limitations and reasons why this policy is not commonly used:

The route based on IP hash load balancing option involves added complexity and configuration support from upstream switches. Link Aggregation Control Protocol (LACP) or EtherChannel is required for this algorithm to be used; note that LACP is not available on a vSphere Standard Switch, so static EtherChannel has to be used there.
For IP hash to be an effective algorithm for load balancing, there must be many IP sources and destinations. This is not a common pattern for IP storage networks, where a single VMkernel port is used to access a single IP address on a storage device. A given source will always send all its traffic to the same destination (for example, google.com) through the same pNIC, though another destination (for example, bing.com) might go through another pNIC.

So, in a nutshell, due to the added complexity, the upstream dependency on advanced switch configuration, and the management overhead, this configuration is rarely used in production environments. The main reason is that if you use IP hash, the pSwitch must be configured with LACP or EtherChannel, and if you use LACP or EtherChannel, the load balancing algorithm must be IP hash. This is because with LACP, inbound traffic to the VM could come through any of the pNICs, and the vSwitch must be ready to deliver that traffic to the VM; only IP hash will do that (the other policies will drop inbound traffic to a VM that arrives on a pNIC the VM doesn't use).

There are only two failover detection options:

Link status only

The link status option enables the detection of failures related to the physical network's cables and switch. However, be aware that configuration issues are not detected. This option also cannot detect link state problems with upstream switches; it works only with the first hop switch from the host.

Beacon probing

The beacon probing option allows the detection of failures unseen by the link status option, by sending Ethernet broadcast frames through all the network cards. These frames allow the vSwitch to detect faulty configurations or upstream switch failures and force the failover if the ports are blocked. When using an inverted U physical network topology in conjunction with a dual-NIC server, it is recommended to enable link state tracking or a similar network feature in order to avoid traffic black holes. According to VMware's best practices, it is recommended to have at least three cards before activating this functionality. However, if IP hash is going to be used, beacon probing should not be used for network failure detection, in order to avoid an ambiguous state due to the limitation that a packet cannot hairpin on the port on which it was received. Beacon probing works by sending out and listening for beacon probes from the NICs in a team. If there are two NICs, each NIC will send out a probe and the other NIC will receive that probe.
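The three-card recommendation becomes easier to see with a small model of that probing logic. The sketch below is a simplification of my own (real beacon probing runs per VLAN and on timers), but it shows why two uplinks give an ambiguous verdict while three let the host single out the broken path.

# Simplified model of beacon probing: each uplink in the team broadcasts a
# beacon frame and listens for the beacons of its teammates. An uplink whose
# beacons nobody else receives is treated as failed. Illustrative only.

def failed_uplinks(uplinks, heard_by):
    """heard_by[u] is the set of teammates whose beacons uplink u received."""
    failed = []
    for u in uplinks:
        others = [o for o in uplinks if o != u]
        if others and all(u not in heard_by[o] for o in others):
            failed.append(u)          # no teammate hears u: its path looks broken
    return failed

# Three uplinks, vmnic2's upstream path is broken: the verdict is unambiguous.
print(failed_uplinks(
    ["vmnic0", "vmnic1", "vmnic2"],
    {"vmnic0": {"vmnic1"}, "vmnic1": {"vmnic0"}, "vmnic2": set()}))
# ['vmnic2']

# Only two uplinks and the beacons stop flowing: both look failed, so the
# host cannot tell which side of the path is actually broken.
print(failed_uplinks(
    ["vmnic0", "vmnic1"],
    {"vmnic0": set(), "vmnic1": set()}))
# ['vmnic0', 'vmnic1']

In the two-uplink case both adapters stop hearing each other, so neither can be singled out; that ambiguity is the reason behind the three-NIC guideline.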
Note also that because EtherChannel is considered one link, beacon probing will not function properly with it, as the NIC uplinks are not logically separate uplinks. If beacon probing is used in that setup, it can result in MAC address flapping errors, and network connectivity may be interrupted.

Designing a network for load balancing and failover for vSphere Distributed Switch

The load balancing and failover policies that are chosen for the infrastructure can have an impact on the overall design. Using NIC teaming, we can group several physical network adapters attached to a vSwitch. This grouping enables load balancing between the different physical NICs and provides fault tolerance if a card failure occurs.

The vSphere Distributed Switch offers a load balancing option that actually takes the network workload into account when choosing the physical uplink: Route based on physical NIC load, also called Load Based Teaming (LBT). We recommend this load balancing option over the others when using a distributed vSwitch. The benefits of using this load balancing policy are as follows:

It is the only load balancing option that actually considers NIC load when choosing uplinks.
It does not require upstream switch configuration dependencies like the route based on IP hash algorithm does.
When route based on physical NIC load is combined with Network I/O Control, a truly dynamic traffic distribution is achieved.

Getting ready

To step through this recipe, you will need one or more running ESXi Servers, a vCenter Server, and a working installation of vSphere Client. No other prerequisites are required.

How to do it...

To change the load balancing policy, select the right one for your environment, and also select the appropriate failover policy, follow these steps:

Open up your VMware vSphere Client.
Log in to the vCenter Server.
Navigate to Networking on the home screen.
Navigate to a Distributed Port group, right-click, and select Edit Settings.
Click on the Teaming and Failover section.
From the Load Balancing drop-down menu, select Route based on physical NIC load as the load balancing policy.
Choose the appropriate network failover detection policy from the drop-down menu.
Click on OK and your settings will be effective.

How it works...

Load Based Teaming, also known as route based on physical NIC load, maps vNICs to pNICs and remaps the vNIC-to-pNIC affiliation if the load exceeds specific thresholds on a pNIC. LBT uses the originating port ID algorithm for the initial port assignment, which results in the first vNIC being affiliated to the first pNIC, the second vNIC to the second pNIC, and so on. Once the initial placement is done after the VM is powered on, LBT examines both the inbound and outbound traffic on each of the pNICs and then redistributes the load if there is congestion. LBT flags congestion when the average utilization of a pNIC exceeds 75 percent over a period of 30 seconds; the 30-second interval is used to avoid MAC flapping issues. However, you should enable PortFast on the upstream switches if you plan to use STP. VMware recommends LBT over IP hash when you use a vSphere Distributed Switch, as it does not require any special or additional settings in the upstream switch layer, which reduces unnecessary operational complexity. LBT maps a vNIC to a pNIC and then distributes the load across all the available uplinks, unlike IP hash, which just maps the vNIC to a pNIC but does not do load distribution.
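To make that remapping step concrete, here is a small Python model of it. The 75 percent threshold and 30-second window are the figures mentioned above; the way load is sampled and the choice of which vNIC to move are simplifications of my own, so treat this purely as an illustration of the idea, not as the actual LBT algorithm.

# Simplified sketch of the Load Based Teaming remapping step. Illustrative only.
from collections import defaultdict

THRESHOLD = 0.75       # fraction of uplink capacity
WINDOW_SECONDS = 30    # averaging window

def initial_placement(vnics, uplinks):
    # LBT starts out like the originating port ID policy: round-robin affiliation.
    return {vnic: uplinks[i % len(uplinks)] for i, vnic in enumerate(vnics)}

def proposed_move(mapping, samples, capacity_mbps):
    """samples[vnic] is a list of per-second Mbps readings over the window.
    Returns (vnic, from_uplink, to_uplink) if an uplink is congested, else None."""
    load = defaultdict(float)
    for vnic, uplink in mapping.items():
        load[uplink] += sum(samples[vnic]) / WINDOW_SECONDS
    congested = max(load, key=load.get)
    if load[congested] / capacity_mbps <= THRESHOLD:
        return None
    # Here we simply move the lightest flow off the congested uplink; the real
    # algorithm's selection logic is not modeled.
    candidates = [v for v, u in mapping.items() if u == congested]
    mover = min(candidates, key=lambda v: sum(samples[v]))
    target = min(load, key=load.get)
    return (mover, congested, target) if target != congested else None

mapping = initial_placement(["vm1", "vm2", "vm3", "vm4"], ["vmnic0", "vmnic1"])
samples = {"vm1": [500] * 30, "vm3": [400] * 30,   # vm1 and vm3 both landed on vmnic0
           "vm2": [50] * 30,  "vm4": [50] * 30}
print(proposed_move(mapping, samples, capacity_mbps=1000))
# ('vm3', 'vmnic0', 'vmnic1')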
With a static policy such as IP hash, it may therefore happen that while a high network I/O VM is sending traffic through pNIC0, another VM is also mapped to the same pNIC and sends its traffic there as well.

What to know when offloading checksum

VMware takes advantage of many of the performance features of modern network adapters. In this section we are going to talk about two of them:

TCP checksum offload
TCP segmentation offload

Getting ready

To step through this recipe, you will need a running ESXi Server and an SSH client (such as PuTTY). No other prerequisites are required.

How to do it...

The list of network adapter features that are enabled on your NIC can be found in the file /etc/vmware/esx.conf on your ESXi Server. Look for the lines that start with /net/vswitch. However, do not change the default NIC driver settings unless you have a valid reason to do so. A good practice is to follow any configuration recommendations that are specified by the hardware vendor. Carry out the following steps in order to check the settings:

Open up your SSH client and connect to your ESXi host.
Open the file /etc/vmware/esx.conf.
Look for the lines that start with /net/vswitch.

Your output should look like the following screenshot:

How it works...

A TCP message must be broken down into Ethernet frames. The size of each frame is the maximum transmission unit (MTU). The default maximum transmission unit is 1500 bytes. The process of breaking messages into frames is called segmentation.

Modern NIC adapters have the ability to perform checksum calculations natively. TCP checksums are used to determine the validity of transmitted or received network packets based on an error-detecting code. These calculations are traditionally performed by the host's CPU. By offloading these calculations to the network adapters, the CPU is freed up to perform other tasks; as a result, the system as a whole runs better.

TCP segmentation offload (TSO) allows a TCP/IP stack in the guest OS inside the VM to emit large frames (up to 64 KB) even though the MTU of the interface is smaller. Earlier operating systems used the CPU to perform segmentation. Modern NICs try to optimize this TCP segmentation by using a larger segment size as well as offloading work from the CPU to the NIC hardware. ESXi utilizes this concept to provide a virtual NIC with TSO support, without requiring specialized network hardware. With TSO, instead of processing many small MTU frames during transmission, the system can send fewer, larger virtual MTU frames.

TSO improves performance for the TCP network traffic coming from a virtual machine and for network traffic sent out of the server. TSO is supported at the virtual machine level and in the VMkernel TCP/IP stack. TSO is enabled on the VMkernel interface by default. If TSO becomes disabled for a particular VMkernel interface, the only way to enable it is to delete that VMkernel interface and recreate it with TSO enabled. TSO is used in the guest when the VMXNET 2 (or later) network adapter is installed. To enable TSO at the virtual machine level, you must replace the existing VMXNET or flexible virtual network adapter with a VMXNET 2 (or later) adapter. This replacement might result in a change in the MAC address of the virtual network adapter.
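To make the two offloads concrete, the sketch below shows roughly the work that gets moved off the CPU: the standard Internet checksum (the ones' complement sum used by the TCP/IP protocols) and the slicing of a large send into MTU-sized frames. The header sizes and payload length used here are illustrative assumptions.

import math

def internet_checksum(data: bytes) -> int:
    # Ones' complement sum of 16-bit words -- the calculation that checksum
    # offload moves from the CPU onto the NIC.
    if len(data) % 2:
        data += b"\x00"                             # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)    # fold the carry back in
    return ~total & 0xFFFF

def frames_needed(payload_bytes: int, mtu: int = 1500,
                  ip_tcp_headers: int = 40) -> int:
    # Without TSO the host stack slices the send buffer into MTU-sized frames;
    # with TSO it can hand the NIC one ~64 KB chunk and let hardware do this.
    return math.ceil(payload_bytes / (mtu - ip_tcp_headers))

print(hex(internet_checksum(b"an example TCP payload")))
print(frames_needed(64 * 1024))     # roughly 45 frames per 64 KB chunk at MTU 1500

Doing this for every segment of every connection is cheap per packet but adds up at high packet rates, which is why handing it to the NIC frees a noticeable amount of CPU.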
Selecting the correct virtual network adapter

When you configure a virtual machine, you can add NICs and specify the adapter type. The types of network adapters that are available depend on the following factors:

The version of the virtual machine, which depends on which host created it or most recently updated it.
Whether or not the virtual machine has been updated to the latest version for the current host.
The guest operating system.

The following virtual NIC types are supported:

Vlance
VMXNET
Flexible
E1000
Enhanced VMXNET (VMXNET 2)
VMXNET 3

If you want to know more about these network adapter types, refer to the following KB article: http://kb.vmware.com/kb/1001805

Getting ready

To step through this recipe, you will need one or more running ESXi Servers, a vCenter Server, and a working installation of vSphere Client. No other prerequisites are required.

How to do it...

There are two ways to choose a particular virtual network adapter: while creating a new VM, or while adding a new network adapter to an existing VM. To choose a network adapter while creating a new VM, follow these steps:

Open vSphere Client.
Log in to the vCenter Server.
Click on the File menu, and navigate to New | Virtual Machine.
Go through the wizard until you reach the step where you create network connections. Here you need to choose how many network adapters you need, which port group you want them to connect to, and an adapter type.

To choose an adapter type while adding a new network interface to an existing VM, follow these steps:

Open vSphere Client.
Log in to the vCenter Server.
Navigate to VMs and Templates on your home screen.
Select the existing VM where you want to add a new network adapter, right-click, and select Edit Settings.
Click on the Add button.
Select Ethernet Adapter.
Select the adapter type and select the network where you want this adapter to connect.
Click on Next and then click on Finish.

How it works...

Among all the supported virtual network adapter types, VMXNET is the paravirtualized device driver for virtual networking. The VMXNET driver implements an idealized network interface that passes the network traffic from the virtual machine to the physical cards with minimal overhead. The three versions of VMXNET are VMXNET, VMXNET 2 (Enhanced VMXNET), and VMXNET 3. The VMXNET driver improves performance through a number of optimizations:

It shares a ring buffer between the virtual machine and the VMkernel and uses zero copy, which in turn saves CPU cycles. Zero copy improves performance by having the virtual machine and the VMkernel share a buffer, reducing the internal copy operations between buffers and freeing up CPU cycles.
It takes advantage of transmission packet coalescing to reduce address space switching.
It batches packets and issues a single interrupt, rather than issuing multiple interrupts. This improves efficiency, but in some cases with slow packet-sending rates it could hurt throughput while the driver waits to accumulate enough packets to actually send.
It offloads the TCP checksum calculation to the network hardware rather than using the CPU resources of the virtual machine monitor.

Use VMXNET 3 if you can, or the most recent model available to you, and use VMware Tools where possible. For certain unusual types of network traffic, the generally-best model isn't always optimal; if you have poor network performance, experiment with other types of vNICs to see which performs best.
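If you want to check at a glance which adapter model each VM is currently using, the vSphere API can report it. The sketch below assumes the pyVmomi library (VMware's Python SDK for the vSphere API) is installed, and the host name and credentials are placeholders to replace with your own; treat it as a starting point to adapt rather than a finished tool.

# Inventory sketch: list the virtual NIC model of every VM, so you can spot
# machines still running on E1000 or flexible adapters. Assumes pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.local",   # hypothetical host and credentials
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:
            continue
        nics = [dev.__class__.__name__             # e.g. VirtualVmxnet3, VirtualE1000
                for dev in vm.config.hardware.device
                if isinstance(dev, vim.vm.device.VirtualEthernetCard)]
        print(vm.name, nics)
    view.Destroy()
finally:
    Disconnect(si)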

AppFog Top Features You Need to Know

Packt
20 Aug 2013
17 min read
(For more resources related to this topic, see here.) Auto reconfigure Most application's life cycle will involve using different databases in different environments. For example, you may use one database locally for development environments, but when you deploy to production, you will most likely have a production database in high-end machines. It can be a very tedious task to manage these changes during each deployment. AppFog supports the auto configure feature that automatically detects the database settings in your application and rewrites them using the bound service's credentials and settings. However, only some of the frameworks, such as Ruby on Rails and the Java Spring framework, are supported by AppFog for auto reconfigure. Enabling auto reconfigure AppFog will turn on auto configure automatically if you deploy a Spring application with the javax.sql.DataSource bean defined in the spring context XML file. AppFog will parse this file and override the driver class, URL, username, and password that form to match the service bound to the application. The following is an example snippet of the Spring context XML that will enable the AppFog auto reconfigure feature during deployment: <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"destroy-method="close"><property name="driverClassName" value="com.mysql.jdbc.Driver" /><property name="url" value="jdbc:mysql://127.0.0.1:3306/test" /><property name="username" value="spring" /><property name="password" value="spring" /></bean> This is because this file includes a reference to the org.apache.commons.dbcp.BasicDataSource bean that implements the javax.sql.DataSource interface, and therefore turns on the auto reconfigure feature. This feature is very helpful because it enables developers to deploy to AppFog without changing a single line of code. AppFog supports more auto recon figure features for Spring applications for other services such as MongoDB, Redis, and RabbitMQ. There are a couple of requirements to enable auto reconfigure on AppFog, and they are: Only one javax.sql.DataSource bean definition should be allowed in the Spring context XML file. Only one type of service should be bound. For example, for a relational database, only either one of the bound MySQL or PostgreSQL will enable auto reconfigure. Disabling auto reconfigure In some situations, you may not want to use the auto reconfigure feature; for example, you may want to use other database solutions such as MongoDB from MongoLab, MySQL from RDS by Amazon Web Services, or your own database installation on the same infrastructure. Java Spring application If you don't want to enable the AppFog auto reconfigure feature for your Java Spring application while creating the project, you can just select JavaWeb instead of choosing Spring. Ruby on Rails If you do not want to enable the AppFog auto reconfigure feature for your Rails application database, you can disable it easily by creating a new file config/cloudfoundry.yml and then add the following line to disable the auto reconfigure feature: autoconfig: false So all in all, AppFog's auto reconfigure feature is a great time-saving option that allows you to deploy your app without even knowing the details involved, and you still remain in control, so if you don't want to use it you can just disable it as we have seen earlier. Custom SSL If you are dealing with any sensitive information such as login credentials or credit card information, then having SSL is essential. 
SSL encrypts your data before it is sent, which can prevent man-in-the-middle attacks, where people intercept the data that otherwise would be transferred in plain text. AppFog provides a default SSL for applications that use an AppFog-provided subdomain under *.af.cm. The AppFog platform enables developers to deploy applications that easily enable custom SSL. This feature is only available for a paid plan. At the time of writing, the cheapest plan that offers SSL is priced at $50 per month with one end point. Be aware that AppFog custom SSL support is currently available only for those applications on AppFog that are hosted in the Amazon Web Service infrastructure; thus, applications that are deployed to Rackspace, Windows Azure, and HP Cloud cannot use this feature even when you are using a paid plan. The Install tool To generate an RSA private key, you need to have OpenSSL installed on your machine. Most of the Linux distro install OpenSSL by default. To install OpenSSL in Windows, you need to download the installer from http://gnuwin32.sourceforge.net/packages/openssl.htm and then install it according to the installer instructions. With OpenSSL installed, we can move on to generating our own private key. Generating a private key It is easy to generate an RSA private key using OpenSSL. openssl genrsa is the command to generate an RSA private key. Make sure OpenSSL's bin folder is in the PATH environment variable, or you can use the console to navigate to OpenSSL's bin folder. The location of the bin folder of my machine is c:Program Files (x86)GnuWin32bin. c:Program Files (x86)GnuWin32bin>openssl genrsa -des3 -out server.key1024Loading 'screen' into random state - doneGenerating RSA private key, 1024 bit long modulus...........++++++..........++++++e is 65537 (0x10001)Enter pass phrase for server.key:Verifying - Enter pass phrase for server.key: The preceding command will generate the RSA private key with 1024 bit strength. The pass phrase was required to generate the key but we will remove it later. Generating Certificate Signing Request In Public Key Infrastructure (PKI) systems, a Certificate Signing Request is a message sent from an applicant to the Certificate Authority in order to apply for a digital identity certificate. So we need to generate a Certificate Signing Request for the Certificate Authority to sign. 
To generate a Certificate Signing Request, use the openssl req command: c:Program Files (x86)GnuWin32bin>openssl req -new -key server.key -outserver.csrLoading 'screen' into random state - doneYou are about to be asked to enter information that will be incorporatedinto your certificate request.What you are about to enter is what is called a Distinguished Name or aDN.There are quite a few fields but you can leave some blankFor some fields there will be a default value,If you enter '.', the field will be left blank.-----Country Name (2 letter code) [AU]:MYState or Province Name (full name) [Some-State]:Kuala LumpurLocality Name (eg, city) []:Kuala LumpurOrganization Name (eg, company) [Internet Widgits Pty Ltd]:Dream and MeOrganizational Unit Name (eg, section) []:ITCommon Name (eg, YOUR name) []:Dream and MeEmail Address []:[email protected] enter the following 'extra' attributesto be sent with your certificate requestA challenge password []:An optional company name []: If you are on a Windows machine and encounter the following error: Unable to load config info from /usr/local/ssl/openssl.cnf Then you need to add the following for the command to load the config file from the custom location: -config "C:Program Files (x86)GnuWin32shareopenssl.cnf" Please note that the path to the config file might be different based on your installation. You can send the created server.csr file to the SSL certificate provider to sign it. The next step is to remove the pass phrase protection so that we can use it on AppFog: c:Program Files (x86)GnuWin32bin>openssl rsa -in server.key -outserver.keyEnter pass phrase for server.key:writing RSA key The new private key file will be created without the pass phrase protection. You will need to upload this new private key to AppFog. Installing the SSL certificate To install our new SSL certificate to AppFog, we will need to log in to the AppFog web console. On the main page, open the SSL tab and click on the Get Started button. On the new page, you will need to upload the server.csr file along with your server.key private key. Once they are uploaded, AppFog will provide you with an SSL terminator that will look like the following: af-ssl-term-0-000000000.us-east-1.elb.amazonaws.com You will need to sign in to your domain provider and create/modify CNAME to point to the SSL terminator provided to you. It could take a while for the DNS to propagate, but once done, you will have your custom SSL set up! This will give your users more confidence in your application, and is a lot more secure than the HTTP protocol. Teams Most applications are developed by teams, and deploying them is no exception. As such, AppFog allows you to create and manage teams with permissions for starting, stopping, and restarting applications. This feature is still in beta and only available to the paid plans, as of the time of writing this article. Once you start using the paid plan, you can navigate to the Teams tab and start to invite people to join your team. Once the invitation is approved by the user, then he/she can start to manage your application! 
You can also manage team members from the console as follows: c:optappfog-starterappfog-blog>af -u [email protected] update appfogblog Currently, the team feature only supports basic permission controls, but in the future, AppFog will implement more complex authorizations, such as roles and groups, which will allow different permissions for different environments, such as allowing QA engineers to be able to manage only the QA environment but not production applications. Third-party add-ons AppFog provides third-party services that you can easily install and tie into your application. A list of add-ons can be found in the Add-ons tab of the application's details page. These add-ons can be very useful for developers, such as the Mailgun add-on, which provides a service for developers to send e-mails via the cloud without setting up a mail server or relying on the Gmail SMTP that will limit the request. Another useful add-on is Blitz, which is a cloud-based load testing tool for developers to find performance bottlenecks. This add-on allows you to easily set it up and start load testing in minutes! Installing an add-on A great example for showing off the process of using an add-on is the Logentries add-on. The Logentries add-on allows you to manage your application's logs from the cloud. To install it, just go to the Add-ons page and hit the Install button, which you will find right under the description, as shown in the following screenshot: Managing add-ons After successfully installing an add-on, you will see two buttons being displayed that will help you to manage the add-ons: After clicking on the Manage button, you will be able to sign in to Logentries with a single sign-on feature. AppFog and the AppFog add-on provider provide good integration that allows you to sign in to the add-on console without a username and password as they are integrated using a single sign-on. Configuring Rails to use Logentries The next step is to set up your application to use Logentries. For a Rails application, you need to add the le gem into the Gemfile and then install it with a bundler using the bundle install command. Once installed, we need to configure the logger by modifying the config/environment.rb file. All you have to do is just add the following lines: if Rails.env.development?Rails.logger = Le.new('LOGENTRIES_TOKEN', true)elseRails.logger = Le.new('LOGENTRIES_TOKEN')end Replace LOGENTRIES_TOKEN with the token created in the Logentries UI. The second parameter tells the app if it should be dumped to the console instead. So for development, we will just be printing our errors to the console, whereas in production, we will be logging to the Logentries service. Rails.logger.info("information message")Rails.logger.warn("warning message")Rails.logger.debug("debug message") For more information on Logentries, you can view its documentation page at https://logentries.com/doc/. There are many other add-ons such as Redis Cloud, IronWorker, Blitz, Mailgun, and more. All of these add-ons provide good documentation on how to install, configure, and use them in your application. This is just another great example of how AppFog speeds up the development process for developers besides just providing a great infrastructure to work on. Tunnel AppFog secures its services, such as databases, from outside access, which is great in most situations, as only your application should have access to the database. 
However, there are situations where you will need remote access, for example, while running ad-hoc queries against your database for one-time analysis. For these types of situations, you will need to tunnel into the AppFog environment to locally access the resources. Install Caldecott Gem To begin with, we need to install the caldecott gem that will allow us to connect through TCP over an HTTP tunnel. c:optappfog-starterappfog-blog>af tunnel appfog-blog-data To use af tunnel, you must first install caldecott: gem install caldecott Note that you'll need a C compiler. If you're on OS X, Xcode will provide one. If you're on Windows, try DevKit. To install caldecott, simply run gem install caldecott from the console. With it installed, we are ready to create a tunnel. Tunnel to service You can use the af tool to create a tunnel and bind a local port to the remote port on the AppFog infrastructure. Just run af tunnel <servicename> [--port], where <servicename> is the name of the service you want to tunnel to, and you can optionally specify the port number to bind to. c:optappfog-starterappfog-blog>af tunnel appfog-blog-dataGetting tunnel connection info: OKService connection info:username : uf52effea2387407ba14bb0d94b820af1password : pf79b4c9e197841298386f2543a5d7857name : d5198ab07a6434a68adeaa9162e31e8d5infra : rsStarting tunnel to appfog-blog-data on port 10000.1: none2: psqlWhich client would you like to start?: 1Open another shell to run command-line clients oruse a UI tool to connect using the displayed information.Press Ctrl-C to exit... During the tunneling process, you can choose to either run the psql client or none. In my example, I have chosen none since I will use PG Admin3 to manage. The following table shows the clients that can start by caldecott. You need to make sure the client executable is in the PATH environment variable. Service   Client   MongoDB   mongo   MySQL   mysql   PostgreSQL   psql   If your favorite client is not in the list, you can choose none. The af tool will output the details of the credentials, so you can paste them into your favorite client to manage the databases. The following is an example of me using PG Admin3 to sign in. Once connected, you can use it as if the database was local, view data, and even create new tables. AppFog provides a secure channel for you to manage and tunnel your data service. Moreover, you can use your favorite database client, such as a MySQL workbench or pgAdmin3. Export/import service One of the features you must know is the export/import service. This feature allows you to export existing services' data and import this data to new services. This is very helpful for developers to clone production data to another service for other purposes, such as to analyze data or use as a development database. At the time of writing, AppFog only provides the af tool to export/import services. You can export a service using the af export-service <service> command: c:optappfog-starterappfog-blog>af export-service appfog-blog-dataExporting data from 'appfog-blog-data': OKhttp://dl.rs.af.cm/serialized/postgresql/dcb9c83b851524c17bfc9778ba8f5c1ac/snapshots/1629?token=PEXKgYJy9B8e After running the export command, you will be provided with a link to the snapshot. You can download it and take its backup. Using this link, you can import into a new service and initialize with the data. 
To import a service, you can use the af import-service <service> <url> command, where <service> is the new service's name and <url> is the link you exported from another service. For example, if you want to name the new service as appfog-blog-data-singapore, you can simply use the following command: c:optappfog-starterappfog-blog>af import-service appfogblog-data-singapore http://dl.rs.af.cm/serialized/postgresql/dcb9c83b851524c17bfc9778ba8f5c1ac/snapshots/1629?token=PEXKgYJy9B8eImporting data into 'appfog-blog-data-singapore': OK It's worth noting that you can only create a service of the same type with the snapshot tools. For example, you cannot create a MySQL database from the snapshot of a PostgreSQL database. Cloning We have just seen some features for cloning the database services. AppFog also offers a similar feature for your application itself. The cloning abilities allow you to replicate your application, optionally including the services. The difference between this and the previous export/import method is, of course, that here you clone the application as well. When cloning your application, you can choose a different infrastructure. So for instance, you may have deployed your app to the HP infrastructure, but the clone feature allows you to replicate it—let's say, on the AWS cloud—with zero downtime. To clone a complete application including its services, you can use the Clone tab on the application's admin section, as shown in the following screenshot: Choosing an infrastructure The first step is to choose the infrastructure, which, at the time of writing, had the following options: AWS Asia Southeast AWS Europe West AWS US East HP Openstack AZ 2 MS Azure AZ 1 Choosing a subdomain Your new application needs a new subdomain to map to. Currently, the AppFog clone feature is only able to map to the *.af.cm subdomain when you clone, but once the application is set up, you can map your own custom domain. To clone an application from the command line, you can use the af clone <src-app> <dest-app> [infra] command. To view a list of the available infrastructures, you can just run the af infras command: c:optappfog-starterappfog-blog>af infras+--------+-------------------------+| Name | Description |+--------+-------------------------+| aws | AWS US East - Virginia || eu-aws | AWS EU West - Ireland || ap-aws | AWS Asia SE - Singapore || hp | HP AZ 2 - Las Vegas |+--------+-------------------------+ So, to clone your application to AWS Singapore, just execute the following: c:optappfog-starterappfog-blog>af clone appfog-blog appfog-blogsingapore-clone ap-aws1: AWS US East - Virginia2: AWS EU West - Ireland3: AWS Asia SE - Singapore4: HP AZ 2 - Las VegasSelect Infrastructure: 3Application Deployed URL [appfog-blog-singapore-clone.ap01.aws.af.cm]:Pulling last pushed source code: OKCloning 'appfog-blog' to 'appfog-blog-singapore-clone':Uploading Application:Checking for available resources: OKPacking application: OKUploading (33K): OKPush Status: OKExporting data from appfog-blog-data: OKCreating service appfog-blog-singapore-clone-data: OKBinding service appfog-blog-singapore-clone-data: OKImporting data to appfog-blog-singapore-clone-data: OKStaging Application 'appfog-blog-singapore-clone': OKStarting Application 'appfog-blog-singapore-clone': OK This simple one-line command just cloned your entire application from one infrastructure to another within minutes! Cool, right? You can then view the new application from the web console and check its status. 
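If you find yourself repeating the clone step, for example as part of a scripted rollout, a thin wrapper around the same af commands can help. The sketch below only shells out to the af infras and af clone commands shown above; it assumes the af tool is installed and that you are already logged in, and since af may still prompt interactively, its prompts are left on the terminal for you to answer.

# Minimal wrapper around the af CLI used in this section. Illustrative sketch.
import subprocess

def af(*args):
    """Run an af command, letting any interactive prompts through to the terminal."""
    print("$ af " + " ".join(args))
    subprocess.run(["af", *args], check=True)

def clone_app(src_app, dest_app, infra):
    af("infras")                              # show the available infrastructures
    af("clone", src_app, dest_app, infra)     # af may still ask you to confirm choices

if __name__ == "__main__":
    clone_app("appfog-blog", "appfog-blog-singapore-clone", "ap-aws")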
AppFog provides an awesome clone feature that allows you to clone an application from one infrastructure to another. While this needs to be carefully done on a production application, this feature still has many use cases that will ease the developer's workload. As I hope you have now seen, AppFog is not just a simple PaaS that allows you to deploy applications. AppFog extends this basic functionality with tons of features such as custom domains, app-cloning, and multiple data center setup options. Besides offering amazing features for developers, AppFog also offers features for your customers, such as allowing you to deploy to their location, for example Singapore, which will decrease the latency across Asia. Third-party add-on is yet another cool feature that is available on the AppFog platform. For example, MongoLab/MongoHQ provides free add-ons to an AppFog user with maximum 500 MB of storage, which is a huge amount of storage and is enough for small productions. Moreover, Logentries allows you to rapidly develop and test your backend with load testing and logging features Summary This article introduced you to the AppFog features and showed how to use them in a real-world environment. The features included load balancing, SSL, add-ons, teams, clones, tunnels, and so on. Resources for Article : Further resources on this subject: Introduction to Cloud Computing with Microsoft Azure [Article] Apache CloudStack Architecture [Article] Troubleshooting in OpenStack Cloud Computing [Article]

Testing with Xtext and Xtend

Packt
20 Aug 2013
20 min read
(For more resources related to this topic, see here.) Introduction to testing Writing automated tests is a fundamental technology / methodology when developing software. It will help you write quality software where most aspects (possibly all aspects) are somehow verified in an automatic and continuous way. Although successful tests do not guarantee that the software is bug free, automated tests are a necessary condition for professional programming (see Beck 2002, Martin 2002, 2008, 2011 for some insightful reading about this subject). Tests will also document your code, whether it is a framework, a library, or an application; tests are form of documentation that does not risk to get stale with respect to the implementation itself. Javadoc comments will likely not be kept in synchronization with the code they document, manuals will tend to become obsolete if not updated consistently, while tests will fail if they are not up-to-date. The Test Driven Development (TDD) methodology fosters the writing of tests even before writing production code. When developing a DSL one can relax this methodology by not necessarily writing the tests first. However, one should write tests as soon as a new functionality is added to the DSL implementation. This must be taken into consideration right from the beginning, thus, you should not try to write the complete grammar of a DSL, but proceed gradually; write a few rules to parse a minimal program, and immediately write tests for parsing some test input programs. Only when these tests pass you should go on to implementing other parts of the grammar. Moreover, if some validation rules can already be implemented with the current version of the DSL, you should write tests for the current validator checks as well. Ideally, one does not have to run Eclipse to manually check whether the current implementation of the DSL works as expected. Using tests will then make the development much faster. The number of tests will grow as the implementation grows, and tests should be executed each time you add a new feature or modify an existing one. You will see that since tests will run automatically, executing them over and over again will require no additional effort besides triggering their execution (think instead if you should manually check that what you added or modified did not break something). This also means that you will not be scared to touch something in your implementation; after you did some changes, just run the whole test suite and check whether you broke something. If some tests fail,you will just need to check whether the failure is actually expected (and in case fix the test) or whether your modifications have to be fixed. It is worth noting that using a version control system (such as Git) is essential to easily get back to a known state; just experimenting with your code and finding errors using tests does not mean you can easily backtrack. You will not even be scared to port your implementation to a new version of the used frameworks. For example, when a new version of Xtext is released, it is likely that some API has changed and your DSL implementation might not be built anymore with the new version. Surely, running the MWE2 workflow is required. But after your sources compile again, your test suite will tell you whether the behavior of your DSL is still the same. In particular, if some of the tests fail, you can get an immediate idea of which parts need to be changed to conform to the new version of Xtext. 
Moreover, if your implementation relies on a solid test suite, it will be easier for contributors to provide patches and enhancements for your DSL; they can run the test suite themselves or they can add further tests for a specific bugfix or for a new feature. It will also be easy for the main developers to decide whether to accept the contributions by running the tests. Last but not the least, you will discover that writing tests right from the beginning will force you to write modular code (otherwise you will not be able to easily test it) and it will make programming much more fun. Xtext and Xtend themselves are developed with a test driven approach. Junit 4 Junit is the most popular unit test framework for Java and it is shipped with the Eclipse JDT. In particular, the examples in this article are based on Junit version 4. To implement Junit tests, you just need to write a class with methods annotated with @org.junit.Test. We will call such methods simply test methods. Such Java (or Xtend) classes can then be executed in Eclipse using the "Junit test" launch configuration; all methods annotated with @Test will be then executed by Junit. In test methods you can use assert methods provided by Junit to implement a test. For example, assertEquals (expected, actual) checks whether the two arguments are equal; assertTrue(expression) checks whether the passed expression evaluates to true. If an assertion fails, Junit will record such failure; in particular, in Eclipse, the Junit view will provide you with a report about tests that failed. Ideally, no test should fail (and you should see the green bar in the Junit view). All test methods can be executed by Junit in any order, thus, you should never write a test method which depends on another one; all test methods should be executable independently from each other. If you annotate a method with @Before, that method will be executed before each test method in that class, thus, it can be used to prepare a common setup for all the test methods in that class. Similarly, a method annotated with @After will be executed after each test method (even if it fails), thus, it can be used to cleanup the environment. A static method annotated with @BeforeClass will be executed only once before the start of all test methods (@AfterClass has the complementary intuitive functionality). The ISetup interface Running tests means we somehow need to bootstrap the environment to make it support EMF and Xtext in addition to the implementation of our DSL. This is done with a suitable implementation of ISetup. We need to configure things differently depending on how we want to run tests; with or without Eclipse and with or without Eclipse UI being present. The way to set up the environment is quite different when Eclipse is present, since many services are shared and already part of the Eclipse environment. When setting up the environment for non-Eclipse use (also referred to as standalone) there are a few things that must be configured, such as creating a Guice injector and registering information required by EMF. The method createInjectorAndDoEMFRegistration in the ISetup interface is there to do exactly this. Besides the creation of an Injector, this method also performs all the initialization of EMF global registries so that after the invocation of that method, the EMF API to load and store models of your language can be fully used, even without a running Eclipse. 
Xtext generates an implementation of this interface, named after your DSL, which can be found in the runtime plugin project. For our Entities DSL it is called EntitiesStandaloneSetup. The name "standalone" expresses the fact that this class has to be used when running outside Eclipse. Thus, the preceding method must never be called when running inside Eclipse (otherwise the EMF registries will become inconsistent). In a plain Java application the typical steps to set up the DSL (for example, our Entities DSL) can be sketched as follows: Injector injector = new EntitiesStandaloneSetup().createInjectorAndDoEMFRegistration();XtextResourceSet resourceSet = injector.getInstance(XtextResourceSet.class);resourceSet.addLoadOption (XtextResource.OPTION_RESOLVE_ALL, Boolean.TRUE);Resource resource = resourceSet.getResource (URI.createURI("/path/to/my.entities"), true);Model model = (Model) resource.getContents().get(0); This standalone setup class is especially useful also for Junit tests that can then be run without an Eclipse instance. This will speed up the execution of tests. Of course, in such tests you will not be able to test UI features. As we will see in this article, Xtext provides many utility classes for testing which do not require us to set up the runtime environment explicitly. However, it is important to know about the existence of the setup class in case you either need to tweak the generated standalone compiler or you need to set up the environment in a specific way for unit tests. Implementing tests for your DSL Xtext highly fosters using unit tests, and this is reflected by the fact that, by default, the MWE2 workflow generates a specific plug-in project for testing your DSL. In fact, usually tests should reside in a separate project, since they should not be deployed as part of your DSL implementation. This additional project ends with the .tests suffix, thus, for our Entities DSL, it is org.example.entities.tests. The tests plug-in project has the needed dependencies on the required Xtext utility bundles for testing. We will use Xtend to write Junit tests. In the src-gen directory of the tests project, you will find the injector p roviders for both headless and UI tests. You can use these providers to easily write Junit test classes without having to worry about the injection mechanisms setup. The Junit tests that use the injector provider will typically have the following shape (using the Entities DSL as an example): @RunWith(typeof(XtextRunner))@InjectWith(typeof(EntitiesInjectorProvider))class MyTest { @Inject MyClass ... As hinted in the preceding code, in this class you can rely on injection; we used @InjectWith and declared that EntitiesInjectorProvider has to be used to create the injector. EntitiesInjectorProvider will transparently provide the correct configuration for a standalone environment. As we will see later in this article, when we want to test UI features, we will use EntitiesUiInjectorProvider (note the "Ui" in the name). Testing the parser The first tests you might want to write are the ones which concern parsing. This reflects the fact that the grammar is the first thing you must write when implementing a DSL. You should not try to write the complete grammar before starting testing: you should write only a few rules and soon write tests to check if those rules actually parse an input test program as you expect. 
The nice thing is that you do not have to store the test input in a file (though you could do that); the input to pass to the parser can be a string, and since we use Xtend, we can use multi-line strings. The Xtext test framework provides the class ParseHelper to easily parse a string. The injection mechanism will automatically tell this class to parse the input string with the parser of your DSL. To parse a string, we inject an instance of ParseHelper<T>, where T is the type of the root class in our DSL's model – in our Entities example, this class is called Model. The method ParseHelper.parse will return an instance of T after parsing the input string given to it. By injecting the ParseHelper class as an extension, we can directly use its methods on the strings we want to parse. Thus, we can write: @RunWith(typeof(XtextRunner))@InjectWith(typeof(EntitiesInjectorProvider))class EntitiesParserTest { @Inject extension ParseHelper<Model> @Test def void testParsing() { val model = ''' entity MyEntity { MyEntity attribute; } '''.parse val entity = model.entities.get(0) Assert::assertEquals("MyEntity", entity.name) val attribute = entity.attributes.get(0) Assert::assertEquals("attribute", attribute.name); Assert::assertEquals("MyEntity", (attribute.type.elementType as EntityType). entity.name); } ... In this test, we parse the input and test that the expected structure was constructed as a result of parsing. These tests do not add much value in the Entities DSL, but in a more complex DSL you do want to test that the structure of the parsed EMF model is as you expect. You can now run the test: right-click on the Xtend file and select Run As | JUnit Test as shown in the following screenshot. The test should pass and you should see the green bar in the Junit view. Note that the parse method returns an EMF model even if the input string contains syntax errors (it tries to parse as much as it can); thus, if you want to make sure that the input string is parsed without any syntax error, you have to check that explicitly. To do that, you can use another utility class, ValidationTestHelper. This class provides many assert methods that take an EObject argument. You can use an extension field and simply call assertNoErrors on the parsed EMF object. Alternatively, if you do not need the EMF object but you just need to check that there are no parsing errors, you can simply call it on the result of parse, for example: class EntitiesParserTest { @Inject extension ParseHelper<Model> @Inject extension ValidationTestHelper... @Test def void testCorrectParsing() { ''' entity MyEntity { MyEntity attribute } '''.parse.assertNoErrors } If you try to run the tests again, you will get a failure for this new test, as shown in the following screenshot: The reported error should be clear enough: we forgot to add the terminating ";" in our input program, thus we can fix it and run the test again; this time the green bar should be back. You can now write other @Test methods for testing the various features of the DSL (see the sources of the examples). Depending on the complexity of your DSL you may have to write many of them. Tests should test one specific thing at a time; lumping things together (to reduce the overhead of having to write many test methods) usually makes it harder later. Remember that you should follow this methodology while implementing your DSL, not after having implemented all of it. 
If you follow this strictly, you will not have to launch Eclipse to manually check that you implemented a feature correctly, and you will note that this methodology will let you program really fast. Ideally, you should start with the grammar with a single rule, especially if the grammar contains nonstandard terminals. The very first task is to write a grammar that just parses all terminals. Write a test for that to ensure there are no overlapping terminals before proceeding; this is not needed if terminals are not added to the standard terminals. After that add as few rules as possible in each round of development/testing until the grammar is complete. Testing the validator Earlier we used the ValidationTestHelper class to test that it was possible to parse without errors. Of course, we also need to test that errors and warnings are detected. In particular, we should test any error situation handled by our own validator. The ValidationTestHelper class contains utility methods (besides assertNoErrors) that allow us to test whether the expected errors are correctly issued. For instance, for our Entities DSL, we wrote a custom validator method that checks that the entity hierarchy is acyclic. Thus, we should write a test that, given an input program with a cycle in the hierarchy, checks that such an error is indeed raised during validation. Although not strictly required, it is better to separate Junit test classes according to the tested features, thus, we write another Junit class, EntitiesValidatorTest, which contains tests related to validation. The start of this new Junit test class should look familiar: @RunWith(typeof(XtextRunner))@InjectWith(typeof(EntitiesInjectorProvider))class EntitiesValidatorTest { @Inject extension ParseHelper<Model> @Inject extension ValidationTestHelper ... We are now going to use the assertError method from ValidationTestHelper, which, besides the EMF model element to validate, requires the following arguments: EClass of the object which contains the error (which is usually retrieved through the EMF EPackage class generated when running the MWE2 workflow) The expected Issue Code An optional string describing the expected error message Thus, we parse input containing an entity extending itself and we pass the arguments to assertError according to the error generated by checkNoCycleInEntityHierarchy in EntitiesValidator: @Testdef void testEntityExtendsItself() { ''' entity MyEntity extends MyEntity { } '''.parse.assertError(EntitiesPackage::eINSTANCE.entity, EntitiesValidator::HIERARCHY_CYCLE, "cycle in hierarchy of entity 'MyEntity'" )} Note that the EObject argument is the one returned by the parse method (we use assertError as an extension method). Since the error concerns an Entity object, we specify the corresponding EClass (retrieved using EntitiesPackage), the expected Issue Code, and finally, the expected error message. This test should pass. 
We can now write another test which tests the same validation error on a more complex input with a cycle in the hierarchy involving more than one entity; in this test we make sure that our validator issues an error for each of the entities involved in the hierarchy cycle: @Testdef void testCycleInEntityHierarchy() { val model = ''' entity A extends B {} entity B extends C {} entity C extends A {} '''.parse model.assertError(EntitiesPackage::eINSTANCE.entity, EntitiesValidator::HIERARCHY_CYCLE, "cycle in hierarchy of entity 'A'" ) model.assertError(EntitiesPackage::eINSTANCE.entity, EntitiesValidator::HIERARCHY_CYCLE, "cycle in hierarchy of entity 'B'" ) model.assertError(EntitiesPackage::eINSTANCE.entity, EntitiesValidator::HIERARCHY_CYCLE, "cycle in hierarchy of entity 'C'" )} Note that this time we must store the parsed EMF model into a variable since we will call assertError many times. We can also test that the NamesAreUniqueValidator method detects elements with the same name: @Testdef void testDuplicateEntities() { val model = ''' entity MyEntity {} entity MyEntity {} '''.parse model.assertError(EntitiesPackage::eINSTANCE.entity, null, "Duplicate Entity 'MyEntity'" )} In this case, we pass null for the issue argument, since no Issue Code is reported by NamesAreUniqueValidator. Similarly, we can write a test where the input has two attributes with the same name: @Testdef void testDuplicateAttributes() { val model = ''' entity MyEntity { MyEntity attribute; MyEntity attribute; } '''.parse model.assertError(EntitiesPackage::eINSTANCE.attribute, null, "Duplicate Attribute 'attribute'" )} Note that in this test we pass the EClass corresponding to Attribute, since duplicate attributes are involved in the expected error. Do not worry if it seems tricky to get the arguments for assertError right the first time; writing a test that fails the first time it is executed is expected in Test Driven Development. The error of the failing test should put you on the right track to specify the arguments correctly. However, by inspecting the error of the failing test, you must first make sure that the actual output is what you expected, otherwise something is wrong either with your test or with the implementation of the component that you are testing. Testing the formatter As we said in the previously, the formatter is also used in a non-UI environment (indeed, we implemented that in the runtime plug-in project), thus, we can test the formatter for our DSL with plain Junit tests. At the moment, there is no helper class in the Xtext framework for testing the formatter, thus we need to do some additional work to set up the tests for the formatter. This example will also provide some more details on Xtext and EMF, and it will introduce unit test methodologies that are useful in many testing scenarios where you need to test whether a string output is as you expect. First of all, we create another Junit test class for testing the formatter; this time we do not need the helper for the validator; we will inject INodeModelFormatter as an extension field since this is the class internally used by Xtext to perform formatting. One of the main principles of unit testing (which is also its main strength) is that you should test a single functionality in isolation. 
Thus, to test the formatter, we must not run a UI test that opens an Xtext editor on an input file and call the menu item which performs the formatting; we just need to test the class to which the formatting is delegated and we do not need a running Eclipse for that. import static extension org.junit.Assert.*@RunWith(typeof(XtextRunner))@InjectWith(typeof(EntitiesInjectorProvider))class EntitiesFormatterTest { @Inject extension ParseHelper<Model> @Inject extension INodeModelFormatter; Note that we import all the static methods of the Junit Assert class as extension methods. Then, we write the code that actually performs the formatting given an input string. Since we will write several tests for formatting, we isolate such code in a reusable method. This method is not annotated with @Test, thus it will not be automatically executed by Junit as a test method. This is the Xtend code that returns the formatted version of the input string: (input.parse.eResource as XtextResource).parseResult. rootNode.format(0, input.length).formattedText The method ParseHelper.parse returns the EMF model object, and each EObject has a reference to the containing EMF resource; we know that this is actually XtextResource (a specialized version of an EMF resource). We retrieve the result of parsing, that is, an IParseResult object, from the resource. The result of parsing contains the node model; recall from, that the node model carries the syntactical information that is, offsets and spaces of the textual input. The root of the node model, ICompositeNode, can be passed to the formatter to get the formatted version (we can even specify to format only a part of the input program). Now we can write a reusable method that takes an input char sequence and an expected char sequence and tests that the formatted version of the input program is equal to what we expect: def void assertFormattedAs(CharSequence input, CharSequence expected) { expected.toString.assertEquals( (input.parse.eResource as XtextResource).parseResult. rootNode.format(0, input.length).formattedText)} The reason why we convert the expected char sequence into a string will be clear in a minute. Note the use of Assert.assertEquals as an extension method. We can now write our first formatting test using our extension method assertFormattedAs: @Testdef void testEntities() { ''' entity E1 { } entity E2 {} '''.assertFormattedAs( '''...''' )} Why did we specify "…" as the expected formatted output? Why did we not try to specify what we really expect as the formatted output? Well, we could have written the expected output, and probably we would have gotten it right on the first try, but why not simply make the test fail and see the actual output? We can then copy that in our test once we are convinced that it is correct. So let's run the test, and when it fails, the Junit view tells us what the actual result is, as shown in the following screenshot: If you now double-click on the line showing the comparison failure in the Junit view, you will get a dialog showing a line by line comparison, as shown in the following screenshot: You can verify that the actual output is correct, copy that, and paste it into your test as the expected output. The test will now succeed: @Testdef void testEntities() { ''' entity E1 { } entity E2 {} '''.assertFormattedAs('''entity E1 {}entity E2 {}''' )} We did not indent the expected output in the multi-line string since it is easy to paste it like that from the Junit dialog. 
Using this technique you can easily write Junit tests that deal with comparisons. However, the "Result Comparison" dialog appears only if you pass String objects to assertEquals; that is why we converted the char sequence into a string in the implementation of assertFormattedAs. We now add a test for testing the formatting of attributes; the final result will be: @Testdef void testAttributes() { ''' entity E1 { int i ; string s; boolean b ;} '''.assertFormattedAs(''' entity E1 { int i; string s; boolean b; }''' )} Summary In this article we introduced unit testing for languages implemented with Xtext. Being able to test most of the DSL aspects without having to start an Eclipse environment really speeds up development.Test Driven Development is an important programming methodology that helps you make your implementations more modular, more reliable, and resilient to changes of the libraries used by your code. Resources for Article: Further resources on this subject: Making Money with Your Game [Article] Getting started with Kinect for Windows SDK Programming [Article] Installing Alfresco Software Development Kit (SDK) [Article]