
How-To Tutorials - Programming

1081 Articles

Filtering a sequence

Packt
02 Jun 2015
5 min read
In this article by Ivan Morgillo, the author of RxJava Essentials, we will approach Observable filtering with RxJava's filter(). We will manipulate a list of installed apps to show only a subset of this list, according to our criteria. (For more resources related to this topic, see here.)

Filtering a sequence with RxJava

RxJava lets us use filter() to keep the values we don't want out of the sequence that we are observing. In this example, we will use a list, but we will filter it, passing to the filter() function the proper predicate to include only the values we want. We are using loadList() to create an Observable sequence, filter it, and populate our adapter:

private void loadList(List<AppInfo> apps) {
    mRecyclerView.setVisibility(View.VISIBLE);

    Observable.from(apps)
        .filter((appInfo) -> appInfo.getName().startsWith("C"))
        .subscribe(new Observer<AppInfo>() {
            @Override
            public void onCompleted() {
                mSwipeRefreshLayout.setRefreshing(false);
            }

            @Override
            public void onError(Throwable e) {
                Toast.makeText(getActivity(), "Something went south!",
                    Toast.LENGTH_SHORT).show();
                mSwipeRefreshLayout.setRefreshing(false);
            }

            @Override
            public void onNext(AppInfo appInfo) {
                mAddedApps.add(appInfo);
                mAdapter.addApplication(mAddedApps.size() - 1, appInfo);
            }
        });
}

We have added the following line to the loadList() function:

.filter((appInfo) -> appInfo.getName().startsWith("C"))

After the creation of the Observable, we are filtering out every emitted element whose name does not start with a C. Let's have it in Java 7 syntax too, to clarify the types here:

.filter(new Func1<AppInfo, Boolean>() {
    @Override
    public Boolean call(AppInfo appInfo) {
        return appInfo.getName().startsWith("C");
    }
})

We are passing a new Func1 object to filter(), that is, a function having just one parameter. The Func1 object has an AppInfo object as its parameter type and it returns a Boolean object. The function returns true only if the condition is verified; at that point, the value is emitted and received by all the Observers. As you can imagine, filter() is critically useful for creating exactly the sequence we need from the Observable sequence we get. We don't need to know the source of the Observable sequence or why it's emitting tons of different elements. We just want a useful subset of those elements to create a new sequence we can use in our app. This mindset reinforces the separation and abstraction we apply in our everyday coding.

One of the most common uses of filter() is filtering out null objects:

.filter(new Func1<AppInfo, Boolean>() {
    @Override
    public Boolean call(AppInfo appInfo) {
        return appInfo != null;
    }
})

This seems trivial, and there is a lot of boilerplate code for something that trivial, but it saves us from checking for null values in the onNext() call, letting us focus on the actual app logic. As a result of our filtering, the next figure shows the installed apps list, filtered by name starting with C.
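For reference, the same filtering can be condensed into a small, self-contained example. This is an illustrative sketch rather than code from the book: it assumes RxJava 1.x on the classpath, Java 8 lambdas, and plain strings instead of the AppInfo objects used above.

```java
import java.util.Arrays;
import java.util.List;

import rx.Observable;

public class FilterDemo {
    public static void main(String[] args) {
        List<String> apps = Arrays.asList("Calendar", "Chrome", null, "Maps", "Camera");

        Observable.from(apps)
            // Combine the null check and the name check in one predicate.
            .filter(name -> name != null && name.startsWith("C"))
            .subscribe(
                name -> System.out.println("Matched: " + name),                 // onNext
                error -> System.err.println("Something went south: " + error),  // onError
                () -> System.out.println("Done"));                              // onCompleted
    }
}
```

Run against the list above, this would print Calendar, Chrome, and Camera, followed by Done.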
Summary

In this article, we introduced the RxJava filter() function and used it in a real-world example in an Android app. RxJava offers a lot more functions that allow you to filter and manipulate Observable sequences. A comprehensive list of methods, scenarios, and examples is available in RxJava Essentials, which takes you on a step-by-step journey from the basics of the Observer pattern to composing Observables and querying REST APIs using RxJava.

Resources for Article:

Further resources on this subject:

Android Native Application API [article]
Android Virtual Device Manager [article]
Putting It All Together – Community Radio [article]


Map/Reduce API

Packt
02 Jun 2015
10 min read
In this article by Wagner Roberto dos Santos, author of the book Infinispan Data Grid Platform Definitive Guide, we will see the usage of the Map/Reduce API and its introduction in Infinispan.

Using the Map/Reduce API

According to Gartner, in-memory data grids and in-memory computing are racing towards mainstream adoption, and the market for this kind of technology is expected to reach $1 billion by 2016. Thinking along these lines, Infinispan already provides a MapReduce API for distributed computing, which means that we can use the Infinispan cache to process all the data stored in heap memory across all Infinispan instances in parallel. If you're new to MapReduce, don't worry; we're going to describe it in the next section in a way that gets you up to speed quickly.

An introduction to Map/Reduce

MapReduce is a programming model introduced by Google, which allows for massive scalability across hundreds or thousands of servers in a data grid. It's a simple concept to understand for those who are familiar with distributed computing and clustered environments for data processing solutions. You can find the paper about MapReduce at the following link: http://research.google.com/archive/mapreduce.html

MapReduce has two distinct computational phases; as the name states, the phases are map and reduce:

In the map phase, a function called Map is executed, which takes a set of data in a given cache, simultaneously performs filtering and sorting operations, and outputs another set of data on all nodes.
In the reduce phase, a function called Reduce is executed, which reduces the results of the map phase into one final output. The reduce function is always performed after the map phase.

Map/Reduce in the Infinispan platform

The Infinispan MapReduce model is an adaptation of Google's original MapReduce model. There are four main components in each map/reduce task; they are as follows:

MapReduceTask: This is a distributed task that allows a large-scale computation to be transparently parallelized across Infinispan cluster nodes. This class provides a constructor that takes a cache whose data will be used as the input for this task. The MapReduceTask orchestrates the execution of the Mapper and Reducer seamlessly across Infinispan nodes.
Mapper: A Mapper is used to process each input cache entry (K, V). A Mapper is invoked by MapReduceTask and is migrated to an Infinispan node to transform the (K, V) input pair into intermediate keys before emitting them to a Collector.
Reducer: A Reducer is used to process a set of intermediate key results from the map phase. Each execution node will invoke one instance of Reducer, and each instance of the Reducer only reduces intermediate key results that are locally stored on the execution node.
Collator: This collates results from reducers executed on the Infinispan cluster and assembles a final result returned to the invoker of MapReduceTask.
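To make the Mapper and Reducer contracts concrete before the Find Destination example below, here is a minimal word-count-style pair. This is an illustrative sketch rather than code from the book; it assumes an Infinispan 6.x/7.x-era API where these interfaces live in org.infinispan.distexec.mapreduce, and each class would normally sit in its own file.

```java
import java.util.Iterator;

import org.infinispan.distexec.mapreduce.Collector;
import org.infinispan.distexec.mapreduce.Mapper;
import org.infinispan.distexec.mapreduce.Reducer;

// Emits one (word, 1) pair for every word found in a cached String value.
class WordCountMapper implements Mapper<String, String, String, Integer> {
    @Override
    public void map(String key, String value, Collector<String, Integer> collector) {
        for (String word : value.split("\\s+")) {
            collector.emit(word.toLowerCase(), 1);
        }
    }
}

// Sums all the intermediate counts emitted for a given word.
class WordCountReducer implements Reducer<String, Integer> {
    @Override
    public Integer reduce(String word, Iterator<Integer> counts) {
        int total = 0;
        while (counts.hasNext()) {
            total += counts.next();
        }
        return total;
    }
}
```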
The following image shows that, in a distributed environment, an Infinispan MapReduceTask is responsible for starting the process for a given cache; unless you specify an onKeys(Object...) filter, all available key/value pairs of the cache will be used as input data for the map/reduce task.

In the preceding image, the Map/Reduce processes perform the following steps:

The MapReduceTask in the Master Task Node will start the Map Phase by hashing the task input keys and grouping them by the execution node they belong to; the Infinispan master node will then send a map function and input keys to each node.
On each destination node, the map will be locally loaded with the corresponding value using the given keys.
The map function is executed on each node, resulting in a Map<KOut, VOut> object on each node.
The Combine Phase is initiated when all results are collected. If a combiner is specified (via the combineWith(Reducer<KOut, VOut> combiner) method), the combiner will extract the KOut keys and invoke the reduce phase on them.
Before starting the Reduce Phase, Infinispan will execute an intermediate migration phase, where all intermediate keys and values are grouped.
At the end of the Combine Phase, a list of KOut keys is returned to the initial Master Task Node. At this stage, values (VOut) are not returned, because they are not needed in the master node.
At this point, Infinispan is ready to start the Reduce Phase; the Master Task Node will group KOut keys by execution node and send a reduce command to each node where keys are hashed.
The reducer is invoked, and for each KOut key, the reducer will grab a list of VOut values from a temporary cache belonging to MapReduceTask, wrap it with an iterator, and invoke the reduce method on it.
Each reducer will return one map with the KOut/VOut result values.
The reduce command will return to the Master Task Node, which in turn will combine all resulting maps into one single map and return it as the result of MapReduceTask.
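The onKeys(...) restriction mentioned before these steps is easy to illustrate. The following is a hedged sketch, not code from the book, reusing the hypothetical word-count Mapper and Reducer from the earlier sketch and assuming a Cache<String, String> as input:

```java
import java.util.Map;

import org.infinispan.Cache;
import org.infinispan.distexec.mapreduce.MapReduceTask;

public class SelectiveWordCount {
    // Runs the map/reduce task only over the entries "1", "5", and "9";
    // all other cache entries are ignored as task input.
    static Map<String, Integer> countSelected(Cache<String, String> cache) {
        MapReduceTask<String, String, String, Integer> task =
            new MapReduceTask<String, String, String, Integer>(cache);
        return task.onKeys("1", "5", "9")
                   .mappedWith(new WordCountMapper())
                   .reducedWith(new WordCountReducer())
                   .execute();
    }
}
```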
Sample application – find a destination

Now that we have seen what map and reduce are, and how the Infinispan model works, let's create a Find Destination application that illustrates the concepts we have discussed. To demonstrate how CDI works, in the last section we created a web service that provides weather information. Now, based on this same weather information service, let's create a map/reduce engine that finds the best destination based on simple business rules, such as the destination type (sun destination, golf, skiing, and so on).

So, the first step is to create the WeatherInfo cache object that will hold information about the weather:

public class WeatherInfo implements Serializable {

  private static final long serialVersionUID = -3479816816724167384L;

  private String country;
  private String city;
  private Date day;
  private Double temp;
  private Double tempMax;
  private Double tempMin;

  public WeatherInfo(String country, String city, Date day, Double temp) {
    this(country, city, day, temp, temp + 5, temp - 5);
  }

  public WeatherInfo(String country, String city, Date day, Double temp,
      Double tempMax, Double tempMin) {
    super();
    this.country = country;
    this.city = city;
    this.day = day;
    this.temp = temp;
    this.tempMax = tempMax;
    this.tempMin = tempMin;
  }

  // Getters and setters omitted

  @Override
  public String toString() {
    return "{WeatherInfo:{ country:" + country + ", city:" + city +
        ", day:" + day + ", temp:" + temp + ", tempMax:" + tempMax +
        ", tempMin:" + tempMin + "}";
  }
}

Now, let's create an enum to define the type of destination a user can select and the rules associated with each destination. To keep it simple, we are going to have only two destinations, sun and skiing. The temperature value will be used to evaluate whether a destination can be considered of the corresponding type:

public enum DestinationTypeEnum {

  SUN(18d, "Sun Destination"),
  SKIING(-5d, "Skiing Destination");

  private Double temperature;
  private String description;

  DestinationTypeEnum(Double temperature, String description) {
    this.temperature = temperature;
    this.description = description;
  }

  public Double getTemperature() {
    return temperature;
  }

  public String getDescription() {
    return description;
  }
}

Now it's time to create the Mapper class. This class is responsible for validating whether each cache entry fits the destination requirements. To define the DestinationMapper class, just implement the Mapper<KIn, VIn, KOut, VOut> interface and implement your algorithm in the map method:

public class DestinationMapper implements
    Mapper<String, WeatherInfo, DestinationTypeEnum, WeatherInfo> {

  private static final long serialVersionUID = -3418976303227050166L;

  public void map(String key, WeatherInfo weather,
      Collector<DestinationTypeEnum, WeatherInfo> c) {
    if (weather.getTemp() >= SUN.getTemperature()) {
      c.emit(SUN, weather);
    } else if (weather.getTemp() <= SKIING.getTemperature()) {
      c.emit(SKIING, weather);
    }
  }
}

The role of the Reducer class in our application is to return the best destination among all destinations returned by the mapping phase, based on the highest temperature for sun destinations and the lowest temperature for skiing destinations. To implement the Reducer class, you'll need to implement the Reducer<KOut, VOut> interface:

public class DestinationReducer implements
    Reducer<DestinationTypeEnum, WeatherInfo> {

  private static final long serialVersionUID = 7711240429951976280L;

  public WeatherInfo reduce(DestinationTypeEnum key, Iterator<WeatherInfo> it) {
    WeatherInfo bestPlace = null;
    if (key.equals(SUN)) {
      while (it.hasNext()) {
        WeatherInfo w = it.next();
        if (bestPlace == null || w.getTemp() > bestPlace.getTemp()) {
          bestPlace = w;
        }
      }
    } else { // best for skiing
      while (it.hasNext()) {
        WeatherInfo w = it.next();
        if (bestPlace == null || w.getTemp() < bestPlace.getTemp()) {
          bestPlace = w;
        }
      }
    }
    return bestPlace;
  }
}

Finally, to execute our sample application, we can create a JUnit test case with the MapReduceTask.
But first, we have to create a couple of cache entries before executing the task, which we do in the setUp() method:

public class WeatherInfoReduceTest {

  private static final Log logger = LogFactory.getLog(WeatherInfoReduceTest.class);

  private Cache<String, WeatherInfo> weatherCache;

  @Before
  public void setUp() throws Exception {
    Date today = new Date();
    EmbeddedCacheManager manager = new DefaultCacheManager();
    Configuration config = new ConfigurationBuilder()
        .clustering().cacheMode(CacheMode.LOCAL).build();
    manager.defineConfiguration("weatherCache", config);
    weatherCache = manager.getCache("weatherCache");

    weatherCache.put("1", new WeatherInfo("Germany", "Berlin", today, 12d));
    weatherCache.put("2", new WeatherInfo("Germany", "Stuttgart", today, 11d));
    weatherCache.put("3", new WeatherInfo("England", "London", today, 8d));
    weatherCache.put("4", new WeatherInfo("England", "Manchester", today, 6d));
    weatherCache.put("5", new WeatherInfo("Italy", "Rome", today, 17d));
    weatherCache.put("6", new WeatherInfo("Italy", "Napoli", today, 18d));
    weatherCache.put("7", new WeatherInfo("Ireland", "Belfast", today, 9d));
    weatherCache.put("8", new WeatherInfo("Ireland", "Dublin", today, 7d));
    weatherCache.put("9", new WeatherInfo("Spain", "Madrid", today, 19d));
    weatherCache.put("10", new WeatherInfo("Spain", "Barcelona", today, 21d));
    weatherCache.put("11", new WeatherInfo("France", "Paris", today, 11d));
    weatherCache.put("12", new WeatherInfo("France", "Marseille", today, -8d));
    weatherCache.put("13", new WeatherInfo("Netherlands", "Amsterdam", today, 11d));
    weatherCache.put("14", new WeatherInfo("Portugal", "Lisbon", today, 13d));
    weatherCache.put("15", new WeatherInfo("Switzerland", "Zurich", today, -12d));
  }

  @Test
  public void execute() {
    MapReduceTask<String, WeatherInfo, DestinationTypeEnum, WeatherInfo> task =
        new MapReduceTask<String, WeatherInfo, DestinationTypeEnum, WeatherInfo>(weatherCache);
    task.mappedWith(new DestinationMapper()).reducedWith(new DestinationReducer());

    Map<DestinationTypeEnum, WeatherInfo> destination = task.execute();

    assertNotNull(destination);
    assertEquals(destination.keySet().size(), 2);

    logger.info("********** PRINTING RESULTS FOR WEATHER CACHE *************");
    for (DestinationTypeEnum destinationType : destination.keySet()) {
      logger.infof("%s - Best Place: %s\n",
          destinationType.getDescription(),
          destination.get(destinationType));
    }
  }
}

When we execute the application, you should expect to see the following output:

INFO: Skiing Destination - Best Place: {WeatherInfo:{ country:Switzerland, city:Zurich, day:Mon Jun 02 19:42:22 IST 2014, temp:-12.0, tempMax:-7.0, tempMin:-17.0}
INFO: Sun Destination - Best Place: {WeatherInfo:{ country:Spain, city:Barcelona, day:Mon Jun 02 19:42:22 IST 2014, temp:21.0, tempMax:26.0, tempMin:16.0}
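The one component from the earlier list that this recipe never exercises is the Collator. As a hedged sketch, not taken from the book, and assuming the Collator interface exposes a single collate(Map<KOut, VOut>) method as in Infinispan 6.x/7.x, the reduced result map could be collapsed into one report string like this:

```java
import java.util.Map;

import org.infinispan.distexec.mapreduce.Collator;
import org.infinispan.distexec.mapreduce.MapReduceTask;

public class DestinationReport {
    static String buildReport(
            MapReduceTask<String, WeatherInfo, DestinationTypeEnum, WeatherInfo> task) {
        // The collator receives the fully reduced map and turns it into one value.
        Collator<DestinationTypeEnum, WeatherInfo, String> collator =
            new Collator<DestinationTypeEnum, WeatherInfo, String>() {
                @Override
                public String collate(Map<DestinationTypeEnum, WeatherInfo> reduced) {
                    StringBuilder sb = new StringBuilder();
                    for (Map.Entry<DestinationTypeEnum, WeatherInfo> e : reduced.entrySet()) {
                        sb.append(e.getKey().getDescription())
                          .append(" - Best Place: ")
                          .append(e.getValue())
                          .append('\n');
                    }
                    return sb.toString();
                }
            };
        return task.execute(collator);
    }
}
```

The mapped and reduced task from the test above could then be passed to buildReport() instead of calling execute() directly.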
Summary

In this article, you learned how to work with applications in a modern distributed server architecture using the Map/Reduce API, and how it abstracts parallel programming into two simple primitives, the map and reduce methods. We also saw a sample use case, Find Destination, that demonstrated how to use map/reduce almost in real time.

Resources for Article:

Further resources on this subject:

MapReduce functions [Article]
Hadoop and MapReduce [Article]
Introduction to MapReduce [Article]


Creating a Spring Application

Packt
25 May 2015
18 min read
In this article by Jérôme Jaglale, author of the book Spring Cookbook, we will cover the following recipes:

Installing Java, Maven, Tomcat, and Eclipse on Mac OS
Installing Java, Maven, Tomcat, and Eclipse on Ubuntu
Installing Java, Maven, Tomcat, and Eclipse on Windows
Creating a Spring web application
Running a Spring web application
Using Spring in a standard Java application

(For more resources related to this topic, see here.)

Introduction

In this article, we will first cover the installation of some of the tools for Spring development:

Java: Spring is a Java framework.
Maven: This is a build tool similar to Ant. It makes it easy to add Spring libraries to a project. Gradle is another option as a build tool.
Tomcat: This is a web server for Java web applications. You can also use JBoss, Jetty, GlassFish, or WebSphere.
Eclipse: This is an IDE. You can also use NetBeans, IntelliJ IDEA, and so on.

Then, we will build a Spring web application and run it with Tomcat. Finally, we'll see how Spring can also be used in a standard Java application (not a web application).

Installing Java, Maven, Tomcat, and Eclipse on Mac OS

We will first install Java 8 because it's not installed by default on Mac OS 10.9 or higher versions. Then, we will install Maven 3, a build tool similar to Ant, to manage the external Java libraries that we will use (Spring, Hibernate, and so on). Maven 3 also compiles source files and generates JAR and WAR files. We will also install Tomcat 8, a popular web server for Java web applications, which we will use throughout this book. JBoss, Jetty, GlassFish, or WebSphere could be used instead. Finally, we will install the Eclipse IDE, but you could also use NetBeans, IntelliJ IDEA, and so on.

How to do it…

Install Java first, then Maven, Tomcat, and Eclipse.

Installing Java

Download Java from the Oracle website http://oracle.com. In the Java SE downloads section, choose the Java SE 8 SDK. Select Accept the License Agreement and download the Mac OS X x64 package. The direct link to the page is http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html. Open the downloaded file, launch it, and complete the installation. In your ~/.bash_profile file, set the JAVA_HOME environment variable. Change jdk1.8.0_40.jdk to the actual folder name on your system (this depends on the version of Java you are using, which is updated regularly):

export JAVA_HOME="/Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home"

Open a new terminal and test whether it's working:

$ java -version
java version "1.8.0_40"
Java(TM) SE Runtime Environment (build 1.8.0_40-b26)
Java HotSpot(TM) 64-Bit Server VM (build 25.40-b25, mixed mode)

Installing Maven

Download Maven from the Apache website http://maven.apache.org/download.cgi. Choose the Binary zip file of the current stable version. Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin). In your ~/.bash_profile file, add a MAVEN_HOME environment variable pointing to that folder. For example:

export MAVEN_HOME=~/bin/apache-maven-3.3.1

Add the bin subfolder to your PATH environment variable:

export PATH=$PATH:$MAVEN_HOME/bin

Open a new terminal and test whether it's working:

$ mvn -v
Apache Maven 3.3.1 (12a6b3...
Maven home: /Users/jerome/bin/apache-maven-3.3.1
Java version: 1.8.0_40, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_...
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.9.5", arch...
Installing Tomcat

Download Tomcat from the Apache website http://tomcat.apache.org/download-80.cgi and choose the Core binary distribution. Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin). Make the scripts in the bin subfolder executable:

chmod +x bin/*.sh

Launch Tomcat using the catalina.sh script:

$ bin/catalina.sh run
Using CATALINA_BASE:   /Users/jerome/bin/apache-tomcat-7.0.54
...
INFO: Server startup in 852 ms

Tomcat runs on the 8080 port by default. In a web browser, go to http://localhost:8080/ to check whether it's working.

Installing Eclipse

Download Eclipse from http://www.eclipse.org/downloads/. Choose the Mac OS X 64 Bit version of Eclipse IDE for Java EE Developers. Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin). Launch Eclipse by executing the eclipse binary:

./eclipse

There's more…

Tomcat can be run as a background process using these two scripts:

bin/startup.sh
bin/shutdown.sh

On a development machine, it's convenient to put Tomcat's folder somewhere in the home directory (for example, ~/bin) so that its contents can be updated without root privileges.

Installing Java, Maven, Tomcat, and Eclipse on Ubuntu

We will first install Java 8. Then, we will install Maven 3, a build tool similar to Ant, to manage the external Java libraries that we will use (Spring, Hibernate, and so on). Maven 3 also compiles source files and generates JAR and WAR files. We will also install Tomcat 8, a popular web server for Java web applications, which we will use throughout this book. JBoss, Jetty, GlassFish, or WebSphere could be used instead. Finally, we will install the Eclipse IDE, but you could also use NetBeans, IntelliJ IDEA, and so on.

How to do it…

Install Java first, then Maven, Tomcat, and Eclipse.

Installing Java

Add this PPA (Personal Package Archive):

sudo add-apt-repository -y ppa:webupd8team/java

Refresh the list of the available packages:

sudo apt-get update

Download and install Java 8:

sudo apt-get install -y oracle-java8-installer

Test whether it's working:

$ java -version
java version "1.8.0_40"
Java(TM) SE Runtime Environment (build 1.8.0_40-b25)...
Java HotSpot(TM) 64-Bit Server VM (build 25.40-b25...

Installing Maven

Download Maven from the Apache website http://maven.apache.org/download.cgi. Choose the Binary zip file of the current stable version. Uncompress the downloaded file and move the resulting folder to a convenient location (for example, ~/bin). In your ~/.bash_profile file, add a MAVEN_HOME environment variable pointing to that folder. For example:

export MAVEN_HOME=~/bin/apache-maven-3.3.1

Add the bin subfolder to your PATH environment variable:

export PATH=$PATH:$MAVEN_HOME/bin

Open a new terminal and test whether it's working:

$ mvn -v
Apache Maven 3.3.1 (12a6b3...
Maven home: /home/jerome/bin/apache-maven-3.3.1
Java version: 1.8.0_40, vendor: Oracle Corporation...

Installing Tomcat

Download Tomcat from the Apache website http://tomcat.apache.org/download-80.cgi and choose the Core binary distribution. Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin). Make the scripts in the bin subfolder executable:

chmod +x bin/*.sh

Launch Tomcat using the catalina.sh script:

$ bin/catalina.sh run
Using CATALINA_BASE:   /Users/jerome/bin/apache-tomcat-7.0.54
...
INFO: Server startup in 852 ms

Tomcat runs on the 8080 port by default. Go to http://localhost:8080/ to check whether it's working.
Installing Eclipse

Download Eclipse from http://www.eclipse.org/downloads/. Choose the Linux 64 Bit version of Eclipse IDE for Java EE Developers. Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin). Launch Eclipse by executing the eclipse binary:

./eclipse

There's more…

Tomcat can be run as a background process using these two scripts:

bin/startup.sh
bin/shutdown.sh

On a development machine, it's convenient to put Tomcat's folder somewhere in the home directory (for example, ~/bin) so that its contents can be updated without root privileges.

Installing Java, Maven, Tomcat, and Eclipse on Windows

We will first install Java 8. Then, we will install Maven 3, a build tool similar to Ant, to manage the external Java libraries that we will use (Spring, Hibernate, and so on). Maven 3 also compiles source files and generates JAR and WAR files. We will also install Tomcat 8, a popular web server for Java web applications, which we will use throughout this book. JBoss, Jetty, GlassFish, or WebSphere could be used instead. Finally, we will install the Eclipse IDE, but you could also use NetBeans, IntelliJ IDEA, and so on.

How to do it…

Install Java first, then Maven, Tomcat, and Eclipse.

Installing Java

Download Java from the Oracle website http://oracle.com. In the Java SE downloads section, choose the Java SE 8 SDK. Select Accept the License Agreement and download the Windows x64 package. The direct link to the page is http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html. Open the downloaded file, launch it, and complete the installation. Navigate to Control Panel | System and Security | System | Advanced system settings | Environment Variables…. Add a JAVA_HOME system variable with the value C:\Program Files\Java\jdk1.8.0_40. Change jdk1.8.0_40 to the actual folder name on your system (this depends on the version of Java, which is updated regularly). Test whether it's working by opening Command Prompt and entering java -version.

Installing Maven

Download Maven from the Apache website http://maven.apache.org/download.cgi. Choose the Binary zip file of the current stable version. Uncompress the downloaded file. Create a Programs folder in your user folder. Move the extracted folder to it. Navigate to Control Panel | System and Security | System | Advanced system settings | Environment Variables…. Add a MAVEN_HOME system variable with the path to the Maven folder, for example, C:\Users\jerome\Programs\apache-maven-3.2.1. Open the Path system variable. Append ;%MAVEN_HOME%\bin to it. Test whether it's working by opening a Command Prompt and entering mvn -v.

Installing Tomcat

Download Tomcat from the Apache website http://tomcat.apache.org/download-80.cgi and choose the 32-bit/64-bit Windows Service Installer binary distribution. Launch and complete the installation. Tomcat runs on the 8080 port by default. Go to http://localhost:8080/ to check whether it's working.

Installing Eclipse

Download Eclipse from http://www.eclipse.org/downloads/. Choose the Windows 64 Bit version of Eclipse IDE for Java EE Developers. Uncompress the downloaded file. Launch the eclipse program.

Creating a Spring web application

In this recipe, we will build a simple Spring web application with Eclipse. We will:

Create a new Maven project
Add Spring to it
Add two Java classes to configure Spring
Create a "Hello World" web page

In the next recipe, we will compile and run this web application.
How to do it…

In this section, we will create a Spring web application in Eclipse.

Creating a new Maven project in Eclipse

In Eclipse, in the File menu, select New | Project…. Under Maven, select Maven Project and click on Next >. Select the Create a simple project (skip archetype selection) checkbox and click on Next >. For the Group Id field, enter com.springcookbook. For the Artifact Id field, enter springwebapp. For Packaging, select war and click on Finish.

Adding Spring to the project using Maven

Open Maven's pom.xml configuration file at the root of the project. Select the pom.xml tab to edit the XML source code directly. Under the project XML node, define the versions for Java and Spring. Also add the Servlet API, Spring Core, and Spring MVC dependencies:

<properties>
  <java.version>1.8</java.version>
  <spring.version>4.1.5.RELEASE</spring.version>
</properties>

<dependencies>
  <!-- Servlet API -->
  <dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>3.1.0</version>
    <scope>provided</scope>
  </dependency>

  <!-- Spring Core -->
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>${spring.version}</version>
  </dependency>

  <!-- Spring MVC -->
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>${spring.version}</version>
  </dependency>
</dependencies>

Creating the configuration classes for Spring

Create the Java packages com.springcookbook.config and com.springcookbook.controller; in the left-hand side pane Package Explorer, right-click on the project folder and select New | Package…. In the com.springcookbook.config package, create the AppConfig class. In the Source menu, select Organize Imports to add the needed import declarations:

package com.springcookbook.config;

@Configuration
@EnableWebMvc
@ComponentScan(basePackages = {"com.springcookbook.controller"})
public class AppConfig {
}

Still in the com.springcookbook.config package, create the ServletInitializer class. Add the needed import declarations similarly:

package com.springcookbook.config;

public class ServletInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        return new Class<?>[0];
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        return new Class<?>[]{AppConfig.class};
    }

    @Override
    protected String[] getServletMappings() {
        return new String[]{"/"};
    }
}

Creating a "Hello World" web page

In the com.springcookbook.controller package, create the HelloController class and its hi() method:

@Controller
public class HelloController {
  @RequestMapping("hi")
  @ResponseBody
  public String hi() {
      return "Hello, world.";
  }
}

How it works…

This section will give you more details of what happened at every step.

Creating a new Maven project in Eclipse

The generated Maven project is a pom.xml configuration file along with a hierarchy of empty directories:

pom.xml
src
|- main
   |- java
   |- resources
   |- webapp
|- test
   |- java
   |- resources

Adding Spring to the project using Maven

The declared Maven libraries and their dependencies are automatically downloaded in the background by Eclipse. They are listed under Maven Dependencies in the left-hand side pane Package Explorer. Tomcat provides the Servlet API dependency, but we still declared it because our code needs it to compile.
Maven will not include it in the generated .war file because of the <scope>provided</scope> declaration.

Creating the configuration classes for Spring

AppConfig is a Spring configuration class. It is a standard Java class annotated with:

@Configuration: This declares it as a Spring configuration class.
@EnableWebMvc: This enables Spring's ability to receive and process web requests.
@ComponentScan(basePackages = {"com.springcookbook.controller"}): This scans the com.springcookbook.controller package for Spring components.

ServletInitializer is a configuration class for Spring's servlet; it replaces the standard web.xml file. It will be detected automatically by SpringServletContainerInitializer, which is automatically called by any Servlet 3 container. ServletInitializer extends the AbstractAnnotationConfigDispatcherServletInitializer abstract class and implements the required methods:

getServletMappings(): This declares the servlet root URI.
getServletConfigClasses(): This declares the Spring configuration classes. Here, we declared the AppConfig class that was previously defined.

Creating a "Hello World" web page

We created a controller class in the com.springcookbook.controller package, which we declared in AppConfig. When navigating to http://localhost:8080/hi, the hi() method will be called and Hello, world. will be displayed in the browser.
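The recipe relies on Eclipse's Organize Imports to pull in the annotations, so the import statements never appear in the listings. For reference only, a fully-imported version of the two annotated classes might look like the following; this is a sketch, not a listing from the book, and it assumes Spring 4.x package locations.

```java
package com.springcookbook.config;

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;

// Same AppConfig as in the recipe, with the imports spelled out.
@Configuration
@EnableWebMvc
@ComponentScan(basePackages = {"com.springcookbook.controller"})
public class AppConfig {
}
```

```java
package com.springcookbook.controller;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

// Same HelloController as in the recipe, with the imports spelled out.
@Controller
public class HelloController {

    @RequestMapping("hi")
    @ResponseBody
    public String hi() {
        return "Hello, world.";
    }
}
```

The base class used by ServletInitializer, AbstractAnnotationConfigDispatcherServletInitializer, lives in the org.springframework.web.servlet.support package.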
Running a Spring web application

In this recipe, we will use the Spring web application from the previous recipe. We will compile it with Maven and run it with Tomcat.

How to do it…

Here are the steps to compile and run a Spring web application. In pom.xml, add this boilerplate code under the project XML node. It will allow Maven to generate .war files without requiring a web.xml file:

<build>
  <finalName>springwebapp</finalName>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-war-plugin</artifactId>
      <version>2.5</version>
      <configuration>
        <failOnMissingWebXml>false</failOnMissingWebXml>
      </configuration>
    </plugin>
  </plugins>
</build>

In Eclipse, in the left-hand side pane Package Explorer, select the springwebapp project folder. In the Run menu, select Run and choose Maven install, or execute mvn clean install in a terminal at the root of the project folder. In both cases, a target folder will be generated with the springwebapp.war file in it. Copy the target/springwebapp.war file to Tomcat's webapps folder. Launch Tomcat. In a web browser, go to http://localhost:8080/springwebapp/hi to check whether it's working.

How it works…

In pom.xml, the boilerplate code prevents Maven from throwing an error because there's no web.xml file. A web.xml file used to be required in Java web applications; however, since Servlet specification 3.0 (implemented in Tomcat 7 and higher versions), it's not required anymore.

There's more…

On Mac OS and Linux, you can create a symbolic link in Tomcat's webapps folder pointing to the .war file in your project folder. For example:

ln -s ~/eclipse_workspace/spring_webapp/target/springwebapp.war ~/bin/apache-tomcat/webapps/springwebapp.war

So, when the .war file is updated in your project folder, Tomcat will detect that it has been modified and will reload the application automatically.

Using Spring in a standard Java application

In this recipe, we will build a standard Java application (not a web application) using Spring. We will:

Create a new Maven project
Add Spring to it
Add a class to configure Spring
Add a User class
Define a User singleton in the Spring configuration class
Use the User singleton in the main() method

How to do it…

In this section, we will cover the steps to use Spring in a standard (not web) Java application.

Creating a new Maven project in Eclipse

In Eclipse, in the File menu, select New | Project.... Under Maven, select Maven Project and click on Next >. Select the Create a simple project (skip archetype selection) checkbox and click on Next >. For the Group Id field, enter com.springcookbook. For the Artifact Id field, enter springapp. Click on Finish.

Adding Spring to the project using Maven

Open Maven's pom.xml configuration file at the root of the project. Select the pom.xml tab to edit the XML source code directly. Under the project XML node, define the Java and Spring versions and add the Spring Core dependency:

<properties>
  <java.version>1.8</java.version>
  <spring.version>4.1.5.RELEASE</spring.version>
</properties>

<dependencies>
  <!-- Spring Core -->
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>${spring.version}</version>
  </dependency>
</dependencies>

Creating a configuration class for Spring

Create the com.springcookbook.config Java package; in the left-hand side pane Package Explorer, right-click on the project and select New | Package…. In the com.springcookbook.config package, create the AppConfig class. In the Source menu, select Organize Imports to add the needed import declarations:

@Configuration
public class AppConfig {
}

Creating the User class

Create a User Java class with two String fields:

public class User {
  private String name;
  private String skill;

  public String getName() {
    return name;
  }
  public void setName(String name) {
    this.name = name;
  }
  public String getSkill() {
    return skill;
  }
  public void setSkill(String skill) {
    this.skill = skill;
  }
}

Defining a User singleton in the Spring configuration class

In the AppConfig class, define a User bean:

@Bean
public User admin() {
  User u = new User();
  u.setName("Merlin");
  u.setSkill("Magic");
  return u;
}

Using the User singleton in the main() method

Create the com.springcookbook.main package with the Main class containing the main() method:

package com.springcookbook.main;

public class Main {
  public static void main(String[] args) {
  }
}

In the main() method, retrieve the User singleton and print its properties:

AnnotationConfigApplicationContext springContext =
    new AnnotationConfigApplicationContext(AppConfig.class);

User admin = (User) springContext.getBean("admin");

System.out.println("admin name: " + admin.getName());
System.out.println("admin skill: " + admin.getSkill());

springContext.close();

Test whether it's working; in the Run menu, select Run.

How it works...

We created a Java project to which we added Spring. We defined a User bean called admin (the bean name is by default the bean method name). In the Main class, we created a Spring context object from the AppConfig class and retrieved the admin bean from it. We used the bean and, finally, closed the Spring context.
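Pulling the last two snippets together, a complete Main class could look like the following. This is a hedged consolidation of the recipe's fragments, not an additional listing from the book; the package of the User class is not stated in the excerpt, so com.springcookbook.user is an assumption here.

```java
package com.springcookbook.main;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;

import com.springcookbook.config.AppConfig;
import com.springcookbook.user.User; // assumed package for the User class

public class Main {
    public static void main(String[] args) {
        // Build the Spring context from the annotated configuration class.
        AnnotationConfigApplicationContext springContext =
            new AnnotationConfigApplicationContext(AppConfig.class);

        // "admin" is the bean name, derived from the @Bean method name.
        User admin = (User) springContext.getBean("admin");

        System.out.println("admin name: " + admin.getName());
        System.out.println("admin skill: " + admin.getSkill());

        // Release the context and its singletons.
        springContext.close();
    }
}
```

Running it should print admin name: Merlin and admin skill: Magic.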
Summary

In this article, we have learned how to install some of the tools for Spring development. Then, we learned how to build a Spring web application and run it with Tomcat. Finally, we saw how Spring can also be used in a standard Java application.


Practical Dart

Packt
22 May 2015
20 min read
This article is written by Martin Sikora, the author of Dart Essentials. This article will focus on the most common features of Dart that you'll use every day for your next Dart project. In this article, we'll look at:

The Future-Based API, Dart's built-in library for working with asynchronous calls
Creating Ajax requests in Dart
How packages work in Dart

The whole article is intended to be very practical. We'll create a small app that reads a JSON dictionary and lets you search among all the terms in it. To make it more complicated, we'll implement a so-called fuzzy search algorithm, which doesn't search exact matches but the same order of characters instead. (For more resources related to this topic, see here.)

The documentation search app

We're going to write an app that can search among many terms and show a simple detail view for each of them. The search input field will have an autocomplete feature with a list of all the terms that match our search string. In particular, we'll use the documentation for PHP, with 9,047 functions, and write a fuzzy search algorithm that searches it. Fuzzy search is used in IDEs such as PHPStorm or PyCharm and also in the popular text editor Sublime Text. It doesn't search just for the strings that start with your search term; it checks whether the order of characters in your term and in the checked string is the same. For example, if you type docfrg, it will find DocumentFragment because the letters in DocumentFragment are in the same order as docfrg. This is very handy because when there are a lot of functions with the same prefix, you can start typing with just the first character and then jump to the middle of the word, and it's very likely that there won't be many functions with the same characters. This is quite common for PHP because there are a lot of functions that start with mysql or str_. If you're looking for a function called str_replace, you can type just splc. We'll load the entire dictionary with Ajax as a JSON string and decode it to a Map object. Dart uses the Future-Based API for all asynchronous calls including Ajax, so we should talk about it first.

The Future-Based API

Dart, as well as JavaScript, uses asynchronous calls a lot. A common pitfall of this approach in JavaScript is that it tends to lead to many nested function calls with callbacks:

async1(function() {
  // do something
  async2(function() {
    // do something
    async3(function() {
      // do something
      callback();
    });
  }, callback);
});

The downsides of this approach are obvious:

It makes code hard to read and debug, so-called callback hell.
Each nested function can access variables from all parent scopes. This leads to variable shadowing and also prevents the JavaScript interpreter from deallocating unused variables. When working with a larger amount of data (for example, asynchronous calls when reading files), even a simple script can use all the available memory and cause the browser to crash.

A Future in Dart stands for an object that represents a value that will exist sometime in the future. Dart uses Future objects in nearly all of its APIs, and we're going to use them in order to avoid passing callbacks around.
An example of using Future is HttpRequest.getString(), which returns a Future object immediately and makes an asynchronous Ajax call:

HttpRequest.getString('http://...').then(onDataReady);

To work with the data returned from a Future object, we use the then() method, which takes a callback function as an argument and can return another Future object as well. If we want to create asynchronous behavior similar to that in the preceding example, we use the Completer class, which is a part of the dart:async package. This class has a property called future, which represents our Future object, and a complete() method, which resolves the Future object with some value. To keep the same order of function calls, we'll chain the then() methods of each Future object:

import 'dart:async';

Future async1() {
  var completer = new Completer();
  // Simulate a long-lasting async operation.
  new Future.delayed(const Duration(seconds: 2), () {
    // Resolve completer.future with this value. This will also
    // call the callback passed to then() for this Future.
    completer.complete('delayed call #1');
  });
  // The call to [Completer.complete()] resolves the Future object.
  return completer.future;
}

Future async2(String val) {
  // Print the result from the previous async call.
  print(val);
  // Then create a new Completer and schedule
  // it for later execution.
  var completer = new Completer();
  // Simulate a long-lasting async operation.
  new Future.delayed(const Duration(seconds: 3), () {
    completer.complete('delayed call #2');
  });
  return completer.future;
}

Future async3(String val) {
  // Return another Future object.
}

void main() {
  // Chain async calls. Each function returns a Future object.
  async1()
      .then((String val) => async2(val))
      .then((String val) => async3(val))
      .then((String val) => print(val));
}

We got rid of nested calls and have quite straightforward, shallow code. APIs similar to Dart's Future are very common among most JavaScript frameworks. Maybe you've already seen $.Deferred() in jQuery or $q.defer() in AngularJS. Future objects can also handle error states with catchError(), which are emitted by a Completer object with completeError(). Another use of Future is when we want a function to be called asynchronously; it is internally scheduled at the end of the event queue:

new Future(() {
  // function body
});

Sometimes this is useful when you want to let the browser process all the events before executing more computationally intensive tasks that could make the browser unresponsive for a moment. For more in-depth information about Dart's event loop, see the Dart documentation.

Using keywords async and await

Dart 1.9 introduced two new keywords, async and await, that significantly simplify the usage of asynchronous calls with the Future-Based API.

Async

The async keyword is used to mark a function's body, which then immediately returns a Future object; the body is executed later and its return value is used to complete that Future object, just as we saw previously when using the Completer class:

Future<String> hello() async {
  return 'Hello, World!';
}

In practice, you don't have to specify the Future<String> return type because even Dart Editor knows that an async function returns a Future object, so we'll omit it most of the time. This saves some writing, but its real power comes in combination with await.

Await

With Future, the only way to chain (or simulate synchronous) calls is to use the then() method multiple times, as we saw earlier.
But there's a new keyword, await, that is able to pause the execution of the current VM's thread and wait until the Future is completed:

String greetings = await hello();

The completed value of the Future is then used as the value of the entire await hello() expression. In comparison to the preceding example of multiple asynchronous calls, we could use just:

print(await async3(await async2(await async1())));

The only limitation here is that await must be used inside an asynchronous function (for example, one defined with async) in order not to block the main execution thread. If the expression with await raises an exception, it's propagated to its caller. We're going to use async and await a lot here, but it's good to know how to use the "original" Future-Based API with the Future and Completer classes, because it will take some time for developers of third-party libraries to update their code with async and await. Dart 1.9 actually introduced even more keywords, such as await-for, yield, async*, and a few more (also called generators), but these aren't very common and we're not going to discuss them here. If you want to know more about them, refer to https://www.dartlang.org/articles/beyond-async/.

Creating Ajax requests in Dart

Nearly every app these days uses Ajax. With libraries such as jQuery, it's very easy to make Ajax calls, and Dart is no different. Well, maybe the only difference is that Dart uses the Future-Based API. Creating an Ajax call and getting the response is this easy:

String url = 'http://domain.com/foo/bar';
Future ajax = HttpRequest.getString(url);
ajax.then((String response) {
  print(response);
});

// Or even easier with await.
// Let's assume we're inside an asynchronous function.
String response = await HttpRequest.getString(url);

That's all. HttpRequest.getString() is a static method that returns a Future<String> object. When the response is ready, the callback function is called with the response as a string. You can handle an error state with the catchError() method or just wrap the await expression in a try-catch block. By default, getString() uses the HTTP GET method. There are also more general static methods such as HttpRequest.request(), which returns Future<HttpRequest>, where you can access the return code, response type, and so on. You can also set a different HTTP method if you want. To send form data via the POST method, the best way is to use HttpRequest.postFormData(), which takes a URL and a Map object with form fields as arguments. In this article, we'll use Ajax to load a dictionary for our search algorithm as JSON, and we'll also see JSONP in action later.

Dart packages

Every Dart project that contains a pubspec.yaml file is also a package. Our search algorithm is a nice example of a component that can be used in multiple projects, so we'll stick to a few conventions that will make our code reusable. Dart doesn't have namespaces like other languages, such as PHP, Java, or C++. Instead, it has libraries, which are very similar in concept. We'll start writing our app by creating a new project with the Uber Simple Web Application template and creating two directories. First, we create /lib in the project's root. Files in this directory are automatically made public for anyone using our package. The second directory is /lib/src, where we'll put the implementation of our library, which is going to be private. Let's create a new file in /lib/fuzzy.dart:

// lib/fuzzy.dart
library fuzzy;

part 'src/fuzzy_search.dart';

This creates a library called fuzzy.
We could put all the code for this library right into fuzzy.dart, but that would be a mess. We'd rather split the implementation into multiple files and use the part keyword to tell Dart to make all the functions and classes defined in lib/src/fuzzy_search.dart public. One library can use the part keyword multiple times. Similar to object properties, everything that starts with an underscore (_) is private and not available from the outside. Then, in lib/src/fuzzy_search.dart, we'll put just the basic skeleton code for now:

// lib/src/fuzzy_search.dart
part of fuzzy;

class FuzzySearch {
  /* ... */
}

The part of keyword tells Dart that this file belongs to the fuzzy library. Then, in main.dart, we need to import our own library to be able to use the FuzzySearch class:

// web/main.dart
import 'package:Chapter_02_doc_search/fuzzy.dart';

// ... later in the code, create an instance of FuzzySearch.
var obj = new FuzzySearch();

Note that the fuzzy.dart file is inside the lib directory, but we didn't have to specify it. The package importer doesn't actually work with directory names but with package names, so Chapter_02_doc_search here is a package name from pubspec.yaml and not a directory, although the two have the same name. For more in-depth information about pubspec.yaml files, refer to https://www.dartlang.org/tools/pub/pubspec.html. You should end up with a structure like this. Note that the package has a reference to itself in the packages directory. One package can be a library and a web app at the same time. If you think about it, it's not total nonsense, because you can create a library and ship it with a demo app that shows what the library does and how to use it. You can read more about Dart packages at https://www.dartlang.org/tools/pub/package-layout.html.

Writing the fuzzy search algorithm

We can move on to writing the fuzzy search algorithm. A more proper name for this algorithm would probably be approximate string matching, because our implementation is simpler than the canonical one and we don't handle typos. Try to read the code:

// lib/src/fuzzy_search.dart
part of fuzzy;

class FuzzySearch {
  List<String> list;

  FuzzySearch(this.list);

  List<String> search(String term) {
    if (term.isEmpty) {
      return [];
    }

    // Iterate the entire list.
    List<String> result = list.where((String key) {
      int ti = 0; // term index
      // Check the order of characters in the search
      // term and in the string key.
      for (int si = 0; si < key.length; si++) {
        if (term[ti] == key[si]) {
          ti++;
          if (ti == term.length) {
            return true;
          }
        }
      }
      return false;
    }).toList(growable: false);

    // Custom sort function.
    // We want the shorter terms to be first because it's more
    // likely that what you're looking for is there.
    result.sort((String a, String b) {
      if (a.length > b.length) {
        return 1;
      } else if (a.length == b.length) {
        return 0;
      }
      return -1;
    });

    return result;
  }
}

The app itself will require simple HTML code (we're omitting the obvious surrounding code, such as <html> or <head>):

<body>
  <input type="search" id="search" disabled>
  <ul id="autocomplete-results"></ul>

  <div id="detail">
    <h1></h1>
    <div></div>
  </div>

  <script type="application/dart" src="main.dart"></script>
  <script data-pub-inline src="packages/browser/dart.js"></script>
</body>

We don't want to hardcode the dictionary, so we'll load it using Ajax.
The dictionary is a JSON file with all the search terms, and it looks like this:

{
  ...
  "strpos": {
    "desc": "Find the numeric position of the first occurrence of 'needle' in the 'haystack' string.",
    "name": "strpos"
  },
  ...
  "pdo::commit": {
    "desc": "...",
    "name": "PDO::commit"
  },
  ...
}

The key for each item is its lowercased name. In Dart, this JSON will be represented as:

Map<String, Map<String, String>>

Now, we'll write a static method that creates an instance of our app, and the main() function:

import 'dart:html';
import 'dart:convert';
import 'dart:async';
import 'package:Chapter_02_doc_search/fuzzy.dart';

class DocSearch {
  static fromJson(Element root, String url) async {
    String json = await HttpRequest.getString(url);
    Map decoded = JSON.decode(json);
    return new DocSearch(root, decoded);
  }

  DocSearch(Element root, [Map<String, dynamic> inputDict]) {
    // Rest of the constructor.
  }

  // The rest of the class goes here.
}

main() async {
  try {
    await DocSearch.fromJson(querySelector('body'), 'dict.json');
  } catch(e) {
    print("It's broken.");
  }
}

Note how we're creating an instance of DocSearch and declaring main() as asynchronous. We call the DocSearch.fromJson() static method, which returns a Future object (the async keyword does this for us automatically) that is completed with an instance of DocSearch when the Ajax call is finished and we've decoded the JSON into a Map object. The source code for this example contains both the Dart 1.9 implementation with async and await and a pre-1.9 version with the raw Future and Completer classes.

Handling HTML elements

You can see that if we hardcoded our dictionary, we could call the constructor of DocSearch like that of any other class. Let's now look at the constructor in particular:

// web/main.dart
class DocSearch {
  Element _root;
  InputElement _input;
  UListElement _ul;
  FuzzySearch _fuzzy;
  Map<String, dynamic> _dict;

  static Future fromJson(Element root, String url) async {
    /* The same as above. */
  }

  DocSearch(Element root, [Map<String, dynamic> inputDict]) {
    _root = root;
    dict = inputDict;
    _input = _root.querySelector('input');
    _ul = _root.querySelector('ul');

    // Usage of ".." notation.
    _input
      ..attributes.remove('disabled')
      ..onKeyUp.listen((_) => search(_input.value))
      ..onFocus.listen((_) => showAutocomplete());

    _ul.onClick.listen((Event e) {
      Element target = e.target;
      showDetail(target.dataset['key']);
    });

    // Pass only clicks that are not into <ul> or <input>.
    Stream customOnClick = document.onClick.where((Event e) {
      Element target = e.target;
      return target != _input && target.parent != _ul;
    });
    customOnClick.listen((Event e) => hideAutocomplete());
  }

  /* The rest of the class goes here. */
}

To set multiple properties on the same object, we can use the double dot (cascade) operator. This lets you avoid copying and pasting the same object name over and over again. This notation is equal to:

_input.attributes.remove('disabled');
_input.onKeyUp.listen((_) => search(_input.value));
_input.onFocus.listen((_) => showAutocomplete());

Of course, we can use it for more nested properties as well:

elm.attributes
  ..remove('whatever')
  ..putIfAbsent('value', 'key');

In the constructor, we're creating a custom Stream object, as we talked about earlier in this article. This stream passes only clicks outside our <ul> and <input> elements, which represent the autocomplete container and the search input field, respectively.
We need to do this because we want to be able to hide the autocomplete when the user clicks outside the search field. Using just onBlur on the input field (the lost-focus event) wouldn't work as we want, because any click in the autocomplete would hide it immediately without emitting onClick inside the autocomplete. This is a nice place for custom streams. We could also make our stream a public property and let other developers bind listeners to it. In vanilla JavaScript, you would probably do this as an event that checks both conditions and emits a second event, and then listen only to the second event. The rest of the code is mostly what we've already seen, but it's probably a good idea to recap it in context. From now on, we'll skip obvious things such as DOM manipulation unless there's something important. We're also omitting CSS files because they aren't important to us:

// web/main.dart
class DocSearch {
  /* Properties are the same as above. */

  static fromJson(Element root, String url) async { /* ... */ }

  DocSearch(Element root, [Map<String, dynamic> inputDict]) {
    /* ... */
  }

  // Custom setter for the dict property. When we change
  // the dictionary that this app uses, it will also change
  // the search list for the FuzzySearch instance.
  void set dict(Map<String, dynamic> dict) {
    _dict = dict;
    if (_fuzzy == null) {
      _fuzzy = new FuzzySearch(_dict.keys.toList());
    } else {
      _fuzzy.list = _dict.keys.toList();
    }
  }

  void search(String term) {
    if (term.length > 1) {
      int start = new DateTime.now().millisecondsSinceEpoch;
      List<String> results =
          _fuzzy.search(_input.value.toLowerCase());
      int end = new DateTime.now().millisecondsSinceEpoch;
      // Debug performance. Note the usage of interpolation.
      print('$term: ${(end - start).toString()} ms');

      renderAutocomplete(results);
    } else {
      hideAutocomplete();
    }
  }

  void renderAutocomplete(List<String> list) {
    if (list.length == 0) {
      hideAutocomplete();
    }

    // We'll use DocumentFragment as we talked about earlier.
    // http://jsperf.com/document-fragment-test-peluchetti
    DocumentFragment frag = new DocumentFragment();

    list.forEach((String key) {
      LIElement li = new LIElement();
      li.text = _dict[key]['name'];
      // Same as creating a 'data-key' attribute or using the data()
      // method in jQuery.
      li.dataset['key'] = key;
      frag.append(li);
    });

    _ul.children.clear();
    _ul.append(frag.clone(true));
    showAutocomplete();
  }

  void showDetail(String key) {
    Map<String, String> info = _dict[key];
    _root.querySelector('#detail > h1').text = info['name'];

    String desc = info['desc']
        .replaceAll('\n\n', '</p><p>')
        .replaceAll('\_', '_');
    _root.querySelector('#detail > div').innerHtml =
        '<p>' + desc + '</p>';

    hideAutocomplete();
  }

  void showAutocomplete() { _ul.style.display = 'block'; }

  void hideAutocomplete() { _ul.style.display = 'none'; }
}

Note that we defined a custom setter for the dict property, so when we change it from anywhere in the code, it also changes the list property in the instance of the FuzzySearch class. Dart allows writing both custom getters and setters:

void set property(<T> newValue) {
  // Custom logic here.
}

<T> get property {
  // Custom logic here.
  // Return an instance of <T>.
}

Finally, we can test it in the browser. When you type at least two characters in the search field, it opens the autocomplete with suggested function names.
You can click on one of them; it closes the autocomplete and shows a simple detail window with the function's name and description. You can open Developer Tools and see how long it takes Dart to traverse the entire list of 9,047 strings (about 25 ms on a 2.5 GHz Intel Core Duo). As we're already building the FuzzySearch class as a reusable library, it would be nice if we could use it not just in Dart but also in JavaScript.

Summary

This article focused on a very practical aspect of Dart. From Streams and the Future-based API to Ajax, Dart 1.9 took a significant step forward in simplifying the use of asynchronous APIs with the new async and await keywords. If you aren't yet familiar with the Future-based API, at least try to understand the new async and await features, and compare Dart's approach to asynchronous code with what you already know from JavaScript.

Resources for Article: Further resources on this subject: Handling the DOM in Dart [article] Handle Web Applications [article] Dart with JavaScript [article]
Financial Derivative – Options

Packt
22 May 2015
27 min read
In this article by Michael Heydt, author of Mastering pandas for Finance, we will examine working with options data provided by Yahoo! Finance using pandas. Options are a type of financial derivative and can be very complicated to price and use in investment portfolios. Because of their level of complexity, there have been many books written that are very heavy on the mathematics of options. Our goal will not be to cover the mathematics in detail but to focus on understanding several core concepts in options, retrieving options data from the Internet, manipulating it using pandas, including determining their value, and being able to check the validity of the prices offered in the market. (For more resources related to this topic, see here.) Introducing options An option is a contract that gives the buyer the right, but not the obligation, to buy or sell an underlying security at a specific price on or before a certain date. Options are considered derivatives as their price is derived from one or more underlying securities. Options involve two parties: the buyer and the seller. The parties buy and sell the option, not the underlying security. There are two general types of options: the call and the put. Let's look at them in detail: Call: This gives the holder of the option the right to buy an underlying security at a certain price within a specific period of time. They are similar to having a long position on a stock. The buyer of a call is hoping that the value of the underlying security will increase substantially before the expiration of the option and, therefore, they can buy the security at a discount from the future value. Put: This gives the option holder the right to sell an underlying security at a certain price within a specific period of time. A put is similar to having a short position on a stock. The buyer of a put is betting that the price of the underlying security will fall before the expiration of the option and they will, thereby, be able to gain a profit by benefitting from receiving the payment in excess of the future market value. The basic idea is that one side of the party believes that the underlying security will increase in value and the other believes it will decrease. They will agree upon a price known as the strike price, where they place their bet on whether the price of the underlying security finishes above or below this strike price on the expiration date of the option. Through the contract of the option, the option seller agrees to give the buyer the underlying security on the expiry of the option if the price is above the strike price (for a call). The price of the option is referred to as the premium. This is the amount the buyer will pay to the seller to receive the option. This price of an option depends upon many factors, of which the following are the primary factors: The current price of the underlying security How long the option needs to be held before it expires (the expiry date) The strike price on the expiry date of the option The interest rate of capital in the market The volatility of the underlying security There being an adequate interest between buyer and seller around the given option The premium is often established so that the buyer can speculate on the future value of the underlying security and be able to gain rights to the underlying security in the future at a discount in the present. 
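To make the relationship between the strike price, the premium, and the buyer's outcome at expiry concrete, here is a minimal sketch in Python. It is not taken from the book, and the contract terms below are purely illustrative: a buyer's profit at expiry is simply the option's payoff minus the premium paid.

    # Illustrative only: profit for the BUYER of a call or a put at expiry.

    def long_call_profit(price_at_expiry, strike, premium):
        # Call payoff at expiry, less the premium paid for the option.
        return max(0.0, price_at_expiry - strike) - premium

    def long_put_profit(price_at_expiry, strike, premium):
        # Put payoff at expiry, less the premium paid for the option.
        return max(0.0, strike - price_at_expiry) - premium

    strike, premium = 100.0, 5.0  # hypothetical contract terms
    for price in (80, 95, 100, 105, 120):
        print(price,
              long_call_profit(price, strike, premium),
              long_put_profit(price, strike, premium))

Under these made-up terms, the call buyer breaks even when the underlying finishes at the strike plus the premium (105 here), the put buyer at the strike minus the premium (95 here), and in the worst case either buyer's loss is capped at the premium.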
The holder of the option, known as the buyer, is not obliged to exercise the option on its expiration date, but the writer, also referred to as the seller, however, is obliged to buy or sell the instrument if the option is exercised. Options can provide a variety of benefits such as the ability to limit risk and the advantage of providing leverage. They are often used to diversify an investment portfolio to lower risk during times of rising or falling markets. There are four types of participants in an options market: Buyers of calls Sellers of calls Buyers of puts Sellers of puts Buyers of calls believe that the underlying security will exceed a certain level and are not only willing to pay a certain amount to see whether that happens, but also lose their entire premium if it does not. Their goal is that the resulting payout of the option exceeds their initial premium and they, therefore, make a profit. However, they are willing to forgo their premium in its entirety if it does not clear the strike price. This then becomes a game of managing the risk of the profit versus the fixed potential loss. Sellers of calls are on the other side of buyers. They believe the price will drop and that the amount they receive in payment for the premium will exceed any loss in the price. Normally, the seller of a call would already own the stock. They do not believe the price will exceed the strike price and that they will be able to keep the underlying security and profit if the underlying security stays below the strike by an amount that does not exceed the received premium. Loss is potentially unbounded as the stock increases in price above the strike price, but that is the risk for an upfront receipt of cash and potential gains on loss of price in the underlying instrument. A buyer of a put is betting that the price of the stock will drop beyond a certain level. By buying a put they gain the option to force someone to buy the underlying instrument at a fixed price. By doing this, they are betting that they can force the sale of the underlying instrument at a strike price that is higher than the market price and in excess of the premium that they pay to the seller of the put option. On the other hand, the seller of the put is betting that they can make an offer on an instrument that is perceived to lose value in the future. They will offer the option for a price that gives them cash upfront, and they plan that at maturity of the option, they will not be forced to purchase the underlying instrument. Therefore, it keeps the premium as pure profit. Or, the price of the underlying instruments drops only a small amount so that the price of buying the underlying instrument relative to its market price does not exceed the premium that they received. Notebook setup The examples in this article will be based on the following configuration in IPython: In [1]:    import pandas as pd    import numpy as np    import pandas.io.data as web    from datetime import datetime      import matplotlib.pyplot as plt    %matplotlib inline      pd.set_option('display.notebook_repr_html', False)    pd.set_option('display.max_columns', 7)    pd.set_option('display.max_rows', 15)    pd.set_option('display.width', 82)    pd.set_option('precision', 3) Options data from Yahoo! Finance Options data can be obtained from several sources. Publicly listed options are exchanged on the Chicago Board Options Exchange (CBOE) and can be obtained from their website. 
Through the DataReader class, pandas also provides built-in (although in the documentation referred to as experimental) access to options data. The following command reads all currently available options data for AAPL: In [2]:    aapl_options = web.Options('AAPL', 'yahoo') aapl_options = aapl_options.get_all_data().reset_index() This operation can take a while as it downloads quite a bit of data. Fortunately, it is cached so that subsequent calls will be quicker, and there are other calls to limit the types of data downloaded (such as getting just puts). For convenience, the following command will save this data to a file for quick reload at a later time. Also, it helps with repeatability of the examples. The data retrieved changes very frequently, so the actual examples in the book will use the data in the file provided with the book. It saves the data for later use (it's commented out for now so as not to overwrite the existing file). Here's the command we are talking about: In [3]:    #aapl_options.to_csv('aapl_options.csv') This data file can be reloaded with the following command: In [4]:    aapl_options = pd.read_csv('aapl_options.csv',                              parse_dates=['Expiry']) Whether from the Web or the file, the following command restructures and tidies the data into a format best used in the examples to follow: In [5]:    aos = aapl_options.sort(['Expiry', 'Strike'])[      ['Expiry', 'Strike', 'Type', 'IV', 'Bid',          'Ask', 'Underlying_Price']]    aos['IV'] = aos['IV'].apply(lambda x: float(x.strip('%'))) Now, we can take a look at the data retrieved: In [6]:    aos   Out[6]:            Expiry Strike Type     IV   Bid   Ask Underlying_Price    158 2015-02-27     75 call 271.88 53.60 53.85           128.79    159 2015-02-27     75 put 193.75 0.00 0.01           128.79    190 2015-02-27     80 call 225.78 48.65 48.80           128.79    191 2015-02-27     80 put 171.88 0.00 0.01           128.79    226 2015-02-27     85 call 199.22 43.65 43.80           128.79 There are 1,103 rows of options data available. The data is sorted by Expiry and then Strike price to help demonstrate examples. Expiry is the data at which the particular option will expire and potentially be exercised. We have the following expiry dates that were retrieved. Options typically are offered by an exchange on a monthly basis and within a short overall duration from several days to perhaps two years. In this dataset, we have the following expiry dates: In [7]:    aos['Expiry'].unique()   Out[7]:    array(['2015-02-26T17:00:00.000000000-0700',          '2015-03-05T17:00:00.000000000-0700',          '2015-03-12T18:00:00.000000000-0600',          '2015-03-19T18:00:00.000000000-0600',          '2015-03-26T18:00:00.000000000-0600',          '2015-04-01T18:00:00.000000000-0600',          '2015-04-16T18:00:00.000000000-0600',          '2015-05-14T18:00:00.000000000-0600',          '2015-07-16T18:00:00.000000000-0600',          '2015-10-15T18:00:00.000000000-0600',          '2016-01-14T17:00:00.000000000-0700',          '2017-01-19T17:00:00.000000000-0700'], dtype='datetime64[ns]') For each option's expiration date, there are multiple options available, split between puts and calls, and with different strike values, prices, and associated risk values. As an example, the option with the index 158 that expires on 2015-02-27 is for buying a call on AAPL with a strike price of $75. The price we would pay for each share of AAPL would be the bid price of $53.60. 
Options typically cover 100 units of the underlying security; therefore, this option would cost 100 x $53.60, or $5,360, upfront:

In [8]:
   aos.loc[158]

Out[8]:
   Expiry              2015-02-27 00:00:00
   Strike                               75
   Type                               call
   IV                                  272
   Bid                                53.6
   Ask                                53.9
   Underlying_Price                    129
   Name: 158, dtype: object

This $5,360 does not buy us the 100 shares of AAPL. It gives us the right to buy 100 shares of AAPL on 2015-02-27 at $75 per share. We should only buy if the price of AAPL is above $75 on 2015-02-27. If not, we will have lost our premium of $5,360, and exercising below that price would only increase our loss. Also, note that these quotes were retrieved on 2015-02-25, so this specific option has only two days until it expires. That has a huge effect on the pricing: we have paid $5,360 for the option to buy 100 shares of AAPL on 2015-02-27 if the price of AAPL is above $75 on that date. The price of AAPL when the option was priced was $128.79 per share, so if we were to buy 100 shares of AAPL now, we would pay $12,879. If AAPL is above $75 on 2015-02-27, we can buy 100 shares for $7,500. There is not a lot of time between the quote and the expiry of this option, and with AAPL at $128.79, it is very likely that the price will still be above $75 in two days. Therefore, in two days: We can walk away even if the price is $75 or above, but since we paid $5,360, we probably wouldn't want to do that. At $75 or above, we can force execution of the option, where we give the seller $7,500 and receive 100 shares of AAPL. If the price of AAPL is still $128.79 per share, then we will have bought $12,879 worth of AAPL for $7,500 + $5,360, or $12,860 in total. Technically, we will have saved $19 over two days! But only if the price didn't drop. If for some reason AAPL dropped below $75 in those two days, we would have kept our loss to our premium of $5,360. This is not great, but if we had bought $12,879 of AAPL on 2015-02-25 and it dropped to $74.99 on 2015-02-27, we would have lost $12,879 - $7,499, or $5,380. So, we actually would have saved $20 of loss by buying the call option. It is interesting how this math works out. Excluding transaction fees, options are a zero-sum game; it just comes down to how much risk is involved in the option versus your upfront premium and how the market moves. If you feel you know something, it can be quite profitable. Of course, it can also be devastatingly unprofitable. We will not examine the put side of this example; suffice it to say that it works out similarly from the side of the seller.

Implied volatility

There is one more field in our dataset that we haven't looked at yet: implied volatility (IV). We won't get into the details of the mathematics of how it is calculated, but it reflects the amount of volatility that the market has factored into the option. This is different from historical volatility (typically the standard deviation of the previous year of returns). In general, it is informative to examine the IV relative to the strike price on a particular Expiry date.
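As a side note, the historical volatility mentioned above is straightforward to compute with pandas. The following is a small illustrative sketch rather than part of the book's example; it uses a synthetic price series instead of real AAPL quotes and simply annualizes the standard deviation of daily log returns.

    import numpy as np
    import pandas as pd

    # Synthetic daily closing prices standing in for about a year of data.
    np.random.seed(0)
    prices = pd.Series(100 * np.exp(np.random.normal(0, 0.02, 252).cumsum()))

    # Daily log returns, annualized over roughly 252 trading days.
    log_returns = np.log(prices / prices.shift(1)).dropna()
    historical_vol = log_returns.std() * np.sqrt(252)
    print('Annualized historical volatility: {0:.1%}'.format(historical_vol))

Implied volatility, by contrast, is quoted per option contract, which is why it is most informative to look at it against the strike price for a single expiry date.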
The following command shows this in tabular form for calls on 2015-02-27: In [9]:    calls1 = aos[(aos.Expiry=='2015-02-27') & (aos.Type=='call')]    calls1[:5]   Out[9]:            Expiry Strike Type     IV   Bid   Ask Underlying_Price    158 2015-02-27     75 call 271.88 53.60 53.85           128.79    159 2015-02-27     75   put 193.75 0.00   0.01           128.79    190 2015-02-27     80 call 225.78 48.65 48.80           128.79    191 2015-02-27     80   put 171.88 0.00   0.01           128.79    226 2015-02-27     85 call 199.22 43.65 43.80           128.79 It appears that as the strike price approaches the underlying price, the implied volatility decreases. Plotting this shows it even more clearly: In [10]:    ax = aos[(aos.Expiry=='2015-02-27') & (aos.Type=='call')] \            .set_index('Strike')[['IV']].plot(figsize=(12,8))    ax.axvline(calls1.Underlying_Price.iloc[0], color='g'); The shape of this curve is important as it defines points where options are considered to be either in or out of the money. A call option is referred to as in the money when the options strike price is below the market price of the underlying instrument. A put option is in the money when the strike price is above the market price of the underlying instrument. Being in the money does not mean that you will profit; it simply means that the option is worth exercising. Where and when an option is in our out of the money can be visualized by examining the shape of its implied volatility curve. Because of this curved shape, it is generally referred to as a volatility smile as both ends tend to turn upwards on both ends, particularly, if the curve has a uniform shape around its lowest point. This is demonstrated in the following graph, which shows the nature of in/out of the money for both puts and calls: A skew on the smile demonstrates a relative demand that is greater toward the option being in or out of the money. When this occurs, the skew is often referred to as a smirk. Volatility smirks Smirks can either be reverse or forward. The following graph demonstrates a reverse skew, similar to what we have seen with our AAPL 2015-02-27 call: In a reverse-skew smirk, the volatility for options at lower strikes is higher than at higher strikes. This is the case with our AAPL options expiring on 2015-02-27. This means that the in-the-money calls and out-of-the-money puts are more expensive than out-of-the-money calls and in-the-money puts. A popular explanation for the manifestation of the reverse volatility skew is that investors are generally worried about market crashes and buy puts for protection. One piece of evidence supporting this argument is the fact that the reverse skew did not show up for equity options until after the crash of 1987. Another possible explanation is that in-the-money calls have become popular alternatives to outright stock purchases as they offer leverage and, hence, increased ROI. This leads to greater demand for in-the-money calls and, therefore, increased IV at the lower strikes. The other variant of the volatility smirk is the forward skew. In the forward-skew pattern, the IV for options at the lower strikes is lower than the IV at higher strikes. This suggests that out-of-the-money calls and in-the-money puts are in greater demand compared to in-the-money calls and out-of-the-money puts: The forward-skew pattern is common for options in the commodities market. When supply is tight, businesses would rather pay more to secure supply than to risk supply disruption. 
For example, if weather reports indicate a heightened possibility of an impending frost, fear of supply disruption will cause businesses to drive up demand for out-of-the-money calls for the affected crops.

Calculating payoff on options

The payoff of an option is a relatively straightforward calculation based upon the type of the option, and is derived from the price of the underlying security on expiry relative to the strike price. The formula for the call option payoff is as follows:

   payoff = max(0, price_at_maturity - strike_price)

The formula for the put option payoff is as follows:

   payoff = max(0, strike_price - price_at_maturity)

We will model both of these functions and visualize their payoffs.

The call option payoff calculation

An option gives the buyer of the option the right to buy (a call option) or sell (a put option) an underlying security at a point in the future and at a predetermined price. A call option is basically a bet on whether or not the price of the underlying instrument will exceed the strike price. Your bet is the price of the option (the premium). On the expiry date of a call, the value of the option is 0 if the strike price has not been exceeded. If it has been exceeded, its value is the difference between the market price of the underlying security and the strike price. The value of a call option at expiry can be calculated with the following function:

In [11]:
   def call_payoff(price_at_maturity, strike_price):
       return max(0, price_at_maturity - strike_price)

When the price of the underlying instrument is below the strike price, the value is 0 (out of the money). This can be seen here:

In [12]:
   call_payoff(25, 30)

Out[12]:
   0

When it is above the strike price (in the money), it will be the difference between the price and the strike price:

In [13]:
   call_payoff(35, 30)

Out[13]:
   5

The following function returns a DataFrame object that calculates the payoff of an option over a range of maturity prices. It uses np.vectorize() to efficiently apply the call_payoff() function to each item in the array of maturity prices:

In [14]:
   def call_payoffs(min_maturity_price, max_maturity_price,
                    strike_price, step=1):
       maturities = np.arange(min_maturity_price,
                              max_maturity_price + step, step)
       payoffs = np.vectorize(call_payoff)(maturities, strike_price)
       df = pd.DataFrame({'Strike': strike_price, 'Payoff': payoffs},
                         index=maturities)
       df.index.name = 'Maturity Price'
       return df

The following command demonstrates the use of this function to calculate the payoffs of an underlying security at finishing prices ranging from 10 to 25 and with a strike price of 15:

In [15]:
   call_payoffs(10, 25, 15)

Out[15]:
                   Payoff  Strike
   Maturity Price
   10                   0      15
   11                   0      15
   12                   0      15
   13                   0      15
   14                   0      15
   ...                ...     ...
   21                   6      15
   22                   7      15
   23                   8      15
   24                   9      15
   25                  10      15

   [16 rows x 2 columns]

Using this result, we can visualize the payoffs with the following function:

In [16]:
   def plot_call_payoffs(min_maturity_price, max_maturity_price,
                         strike_price, step=1):
       payoffs = call_payoffs(min_maturity_price, max_maturity_price,
                              strike_price, step)
       plt.ylim(payoffs.Payoff.min() - 10, payoffs.Payoff.max() + 10)
       plt.ylabel("Payoff")
       plt.xlabel("Maturity Price")
       plt.title('Payoff of call option, Strike={0}'
                 .format(strike_price))
       plt.xlim(min_maturity_price, max_maturity_price)
       plt.plot(payoffs.index, payoffs.Payoff.values);

The payoffs are visualized as follows:

In [17]:
   plot_call_payoffs(10, 25, 15)

The put option payoff calculation

The value of a put option can be calculated with the following function:

In [18]:
   def put_payoff(price_at_maturity, strike_price):
       return max(0, strike_price - price_at_maturity)

While the price of the underlying is above the strike price, the value is 0:

In [19]:
   put_payoff(25, 20)

Out[19]:
   0

When the price is below the strike price, the value of the option is the difference between the strike price and the price:

In [20]:
   put_payoff(15, 20)

Out[20]:
   5

The payoff for a series of prices can be calculated with the following function:

In [21]:
   def put_payoffs(min_maturity_price, max_maturity_price,
                   strike_price, step=1):
       maturities = np.arange(min_maturity_price,
                              max_maturity_price + step, step)
       payoffs = np.vectorize(put_payoff)(maturities, strike_price)
       df = pd.DataFrame({'Payoff': payoffs, 'Strike': strike_price},
                         index=maturities)
       df.index.name = 'Maturity Price'
       return df

The following command demonstrates the values of the put payoffs for prices of 10 through 25 with a strike price of 15:

In [22]:
   put_payoffs(10, 25, 15)

Out[22]:
                   Payoff  Strike
   Maturity Price
   10                   5      15
   11                   4      15
   12                   3      15
   13                   2      15
   14                   1      15
   ...                ...     ...
   21                   0      15
   22                   0      15
   23                   0      15
   24                   0      15
   25                   0      15

   [16 rows x 2 columns]

The following function will generate a graph of the payoffs:

In [23]:
   def plot_put_payoffs(min_maturity_price,
                        max_maturity_price,
                        strike_price,
                        step=1):
       payoffs = put_payoffs(min_maturity_price,
                             max_maturity_price,
                             strike_price, step)
       plt.ylim(payoffs.Payoff.min() - 10, payoffs.Payoff.max() + 10)
       plt.ylabel("Payoff")
       plt.xlabel("Maturity Price")
       plt.title('Payoff of put option, Strike={0}'
                 .format(strike_price))
       plt.xlim(min_maturity_price, max_maturity_price)
       plt.plot(payoffs.index, payoffs.Payoff.values);

The following command demonstrates the payoffs for prices between 10 and 25 with a strike price of 15:

In [24]:
   plot_put_payoffs(10, 25, 15)

Summary

In this article, we examined several techniques for using pandas to calculate the prices of options, their payoffs, and the profit and loss for the various combinations of calls and puts for both buyers and sellers.

Resources for Article: Further resources on this subject: Why Big Data in the Financial Sector? [article] Building Financial Functions into Excel 2010 [article] Using indexes to manipulate pandas objects [article]
vROps – Introduction and Architecture

Packt
22 May 2015
10 min read
In this article by Scott Norris and Christopher Slater, the authors of Mastering vRealize Operations Manager, we introduce you to vRealize Operations Manager and its component architecture. vRealize Operations Manager (vROps) 6.0 is a tool from VMware that helps IT administrators monitor, troubleshoot, and manage the health and capacity of their virtual environment. vROps has been developed from the stage of being a single tool to being a suite of tools known as vRealize Operations. This suite includes vCenter Infrastructure Navigator (VIN), vRealize Configuration Manager (vCM), vRealize Log Insight, and vRealize Hyperic. Due to its popularity and the powerful analytics engine that vROps uses, many hardware vendors supply adapters (now known as solutions) that allow IT administrators to extend monitoring, troubleshooting, and capacity planning to non-vSphere systems including storage, networking, applications, and even physical devices. In this article, we will learn what's new with vROps 6.0; specifically with respect to its architecture components. One of the most impressive changes with vRealize Operations Manager 6.0 is the major internal architectural change of components, which has helped to produce a solution that supports both a scaled-out and high-availability deployment model. In this article, we will describe the new platform components and the details of the new deployment architecture. (For more resources related to this topic, see here.) A new, common platform design In vRealize Operations Manager 6.0, a new platform design was required to meet some of the required goals that VMware envisaged for the product. These included: The ability to treat all solutions equally and to be able to offer management of performance, capacity, configuration, and compliance of both VMware and third-party solutions The ability to provide a single platform that can scale up to tens of thousands of objects and millions of metrics by scaling out with little reconfiguration or redesign required The ability to support a monitoring solution that can be highly available and to support the loss of a node without impacting the ability to store or query information To meet these goals, vCenter Operations Manager 5.x (vCOps) went through a major architectural overhaul to provide a common platform that uses the same components no matter what deployment architecture is chosen. These changes are shown in the following figure: When comparing the deployment architecture of vROps 6.0 with vCOps 5.x, you will notice that the footprint has changed dramatically. Listed in the following table are some of the major differences in the deployment of vRealize Operations Manager 6.0 compared to vRealize Operations Manager 5.x: Deployment considerations vCenter Operations Manager 5.x vRealize Operations Manager 6.0 vApp deployment vApp consists of two VMs: The User Interface VM The Analytics VM There is no supported way to add additional VMs to vApp and therefore no way to scale out. This deploys a single virtual appliance (VA), that is, the entire solution is provided in each VA. As many as up to 8 VAs can be deployed with this type of deployment. Scaling This deployment could only be scaled up to a certain extent. If it is scaled beyond this, separate instances are needed to be deployed, which do not share the UI or data. This deployment is built on the GemFire federated cluster that supports sharing of data and the UI. Data resiliency is done through GemFire partitioning. 
Remote collector Remote collectors are supported in vCOps 5.x, but with the installable version only. These remote collectors require a Windows or Linux base OS. The same VA is used for the remote collector simply by specifying the role during the configuration. Installable/standalone option It is required that customers own MSSQL or Oracle database. No capacity planning or vSphere UI is provided with this type of deployment. This deployment leverages built-in databases. It uses the same code base as used in the VA. The ability to support new scaled out and highly available architectures will require an administrator to consider which model is right for their environment before a vRealize Operations Manager 6.0 migration or rollout begins. The vRealize Operations Manager component architecture With a new common platform design comes a completely new architecture. As mentioned in the previous table, this architecture is common across all deployed nodes as well as the vApp and other installable versions. The following diagram shows the five major components of the Operations Manager architecture: The five major components of the Operations Manager architecture depicted in the preceding figure are: The user interface Collector and the REST API Controller Analytics Persistence The user interface In vROps 6.0, the UI is broken into two components—the Product UI and the Admin UI. Unlike the vCOps 5.x vApp, the vROps 6.0 Product UI is present on all nodes with the exception of nodes that are deployed as remote collectors. Remote collectors will be discussed in more detail in the next section. The Admin UI is a web application hosted by Pivotal tc Server(A Java application Apache web server) and is responsible for making HTTP REST calls to the Admin API for node administration tasks. The Cluster and Slice Administrator (CaSA) is responsible for cluster administrative actions such as: Enabling/disabling the Operations Manager cluster Enabling/disabling cluster nodes Performing software updates Browsing logfiles The Admin UI is purposely designed to be separate from the Product UI and always be available for administration and troubleshooting tasks. A small database caches data from the Product UI that provides the last known state information to the Admin UI in the event that the Product UI and analytics are unavailable. The Admin UI is available on each node at https://<NodeIP>/admin. The Product UI is the main Operations Manager graphical user interface. Like the Admin UI, the Product UI is based on Pivotal tc Server and can make HTTP REST calls to the CaSA for administrative tasks. However, the primary purpose of the Product UI is to make GemFire calls to the Controller API to access data and create views, such as dashboards and reports. As shown in the following figure, the Product UI is simply accessed via HTTPS on TCP port 443. Apache then provides a reverse proxy to the Product UI running in Pivotal tc Server using the Apache AJP protocol. Collector The collector's role has not differed much from that in vCOps 5.x. The collector is responsible for processing data from solution adapter instances. As shown in the following figure, the collector uses adapters to collect data from various sources and then contacts the GemFire locator for connection information of one or more controller cache servers. The collector service then connects to one or more Controller API GemFire cache servers and sends the collected data. 
It is important to note that although an instance of an adapter can only be run on one node at a time, this does not imply that the collected data is being sent to the controller on that node. Controller The controller manages the storage and retrieval of the inventory of the objects within the system. The queries are performed by leveraging the GemFire MapReduce function that allows you to perform selective querying. This allows efficient data querying as data queries are only performed on selective nodes rather than all nodes. We will go in detail to know how the controller interacts with the analytics and persistence stack a little later as well as its role in creating new resources, feeding data in, and extracting views. Analytics Analytics is at the heart of vROps as it is essentially the runtime layer for data analysis. The role of the analytics process is to track the individual states of every metric and then use various forms of correlation to determine whether there are problems. At a high level, the analytics layer is responsible for the following tasks: Metric calculations Dynamic thresholds Alerts and alarms Metric storage and retrieval from the Persistence layer Root cause analysis Historic Inventory Server (HIS) version metadata calculations and relationship data One important difference between vROps 6.0 and vCOps 5.x is that analytics tasks are now run on every node (with the exception of remote collectors). The vCOps 5.x Installable provides an option of installing separate multiple remote analytics processors for dynamic threshold (DT) processing. However, these remote DT processors only support dynamic threshold processing and do not include other analytics functions. Although its primary tasks have not changed much from vCOps 5.x, the analytics component has undergone a significant upgrade under the hood to work with the new GemFire-based cache and the Controller and Persistence layers. Persistence The Persistence layer, as its name implies, is the layer where the data is persisted to a disk. The layer primarily consists of a series of databases that replace the existing vCOps 5.x filesystem database (FSDB) and PostgreSQL combination. Understanding the persistence layer is an important aspect of vROps 6.0, as this layer has a strong relationship with the data and service availability of the solution. vROps 6.0 has four primary database services built on the EMC Documentum xDB (an XML database) and the original FSDB. These services include: Common name Role DB type Sharded Location Global xDB Global data Documentum xDB No /storage/vcops/xdb Alarms xDB Alerts and Alarms data Documentum xDB Yes /storage/vcops/alarmxdb HIS xDB Historical Inventory Service data Documentum xDB Yes /storage/vcops/hisxdb FSDB Filesystem Database metric data FSDB Yes /storage/db/vcops/data CaSA DB Cluster and Slice Administrator data HSQLDB (HyperSQL database) N/A /storage/db/casa/webapp/hsqldb Sharding is the term that GemFire uses to describe the process of distributing data across multiple systems to ensure that computational, storage, and network loads are evenly distributed across the cluster. Global xDB Global xDB contains all of the data that, for the release of vROps, can not be sharded. 
The majority of this data is user configuration data that includes: User created dashboards and reports Policy settings and alert rules Super metric formulas (not super metric data, as this is sharded in the FSDB) Resource control objects (used during resource discovery) As Global xDB is used for data that cannot be sharded, it is solely located on the master node (and master replica if high availability is enabled). Alarms xDB Alerts and Alarms xDB is a sharded xDB database that contains information on DT breaches. This information then gets converted into vROps alarms based on active policies. HIS xDB Historical Inventory Service (HIS) xDB is a sharded xDB database that holds historical information on all resource properties and parent/child relationships. HIS is used to change data back to the analytics layer based on the incoming metric data that is then used for DT calculations and symptom/alarm generation. FSDB The role of the Filesystem Database is not differed much from vCOps 5.x. The FSDB contains all raw time series metrics for the discovered resources. The FSDB metric data, HIS object, and Alarms data for a particular resource share the same GemFire shard key. This ensures that the multiple components that make up the persistence of a given resource are always located on the same node. Summary In this article, we discussed the new common platform architecture design and how Operations Manager 6.0 differs from Operations Manager 5.x. We also covered the major components that make up the Operations Manager 6.0 platform and the functions that each of the component layers provide. Resources for Article: Further resources on this subject: Solving Some Not-so-common vCenter Issues [article] VMware vRealize Operations Performance and Capacity Management [article] Working with VMware Infrastructure [article]
Learning Selenium Testing Tools with Python

Packt
08 May 2015
3 min read
Selenium is a portable software testing framework for web applications. It is open-source software, released under the Apache 2.0 license, and can be downloaded and used without charge. (For more resources related to this topic, see here.) Selenium WebDriver is the successor to Selenium RC. Selenium WebDriver accepts commands and sends them to a browser. This is implemented through a browser-specific browser driver, which sends commands to a browser, and retrieves results. Selenium WebDriver is a set of open source tools and libraries to automate browsers. It has gained a wider acceptance and has become a tool of choice for automated testing on web applications. Selenium WebDriver is now part of the W3C standard. The beauty of Selenium WebDriver is that the user can write automated tests in any language, thanks to its platform agnostic approach. It provides a number of client libraries in Java, C#, Python, Ruby, JavaScript, and more to write the tests. Over the years Selenium has become a very powerful testing platform and many organizations are adopting Selenium over the other commercial tools. In the book Learning Selenium WebDriver with Python by Unmesh Gundecha, you will learn the following topics: Creating Selenium WebDriver tests using Python the unittest module Using Selenium WebDriver for cross-browser testing Building reliable and robust tests using implicit and explicit waits Setting up and using Selenium Grid for a distributed run Testing web applications on mobile platforms such as iOS and Android using Appium Using various methods provided by Selenium WebDriver to locate the web elements and interact with them Capturing screenshot and video of the test execution This book is a practical guide on automated web testing with Selenium testing tools using Python and is written for users with previous Python experience, although any previous knowledge of Selenium WebDriver is not needed. The author has provided you with step-by-step tutorials including practical examples that will help you build automated tests for testing your web applications using the Selenium WebDriver Python client library. This book is an interactive guide on automated web testing with Selenium WebDriver using Python. With the help of this book you can use Selenium for automated testing in real world, explore the Selenium WebDriver API for easy implementation of small to complex operations on browsers and web applications, and its easy and practical examples will help you get started with Selenium WebDriver. Summary The main aim of this book is to cover the fundamentals related to Python Selenium testing. You will learn how the Selenium WebDriver Python API can be integrated with CI and Build tools to allow tests to be run while building applications. This book will guide you through using the Selenium WebDriver Python client library as well as other tools from the Selenium project. Towards the end of this book, you'll get to grips with Selenium Grid, which is used for running tests in parallel using nodes for cross-browser testing. It will also give you a basic overview of the concepts, while helping you improve your practical testing skills with Python and Selenium. Resources for Article: Further resources on this subject: BackTrack 4: Security with Penetration Testing Methodology [article] Improving Plone 3 Product Performance [article] Selenium Testing Tools [article]
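As a quick taste of what the Selenium WebDriver Python client library looks like in practice, here is a small sketch in the style of the unittest-based tests described above. It is not an excerpt from the book; it assumes Firefox and its driver are installed locally, and the python.org search box (located by the name 'q') may change over time.

    import unittest
    from selenium import webdriver

    class PythonOrgSearch(unittest.TestCase):
        def setUp(self):
            # Start a new browser session for every test.
            self.driver = webdriver.Firefox()

        def test_search_box_is_present(self):
            driver = self.driver
            driver.get('http://www.python.org')
            self.assertIn('Python', driver.title)
            search_box = driver.find_element_by_name('q')
            search_box.send_keys('selenium')
            search_box.submit()

        def tearDown(self):
            # Always close the browser, even if the test fails.
            self.driver.quit()

    if __name__ == '__main__':
        unittest.main()

The same test can be pointed at other browsers simply by swapping webdriver.Firefox() for another driver class, which is the basis of the cross-browser testing the book covers.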
Learning R for Geospatial Analysis

Packt
08 May 2015
3 min read
The defining feature of spatial data analysis is the reference, within the data being analyzed, to locations on the surface of the earth. This is a very broad subject encompassing distinct areas of expertise such as spatial statistics, geometric computation, and image processing. In practice, spatial data is commonly stored, viewed, and analyzed in Geographic Information System (GIS) software, of which the most well-known example is ArcGIS. However, most often, menu-based interfaces of GIS software are too narrow in scope to meet with specialized demands or too inflexible to feasibly accomplish customized, repetitive tasks. Writing scripts rather than using menus or working in combination with external software are two commonly used paths to solve such problems. However, what if we can use a single environment, combining the advantages of programming and spatial data analysis capabilities with a comprehensive ecosystem of computational tools that are readily implementable in customized procedures? This book will demonstrate that the R programming language is indeed such an environment and teach you how to use it in order to perform various spatial data analysis tasks. (For more resources related to this topic, see here.) What you will learn This book covers the basic concepts related to writing R code. You will also learn how to work with vectors, time series, tables, rasters, points, lines, and polygons. The book also covers several advanced themes associated with raster data analysis in R. Demonstrations on how rasters and vector layers can be combined in a single analysis are shown. Transformation between raster and vector data structures as well as data extraction from a raster based on vector layers are covered in this book. Moreover, we will also learn how spatial interpolation can be carried out in R through examples of interpolating meteorological point measurements to create annual temperature maps of Spain. You will also explore some of the most useful methods for advanced visualization of spatial data in R, using the ggplot2, ggmap, and lattice packages. How the book differs Most currently available books on this subject are focused on advanced applications such as spatial statistics, assuming you have prior knowledge of R and the respective scientific domains. Yet, introductory material on R from the point of view of a spatial data analyst, which is focused on introductory topics such as spatial data handling, computation, and visualization, is scarce. This book aims to fill the gap. Thus, this book is intended for anyone who wants to learn how to efficiently analyze geospatial data with R. No prior experience with R and/or programming is required; only you need to be familiar with basic geographic information concepts (such as spatial coordinates). Required skills To follow through the examples in this book, all you need to do is install R (which is available for free) and download the example datasets from the book's website. Some of the examples also require you to have an Internet connection to download additional datasets and R packages from the R environment. Summary This book is composed of step-by-step tutorials, starting with the language basics before proceeding to cover the main GIS operations and data types. Visualization of spatial data is vital either during the various analysis steps and/or as the final product, and this book shows you how to get the most out of R's visualization capabilities. 
The book culminates with examples of cutting-edge applications utilizing R's strengths as a statistical and graphical tool. Resources for Article:  Further resources on this subject: Data visualization [article] Machine Learning in Bioinformatics [article] Specialized Machine Learning Topics [article]
Ensuring Five-star Rating in the MarketPlace

Packt
05 May 2015
43 min read
In this article written by Feroz Pearl Louis and Gaurav Gupta, author of the book Mastering Mobile Test Automation, we will learn that the star rating system on mobile marketplaces, such as Google Play and Application Store, is a source of positive as well as negative feedback for the applications deployed by any organization. This system is used to measure various aspects of the application, such as like functionality, usability, and is a way to quantify the all-elusive measurement-defying factor that organizations yearn to measure called "user experience", besides the obvious ones, such as the appeal and aesthetics of an application's graphical user interface (GUI). If an organization does not spend time in testing the functionality adequately, then it may suffer the consequences and lose the market share to competitors. The challenge to enable different channels such as web applications through mobile browsers, as well as providing different native or hybrid applications to service the customers as per their preferences, often leads to a situation where organizations have to develop both a web version and a hybrid version of the application. (For more resources related to this topic, see here.) At any given point of time, it is almost impossible to test an application completely, and to cover various permutations and combinations of operating systems, their versions, device manufacturers, device specifications with various screen sizes, and application types, with solely employed manual testing techniques. This is where automation comes to the rescue. However, mobile automation in itself is very complex because of the previously explained fragmentation issue. In this article, you will learn how not to fall into the trap of using different tools, frameworks, and techniques to address these differences. In this article, we will cover the following topics: Introduction to mobile test automation Types of mobile application packages Mobile test automation overview Some common factors to be considered during mobile testing, including Interrupt testing, form factor testing, layout testing, and more Overview of different types of mobile automation testing approaches Selection of the best mobile testing approach depending on the project Troubleshooting and best practices Introduction to mobile test automation Before we start learning about mobile test automation, let's understand what functional test automation is. Test automation has always been a fundamental part of the software testing lifecycle for any project. Organizations invariably look to automate the repetitive testing actions in order to utilize the manual effort thus saved for more dynamic and productive tasks. Use of automation tools also allows utilization of system idle time more effectively. To address these needs, there are a plethora of tools available in the market along with various frameworks and implementation techniques. There are both open source and licensed tools available in the market. Tools such as HP's Unified Functional Testing (UFT), formerly known as QuickTest Professional (QTP), TestComplete, Selenium, eggPlant, Ranorex, SilkTest, IBM Functional tester, and numerous others, provide various capabilities for functional automation. 
However, almost all of these tools are designed to support only a single operating system (predominantly Windows—owing to its popularity and the coverage it enjoys across industry verticals), although a few provide support for other lesser-used operating systems, such as Unix, Linux, Sun Solaris, and Apple Macintosh. As far as functional automation is concerned, you don't need to even consider the implications of supporting multiple operating systems in most cases. With Windows as the only operating system that is supported, there aren't any considerations for different operating systems. If the application is a web application, then there may be a need to do cross-browser testing, that is, testing automation on various browser types (Chrome, Firefox, and Safari besides Internet Explorer) and their respective versions. Also, as far as functional automation is considered, there is a very clear demarcation between nonfunctional and functional requirements. So, an automated solution for functional testing is not required to consider factors such as how others processes running on the machine would impact it, or any of the hardware aspects, such as the screen resolution of monitors and the make of the machines (IBM, Lenovo, and others). When it comes to mobile automation, there is an impact on the test suite design due to various other aspects, such as operating systems (Android, iOS, Blackberry, Windows) on which the application is supposed to be accessed, the mode of access (Wi-Fi, 3G, LTE, and so on), the form factor of the devices (tablets, phones, phablets, and so on), and the behavior of the application in various orientation modes (portrait, landscape, and so on). So, apart from normal automation challenges, a robust mobile automation suite should be able to address all these challenges in a reliable way. Fragmentation of the mobile ecosystem is an aspect that compounds this manifold problem. An application should be able to service different operating systems and their flavors provided by original equipment manufacturers (OEMs), such as Apple with iOS, Google's Android with Samsung, HTC, Xiaomi, and numerous others, Windows with Nokia and HTC, and even Blackberry and other lesser-used operating systems and devices. Add to this the complexity of dealing with various form factors, such as phones, tablets, phablets, and their various hybrids. The following figure is a visualization of the Android market fragmentation over various equipment manufacturers, form factors, and OS versions: As we know, test automation is the use of software to automate and control the setting up of test preconditions, execution of tests, test control, and test reporting functions with minimum, or ideally zero, user intervention. Automating the testing for any mobile application is the best way to ensure quality, and to achieve the quick and precise results that are needed to accommodate fast development cycles. Organizations look toward functional test automation primarily to reduce the total cost of ownership over a period of time, and to ensure the quality of the product or application being developed. These advantages are compounded many times for mobile test automation and hence it provides the same advantages, but to a much greater degree. 
The following are the various advantages of mobile test automation for any project: Improved testing efficiency: The same scripts can be used to run uniform validations across different devices, operating systems, and application types (of the same application), thereby reducing the test execution effort considerably. This also means that the return on investment (RoI), which typically takes about 3-5 cycles of executing the conventional functional automation to achieve breakeven, is viable in most cases within the first release itself, as mobile testing is typically repeated on many devices. So, in this case, fragmentation acts as a positive factor if the automation is employed properly, whereas, with pure manual testing, it greatly increases the costs. Consistent and repeatable testing process: Human beings tend to get bored with repetitive tasks and this makes such a test prone to errors. Due to the effect of fragmentation in the mobile world, the same application functionality needs to be validated across various combinations of operating systems, application types, device manufacturers, network conditions, and many more. Hence, the use of automation, which is basically a program, ensures that the same scripts run without any modifications every time. Improved regression testing coverage: The use of automation scripts allows the regression tests to be iterated over multiple combinations of test data. Such data-driven scripts allow the same flow to be validated against different test data combinations. For example, if an application allows users to search for the nearest ATMs in a given area, basically, the same flow would need to be tested with various zip codes as inputs. Hence, the use of automated scripts would instantly allow the test coverage to be increased dramatically. More tests can be run in less time: Since automated scripts can be run in parallel over various devices, the same amount of testing can be compacted inside a much smaller time window in comparison to the manually executed functional testing. With the use of automation scripts that include device setups as preconditions, the execution window can be exponentially reduced, which otherwise would take a manual tester considerable time to complete. 24/7 operation: Although any functional automation suite can lead to better resource utilization in terms of executing more number of scripts in lesser time, with respect to mobile automation, the resources are often expensive mobile devices. If functional testing is done manually, then more of the same devices need to be procured to allow manual testers to carry out tests, and especially, more so in the case of geographically distributed testing teams. Mobile automation scripts, on the other hand, can be triggered remotely and can run unattended, reducing the overall cost of ownership and allowing 24/7 utilization of devices and tools. Human resources are free to perform advanced manual tests: Having automation scripts perform repetitive regression testing tasks frees up the bandwidth of manual testing teams for exploratory tests that are expensive to automate and cumbersome to manage. Hence, the use of automation leads to a balanced approach, where testers can perform more meaningful work and thereby improve the quality of delivered applications. 
In mobiles, since regression is more repetitive on account of the fragmentation problem, the amount of effort saved is manifold, and hence, testers can generally focus on testing aspects such as user interface (UI) testing and user experience testing. Simple reproduction of found defects: Since automation scripts can be executed multiple times on demand and are usually accompanied with reports and screenshots, defect triangulation is easy and is just a matter of re-execution of automation scripts. With pure manual testing, a tester would have to spend effort on manually recreating the defect, capturing all the required details, and then reporting it for defect tracking. With mobile automation, the same flow can be triggered multiple times on a multitude of devices hence, the same defect can be replicated and isolated if it occurs only on a specific set of devices. Accurate and realistic real-life mobile scenarios: Since a mobile requires tests to be specifically designed for variable network conditions and other considerations, such as device screen sizes, orientation, and more, which are difficult to recreate accurately with pure manual testing effort, automation scripts can be developed that accurately to recreate these real-world scenarios in a reliable way. These types of tests are mainly not required to be developed for functional automation suites, and hence, this is one of the major differences. For the most realistic results, conventional wisdom is to test automation on actual devices—without optical recognition, emulation, jailbreaking, or tethering. It is impractical to try to automate everything, especially for mobile devices. However, leveraging commercial off-the-shelf (COTS) tools can vastly reduce the cost of automation and thereby enhance the benefits of the automation process. In the following section, we will discuss in detail the challenges that make mobile automation vastly different from conventional functional automation. The following are some of the issues that make the effective testing automation of mobile applications challenging: Restricted access to native methods to enable automation tools: Traditional functional automation tools utilize native operating system methods to emulate user interactions. This is comparatively easy to do as the operating system allows access. However, the same level of access is not available with a mobile operating system. Also, inter-application interactions are restricted in a mobile operating system and each application is treated as an individual thread. This is normally only allowed when a phone is rooted or when the application under test is modified to allow instrumentation access. So, using other software (the test automation tool) to control user inputs in a mobile application is much more difficult to achieve and consequently slower or more error prone. For example, if an Android application under test makes a call to the photo gallery, then the automated test would not be able to continue because a new application comes to the foreground. Lack of prediction techniques for UI synchronization in a Mobile environment: In addition to the restricted access mentioned in the previous point, mobile application user interface response times are dependent on many variables, such as connection speed and device configuration other than the server response times. Hence, it is much harder to predict the synchronization response in a mobile application. 
Because of this, the automation of a mobile application is more prone to instability unless hardcoded wait times are included in the automation scripts.

Handling location-specific changes in the application behavior: Many mobile applications are designed to interact with the user's location and behave differently as the GPS coordinates change. Since network strengths cannot be controlled externally, it is very difficult to predict the application behavior and to replicate the preconditions of a network strength-specific use case through the use of automation. So, this is another aspect that every automation solution has to address appropriately. Some automation tools allow the simulation of such network conditions, which should be specifically handled while developing the automation suite.

Supporting application behavior changes for varied form factors: As explained earlier, since there are different screen sizes available for mobile devices, the behavior of the application is often specific to the screen size, owing to responsive design techniques that are now quite widely used. Even with a change in the orientation of the device, application use cases have alternative behavior. For example, an application interface loaded in the portrait mode would appear different, with objects in different locations than they would appear in the landscape mode. Hence, automation solutions need to factor this in and ensure that such changes are handled in a robust and scalable way.

Scripting complexity due to diversity in OS: Since many applications are developed to support various OSes, especially mobile web applications, it is a key challenge to handle application differences, such as mobile device input methods, as devices differ in keystrokes, input methods, menu structures, and display properties. With different mobile operating systems in the market, such as Android, iOS, Brew, Symbian, Tizen, Windows, and BlackBerry (RIM), each having its own limitations and variations, the creation of a single script for every device is a challenge that needs to be adequately tackled in order to make the automation solution more robust, maintainable, and scalable enough to support newer devices in the future.

Mobile application packages

With the advancement in wireless technology, big technology companies, such as Apple, Amazon, and Google, came out with a solution that gives users a more immediate way of finding information, making decisions, shopping, and doing countless other things at their fingertips, by developing mobile applications for their products. The main purpose of developing mobile applications was originally to retrieve information using various productivity tools, which include a calculator, e-mail, a calendar, contacts, and many more. However, with more demand for and availability of resources, there was rapid growth and expansion in other categories, such as mobile games, shopping, GPS and location-based services, banking, order tracking, ticket purchases, and recently, mobile medical applications. The distribution platforms, such as the Apple App Store, Google Play, Windows Phone Store, Nokia Store, and BlackBerry Application World, are operated by the owners of the mobile operating systems, and mobile applications are made available to users through them. We usually hear terms such as native application, hybrid application, and web application, so did you ever wonder what they are and what the difference between them is?
Moving ahead, we will discuss the different mobile packages available for use and the salient features that have an impact on the selection of a strategy and testing tool for automation. The different mobile packages available are:

- Native applications
- Web applications
- Hybrid applications

Native applications

Any mobile application needs to be installed through various distribution systems, such as an application store or Google Play. Native applications are applications developed specifically for one platform, such as iOS, Android, Windows, and many more. They can interact with and take full advantage of operating system features and other software that is typically installed on that platform. They have the ability to use device-specific hardware and software, such as the GPS, compass, camera, contact book, and so on. These types of applications can also incorporate gestures, whether standard operating system gestures or new application-defined gestures. Native applications have their entire code developed for a particular operating system and hence have no reusability across operating systems. A native application for iOS would thus have its code written specifically in Objective-C or Swift and hence would not work on an Android device. If the same application needs to be used across different operating systems, which is a very logical requirement for any successful application, then developers have to write a whole new repository of code for the other mobile operating system. This makes application maintenance cumbersome, and keeping features uniform across platforms becomes another challenge that is difficult to manage. However, having different code bases for different operating systems allows the flexibility to build and deploy operating-system-specific customizations easily. Also, today there is a need to follow very strict "look and feel" guidelines for each operating system, and using a native application might be the best way to keep this presentation correct for each OS. Also, testing native applications is usually limited to the operating system in question, and hence the fragmentation is usually limited in impact: only manufacturers and operating system versions need to be considered.

Mobile web applications

A mobile web application is actually not an application but in essence only a website that is accessed via a mobile interface; it has design features specific to the smaller screen and has user interactions such as swipe, scroll, pinch, and zoom built in. These mobile web applications are accessed via a mobile browser and are typically developed using HTML or HTML5. Users first access them as they would access any web page: they navigate to a special URL and then have the option of installing them on their home screen by creating a bookmark for that page. So, in many ways, a web application is hard to differentiate from a native application, as on mobile screens there are usually no visible browser buttons or bars, even though it runs in a mobile browser. A user can perform various native-application-like functionalities, such as swiping to move on to new sections of the application. Most of the native application features are available in an HTML5 web application; for example, it can use the tap-to-call feature, GPS, compass, camera, contact book, and so on.
However, there are still some native features that are inaccessible (at least for now) in a browser, such as push notifications, running an application in the background, accelerometer information (other than detecting landscape or portrait orientations), complex gestures, and more. While web applications are generally very quick to develop, with a lot of ready-to-use libraries and tools, such as AngularJS, Sencha, and jQuery, and also provide a single code base for all operating systems, there is an added testing complexity that adds to the fragmentation problem discussed earlier. There is no dearth of good mobile browsers, and on a mobile device there is very limited control that application developers can have, so users are free to use any mobile browser of their choice, such as Chrome, Safari, UC Browser, Opera Mobile, Opera Mini, Firefox, and many more. Consequently, these applications are generally development-light and testing-heavy. Hence, while developing automation scripts, the solution has to consider this impact, and the tool and technique selected should have the facility to run scripts on all these different browsers. Of course, it could be argued that many applications (native or otherwise) do not take advantage of the extra features provided by native applications. However, if an application really requires native features, you will have to create a native application or, at least, a hybrid application.

Hybrid applications

Hybrid applications are combinations of both native applications and web applications, and because of that, many people incorrectly call them web applications. Like native applications, they are installed on a device through an application store and can take advantage of the many device features available. Just like web applications, hybrid applications are dependent on HTML being rendered in a browser, with the caveat that the browser is embedded within the application. So, for an existing web page, companies can build hybrid applications as wrappers without spending significant effort and resources, and they can get their presence known in the application stores and have a star rating! Web applications usually do not have one and hence have the added disadvantage of lacking the automatic publicity that a five-star rating provides in the mobile stores. Because of cross-platform development and significantly lower development costs, hybrid applications are becoming popular, as the same HTML code components are reusable on different mobile operating systems. The other added advantage is that hybrid applications can have the same code base wrapped inside an operating-system-specific shell, thereby making them development-light. By removing the problem posed by the various device browsers, hybrid applications can be more tightly controlled, making them less prone to fragmentation, at least on the browser side. However, since they are hybrid applications, any automation testing solution should have the ability to test across different operating system and version combinations, with the ability to differentiate between various operating-system-specific functionality differences. Various tools, such as PhoneGap and Sencha, allow developers to code and design an application across various platforms just by using the power of HTML.

Factors to be considered during mobile testing

In many aspects, the approach to mobile automation testing is not so different from that of any other type of testing.
In terms of methodology and experience with the actual testing tools, what testers have learned in testing generally can be applied to mobile automation testing. So, a question might come to your mind: where does the difference lie, and how should you accommodate these differences? In the following section, we will see some of the factors that are highly relevant to mobile automation testing and require particular attention; if handled correctly, they help ensure a successful mobile testing effort. Some of the factors that need to be taken care of in testing mobile applications are as follows:

Testing for cross-device and platform coverage: It is not feasible to test an application on each and every available device, because of the plethora of devices that support the application across different platforms, which means you have to strategically choose a limited but sufficient set of physical devices. You need to remember that testing on one device cannot ensure that the application will work on any other device, even one of the same make, the same operating system version, or the same platform. So, it is important that, at the very least, most of the critical features, if not all, are tested on a physical device. Otherwise, the application always runs a risk of potential failure on an untested device, especially when the target audience for the application is widespread, such as for a game or a banking application. The use of emulated devices is one of the common ways to overcome the issue of testing on numerous physical devices. Although this approach is generally less expensive, we cannot rely completely on emulated devices for the results they present, and with emulators it is quite possible that test conditions are not close enough to real-life scenarios. So, adequate coverage of different physical devices is required in order to negate the effects of fragmentation and have sufficient representation of the following variations:

- Varying screen sizes
- Different form factors
- Different pixel densities and resolutions
- Different input methods, such as QWERTY, touch screen, and more
- Different user input methods, such as swipes, gestures, scrolling, and many more

Testing different versions of an operating system of the same platform: For thorough testing, we need to test the application on all major platforms, such as Android, iOS, Windows, and others, for the target customer base, but each one of them has numerous versions available that keep on growing regularly. Most commonly, automated testing on the latest version of any operating system can be sufficient, as operating systems are generally backward compatible. However, due to the fragmentation of the Android OS, the application would still need to be tested on at least the most commonly used versions besides the latest ones, which in some cases may be significantly behind the latest version. This is because there may be many Android devices that are on an earlier version of Android and are not supported by the latest versions.

Testing on various network types and network providers: Most mobile applications, such as banking- or information-search-related applications, require network connectivity, such as CDMA or GSM, at least partially, if not completely. If the application talks to a server to exchange information back and forth, testing on various (at least all major) network providers is important.
The network infrastructure used by network providers may affect data communication between the application and the backend. Apart from the different network providers, an application needs to be tested on other modes of network communication, such as a Wi-Fi network, as well.

Testing for mobile-environment-specific constraints: The mobile environment is very dynamic and has constraints, such as limited computing resources, available memory, incoming calls or messages, network switching, and battery life, as well as a lot of other sensors and features present in the device, such as the accelerometer, gyroscope, GPS, memory cards, and camera; an application's behavior depends on these factors. An application should integrate or interact (if required) with these features gracefully, and sufficient testing needs to be carried out in various situations to ensure this. However, oftentimes it is not practically feasible to recreate all permutations and combinations of these factors, and hence a strategic approach needs to be taken to ensure sufficient coverage.

Testing for the unpredictability of a mobile user: A tester has to be more cautious and should expand the horizon while testing the applications. They should make sure that an application provides an overall good response to all users and a good user experience; hence, User Experience (UX) testing invariably needs to be performed to a certain degree for all mobile applications. A mobile application's audience comprises various people, ranging from non-technical people to skilled technical users and from children to older users. Each user has their own style of using the application and their own expectations of it. An older user will generally be much calmer about an application's performance than a younger one. In general, we can say that mobile users have set incredibly high expectations of the applications available in the marketplace.

Mobile automation testing approaches

In this section, you will understand the different approaches used for the automation of a mobile application and their salient points. There are, broadly speaking, four different approaches or techniques available for mobile application test automation:

- Test automation using physically present real devices
- Test automation using emulators and simulators
- Mobile web application test automation through the user agent simulation technique
- Cloud-solutions-based test automation

Automation using real devices

As the name suggests, this technique is based on the usage of real devices that are physically present with the test automation team. Since the technique is based on real devices, it is a natural consequence that the Application Under Test (AUT) is also tested over a real network (GSM, CDMA, or Wi-Fi). To establish connectivity between the automation tool and the devices, any of the communication mechanisms such as USB, Bluetooth, or Wi-Fi can be used; however, the most commonly used and the most reliable one is the USB connection. After the connection is established between the machine on which the automation tool is installed and the Device Under Test (DUT), the automation scripts can capture object properties of the AUT, and later the developed scripts can be executed on other devices as well, with minor modifications. There are numerous automation tools, both licensed and open source, available for mobile automation.
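To give a flavor of this approach, the following is a minimal, hypothetical sketch of a real-device script written with Appium's Java client (one of the open source options listed next); the device serial, package, activity, and element IDs are placeholders and would need to match your own application and setup:

import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.remote.DesiredCapabilities;
import java.net.URL;

public class RealDeviceLoginTest {
    public static void main(String[] args) throws Exception {
        // Capabilities describing the physical Android device attached over USB
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "Android Device");
        caps.setCapability("udid", "0123456789ABCDEF");        // serial reported by 'adb devices' (placeholder)
        caps.setCapability("appPackage", "com.example.bank");  // placeholder package name
        caps.setCapability("appActivity", ".LoginActivity");   // placeholder activity name

        // Connect to a locally running Appium server
        AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
        try {
            // Drive a simple login flow through resource IDs exposed by the AUT
            driver.findElement(By.id("com.example.bank:id/username")).sendKeys("testuser");
            driver.findElement(By.id("com.example.bank:id/password")).sendKeys("secret");
            driver.findElement(By.id("com.example.bank:id/submit")).click();
        } finally {
            driver.quit();
        }
    }
}

The same script can be pointed at a different USB-connected device simply by changing the capabilities, which is what makes this approach attractive despite the hardware cost.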
Some commonly used licensed tools are:

- Experitest SeeTest
- TestPlant eggPlant Mobile/eggOn
- Jamo Solutions M-eux Test
- ZAP-fiX

Prominent tools for Android and iOS automation are:

- Selenium, with solutions such as Selendroid and Appium along with the iOS and Android drivers
- MonkeyTalk (formerly FoneMonkey)

The following are the salient features of this approach:

- The AUT is accessed on devices either by using a real mobile network or a Wi-Fi network, and it can also be accessed through the intranet of the machine to which the device is connected
- The automation testing tool is installed on a desktop that uses USB or Wi-Fi connectivity to control the devices under test

Steps to set up automation

For automation on real devices, scripts are executed on devices connected over USB or Wi-Fi, with commands sent to the devices via the execution tool. The following is a step-by-step description of how to perform automation on real devices:

1. Determine the device connectivity solution (USB or Wi-Fi connectivity) based on the available setup. In some cases, USB connectivity is not enabled due to security policies, and only in those cases is a Wi-Fi connection utilized.
2. Identify the tool to be used for the automation, based on a tool feasibility study of the application.
3. Procure the required licenses (seat or concurrent) if a licensed tool is selected. License procurement might mean that lengthy agreements need to be signed by both parties, besides arranging for the payment of services such as support. So, this step should be well planned, with enough buffer time.
4. If the existing automation setup is to be leveraged, then an additional license needs to be acquired that corresponds to the tool (such as Quick Test Professional, Quality Center, and more). In some cases, you might also have to integrate existing automation scripts developed with tools such as Quick Test Professional/Unified Functional Testing with the automation scripts developed for mobile. In such a case, the framework already in place needs to be modified.
5. Install the tools on the automation computer and establish the connectivity with the real devices. Installation may not be as simple as just running an executable file when it comes to mobile automation. There are various network-level settings and additional drivers that are needed to connect the computer and to control various mobile devices from it. Hence, all this should be done and planned well in advance.
6. Script the test cases and execute them on real devices.

Limitations of this automation

This approach has the following limitations:

- The overall cost can be high, as multiple devices need to be procured for different teams and testers
- Maintenance and physical security can be an overhead
- Script maintenance can be delayed if the testing cycles of the functional and automation teams overlap

Emulator-based automation

Emulators are programs that replicate the behavior of a mobile operating system and, to some extent, the device features on a computer. In essence, these programs are used to create virtual devices, so any mobile application can be deployed on such virtual devices and then tested without the use of a real device. Ideally speaking, there are two types of mobile device virtualization programs: emulators and simulators. From a purely theoretical standpoint, the following are the differences between an emulator and a simulator.
A device emulator is a desktop application that emulates both the mobile device hardware and its operating system, thus allowing us to test applications with a smaller degree of tolerance and better accuracy. There are also operating system emulators that don't represent any real device hardware but rather the operating system as a whole; these exist for Windows Mobile and Android. A simulator, on the other hand, is a simpler application that simulates some of the behavior of a device, does not emulate hardware, and does not work over the real operating system. These tools are simpler and less useful than emulators. A simulator may be created by the device manufacturer or by some other company that offers a simulation environment for developers. Thus, simulator programs have lower accuracy than emulator programs. For the sake of keeping the discussion simple, we will refer to both as emulators in this article.

Since this technique does not use real devices, it is a natural consequence that the AUT is not tested over a real network (GSM, CDMA, or Wi-Fi), and the network connection of the machine is utilized to make a connection with the application server (if it connects to a server, which around 90 percent of mobile applications do). Since the virtual devices are available on the computer, no external connection is required between the device's operating system and the automation tool. However, automating an emulator is not as simple as automating any other program, because the actual AUT runs inside the shell of the virtual device. So, special configuration needs to be enabled in the automation tools to enable automation on the virtual device.

[Figure: an Android emulator running on a Windows 7 computer]

In most projects, this technique is used for prelaunch testing of the application, but there are cases where emulators are automated to a great extent. However, since the emulator is essentially more limited in scope than real devices, mobile-network-specific behavior and certain other aspects, such as memory utilization, cannot be relied upon while automating tests with emulators. There are numerous automation tools, both licensed and open source, available for mobile automation on these virtual devices, and ideally, emulators for the various mobile platforms can be automated with most of the tools that support real-device automation.

The prominent licensed tools are:

- ExperiTest SeeTest
- TestPlant eggPlant Mobile/eggOn
- Jamo Solutions M-eux Test

Tools such as Selenium and ExperiTest SeeTest can be used to launch device platform emulators and execute scripts on the AUT. The prominent free-to-use tools for emulator automation are:

- Selenium WebDriver
- Appium
- MonkeyTalk (formerly FoneMonkey)

Since emulators are themselves software that runs on other machines, device-specific configurations need to be performed prior to test automation and have to be handled in the scripts. Conceptually, the technique works as follows: the emulator and simulator programs are installed on a computer with a given operating system, such as Windows, Linux, or Mac, which then virtualizes the mobile operating system, such as Android, iOS, RIM, or Windows; these virtual devices can subsequently be used to run scripts that emulate the behavior of an application on real devices.

Steps to set up automation

The following are the steps to set up the automation process for this approach:

1. Identify the various platforms for which the AUT needs to be automated.
2. Establish the connectivity to the AUT by enabling firewall access in the required network for mobile applications.
3. Identify the various devices, platforms, emulators, and device configurations according to which the tests need to be carried out.
4. Install emulators/simulators for the various platforms.
5. Create scripts and execute them across multiple emulators/simulators.

Advantages

This approach has the following advantages:

- Standalone emulators can be utilized without the need for real devices
- No additional connectivity is required for automation
- It provides support for iOS and Android with freeware
- It provides support for all platforms and types of applications with licensed tools, such as Jamo Solutions M-eux Test and ExperiTest SeeTest

Limitations

This approach has the following limitations:

- It can be difficult to automate, as the emulators and simulators are themselves not thoroughly tested software and might have unknown bugs.
- Selenium WebDriver cannot be used to automate Android applications in some versions due to a bug in the Android emulator.
- It might sometimes be difficult to triangulate a defect that is detected on a virtual device, and you might need to recreate it on a real device first. In many cases, it has been observed that defects caught on emulators are not reproducible on real devices.
- For iOS simulators, access to a Mac machine with Xcode is required, which can be difficult to set up in a secure Offshore Development Center (ODC) due to security restrictions.

User agent-simulation-based automation

The third technique is the simplest of all. However, it is also very limited in its scope of applicability: it can be used only for mobile web applications and only to a very limited extent. Hence, it is generally only used to automate the functional regression testing of mobile web applications and rarely used for GUI validations. The user agent is the string that web servers use to identify information about the requester, such as its operating system and the browser that is accessing the site. This string is normally sent with the HTTP/HTTPS request to identify the requester's details to the server. Based on this information, a server presents the required interface to the requesting browser. This approach utilizes the browser user agent manipulation technique.

[Schematic: the browser user agent manipulation approach]

In this approach, an external program or a browser add-on is used to override the user agent information that is sent to the web application server, so that the requesting system identifies itself as a mobile device instead of sending its real information. So, for example, when a web application URL such as https://www.yahoo.com is accessed from a mobile device, the application server detects the requester to be a mobile device and redirects it to https://mobile.yahoo.com/, thereby presenting the mobile view. If the user agent information sent by a desktop browser is overridden to indicate that the request is coming from a Safari browser on an iPhone, then that desktop browser will also be presented with the mobile view.

[Screenshot: the application server responds with the mobile view when it detects that the request comes from an iPhone 4]

Since the mobile web application is accessed entirely from the computer, automation can be done using traditional web browser automation tools, such as Quick Test Professional/Unified Functional Testing or Selenium.
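As an illustration, the following is a minimal, hypothetical Selenium WebDriver sketch that overrides the Chrome user agent so that a desktop session is served the mobile view; the user agent string and URL are placeholders rather than values taken from a real project:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class UserAgentSimulationTest {
    public static void main(String[] args) {
        // Pretend to be Mobile Safari on an iPhone (placeholder user agent string)
        String iphoneUserAgent = "Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) "
                + "AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12A365 Safari/600.1.4";

        ChromeOptions options = new ChromeOptions();
        options.addArguments("--user-agent=" + iphoneUserAgent);

        WebDriver driver = new ChromeDriver(options);
        try {
            // The server should now serve or redirect to its mobile view
            driver.get("https://www.example.com");
            System.out.println("Mobile view title: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}

The same idea can be reproduced manually with the user-agent switcher add-ons listed in the next section when the tests are driven through tools such as QTP/UFT instead of Selenium.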
The salient features of this technique are as follows:

- With browser user agent manipulation, any mobile platform can be simulated
- Browser user agent manipulation is limited to mobile web applications only and does not extend to native and hybrid applications
- Browser simulation can be done using freeware tools that are available for all leading web browsers

The common user agent switching tools are:

- Bayden UAPick for IE
- The User Agent Switcher add-on for Firefox
- Fiddler for IE
- Modify Headers for Firefox
- The UA Spoofer add-on for Chrome
- The built-in device emulator in Chrome, which can be accessed from the developer tools

Steps to set up the automation

The following are the steps to set up the automation process for this approach:

1. Identify the various platforms for which the AUT needs to be validated.
2. Identify the user-agent switcher tool that corresponds to the browser that will be leveraged for testing.
3. Identify the user-agent string for all platforms in scope and set up the configuration in the user-agent switcher tool.
4. Leverage any functional testing tool that offers testing capabilities through a web browser, for example, Quick Test Professional, RFT, SilkTest, or Selenium WebDriver.

Advantages

This approach has the following advantages:

- All platforms can be automated with little modification of scripts
- The automation solution can be implemented quickly
- Open source software, such as Selenium, can be leveraged for automation
- The existing automation setup can be leveraged

Limitations

This approach has the following limitations:

- It is the least representative of real device-based tests
- Device-specific issues cannot be captured through this approach
- It cannot be used for UI-related test cases
- It supports only web-based mobile applications

Cloud-based automation

This technique provides most of the capabilities needed for test automation, but it is also one of the more expensive techniques. In this technique, automation is done on real devices connected to real networks, accessed remotely through cloud-based solutions, such as Perfecto Mobile, Mobile Labs, Sauce Labs, and DeviceAnywhere. The salient features of this technique are as follows:

- Cloud-based tools, such as Perfecto Mobile and DeviceAnywhere, provide a WYSIWYG (What You See Is What You Get) solution for automation
- Both OCR (Optical Character Recognition) and native object recognition and analysis are utilized in these tools for automation
- These tools also provide simple high-level keywords, such as Launch Browser, Call Me, and many more, that can be used to design test cases
- The scripts that are created need to be re-recorded for every new type of device, due to differences between the interfaces and GUI objects
- Mobile devices are accessed via a web interface or a thick client by teams in various regions
- The devices are connected to real networks that use Wi-Fi or various mobile network operators (AT&T, Vodafone, and more)
- The AUT is accessed via the Internet or through a secure intranet connection
- This approach offers integration with common automation tools, such as Quick Test Professional/UFT and Selenium

Steps to set up the automation

The following are the steps to set up the automation process for this approach:

1. Identify the various platforms and devices for which the AUT needs to be automated.
2. Establish connectivity to the AUT by enabling firewall access for mobile web applications.
3. Open an account with the chosen cloud solution provider and negotiate the licenses for automation, or set up a private cloud infrastructure within your company premises.
4. Install the cloud service provider's client-side software along with the automation plugin for the relevant tool of choice (UFT or Selenium).
5. Book the devices as per testing needs (this usage normally has a cost associated with it).
6. Create scripts and execute them across multiple devices.

Advantages

This approach has the following advantages:

- It allows us to run automated tests on devices from multiple manufacturers (hardware), for example, Samsung, Apple, Sony, and Nokia
- A script can be executed on multiple mobile devices from the same manufacturer (models), for example, Galaxy SII, Galaxy SIII, iPhone 4S, iPhone 5, and iPad 2
- Scripts can be tested on different platforms (software), for example, Android 2.3 - 4.4, iOS 4-8, Symbian, and Bada

Limitations

This approach has the following limitations:

- Network latency may be experienced
- The cost can be high, as fees depend on device usage
- Setting up a private mobile lab is costly, but it may be necessary due to an organization's security policies, particularly in legally regulated industries, such as BFSI organizations

Types of mobile application tests

Apart from the usual functional tests, which ensure that the application is working as per the requirements, there are a few more types that need to be handled by an automation solution:

Interrupt testing: A mobile application, while functioning, may face several interruptions that can affect its performance or functionality. The different types of interruptions that can adversely affect the functionality of an application are:

- Incoming calls and SMS or MMS
- Receiving notifications, such as push notifications
- Sudden removal of the battery
- Transfer of data through a data cable by inserting or removing the cable
- Network/data loss or recovery
- Turning a media player off or on

Ideally, an application should be able to handle these interruptions, for example, by going into a suspended state whenever an interruption occurs and resuming afterwards. So, we should design automation scripts in such a way that they can not only test these interrupts but also reliably reproduce them at the requisite step of the flow.

UI testing: The user interface of a mobile application is designed to support various screen sizes; hence, the various components of a mobile application screen appear differently or, in some cases, even behave differently depending on the OS or device make. Therefore, any automation script needs to be able to work with varying components and also be able to verify each component's behavior. The use of automation ensures that the application is quickly tested and that fixes are regression tested across different applications. Since the UI is where the end users interact with the application, the use of a robust automation suite is the best way to ensure that the application is thoroughly tested and rolled out to users in the most cost-effective manner. A properly tested application makes the end user experience more seamless, and thereby the application under test is more likely to get a better star rating, which is key to its commercial success.

Installation testing: Installation testing ensures that the installation process goes smoothly without the user facing any difficulty. This type of testing includes not only installing an application but also updating and uninstalling it.
The use of automation to install and uninstall any application as per the defined process is one of the most cost-effective ways to do this type of testing.

Form factor testing: Applications may behave differently (especially in terms of the user interface) on smartphones and tablets. If the application under test supports both smartphones and tablets, it should be tested on both form factors. This can be treated as an extension of the UI testing type.

Selection of the best mobile testing approach

While selecting a suitable mobile testing approach, you need to look at the following important considerations:

Availability of automation tools: The availability of a relevant mobile automation tool plays a big role in the selection and implementation of the mobile automation approach.

Mode of connection of devices: This is one of the primary, if not the most important, aspects that play a pivotal role in the selection of a mobile automation approach. There are different ways in which devices can be connected to the automation tools, such as:

- Using a USB connection
- Using a Wi-Fi connection
- Using Bluetooth connectivity (only for a very limited set of tools)
- Using localized hotspots, that is, having one device act as a hotspot and other devices ride on its network for access
- Using a cloud connection
- Using emulators and simulators

All these approaches need specific configurations on the machines and in the automation tools, which may sometimes be restricted; any automation solution should be able to work around the constraints of the various setups.

The key consideration is the degree of tolerance of the automation solution. The four different approaches that we discussed earlier in this article each have a different level of accuracy. The least accurate is the user agent-based approach, because it relies just on a web browser's rendering on a Windows machine rather than a real device. The most accurate approach, in terms of closeness to the real-world situation, is the use of real devices. However, this approach suffers from restrictions in terms of the scalability of the solution, that is, supporting multiple devices simultaneously. The use of emulators and simulators is also prone to inaccuracies with respect to real-device features, such as RAM, screen resolutions, pixel sizes, and many more. When working with cloud-based solutions, a remote connection is established with the devices, but there can be unwanted signal delays and screen refresh issues due to network bandwidth constraints. So, any approach that is selected for automation should factor in the degree of tolerance that is acceptable for the automation suite.

For example, for a mobile application that makes heavy use of graphics and advanced HTML5 controls, such as embedded videos and music, automation should not be carried out with an emulator solution, as the degree of accuracy would suffer adversely, usually beyond the acceptable tolerance limit. Consider another application that is a simple mobile web application with no complex controls and that doesn't rely on any mobile-device-specific controls, such as camera controls, or touch-screen-sensitive controls, such as pinch and zoom. Such an application can easily be automated with the user agent-based approach without any significant impact on the degree of accuracy. If an application uses network bandwidth very heavily, then it is not recommended to use the cloud-based approach, as it will suffer from network issues more severely and would have unhandled exceptions in the automation suite.
Conversely, the cloud-based approach is most suitable for organizations that have geographically and logically dispersed teams that can use remotely connected devices from a single web interface. This approach is also very suitable when there are restrictions on the usage of other device connection approaches, such as USB, Wi-Fi, or Bluetooth. Although this approach does need additional tools to enable cloud access, it is a worthwhile investment for organizations that have a high need for system and network security, such as banking and financial organizations.

Troubleshooting and best practices

The following best practices should ideally be followed for any mobile automation project:

- The mode of connectivity between the AUT, the DUT, and the computer on which the automation tool is installed should be clearly established, with due consideration of the organization's security policies. In most cases, there is no way to work around the absence of USB connectivity other than to use a cloud-based automation solution. So, before starting a project, the physical setup should be thoroughly vetted.
- The various operating systems and versions, mobile equipment manufacturers, and different form factors that need to be supported by the application should be identified, and the automation solution should be designed to support all of them. If you start automating before identifying all the supported devices, there will invariably be a lot of rework required to make the scripts work with other devices. Hence, automation scripts should be made for all supported OSes and devices right from the design stage.
- User agent-based automation can only be implemented for mobile web applications. It is a cost-effective and quick way to implement a solution, since it involves the automation of just a few basic features. However, this technique should not be relied upon for validating GUI components and should always be accompanied by a round of device testing.
- If any simulation or emulation technique (user agent or emulators/simulators) is used for automation, then it should strictly be used for functional regression testing on different device configurations. Ideally, projects utilizing these solutions should also have a GUI testing round with real devices, at least for the first release.
- If a geographically distributed team is to utilize the automation solution, for example, an offshore-onsite team that needs to use the same devices, then the most cost-effective solution in the long run is cloud-based automation. Even though the initial setup cost of the cloud solution is generally the highest of the four techniques, different teams can multiplex and use devices from different locations, so the overall cost is offset by using fewer devices overall.
- During the use of emulators/simulators, the automation scripts should be designed to trigger the virtualization program with the required settings for memory, RAM, and the requisite version of the operating system, so that no manual intervention is required to start the programs before triggering the execution. This way, scripts can also be triggered remotely and in an unmonitored way.
- Irrespective of the technique utilized, a proper framework should be implemented with the automation solution.

Summary

In this article, we learned what mobile test automation is, what the different mobile packages available are, and what factors should be considered during mobile automation testing.
We then moved on to learn about the different types of approaches and the selection of the best approach according to specific project requirements. So, it is evident that with the use of automation to test any mobile application, a good user experience can be ensured with defect-free software, and a good star rating can be expected for the AUT.

Resources for Article:

Further resources on this subject:

- DOM and QTP [article]
- Automated testing using Robotium [article]
- Managing Test Structure with Robot Framework [article]

Packt
04 May 2015
7 min read
Save for later

Git Teaches – Great Tools Don't Make Great Craftsmen

Packt
04 May 2015
7 min read
This article is written by Ferdinando Santacroce, author of the book, Git Essentials. (For more resources related to this topic, see here.)

Git is a powerful tool. If you need to retain multiple versions of files, even if you are not a software developer, Git can perform this task easily. As a Git user, in my humble career, I have never found a dead-end street, a circumstance where I had to give up because of a lack of solutions. Git always offers a wide range of alternatives, even when you make a mistake; you can use either git revert to revert your change, or git reset if there is no need to preserve the previous commit. Another key strength of Git is its ability to let your project grow and take different directions when needed. Git branching is a killer feature of this tool. Every versioning system is able to manage branches. However, in Git, using this feature is a pleasure; it is super-fast (it does all the work locally), and it does not require a great amount of space. For those who are used to working with other versioning systems, such as Subversion, this probably makes a difference. In my career as a developer, I have seen situations where developers wouldn't create new branches for new features, because branching was a time-consuming process. Their versioning system, on large repositories, required 5-6 minutes to create a new branch. Git usually doesn't concede alibis. The git branch command and the consecutive git merge operations are fast and very reliable. Even when you commit, git commit doesn't allow you to store a new commit without a message, protecting you from your own laziness and helping you grow a talking repository with a clear history, not a mute one.

However, Git can't perform miracles. So, to get the most out of it, we need a little discipline. This discipline distinguishes an apprentice from a good craftsman. One of the most difficult things in software development is the sharing of a common code base. Often, programmers are solitary people who love instructions that are typed in their preferred editor, which helps them make working software without any hassles. However, in professional software development, you usually deal with big projects that require more than a single developer at a time; everyone contributes their own code. At this point, if you don't have an effective tool to share code, like Git, and a little bit of discipline, you can easily screw up. When I talk about discipline, I talk about two main concepts: writing good commits and using the right workflow. Let's start with the first point. What is a good commit? What makes a commit either good or bad? Well, you will come across highly opinionated answers to this question, so here I will provide mine. First of all, good commits are those commits that do not mix apples and oranges; when you commit something, you have to focus on resolving one problem at a time (fix a bug, implement a new feature, or make a clear step forward towards the final target) without modifying anything that is not strictly related to the task you are working on. While writing some code, especially when you have to refactor or modify existing code, you may too often fall into the temptation of fixing some other things here and there. This is just your nature, I know. Developers hate ugly code, even though they often are the ones who wrote it some time ago; they can't leave it there even for a minute. It's a compulsive reaction.
So, in a matter of a few minutes, you end up with a ton of modified files with dozens of cross-modifications that are quite difficult to describe in a commit message. They are also hard to merge and quite impossible to cherry-pick, if necessary. So, one of the first things that you have to learn is to make consistent commits. It has to become a habit, and we all know that habits are hard to grow and hard to break. There are some simple tricks that have helped me become a better committer day by day (yes, I'm still far from becoming a good committer). One of the most effective tricks that you can use to make consistent commits is to keep a pencil and paper with you; when you find something wrong with your code that is not related to what you are working on at the moment, pick up the pencil and write down a note on a piece of paper. Don't work on it immediately; instead, remind yourself that there is something that you have to fix in the next commit. In the same manner, when you feel that the feature you're going to implement is either too wide for a single commit or requires more than a bunch of hours to finish (I tend to avoid long coding sessions), make an effort and try to split the work into two or three parts, writing down these steps on the paper. Thus, you are unconsciously wrapping up your next commits.

Another way to avoid a loss of focus is to write the commit message before you start coding. This may sound a little weird, but having the target of your actual work in front of your eyes helps a lot. If you practice Test Driven Development (TDD), or even better, Behavior Driven Development (BDD), you probably already know that they have a huge side effect despite their main testing purpose: they force you to look at the final results, maintaining the focus on what the code has to do, not the implementation details. Writing preemptive commit messages is the same thing. When the target of your commit is clear and you can keep an eye on it every time you look at your paper notebook, then you know that you can code peacefully, because you will not go off the rails.

Now that we have a clear vision of what makes a good commit, let's move your attention to good workflows. Generally speaking, sharing a common way to work is the most taken-for-granted advice that you can give. However, it often represents exactly the biggest problem when you look at underperforming firms. Having a versioning workflow that is decided by common agreement is the most important thing for a development team (even for a team of one), because it lets you feel comfortable even in case of emergency. When I talk about emergencies, I talk about common hitches for a software developer: urgently fixing a bug on a specific software version, developing different features in parallel, and building beta or testing versions to let testers and users give you feedback. There are plenty of good Git workflows out there. You can take inspiration from them. You can use a workflow as it is, or you can take some inspiration and adapt it to fit your project's peculiarities. However, the important thing is that you keep using it, not only to be consistent (don't cheat!), but also to adapt it when the premises change. Don't blindly follow a workflow if you don't feel comfortable with it, and don't even try to use the same workflow every time.
There are good workflows for web projects, where there's usually no need to keep multiple versions of the same software, and others that fit desktop applications better, where multiple versions are the order of the day. Every kind of project needs its perfectly tailored workflow. The last thing I wish to suggest to developers interested in Git is to share common sense. Good developers share coding standards, and a good team has to share the same committing policy and the same workflow. Lone cowboys and outlaws represent a problem even in software development, and not just in Spaghetti Western movies.

Resources for Article:

Further resources on this subject:

- Configuration [article]
- Maintaining Your GitLab Instance [article]
- Searching and Resolving Conflicts [article]
article-image-getting-started-codeception
Packt
04 May 2015
17 min read
Save for later

Getting started with Codeception

Packt
04 May 2015
17 min read
In this article by Matteo Pescarin, the author of Learning Yii Testing, we will get introduced to Codeception. Not everyone has been exposed to testing. The ones who actually have are aware of the quirks and limitations of the testing tools they've used. Some might be more efficient than others, and in either case, you had to rely on the situation that was presented to you: legacy code, hard-to-test architectures, no automation, no support whatsoever for the tools, and other setup problems, just to name a few. Only certain companies, because they have either the right skillsets or the budget, invest in testing, but most of them don't have the capacity to see beyond the point that quality assurance is important. Getting the testing infrastructure and tools in place is the immediate step after getting developers to be responsible for their own code and to test it. (For more resources related to this topic, see here.)

Even if testing is not something particularly new in the programming world, PHP has always had a weak point regarding it. Its history is not that of a pure-bred programming language done with all the nice little details, and only recently has PHP found itself in a better position and started to become more appreciated. Because of this, the only and most important tool that came out has been PHPUnit, which was released just 10 years ago, in 2004, thanks to the efforts of Sebastian Bergmann. PHPUnit was, and sometimes still is, difficult to master and understand. It requires time and dedication, particularly if you are coming from a non-testing background. PHPUnit simply provided a low-level framework to implement unit tests and, up to a certain point, integration tests, with the ability to create mocks and fakes when needed. Although it is still the quickest way to discover bugs, it didn't cover everything, and using it to create large integration tests ends up being an almost impossible task. On top of this, since version 3.7, when PHPUnit switched to a different autoloading mechanism and moved away from PEAR, it caused several headaches, rendering most installations unusable. Other tools developed since then mostly come from other environments, requirements, programming languages, and frameworks. Some of these tools were incredibly strong and well built, but they came with their own way of declaring tests and interacting with the application, their own set of rules, and their own configuration specifics.

A modular framework rather than just another tool

Clearly, mastering all these tools requires a bit of understanding, and the learning curve isn't promised to be the same across all of them. So, if this is the current panorama, why create another tool if you will end up in the same situation you were in before? Well, one of the most important things to be understood about Codeception is that it's not just a tool, but rather a full stack, as noted on the Codeception site: a suite of frameworks, or, if you want to go meta, a framework for frameworks. Codeception provides a uniform way to design different types of test by using, as much as possible, the same semantics and logic, and a way to make the whole testing infrastructure more coherent and approachable.

Outlining concepts behind Codeception

Codeception has been created with the following basic concepts in mind:

Easy to read: By using a declarative syntax close to natural language, tests can be read and interpreted quite easily, making them an ideal candidate to be used as documentation for the application.
Any stakeholder and engineer close to the project can ensure that tests are written correctly and cover the required scenarios without knowing any special lingo. Codeception can also generate BDD-style test scenarios from code test cases.

Easy to write: As we already underlined, every testing framework uses its own syntax or language to write tests, resulting in some degree of difficulty when switching from one suite to the other, without taking into account the learning curve each one has. Codeception tries to bridge this gap of knowledge by using a common declarative language. Further, abstractions provide a comfortable environment that makes maintenance simple.

Easy to debug: Codeception is born with the ability to see what's behind the scenes without messing around with the configuration files or doing random print_r calls around your code.

On top of all this, Codeception has also been written with modularity and extensibility in mind, so that organizing your code is simple while also promoting code reuse throughout your tests. But let's see what's provided by Codeception in more detail.

Types of tests

As we've seen, Codeception provides three basic types of test:

- Unit tests
- Functional tests
- Acceptance tests

Each one of them is self-contained in its own folder, where you can find anything needed, from the configuration and the actual tests to any additional piece of information that is valuable, such as the fixtures, database snapshots, or specific data to be fed to your tests. In order to start writing tests, you need to initialize all the required classes that will allow you to run your tests, and you can do this by invoking codecept with the build argument:

$ cd tests
$ ../vendor/bin/codecept build
Building Actor classes for suites: functional, acceptance, unit
FunctionalTester includes modules: Filesystem, Yii2
FunctionalTester.php generated successfully. 61 methods added
AcceptanceTester includes modules: PhpBrowser
AcceptanceTester.php generated successfully. 47 methods added
UnitTester includes modules:
UnitTester.php generated successfully. 0 methods added
$

The codecept build command needs to be run every time you modify any configuration file owned by Codeception, when adding or removing any module; in other words, whenever you modify any of the .suite.yml files available in the /tests folder. What you have probably already noticed in the preceding output is the presence of a very peculiar naming system for the test classes. Codeception introduces the Guys, which have been renamed in Yii terminology as Testers, and they are as follows:

- AcceptanceTester: This is used for acceptance tests
- FunctionalTester: This is used for functional tests
- UnitTester: This is used for unit tests

These will become your main interaction points with (most of) the tests, and we will see why. By using such nomenclature, Codeception shifts the point of attention from the code itself to the person that is meant to be acting out the tests you will be writing. This way, we will become more fluent in thinking with a BDD-like mindset rather than trying to figure out all the possible solutions that could be covered, while losing focus on what we're trying to achieve. Once again, BDD is an improvement over TDD, because it declares in a more detailed way what needs to be tested and what doesn't.

AcceptanceTester

AcceptanceTester can be seen as a person who does not have any knowledge of the technologies used and tries to verify the acceptance criteria that have been defined at the beginning.
If we want to re-write our previously defined acceptance tests in a more standardized BDD way, we need to remember the structure of a so-called user story. The story should have a clear title, a short introduction that specifies the role that is involved in obtaining a certain result or effect, and the value that this will reflect. Following this, we will then need to specify the various scenarios or acceptance criteria, which are defined by outlining the initial scenario, the trigger event, and the expected outcome in one or more clauses. Let's discuss login using a modal window, which is one of the two features we are going to implement in our application. Story title – successful user login I, as an acceptance tester, want to log in into the application from any page. Scenario 1: Log in from the homepage      I am on the homepage.      I click on the login link.      I enter my username.      I enter my password.      I press submit.      The login link now reads "logout (<username>)" and I'm still on the homepage. Scenario 2: Log in from a secondary page      I am on a secondary page.     I click on the login link.     I enter my username.     I enter my password.     I press Submit.     The login link now reads "logout (<username>)" and I'm still on the secondary page. As you might have noticed I am limiting the preceding example to successful cases. The preceding story can be immediately translated into something along the lines of the following code: // SuccessfulLoginAcceptanceTest.php   $I = new AcceptanceTester($scenario); $I->wantTo("login into the application from any page");   // scenario 1 $I->amOnPage("/"); $I->click("login"); $I->fillField("username", $username); $I->fillField("password", $password); $I->click("submit"); $I->canSee("logout (".$username.")"); $I->seeInCurrentUrl("/");   // scenario 2 $I->amOnPage("/"); $I->click("about"); $I->seeLink("login"); $I->click("login"); $I->fillField("username", $username); $I->fillField("password", $password); $I->click("submit"); $I->canSee("logout (".$username.")"); $I->amOnPage("about"); As you can see this is totally straightforward and easy to read, to the point that anyone in the business should be able to write any case scenario (this is an overstatement, but you get the idea). Clearly, the only thing that is needed to understand is what the AcceptanceTester is able to do: The class generated by the codecept build command can be found in tests/codeception/acceptance/AcceptanceTester.php, which contains all the available methods. You might want to skim through it if you need to understand how to assert a particular condition or perform an action on the page. The online documentation available at http://codeception.com/docs/04-AcceptanceTests will also give you a more readable way to get this information. Don't forget that at the end AcceptanceTester is just a name of a class, which is defined in the YAML file for the specific test type: $ grep class tests/codeception/acceptance.suite.yml class_name: AcceptanceTester Acceptance tests are the topmost level of tests, as some sort of high-level user-oriented integration tests. Because of this, acceptance tests end up using an almost real environment, where no mocks or fakes are required. Clearly, we would need some sort of initial state that we can revert to, particularly if we're causing actions that modify the state of the database. As per Codeception documentation, we could have used a snapshot of the database to be loaded at the beginning of each test. 
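For reference, such a snapshot is normally wired in through Codeception's Db module in the suite configuration. The following is only a rough sketch of how that could look, not a configuration taken from the book; the DSN, credentials, and dump path are hypothetical:

# acceptance.suite.yml (hypothetical sketch)
class_name: AcceptanceTester
modules:
    enabled:
        - PhpBrowser
        - Db
    config:
        Db:
            dsn: 'mysql:host=localhost;dbname=yii2basic_test'
            user: 'root'
            password: ''
            dump: 'tests/codeception/_data/dump.sql'
            populate: true   # load the dump before the suite runs
            cleanup: true    # reload the dump between tests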
Unfortunately, I didn't have much luck in finding this feature working. So later on, we'll be forced to use the fixtures. Everything will then make more sense. When we will write our acceptance tests, we will also explore the various modules that you can also use with it, such as PHPBrowser and Selenium WebDriver and their related configuration options. FunctionalTester As we said earlier, FunctionalTester represents our character when dealing with functional tests. You might think of functional tests as a way to leverage on the correctness of the implementation from a higher standpoint. The way to implement functional tests bears the same structure as that of acceptance tests, to the point that most of the time the code we've written for an acceptance test in Codeception can be easily swapped with that for a functional test, so you might ask yourself: "where are the differences?" It must be noted that the concept of functional tests is something specific to Codeception and can be considered almost the same as that of integration tests for the mid-layer of your application. The most important thing is that functional tests do not require a web server to run, and they're called headless: For this reason, they are not only quicker than acceptance tests, but also less "real" with all the implications of running on a specific environment. And it's not the case that the acceptance tests provided by default by the basic application are, almost, the same as the functional tests. Because of this, we will end up having more functional tests that will cover more use cases for specific parts of our application. FunctionalTester is somehow setting the $_GET, $_POST and $_REQUEST variables and running the application from within a test. For this reason, Codeception ships with modules that let it interact with the underlying framework, be it Symfony2, Laravel4, Zend, or, in our case, Yii 2. In the configuration file, you will notice the module for Yii 2 already enabled: # tests/functional.suite.yml   class_name: FunctionalTester modules:    enabled:      - Filesystem      - Yii2 # ... FunctionalTester has got a better understanding of the technologies used although he might not have the faintest idea of how the various features he's going to test have been implemented in detail; he just knows the specifications. This makes a perfect case for the functional tests to be owned or written by the developers or anyone that is close to the knowledge of how the various features have been exposed for general consumption. The base functionality of the REST application, exposed through the API, will also be heavily tested, and in this case, we will have the following scenarios: I can use POST to send correct authentication data and will receive a JSON containing the successful authentication I can use POST to send bad authentication data and will receive a JSON containing the unsuccessful authentication After a correct authentication, I can use GET to retrieve the user data After a correct authentication, I will receive an error when doing a GET for a user stating that it's me I can use POST to send my updated hashed password Without a correct authentication, I cannot perform any of the preceding actions The most important thing to remember is that at the end of each test, it's your responsibility to keep the memory clean: The PHP application will not terminate after processing a request. All requests happening in the same memory container are not isolated. 
If you see your tests failing for some unknown reason when they shouldn't, try to execute a single test separately. UnitTester I've left UnitTester for the end as it's a very special guy. As you can probably guess, Codeception relies on another framework to cover unit tests, and PHPUnit is the only real candidate for the job. If any of you have already worked with PHPUnit, you will remember the learning curve together with the initial difficulty of understanding its syntax and performing even the simplest of tasks. I found that most developers have a love-and-hate relationship with PHPUnit: either you learn its syntax or you spend half of your time looking things up in the manual. And I won't blame you. We will see that Codeception will come to our aid once again if we're struggling with tests: remember that these unit tests are the simplest and most atomic part of the work we're going to test. Together with them come the integration tests that cover the interaction of different components, most likely with the use of fake data and fixtures. If you're used to working with PHPUnit, you won't find any particular problems writing tests; otherwise, you can make use of UnitTester and implement the same tests by using the Verify and Specify syntax. UnitTester assumes a deep understanding of the code's signatures and of how the infrastructure and framework work, so these tests can be considered the cornerstone of testing. They are super fast to run, compared to any other type of test, and they should also be relatively easy to write. You can start with adequately simple assertions and move to data providers before needing to deal with fixtures. Other features provided by Codeception On top of the types of tests, Codeception provides some more aids to help you organize, modularize, and extend your test code. As we've seen, functional and acceptance tests have a very plain and declarative structure, and all the code and the scenarios related to specific acceptance criteria are kept in the same file at the same level and are executed linearly. In most situations, as in our case, this is good enough, but when your code starts growing and the number of components and features becomes larger and more complex, the list of scenarios and steps to perform an acceptance or functional test can be quite lengthy. Further, some tests might end up depending on others, so you might want to start considering writing more compact scenarios and promoting code reuse throughout your tests, or splitting your test into two or more tests. If you feel your code needs a better organization and structure, you might want to start generating CEST classes instead of the normal tests, which are called CEPT tests. A CEST class groups the scenarios all together as methods, as highlighted in the following snippet:

<?php
// SuccessfulLoginCest.php

class SuccessfulLoginCest
{
    public function _before(\Codeception\Event\TestEvent $event) {}

    public function _after(\Codeception\Event\TestEvent $event) {}

    public function _fail(\Codeception\Event\TestEvent $event) {}

    // tests
    public function loginIntoTheApplicationTest(AcceptanceTester $I)
    {
        $I->wantTo("login into the application from any page");
        $I->amOnPage("/");
        $I->click("login");
        $I->fillField("username", $username);
        $I->fillField("password", $password);
        $I->click("submit");
        $I->canSee("logout (".$username.")");
        $I->seeInCurrentUrl("/");
        // ...
    }
}
?>

Any method that is not preceded by an underscore is considered a test, and the reserved methods _before and _after are executed at the beginning and at the end of the list of tests contained in the test class, while the _fail method is used as a cleanup method in case of failure. This alone might not be enough, and you can use document annotations to create reusable code to be run before and after the tests with the use of @before <methodName> and @after <methodName>. You can also be stricter and require a specific test to pass before any other by using the document annotation @depends <methodName>. We're going to use some of these document annotations, but before we start installing Codeception, I'd like to highlight two more features: PageObjects and StepObjects. The PageObject is a common pattern amongst test automation engineers. It represents a web page as a class, where its DOM elements are properties of the class, and methods instead provide some basic interactions with the page. The main reason for using PageObjects is to avoid hardcoding CSS and XPATH locators in your tests. Yii provides some example implementations of PageObjects in /tests/codeception/_pages. StepObject is another way to promote code reuse in your tests: it defines some common actions that can be used in several tests. Together with PageObjects, StepObjects can become quite powerful. StepObject extends the Tester class and can be used to interact with the PageObject. This way your tests will become less dependent on a specific implementation and will save you the cost of refactoring when the markup and the way to interact with each component in the page changes. For future reference, you can find all of these in the Codeception documentation in the section regarding advanced usage at http://codeception.com/docs/07-AdvancedUsage together with other features, like grouping and an interactive console that you can use to test your scenarios at runtime. Summary In this article, we got hands-on with Codeception and looked at the different types of tests available. Resources for Article: Further resources on this subject: Building a Content Management System [article] Creating an Extension in Yii 2 [article] Database, Active Record, and Model Tricks [article]

Welcome to the Spring Framework

Packt
30 Apr 2015
17 min read
In this article by Ravi Kant Soni, author of the book Learning Spring Application Development, you will be closely acquainted with the Spring Framework. Spring is an open source framework created by Rod Johnson to address the complexity of enterprise application development. Spring is now a long time de facto standard for Java enterprise software development. The framework was designed with developer productivity in mind and this makes it easier to work with the existing Java and JEE APIs. Using Spring, we can develop standalone applications, desktop applications, two tier applications, web applications, distributed applications, enterprise applications, and so on. (For more resources related to this topic, see here.) Features of the Spring Framework Lightweight: Spring is described as a lightweight framework when it comes to size and transparency. Lightweight frameworks reduce complexity in application code and also avoid unnecessary complexity in their own functioning. Non intrusive: Non intrusive means that your domain logic code has no dependencies on the framework itself. Spring is designed to be non intrusive. Container: Spring's container is a lightweight container, which contains and manages the life cycle and configuration of application objects. Inversion of control (IoC): Inversion of Control is an architectural pattern. This describes the Dependency Injection that needs to be performed by external entities instead of creating dependencies by the component itself. Aspect-oriented programming (AOP): Aspect-oriented programming refers to the programming paradigm that isolates supporting functions from the main program's business logic. It allows developers to build the core functionality of a system without making it aware of the secondary requirements of this system. JDBC exception handling: The JDBC abstraction layer of the Spring Framework offers a exceptional hierarchy that simplifies the error handling strategy. Spring MVC Framework: Spring comes with an MVC web application framework to build robust and maintainable web applications. Spring Security: Spring Security offers a declarative security mechanism for Spring-based applications, which is a critical aspect of many applications. ApplicationContext ApplicationContext is defined by the org.springframework.context.ApplicationContext interface. BeanFactory provides a basic functionality, while ApplicationContext provides advance features to our spring applications, which make them enterprise-level applications. Create ApplicationContext by using the ClassPathXmlApplicationContext framework API. This API loads the beans configuration file and it takes care of creating and initializing all the beans mentioned in the configuration file: import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext;   public class MainApp {   public static void main(String[] args) {      ApplicationContext context =    new ClassPathXmlApplicationContext("beans.xml");      HelloWorld helloWorld =    (HelloWorld) context.getBean("helloworld");      helloWorld.getMessage(); } } Autowiring modes There are five modes of autowiring that can be used to instruct Spring Container to use autowiring for Dependency Injection. You use the autowire attribute of the <bean/> element to specify the autowire mode for a bean definition. 
The following table explains the different modes of autowire: Mode Description no By default, the Spring bean autowiring is turned off, meaning no autowiring is to be performed. You should use the explicit bean reference called ref for wiring purposes. byName This autowires by the property name. If the bean property is the same as the other bean name, autowire it. The setter method is used for this type of autowiring to inject dependency. byType Data type is used for this type of autowiring. If the data type bean property is compatible with the data type of the other bean, autowire it. Only one bean should be configured for this type in the configuration file; otherwise, a fatal exception will be thrown. constructor This is similar to the byType autowire, but here a constructor is used to inject dependencies. autodetect Spring first tries to autowire by constructor; if this does not work, then it tries to autowire by byType. This option is deprecated. Stereotype annotation Generally, @Component, a parent stereotype annotation, can define all beans. The following table explains the different stereotype annotations: Annotation Use Description @Component Type This is a generic stereotype annotation for any Spring-managed component. @Service Type This stereotypes a component as a service and is used when defining a class that handles the business logic. @Controller Type This stereotypes a component as a Spring MVC controller. It is used when defining a controller class, which composes of a presentation layer and is available only on Spring MVC. @Repository Type This stereotypes a component as a repository and is used when defining a class that handles the data access logic and provide translations on the exception occurred at the persistence layer. Annotation-based container configuration For a Spring IoC container to recognize annotation, the following definition must be added to the configuration file: <?xml version="1.0" encoding="UTF-8"?> <beans xsi_schemaLocation="http://www.springframework.org/schema/beans    http://www.springframework.org/schema/beans/spring-beans.xsd    http://www.springframework.org/schema/context    http://www.springframework.org/schema/context/spring-context-    3.2.xsd">   <context:annotation-config />                             </beans> Aspect-oriented programming (AOP) supports in Spring AOP is used in Spring to provide declarative enterprise services, especially as a replacement for EJB declarative services. Application objects do what they're supposed to do—perform business logic—and nothing more. They are not responsible for (or even aware of) other system concerns, such as logging, security, auditing, locking, and event handling. AOP is a methodology of applying middleware services, such as security services, transaction management services, and so on on the Spring application. Declaring an aspect An aspect can be declared by annotating the POJO class with the @Aspect annotation. This aspect is required to import the org.aspectj.lang.annotation.aspect package. The following code snippet represents the aspect declaration in the @AspectJ form: import org.aspectj.lang.annotation.Aspect; import org.springframework.stereotype.Component;   @Aspect @Component ("myAspect") public class AspectModule { // ... } JDBC with the Spring Framework The DriverManagerDataSource class is used to configure the DataSource for application, which is defined in the Spring.xml configuration file. 
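As an illustration only (the bean ID, driver, and connection settings below are hypothetical and not taken from the book), a DriverManagerDataSource bean is typically declared along these lines:

<bean id="dataSource"
  class="org.springframework.jdbc.datasource.DriverManagerDataSource">
  <property name="driverClassName" value="com.mysql.jdbc.Driver" />
  <property name="url" value="jdbc:mysql://localhost:3306/testdb" />
  <property name="username" value="root" />
  <property name="password" value="secret" />
</bean>

The JdbcTemplate discussed next is then typically given this DataSource, either through its constructor or via its setDataSource() setter.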
The central class of Spring JDBC's abstraction framework is the JdbcTemplate class that includes the most common logic in using the JDBC API to access data (such as handling the creation of connection, creation of statement, execution of statement, and release of resources). The JdbcTemplate class resides in the org.springframework.jdbc.core package. JdbcTemplate can be used to execute different types of SQL statements. DML is an abbreviation of data manipulation language and is used to retrieve, modify, insert, update, and delete data in a database. Examples of DML are SELECT, INSERT, or UPDATE statements. DDL is an abbreviation of data definition language and is used to create or modify the structure of database objects in a database. Examples of DDL are CREATE, ALTER, and DROP statements. The JDBC batch operation in Spring The JDBC batch operation allows you to submit multiple SQL DataSource to process at once. Submitting multiple SQL DataSource together instead of separately improves the performance: JDBC with batch processing Hibernate with the Spring Framework Data persistence is an ability of an object to save its state so that it can regain the same state. Hibernate is one of the ORM libraries that is available to the open source community. Hibernate is the main component available for a Java developer with features such as POJO-based approach and supports relationship definitions. The object query language used by Hibernate is called as Hibernate Query Language (HQL). HQL is an SQL-like textual query language working at a class level or a field level. Let's start learning the architecture of Hibernate. Hibernate annotations is the powerful way to provide the metadata for the object and relational table mapping. Hibernate provides an implementation of the Java Persistence API so that we can use JPA annotations with model beans. Hibernate will take care of configuring it to be used in CRUD operations. The following table explains JPA annotations: JPA annotation Description @Entity The javax.persistence.Entity annotation is used to mark a class as an entity bean that can be persisted by Hibernate, as Hibernate provides the JPA implementation. @Table The javax.persistence.Table annotation is used to define table mapping and unique constraints for various columns. The @Table annotation provides four attributes, which allows you to override the name of the table, its catalogue, and its schema. This annotation also allows you to enforce unique constraints on columns in the table. For now, we will just use the table name as Employee. @Id Each entity bean will have a primary key, which you annotate on the class with the @Id annotation. The javax.persistence.Id annotation is used to define the primary key for the table. By default, the @Id annotation will automatically determine the most appropriate primary key generation strategy to be used. @GeneratedValue javax.persistence.GeneratedValue is used to define the field that will be autogenerated. It takes two parameters, that is, strategy and generator. The GenerationType.IDENTITY strategy is used so that the generated id value is mapped to the bean and can be retrieved in the Java program. @Column javax.persistence.Column is used to map the field with the table column. We can also specify the length, nullable, and uniqueness for the bean properties. Object-relational mapping (ORM, O/RM, and O/R mapping) ORM stands for Object-relational Mapping. ORM is the process of persisting objects in a relational database such as RDBMS. 
ORM bridges the gap between object and relational schemas, allowing object-oriented application to persist objects directly without having the need to convert object to and from a relational format: Hibernate Query Language (HQL) Hibernate Query Language (HQL) is an object-oriented query language that works on persistence object and their properties instead of operating on tables and columns. To use HQL, we need to use a query object. Query interface is an object-oriented representation of HQL. The query interface provides many methods; let's take a look at a few of them: Method Description public int executeUpdate() This is used to execute the update or delete query public List list() This returns the result of the relation as a list public Query setFirstResult(int rowno) This specifies the row number from where a record will be retrieved public Query setMaxResult(int rowno) This specifies the number of records to be retrieved from the relation (table) public Query setParameter(int position, Object value) This sets the value to the JDBC style query parameter public Query setParameter(String name, Object value) This sets the value to a named query parameter The Spring Web MVC Framework Spring Framework supports web application development by providing comprehensive and intensive support. The Spring MVC framework is a robust, flexible, and well-designed framework used to develop web applications. It's designed in such a way that development of a web application is highly configurable to Model, View, and Controller. In an MVC design pattern, Model represents the data of a web application, View represents the UI, that is, user interface components, such as checkbox, textbox, and so on, that are used to display web pages, and Controller processes the user request. Spring MVC framework supports the integration of other frameworks, such as Struts and WebWork, in a Spring application. This framework also helps in integrating other view technologies, such as Java Server Pages (JSP), velocity, tiles, and FreeMarker in a Spring application. The Spring MVC Framework is designed around a DispatcherServlet. The DispatcherServlet dispatches the http request to handler, which is a very simple controller interface. The Spring MVC Framework provides a set of the following web support features: Powerful configuration of framework and application classes: The Spring MVC Framework provides a powerful and straightforward configuration of framework and application classes (such as JavaBeans). Easier testing: Most of the Spring classes are designed as JavaBeans, which enable you to inject the test data using the setter method of these JavaBeans classes. The Spring MVC framework also provides classes to handle the Hyper Text Transfer Protocol (HTTP) requests (HttpServletRequest), which makes the unit testing of the web application much simpler. Separation of roles: Each component of a Spring MVC Framework performs a different role during request handling. A request is handled by components (such as controller, validator, model object, view resolver, and the HandlerMapping interface). The whole task is dependent on these components and provides a clear separation of roles. No need of the duplication of code: In the Spring MVC Framework, we can use the existing business code in any component of the Spring MVC application. Therefore, no duplicity of code arises in a Spring MVC application. Specific validation and binding: Validation errors are displayed when any mismatched data is entered in a form. 
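Before looking at how DispatcherServlet is wired up, a minimal, hypothetical controller gives a feel for this programming model; the class name, request mapping, and view name below are illustrative assumptions rather than code from the book:

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@Controller
public class HelloController {

    @RequestMapping(value = "/hello", method = RequestMethod.GET)
    public String hello(Model model) {
        model.addAttribute("message", "Hello, Spring MVC!");
        // "hello" is resolved to a view such as /WEB-INF/views/hello.jsp
        // by the view resolver configured in the next section
        return "hello";
    }
}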
DispatcherServlet in Spring MVC The DispatcherServlet of the Spring MVC Framework is an implementation of the front controller pattern and is a Java Servlet component for Spring MVC applications. DispatcherServlet is a front controller class that receives all incoming HTTP client requests for the Spring MVC application. DispatcherServlet is also responsible for initializing the framework components that will be used to process the request at various stages. The following code snippet declares the DispatcherServlet in the web.xml deployment descriptor:

<servlet>
  <servlet-name>SpringDispatcher</servlet-name>
  <servlet-class>
    org.springframework.web.servlet.DispatcherServlet
  </servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
  <servlet-name>SpringDispatcher</servlet-name>
  <url-pattern>/</url-pattern>
</servlet-mapping>

In the preceding code snippet, the user-defined name of the DispatcherServlet class is SpringDispatcher, which is enclosed in the <servlet-name> element. When our newly created SpringDispatcher servlet is loaded in a web application, it loads an application context from an XML file. DispatcherServlet will try to load the application context from a file named SpringDispatcher-servlet.xml, which will be located in the application's WEB-INF directory:

<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:context="http://www.springframework.org/schema/context"
  xmlns:mvc="http://www.springframework.org/schema/mvc"
  xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
    http://www.springframework.org/schema/context
    http://www.springframework.org/schema/context/spring-context-3.0.xsd
    http://www.springframework.org/schema/mvc
    http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd">

  <mvc:annotation-driven />

  <context:component-scan base-package="org.packt.Spring.chapter7.springmvc" />

  <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="prefix" value="/WEB-INF/views/" />
    <property name="suffix" value=".jsp" />
  </bean>

</beans>

Spring Security The Spring Security framework is the de facto standard for securing Spring-based applications. The Spring Security framework provides security services for enterprise Java software applications by handling authentication and authorization. The Spring Security framework handles authentication and authorization at the web request and the method invocation level. The two major operations provided by Spring Security are as follows: Authentication: Authentication is the process of assuring that a user is the one who he/she claims to be. It's a combination of identification and verification. The identification process can be performed in a number of different ways, for example, with a username and password that can be stored in a database, or via LDAP or CAS (a single sign-on protocol), and so on. Spring Security provides a password encoder interface to make sure that the user's password is hashed. Authorization: Authorization provides access control to an authenticated user. It's the process of assuring that the authenticated user is allowed to access only those resources that he/she is authorized to use. Let's take a look at the example of an HR payroll application, where some parts of the application are accessible only to HR, while all the employees have access to the other parts. The access rights given to a user of the system will determine the access rules. In a web-based application, this is often done by URL-based security and is implemented using filters that play a primary role in securing the Spring web application.
Sometimes, URL-based security is not enough in a web application because URLs can be manipulated and can contain relative paths. So, Spring Security also provides method-level security. An authorized user will only be able to invoke those methods that he is granted access to. Securing web application's URL access HttpServletRequest is the starting point of Java's web application. To configure web security, it's required to set up a filter that provides various security features. In order to enable Spring Security, add the filter and its mapping in the web.xml file:

<!-- Spring Security -->
<filter>
  <filter-name>springSecurityFilterChain</filter-name>
  <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>

<filter-mapping>
  <filter-name>springSecurityFilterChain</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

Logging in to a web application There are multiple ways supported by Spring Security for users to log in to a web application: HTTP basic authentication: This is supported by Spring Security by processing the basic credentials presented in the header of the HTTP request. It's generally used with stateless clients, which pass their credentials on each request. Form-based login service: Spring Security supports the form-based login service by providing a default login form page for users to log in to the web application. Logout service: Spring Security supports logout services that allow users to log out of the application. Anonymous login: This service, provided by Spring Security, grants authority to an anonymous user just as it would to a normal user. Remember-me support: This is also supported by Spring Security and remembers the identity of a user across multiple browser sessions. Encrypting passwords Spring Security supports some hashing algorithms such as MD5 (Md5PasswordEncoder), SHA (ShaPasswordEncoder), and BCrypt (BCryptPasswordEncoder) for password encryption. To enable the password encoder, use the <password-encoder/> element and set the hash attribute, as shown in the following code snippet:

<authentication-manager>
  <authentication-provider>
    <password-encoder hash="md5" />
    <jdbc-user-service data-source-ref="dataSource"
    . . .
  </authentication-provider>
</authentication-manager>

Mail support in the Spring Framework The Spring Framework provides a simplified API and plug-in for full e-mail support, which minimizes the impact of the underlying e-mailing system's specifics. Spring's e-mail support provides an abstract, easy, and implementation-independent API to send e-mails. The Spring Framework provides an API to simplify the use of the JavaMail API. The classes handle the initialization, cleanup operations, and exceptions. The packages for the JavaMail API support provided by the Spring Framework are as follows: org.springframework.mail defines the basic set of classes and interfaces to send e-mails, while org.springframework.mail.javamail defines the JavaMail API-specific classes and interfaces to send e-mails. Spring's Java Messaging Service (JMS) Java Message Service is a Java Message-oriented middleware (MOM) API responsible for sending messages between two or more clients. JMS is a part of the Java enterprise edition. JMS is a broker, similar to a postman, that acts as middleware between the message sender and the receiver. A message is nothing but bytes of data or information exchanged between two parties. Depending on the specification used, a message can be described in various ways.
However, it's nothing, but an entity of communication. A message can be used to transfer a piece of information from one application to another, which may or may not run on the same platform. The JMS application Let's look at the sample JMS application pictorial, as shown in the following diagram: We have a Sender and a Receiver. The Sender is responsible for sending a message and the Receiver is responsible for receiving a message. We need a broker or MOM between the Sender and Receiver, who takes the sender's message and passes it from the network to the receiver. Message oriented middleware (MOM) is basically an MQ application such as ActiveMQ or IBM-MQ, which are two different message providers. The sender promises loose coupling and it can be .NET or mainframe-based application. The receiver can be Java or Spring-based application and it sends back the message to the sender as well. This is a two-way communication, which is loosely coupled. Summary This article covered the architecture of Spring Framework and how to set up the key components of the Spring application development environment. Resources for Article: Further resources on this subject: Creating an Extension in Yii 2 [article] Serving and processing forms [article] Time Travelling with Spring [article]

Auto updating child records in Process Builder

Packt
29 Apr 2015
5 min read
In this article by Rakesh Gupta, the author of the book Learning Salesforce Visual Workflow, we will discuss how to auto update child records using Process Builder of Salesforce. There are several business use cases where a customer wants to update child records based on some criteria, for example, auto-updating all related Opportunity to Closed-Lost if an account is updated to Inactive. To achieve these types of business requirements, you can use the Apex trigger. You can also achieve these types of requirements using the following methods: Process Builder A combination of Flow and Process Builder A combination of Flow and Inline Visualforce page on the account detail page (For more resources related to this topic, see here.) We will use Process Builder to solve these types of business requirements. Let's start with a business requirement. Here is a business scenario: Alice Atwood is working as a system administrator in Universal Container. She has received a requirement that once an account gets activated, the account phone must be synced with the related contact asst. phone field. This means whenever an account phone fields gets updated, the same phone number will be copied to the related contacts asst. phone field. Follow these instructions to achieve the preceding requirement using Process Builder: First of all, navigate to Setup | Build | Customize | Accounts | Fields and make sure that the Active picklist is available in your Salesforce organization. If it's not available, create a custom Picklist field with the name as Active, and enter the Yes and No values. To create a Process, navigate to Setup | Build | Create | Workflow & Approvals | Process Builder, click on New Button, and enter the following details: Name: Enter the name of the Process. Enter Update Contacts Asst Phone in Name. This must be within 255 characters. API Name: This will be autopopulated based on the name. Description: Write some meaningful text so that other developers or administrators can easily understand why this Process is created. The properties window will appear as shown in the following screenshot: Once you are done, click on the Save button. It will redirect you to the Process canvas, which allows you to create or modify the Process. After Define Process Properties, the next task is to select the object on which you want to create a Process and define the evaluation criteria. For this, click on the Add Object node. It will open an additional window on the right side of the Process canvas screen, where you have to enter the following details: Object: Start typing and then select the Account object. Start the process: For Start the process, select when a record is created or edited. This means the Process will fire every time, irrespective of record creation or updating. Allow process to evaluate a record multiple times in a single transaction?: Select this checkbox only when you want the Process to evaluate the same record up to five times in a single transaction. It might re-examine the record because a Process, Workflow Rule, or Flow may have updated the record in the same transaction. In this case, leave this unchecked. This window will appear as shown in the following screenshot: Once you are done with adding the Process criteria, click on the Save button. Similar to the Workflow Rule, once you save the panel, it doesn't allow you to change the selected object. After defining the evaluation criteria, the next step is to add the Process criteria. 
Once the Process criteria are true, only then will the Process execute the associated actions. To define the Process criteria, click on the Add Criteria node. It will open an additional window on the right side of the Process canvas screen, where you have to enter the following details: Criteria Name: Enter a name for the criteria node. Enter Update Contacts in Criteria Name. Criteria for Executing Actions: Select the type of criteria you want to define. You can use either a formula or a filter to define the Process criteria or no criteria. In this case, select Active equals to Yes. This means the Process will fire only when the account is active. This window will appear as shown in the following screenshot: Once you are done with defining the Process criteria, click on the Save button. Once you are done with the Process criteria node, the next step is to add an immediate action to update the related contact's asst. phone field. For this, we will use the Update Records action available under Process. Click on Add Action available under IMMEDIATE ACTIONS. It will open an additional window on the right side of the Process canvas screen, where you have to enter the following details: Action Type: Select the type of action. In this case, select Update Records. Action Name: Enter a name for this action. Enter Update Assts Phone in Action Name. Object: Start typing and then select the [Account].Contacts object. Field: Map the Asst. Phone field with the [Account]. Phone field. To select the fields, you can use field picker. To enter the value, use the text entry field. It will appear as shown in the following screenshot: Once you are done, click on the Save button. Once you are done with the immediate action, the final step is to activate it. To activate a Process, click on the Activate button available on the button bar. From now on, if you try to update an active account, Process will automatically update the related contact's asst. phone with the value available in the account phone field. Summary In this article, we have learned the technique of auto updating records in Process Builder. Resources for Article: Further resources on this subject: Visualforce Development with Apex [Article] Configuration in Salesforce CRM [Article] Introducing Salesforce Chatter [Article]

Custom Coding with Apex

Packt
27 Apr 2015
18 min read
In this article by Chamil Madusanka, author of the book Learning Force.com Application Development, you will learn about the custom coding in Apex and also about triggers. We have used many declarative methods such as creating the object's structure, relationships, workflow rules, and approval process to develop the Force.com application. The declarative development method doesn't require any coding skill and specific Integrated Development Environment (IDE). This article will show you how to extend the declarative capabilities using custom coding of the Force.com platform. Apex controllers and Apex triggers will be explained with examples of the sample application. The Force.com platform query language and data manipulation language will be described with syntaxes and examples. At the end of the article, there will be a section to describe bulk data handling methods in Apex. This article covers the following topics: Introducing Apex Working with Apex (For more resources related to this topic, see here.) Introducing Apex Apex is the world's first on-demand programming language that allows developers to implement and execute business flows, business logic, and transactions on the Force.com platform. There are two types of Force.com application development methods: declarative developments and programmatic developments. Apex is categorized under the programmatic development method. Since Apex is a strongly-typed, object-based language, it is connected with data in the Force.com platform and data manipulation using the query language and the search language. The Apex language has the following features: Apex provides a lot of built-in support for the Force.com platform features such as: Data Manipulation Language (DML) with the built-in exception handling (DmlException) to manipulate the data during the execution of the business logic. Salesforce Object Query Language (SOQL) and Salesforce Object Search Language (SOSL) to query and retrieve the list of sObjects records. Bulk data processing on multiple records at a time. Apex allows handling errors and warning using an in-built error-handling mechanism. Apex has its own record-locking mechanism to prevent conflicts of record updates. Apex allows building custom public Force.com APIs from stored Apex methods. Apex runs in a multitenant environment. The Force.com platform has multitenant architecture. Therefore, the Apex runtime engine obeys the multitenant environment. It prevents monopolizing of shared resources using the guard with limits. If any particular Apex code violates the limits, error messages will be displayed. Apex is hosted in the Force.com platform. Therefore, the Force.com platform interprets, executes, and controls Apex. Automatically upgradable and versioned: Apex codes are stored as metadata in the platform. Therefore, they are automatically upgraded with the platform. You don't need to rewrite your code when the platform gets updated. Each code is saved with the current upgrade version. You can manually change the version. It is easy to maintain the Apex code with the versioned mechanism. Apex can be used easily. Apex is similar to Java syntax and variables. The syntaxes and semantics of Apex are easy to understand and write codes. Apex is a data-focused programming language. Apex is designed for multithreaded query and DML statements in a single execution context on the Force.com servers. Many developers can use database stored procedures to run multiple transaction statements on the database server. 
Apex is different from other databases when it comes to stored procedures; it doesn't attempt to provide general support for rendering elements in the user interface. The execution context is one of the key concepts in Apex programming. It influences every aspect of software development on the Force.com platform. Apex is a strongly-typed language that directly refers to schema objects and object fields. If there is any error, it fails the compilation. All the objects, fields, classes, and pages are stored in metadata after successful compilation. Easy to perform unit testing. Apex provides a built-in feature for unit testing and test execution with the code coverage. Apex allows developers to write the logic in two ways: As an Apex class: The developer can write classes in the Force.com platform using Apex code. An Apex class includes action methods which related to the logic implementation. An Apex class can be called from a trigger. A class can be associated with a Visualforce page (Visualforce Controllers/Extensions) or can act as a supporting class (WebService, Email-to-Apex service/Helper classes, Batch Apex, and Schedules). Therefore, Apex classes are explicitly called from different places on the Force.com platform. As a database trigger: A trigger is executed related to a particular database interaction of a Force.com object. For example, you can create a trigger on the Leave Type object that fires whenever the Leave Type record is inserted. Therefore, triggers are implicitly called from a database action. Apex is included in the Unlimited Edition, Developer Edition, Enterprise Edition, Database.com, and Performance Edition. The developer can write Apex classes or Apex triggers in a developer organization or a sandbox of a production organization. After you finish the development of the Apex code, you can deploy the particular Apex code to the production organization. Before you deploy the Apex code, you have to write test methods to cover the implemented Apex code. Apex code in the runtime environment You already know that Apex code is stored and executed on the Force.com platform. Apex code also has a compile time and a runtime. When you attempt to save an Apex code, it checks for errors, and if there are no errors, it saves with the compilation. The code is compiled into a set of instructions that are about to execute at runtime. Apex always adheres to built-in governor limits of the Force.com platform. These governor limits protect the multitenant environment from runaway processes. Apex code and unit testing Unit testing is important because it checks the code and executes the particular method or trigger for failures and exceptions during test execution. It provides a structured development environment. We gain two good requirements for this unit testing, namely, best practice for development and best practice for maintaining the Apex code. The Force.com platform forces you to cover the Apex code you implemented. Therefore, the Force.com platform ensures that you follow the best practices on the platform. Apex governors and limits Apex codes are executed on the Force.com multitenant infrastructure and the shared resources are used across all customers, partners, and developers. When we are writing custom code using Apex, it is important that the Apex code uses the shared resources efficiently. Apex governors are responsible for enforcing runtime limits set by Salesforce. It discontinues the misbehaviors of the particular Apex code. 
If the code exceeds a limit, a runtime exception is thrown that cannot be handled. This error will be seen by the end user. Limit warnings can be sent via e-mail, but they also appear in the logs. Governor limits are specific to a namespace, so AppExchange certified managed applications have their own set of limits, independent of the other applications running in the same organization. Therefore, the governor limits have their own scope. The limit scope will start from the beginning of the code execution. It will be run through the subsequent blocks of code until the particular code terminates. Apex code and security The Force.com platform has a component-based security, record-based security and rich security framework, including profiles, record ownership, and sharing. Normally, Apex codes are executed as a system mode (not as a user mode), which means the Apex code has access to all data and components. However, you can make the Apex class run in user mode by defining the Apex class with the sharing keyword. The with sharing/without sharing keywords are employed to designate that the sharing rules for the running user are considered for the particular Apex class. Use the with sharing keyword when declaring a class to enforce the sharing rules that apply to the current user. Use the without sharing keyword when declaring a class to ensure that the sharing rules for the current user are not enforced. For example, you may want to explicitly turn off sharing rule enforcement when a class acquires sharing rules after it is called from another class that is declared using with sharing. The profile also can maintain the permission for developing Apex code and accessing Apex classes. The author's Apex permission is required to develop Apex codes and we can limit the access of Apex classes through the profile by adding or removing the granted Apex classes. Although triggers are built using Apex code, the execution of triggers cannot be controlled by the user. They depend on the particular operation, and if the user has permission for the particular operation, then the trigger will be fired. Apex code and web services Like other programming languages, Apex supports communication with the outside world through web services. Apex methods can be exposed as a web service. Therefore, an external system can invoke the Apex web service to execute the particular logic. When you write a web service method, you must use the webservice keyword at the beginning of the method declaration. The variables can also be exposed with the webservice keyword. After you create the webservice method, you can generate the Web Service Definition Language (WSDL), which can be consumed by an external application. Apex supports both Simple Object Access Protocol (SOAP) and Representational State Transfer (REST) web services. Apex and metadata Because Apex is a proprietary language, it is strongly typed to Salesforce metadata. The same sObject and fields that are created through the declarative setup menu can be referred to through Apex. Like other Force.com features, the system will provide an error if you try to delete an object or field that is used within Apex. Apex is not technically autoupgraded with each new Salesforce release, as it is saved with a specific version of the API. Therefore, Apex, like other Force.com features, will automatically work with future versions of Salesforce applications. Force.com application development tools use the metadata. 
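Tying the sharing and web service keywords together, the following is a minimal, hypothetical sketch rather than code from the book's sample application; the class name, method, and the Status__c field are assumptions made purely for illustration:

// Hypothetical example: a global class that exposes a SOAP web service
// method while enforcing the running user's sharing rules.
global with sharing class LeaveStatusService {
    webservice static String getLeaveStatus(Id leaveId) {
        // Status__c is an assumed custom field, used only for illustration
        Leave__c record = [SELECT Status__c FROM Leave__c WHERE Id = :leaveId];
        return record.Status__c;
    }
}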
Working with Apex Before you start coding with Apex, you need to learn a few basic things. Apex basics Apex comes with its own syntactical framework. Similar to Java, Apex is strongly typed and is an object-based language. If you have some experience with Java, it will be easy to understand Apex. The similarities and differences between Apex and Java can be summarized as follows. Similarities: both languages have classes, inheritance, polymorphism, and other common object-oriented programming features; both languages have extremely similar syntax and notations; and both languages are compiled, strongly typed, and transactional. Differences: Apex runs in a multitenant environment and is very controlled in its invocations and governor limits; Apex is not case sensitive; Apex is on-demand and is compiled and executed in the cloud; Apex is not a general-purpose programming language, but is instead a proprietary language used for specific business logic functions; and Apex requires unit testing for deployment into a production environment. This section will not discuss everything that is included in the Apex documentation from Salesforce, but it will cover topics that are essential for understanding the concepts discussed in this article. With this basic knowledge of Apex, you can create Apex code in the Force.com platform. Apex data types In Apex classes and triggers, we use variables that contain data values. Variables must be bound to a data type, and a particular variable can only hold values of that data type. All variables and expressions have one of the following data types: primitives, Enums, sObjects, collections, an object created from a user- or system-defined class, or Null (for the null constant). Primitive data types Apex uses the same primitive data types as the web services API, most of which are similar to their Java counterparts. It may seem that Apex primitive variables are passed by value, but they actually use immutable references, similar to Java string behavior. The following are the primitive data types of Apex: Boolean: A value that can only be assigned true, false, or null. Date, Datetime, and Time: A Date value indicates a particular day and does not contain any information about time. A Datetime value indicates a particular day and time. A Time value indicates a particular time. Date, Datetime, and Time values must always be created with a system static method. ID: A record identifier, in either its 18-digit or 15-digit version. Integer, Long, Double, and Decimal: Integer is a 32-bit number that does not include a decimal point. Integers have a minimum value of -2,147,483,648 and a maximum value of 2,147,483,647. Long is a 64-bit number that does not include a decimal point. Use this data type when you need a range of values wider than those provided by Integer. Double is a 64-bit number that includes a decimal point. Both Long and Double have a minimum value of -2^63 and a maximum value of 2^63-1. Decimal is a number that includes a decimal point; Decimal is an arbitrary precision number. String: A String is any set of characters surrounded by single quotes. Strings have no limit on the number of characters that can be included, but the heap size limit is used to ensure that a particular Apex program does not grow too large. Blob: A Blob is a collection of binary data stored as a single object. A Blob can be accepted as a web service argument, stored in a document, or sent as an attachment. Object: This can be used as the base type for any other data type. Objects are supported for casting.
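As a quick illustration of the primitive types listed above (the variable names and values here are arbitrary examples, not taken from the book), declarations look like this:

Boolean isActive = true;
Date startDate = Date.today();            // created with a system static method
Datetime createdOn = Datetime.now();
Time lunchTime = Time.newInstance(12, 30, 0, 0);
Integer counter = 100;
Long widerCounter = 2147483648L;
Decimal salary = 75000.50;
String companyName = 'Universal Container';
Blob payload = Blob.valueOf(companyName);
Object anything = companyName;            // Object can hold any other data type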
Enum data types Enum (or enumerated list) is an abstract data type that stores one value of a finite set of specified identifiers. To define an Enum, use the enum keyword in the variable declaration and then define the list of values. You can define and use an enum in the following way:

public enum Status {NEW, APPROVED, REJECTED, CANCELLED}

The preceding enum has four values: NEW, APPROVED, REJECTED, CANCELLED. By creating this enum, you have created a new data type called Status that can be used as any other data type for variables, return types, and method arguments.

Status leaveStatus = Status.NEW;

Apex provides Enums for built-in concepts such as API error (System.StatusCode). System-defined enums cannot be used in web service methods. sObject data types sObjects (short for Salesforce Object) are standard or custom objects that store record data in the Force.com database. There is also an sObject data type in Apex that is the programmatic representation of these sObjects and their data in code. Developers refer to sObjects and their fields by their API names, which can be found in the schema browser. sObject and field references within Apex are validated against actual object and field names when code is written. Force.com tracks the objects and fields used within Apex to prevent users from making the following changes: changing a field or object name, converting from one data type to another, deleting a field or object, and organization-wide changes such as record sharing. It is possible to declare variables of the generic sObject data type. The new operator still requires a concrete sObject type, so the instances are all specific sObjects. The following is a code example:

sObject s = new Employee__c();

Casting will be applied as expected, as each row knows its runtime type and can be cast back to that type. The following casting works fine:

Employee__c e = (Employee__c)s;

However, the following casting will generate a runtime exception for a data type collision:

Leave__c leave = (Leave__c)s;

The sObject superclass only has the ID variable, so we can only access the ID via the generic sObject type. This method can also be used with collections and DML operations, although only concrete types can be instantiated. Collections will be described in the upcoming section and DML operations will be discussed in the Data manipulation section on the Force.com platform. Let's have a look at the following code:

sObject[] sList = new Employee__c[0];
List<Employee__c> eList = (List<Employee__c>)sList;
Database.insert(sList);

Collection data types Collection data types store groups of elements of other primitive, composite, or collection data types. There are three different types of collections in Apex: List: A list is an ordered collection of primitives or composite data types distinguished by its index. Each element in a list contains two pieces of information: an index (an integer) and a value (the data). The index of the first element is zero. You can define an Apex list in the following way:

List<DataType> listName = new List<DataType>();
List<String> sList = new List<String>();

There are built-in methods that can be used with lists, such as adding/removing elements from the end of the list, getting/setting values at a particular index, and sizing the list by obtaining the number of elements. A full set of list methods is listed at http://www.salesforce.com/us/developer/docs/dbcom_apex250/Content/apex_methods_system_list.htm.
Collection data types

Collection data types store groups of elements of other primitive, composite, or collection data types. There are three different types of collections in Apex:

- List: A list is an ordered collection of primitives or composite data types distinguished by its index. Each element in a list contains two pieces of information: an index (an integer) and a value (the data). The index of the first element is zero. You can define an Apex list in the following way:

List<DataType> listName = new List<DataType>();
List<String> sList = new List<String>();

There are built-in methods that can be used with lists, including adding/removing elements at the end of the list, getting/setting values at a particular index, and sizing the list by obtaining the number of elements. A full set of List methods is listed at http://www.salesforce.com/us/developer/docs/dbcom_apex250/Content/apex_methods_system_list.htm. The Apex list is used in the following way:

List<String> sList = new List<String>();
sList.add('string1');
sList.add('string2');
sList.add('string3');
sList.add('string4');
Integer sListSize = sList.size(); // this will return the value 4
sList.get(3); // this method will return the value 'string4'

Apex allows developers familiar with the standard array syntax to use it interchangeably with the list syntax. The main difference is the use of square brackets, which is shown in the following code:

String[] sList = new String[4];
sList[0] = 'string1';
sList[1] = 'string2';
sList[2] = 'string3';
sList[3] = 'string4';
Integer sListSize = sList.size(); // this will return the value 4

Lists, as well as maps, can be nested up to five levels deep. Therefore, you can create a list of lists in the following way:

List<List<String>> nestedList = new List<List<String>>();

- Set: A set is an unordered collection of data of one primitive data type, or of sObjects, that must have unique values (see the short sketch after this section). The Set methods are listed at http://www.salesforce.com/us/developer/docs/dbcom_apex230/Content/apex_methods_system_set.htm. Similar to the declaration of a List, you can define a Set in the following way:

Set<DataType> setName = new Set<DataType>();
Set<String> sSet = new Set<String>();

There are built-in methods for sets, including adding/removing elements, checking whether the set contains certain elements, and getting the size of the set.

- Map: A map is an unordered collection of unique keys of one primitive data type and their corresponding values. The Map methods are listed at http://www.salesforce.com/us/developer/docs/dbcom_apex250/Content/apex_methods_system_map.htm. You can define a Map in the following way:

Map<PrimitiveKeyDataType, DataType> mapName = new Map<PrimitiveKeyDataType, DataType>();
Map<Integer, String> mapName = new Map<Integer, String>();
Map<Integer, List<String>> sMap = new Map<Integer, List<String>>();

Maps are often used to map IDs to sObjects. There are built-in methods that you can use with maps, including adding/removing elements, getting values for a particular key, and checking whether the map contains certain keys. You can use these methods as follows:

Map<Integer, String> sMap = new Map<Integer, String>();
sMap.put(1, 'string1'); // put a key-value pair
sMap.put(2, 'string2');
sMap.put(3, 'string3');
sMap.put(4, 'string4');
sMap.get(2); // retrieve the value of key 2
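To round out the Set description above, here is a minimal sketch of typical Set usage; the values are arbitrary:

Set<String> colors = new Set<String>();
colors.add('red');
colors.add('green');
colors.add('red');                       // duplicates are silently ignored
Boolean hasRed = colors.contains('red'); // true
Integer colorCount = colors.size();      // 2, because 'red' was only stored once
colors.remove('green');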
Apex logics and loops

Like all programming languages, the Apex language has syntax to implement conditional logic (IF-THEN-ELSE) and loops (for, while, do-while). The following overview explains the conditional logic and loops in Apex.

IF: Conditional IF statements in Apex are similar to Java. The IF-THEN statement is the most basic of all the control flow statements; it tells your program to execute a certain section of code only if a particular test evaluates to true. The IF-THEN-ELSE statement provides a secondary path of execution when the IF clause evaluates to false:

if (Boolean_expression){
    statement;
    statement;
    statement;
    statement;
} else {
    statement;
    statement;
}

For: There are three variations of the FOR loop in Apex, which are as follows:

for (initialization; Boolean_exit_condition; increment) {
    statement;
}

for (variable : list_or_set) {
    statement;
}

for (variable : [inline_soql_query]) {
    statement;
}

All loops allow the following commands:

- break: This is used to exit the loop
- continue: This is used to skip to the next iteration of the loop

While: The while loop is similar to the do-while loop described next, but the condition is checked before the first iteration, as shown in the following code:

while (Boolean_condition) {
    code_block;
}

Do-While: The do-while loop repeatedly executes as long as a particular Boolean condition remains true. The condition is not checked until after the first pass is executed, as shown in the following code:

do {
    //code_block;
} while (Boolean_condition);

Summary

In this article, you have learned to develop custom code on the Force.com platform, including Apex classes and triggers. You also learned about the two query languages on the Force.com platform.

Resources for Article:

Further resources on this subject:
- Force.com: Data Management [article]
- Configuration in Salesforce CRM [article]
- Learning to Fly with Force.com [article]

Resource Manager on CentOS 6

Packt
27 Apr 2015
19 min read
In this article by Mitja Resman, author of the book CentOS High Availability, we will learn cluster resource management on CentOS 6 with the RGManager cluster resource manager. We will learn how and where to find the information we require about the cluster resources that are supported by RGManager, and all the details about cluster resource configuration. We will also learn how to add, delete, and reconfigure resources and services in the cluster. Then we will learn how to start, stop, and migrate resources from one cluster node to another. When we are done with this article, your cluster will be configured to run and provide end users with a service.

(For more resources related to this topic, see here.)

Working with RGManager

When we work with RGManager, the cluster resources are configured within the /etc/cluster/cluster.conf CMAN configuration file. RGManager has a dedicated section in the CMAN configuration file, defined by the <rm> tag; the configuration within the <rm> tag belongs to RGManager. The RGManager section begins with the <rm> tag and ends with the </rm> tag, as is common for XML files. The RGManager section must be defined within the <cluster> section of the CMAN configuration file, but not within the <clusternodes> or <fencedevices> sections. We will be able to review the exact configuration syntax in the example configuration file provided in the next paragraphs.

The following elements can be used within the <rm> RGManager tag:

- Failover domain (tag: <failoverdomains></failoverdomains>): A failover domain is a set of cluster nodes that are eligible to run a specific cluster service in the event of a cluster node failure. More than one failover domain can be configured, with different rules applied for different cluster services.
- Global resources (tag: <resources></resources>): Global cluster resources are globally configured resources that can be referenced when configuring cluster services. Global cluster resources simplify cluster service configuration, because a resource only needs to be referenced by its global resource name.
- Cluster service (tag: <service></service>): A cluster service usually combines more than one resource to provide a service. The order of the resources provided within a cluster service is important because it defines the resource start and stop order.

The most used and important RGManager command-line tools are as follows:

- clustat: The clustat command provides cluster status information. It also provides information about the cluster, cluster nodes, and cluster services.
- clusvcadm: The clusvcadm command provides cluster service management commands such as start, stop, disable, enable, relocate, and others.

By default, RGManager logging is configured to log information related to RGManager to syslog (the /var/log/messages file). If the logfile parameter in the Corosync configuration file's logging section is configured, information related to RGManager will be logged in the location specified by the logfile parameter. The default RGManager log file is named rgmanager.log.

Let's start with the details of RGManager configuration.
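Before diving into the individual sections, the following hedged sketch shows how these building blocks nest inside /etc/cluster/cluster.conf; the cluster name, version number, and comments are placeholders, and the nested sections are filled in over the rest of this article:

<cluster name="example-cluster" config_version="2">
  <clusternodes>
    <!-- cluster node definitions -->
  </clusternodes>
  <fencedevices>
    <!-- fence device definitions -->
  </fencedevices>
  <rm>
    <failoverdomains>
      <!-- failover domain definitions -->
    </failoverdomains>
    <resources>
      <!-- global cluster resource definitions -->
    </resources>
    <!-- one or more <service> definitions -->
  </rm>
</cluster>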
Configuring failover domains

The <rm> tag in the CMAN configuration file usually begins with the definition of a failover domain, but configuring a failover domain is not required for normal operation of the cluster. A failover domain is a set of cluster nodes with configured failover rules. The failover domain is attached to the cluster service configuration; in the event of a cluster node failure, the configured cluster service's failover domain rules are applied.

Failover domains are configured within the <rm> RGManager tag. The failover domain configuration begins with the <failoverdomains> tag and ends with the </failoverdomains> tag. Within the <failoverdomains> tag, you can specify one or more failover domains in the following form:

<failoverdomain failoverdomainname failoverdomain_options> </failoverdomain>

The failoverdomainname parameter is a unique name provided for the failover domain in the form of name="desired_name". The failoverdomain_options options are the rules that we apply to the failover domain. The following rules can be configured for a failover domain:

- Unrestricted (parameter: restricted="0"): This configuration allows you to run a cluster service on any of the configured cluster nodes.
- Restricted (parameter: restricted="1"): This configuration allows you to restrict a cluster service to run only on the members you configure.
- Ordered (parameter: ordered="1"): This configuration allows you to configure a preference order for cluster nodes. In the event of a cluster node failure, the preference order is taken into account. The order of the listed cluster nodes is important because it is also the priority order.
- Unordered (parameter: ordered="0"): This configuration allows any of the configured cluster nodes to run a specific cluster service.
- Failback (parameter: nofailback="0"): This configuration enables failback for the cluster service, which means the cluster service will fail back to the originating cluster node once that cluster node is operational again.
- Nofailback (parameter: nofailback="1"): This configuration disables the failback of the cluster service to the originating cluster node once it is operational again.

Within the <failoverdomain> tag, the desired cluster nodes are configured with a <failoverdomainnode> tag in the following form:

<failoverdomainnode nodename/>

The nodename parameter is the cluster node name, as provided in the <clusternode> tag of the CMAN configuration file. You can add the following simple failover domain configuration to your existing CMAN configuration file. In the following screenshot, you can see the CMAN configuration file with a simple failover domain configuration.

The previous example shows a failover domain named simple, with no failback, no ordering, and no restrictions configured. All three cluster nodes are listed as failover domain nodes. Note that it is important to increase the config_version parameter, found in the second line, with every change to the CMAN cluster configuration file.

Once you have configured the failover domain, you need to validate the cluster configuration file. A valid CMAN configuration is required for normal operation of the cluster. If the validation of the cluster configuration file fails, recheck the configuration file for common typing errors. In the following screenshot, you can see the command used to check the CMAN configuration file for errors.

Note that, if a specific cluster node is not online, the configuration file will have to be transferred manually, and the cluster stack software will have to be restarted to catch up once the node comes back online. Once your configuration is validated, you can propagate it to other cluster nodes.
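As a hedged reconstruction of the simple failover domain described above (the node names node-1, node-2, and node-3 are assumed from the surrounding examples, and nofailback="1" is assumed for the "no failback" behavior), the fragment inside <rm> might look roughly like this:

<rm>
  <failoverdomains>
    <failoverdomain name="simple" nofailback="1" ordered="0" restricted="0">
      <failoverdomainnode name="node-1"/>
      <failoverdomainnode name="node-2"/>
      <failoverdomainnode name="node-3"/>
    </failoverdomain>
  </failoverdomains>
</rm>

The validation and propagation steps are usually performed with the standard CentOS 6 cluster tools, although your setup may differ:

ccs_config_validate        # validate /etc/cluster/cluster.conf on the local node
cman_tool version -r       # push the increased config_version to the other cluster nodes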
In this screenshot, you can see the CMAN configuration file propagation command used on the node-1 cluster node.

For successful CMAN configuration file distribution to the other cluster nodes, the CMAN configuration file's config_version parameter number must be increased. You can confirm that the configuration file was successfully distributed by issuing the ccs_config_dump command on any of the other cluster nodes and comparing the XML output.

Adding cluster resources and services

The difference between cluster resources and cluster services is that a cluster service is built from one or more cluster resources. A configured cluster resource is prepared to be used within a cluster service; when you configure a cluster service, you reference a configured cluster resource by its unique name.

Resources

Cluster resources are defined within the <rm> RGManager tag of the CMAN configuration file. They begin with the <resources> tag and end with the </resources> tag. Within the <resources> tag, all cluster resources supported by RGManager can be configured. Cluster resources are configured with resource scripts, and all RGManager-supported resource scripts are located in the /usr/share/cluster directory, along with the cluster resource metadata information required to configure a cluster resource. For some cluster resources, the metadata information is listed within the cluster resource scripts, while others have separate cluster resource metadata files. RGManager reads the metadata from the scripts while validating the CMAN configuration file. Therefore, knowing the metadata information is the best way to correctly define and configure a cluster resource.

The basic syntax used to configure a cluster resource is as follows:

<resource_agent_name resource_options/>

The resource_agent_name parameter is provided in the cluster resource metadata information and is defined as name. The resource_options part represents the configurable options of the cluster resource, as provided in the cluster resource metadata information.

If you want to configure an IP address cluster resource, you should first review the IP address cluster resource's metadata information, which is available in the /usr/share/cluster/ip.sh script file. The syntax used to define an IP address cluster resource is as follows:

<ip ip_address_options/>

We can configure a simple IPv4 IP address, such as 192.168.88.50, and bind it to the eth1 network interface by adding the following line to the CMAN configuration:

<ip address="192.168.88.50" family="IPv4" prefer_interface="eth1"/>

The address option is the IP address you want to configure. The family option is the address protocol family. The prefer_interface option binds the IP address to a specific network interface. By reviewing the IP address resource's metadata information, we can see that a few additional options are configurable and well explained:

- monitor_link
- nfslock
- sleeptime
- disable_rdisc

If you want to configure an Apache web server cluster resource, you should first review the Apache web server resource's metadata information in the /usr/share/cluster/apache.metadata metadata file.
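As a hedged aside (assuming a standard CentOS 6 installation of the rgmanager package), the available resource agents and their metadata can be inspected directly on a cluster node, for example:

ls /usr/share/cluster/                     # list the installed resource agent scripts and metadata files
less /usr/share/cluster/ip.sh              # metadata embedded in the resource script
less /usr/share/cluster/apache.metadata    # separate metadata file for the Apache agent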
The syntax used to define an Apache web server cluster resource is as follows:

<apache apache_web_server_options/>

We can configure a simple Apache web server cluster resource by adding the following line to the CMAN configuration file:

<apache name="apache" server_root="/etc/httpd" config_file="conf/httpd.conf" shutdown_wait="60"/>

The name parameter is the unique name provided for the Apache cluster resource. The server_root option provides the Apache installation location; if no server_root option is provided, the default value is /etc/httpd. The config_file option is the path to the main Apache web server configuration file, relative to server_root; if no config_file option is provided, the default value is conf/httpd.conf. The shutdown_wait option is the number of seconds to wait for a correct end-of-service shutdown. By reviewing the Apache web server resource metadata, you can see that a few additional options are configurable and well explained:

- httpd
- httpd_options
- service_name

We can add the IP address and Apache web server cluster resources to the example configuration we are building, as follows:

<resources>
<ip address="192.168.10.50" family="IPv4" prefer_interface="eth1"/>
<apache name="apache" server_root="/etc/httpd" config_file="conf/httpd.conf" shutdown_wait="60"/>
</resources>

Do not forget to increase the config_version parameter number, and make sure that you validate the cluster configuration file with every change. In the following screenshot, you can see the command used to validate the CMAN configuration.

After we've validated our configuration, we can distribute the cluster configuration file to the other nodes. In this screenshot, you can see the command used to distribute the CMAN configuration file from the node-1 cluster node to the other cluster nodes.

Services

Cluster services are defined within the <rm> RGManager tag of the CMAN configuration file, after the cluster resources tag. They begin with the <service> tag and end with the </service> tag. The syntax used to define a service is as follows:

<service service_options> </service>

The resources within the cluster services are references to the globally configured cluster resources. The order of the cluster resources configured within the cluster service is important, because it is also the resource start order. The syntax for cluster resource configuration within the cluster service is as follows:

<service service_options>
<resource_agent_name ref="referenced_cluster_resource_name"/>
</service>

The service options can be the following:

- Autostart (parameter: autostart="1"): This parameter starts services when RGManager starts. By default, RGManager starts all services when it is started and quorum is present.
- Noautostart (parameter: autostart="0"): This parameter disables the start of all services when RGManager starts.
- Restart recovery (parameter: recovery="restart"): This is RGManager's default recovery policy. On failure, RGManager will restart the service on the same cluster node. If the service restart fails, RGManager will relocate the service to another operational cluster node.
- Relocate recovery (parameter: recovery="relocate"): On failure, RGManager will try to start the service on other operational cluster nodes.
- Disable recovery (parameter: recovery="disable"): On failure, RGManager will place the service in the disabled state.
- Restart disable recovery (parameter: recovery="restart-disable"): On failure, RGManager will try to restart the service on the same cluster node. If the restart fails, it will place the service in the disabled state.
Additional restart policy extensions are available, as follows:

- Maximum restarts (parameter: max_restarts="N", where N is the desired integer value): This parameter specifies the maximum number of service restarts before additional recovery policy actions are taken.
- Restart expire time (parameter: restart_expire_time="N", where N is the desired integer value in seconds): This parameter configures the time to remember a restart event.

We can configure a web server cluster service from the previously configured IP address and Apache web server resources with the following CMAN configuration file syntax:

<service name="webserver" autostart="1" recovery="relocate">
<ip ref="192.168.10.50"/>
<apache ref="apache"/>
</service>

A minimal configuration of a web server cluster service requires a cluster IP address and an Apache web server resource. The name parameter defines a unique name for the web server cluster service. The autostart parameter defines an automatic start of the webserver cluster service on RGManager startup. The recovery parameter configures relocation of the web server cluster service to another cluster node in the event of failure.

We can add the web server cluster service to the example CMAN configuration file we are building, as follows:

<resources>
<ip address="192.168.10.50" family="IPv4" prefer_interface="eth1"/>
<apache name="apache" server_root="/etc/httpd" config_file="conf/httpd.conf" shutdown_wait="60"/>
</resources>
<service name="webserver" autostart="1" recovery="relocate">
<ip ref="192.168.10.50"/>
<apache ref="apache"/>
</service>

Do not forget to increase the config_version parameter, and make sure you validate the cluster configuration file with every change. In the following screenshot, we can see the command used to validate the CMAN configuration.

After you've validated your configuration, you can distribute the cluster configuration file to the other nodes. In this screenshot, we can see the command used to distribute the CMAN configuration file from the node-1 cluster node to the other cluster nodes.

With the final distribution of the cluster configuration, the cluster service is configured and RGManager starts the cluster service called webserver. You can use the clustat command to check whether the webserver cluster service was successfully started and which cluster node it is running on. In the following screenshot, you can see the clustat command issued on the node-1 cluster node.

Let's take a look at the following columns of the clustat output:

- Service Name: This column shows the name of the service as configured in the CMAN configuration file.
- Owner: This column lists the node the service is running on or was last running on.
- State: This column provides information about the status of the service.

Managing cluster services

Once you have configured the cluster services as you like, you must learn how to manage them. We can manage cluster services with the clusvcadm command and additional parameters. The syntax of the clusvcadm command is as follows:

clusvcadm [parameter]

With the clusvcadm command, you can perform the following actions:

- Disable service (syntax: clusvcadm -d <service_name>): This stops the cluster service and puts it into the disabled state. This is the only permitted operation if the service in question is in the failed state.
- Start service (syntax: clusvcadm -e <service_name> -m <cluster_node>): This starts a non-running cluster service, optionally on the cluster node provided with the -m parameter.
- Relocate service (syntax: clusvcadm -r <service_name> -m <cluster_node>): This stops the cluster service and starts it on a different cluster node, as provided with the -m parameter.
- Migrate service (syntax: clusvcadm -M <service_name> -m <cluster_node>): Note that this applies only to virtual machine live migrations.
- Restart service (syntax: clusvcadm -R <service_name>): This stops and starts a cluster service on the same cluster node.
- Stop service (syntax: clusvcadm -s <service_name>): This stops the cluster service and keeps it on the current cluster node in the stopped state.
- Freeze service (syntax: clusvcadm -Z <service_name>): This keeps the cluster service running on the current cluster node but disables service status checks and service failover in the event of a cluster node failure.
- Unfreeze service (syntax: clusvcadm -U <service_name>): This takes the cluster service out of the frozen state and re-enables service status checks and failover.

We can continue with the previous example and relocate the webserver cluster service from the currently running node-1 cluster node to the node-3 cluster node. To achieve cluster service relocation, the clusvcadm command with the relocate parameter must be used, as follows. In the following screenshot, we can see the command issued to relocate the webserver cluster service to the node-3 cluster node.

The clusvcadm command is the cluster service command used to administer and manage cluster services. The -r webserver parameter specifies that we want to relocate the cluster service named webserver, and the -m node-3 parameter specifies where we want to relocate it to. Once the relocation command completes, the webserver cluster service will be running on the node-3 cluster node, and the clustat command confirms this. In this screenshot, we can see that the webserver cluster service was successfully relocated to the node-3 cluster node.

We can easily stop the webserver cluster service by issuing the appropriate command. In the following screenshot, we can see the command used to stop the webserver cluster service.

The -s webserver parameter provides the information required to stop the cluster service named webserver. Another look at the clustat command should show that the webserver cluster service has stopped; it also shows that the last owner of the webserver cluster service was the node-3 cluster node. In this screenshot, we can see the output of the clustat command, showing that the webserver cluster service is stopped and that its last owner was the node-3 cluster node.

If we want to start the webserver cluster service on the node-1 cluster node, we can do this by issuing the appropriate command. In the following screenshot, we can see the command used to start the webserver cluster service on the node-1 cluster node.

The -e webserver parameter provides the information needed to start the webserver cluster service.
The -m node-1 parameter specifies that the webserver cluster service should be started on the node-1 cluster node. As expected, another look at the clustat command should make it clear that the webserver cluster service has started on the node-1 cluster node. In this screenshot, you can see the output of the clustat command, showing that the webserver cluster service is running on the node-1 cluster node.

Removing cluster resources and services

Removing cluster resources and services is the reverse of adding them. Resources and services are removed by editing the CMAN configuration file and removing the lines that define the resources or services you would like to remove. When removing cluster resources, it is important to verify that the resources are not being used within any of the configured or running cluster services. As always, when editing the CMAN configuration file, the config_version parameter must be increased. Once the CMAN configuration file is edited, you must run the CMAN configuration validation check for errors. When the CMAN configuration file validation succeeds, you can distribute it to all other cluster nodes.

The procedure for removing cluster resources and services is as follows:

1. Remove the desired cluster resources and services and increase the config_version number.
2. Validate the CMAN configuration file.
3. Distribute the CMAN configuration file to all other nodes.

We can proceed to remove the webserver cluster service from our example cluster configuration. Edit the CMAN configuration file, remove the webserver cluster service definition, and remember to increase the config_version number. Validate your cluster configuration with every CMAN configuration file change. In this screenshot, we can see the command used to validate the CMAN configuration.

When your cluster configuration is valid, you can distribute the CMAN configuration file to all other cluster nodes. In the following screenshot, we can see the command used to distribute the CMAN configuration file from the node-1 cluster node to the other cluster nodes.

Once the cluster configuration is distributed to all cluster nodes, the webserver cluster service will be stopped and removed. The clustat command then shows no service configured or running. In the following screenshot, we can see that the output of the clustat command shows no cluster service called webserver existing in the cluster.

Summary

In this article, you learned how to add and remove cluster failover domains, cluster resources, and cluster services. You also learned how to start, stop, and migrate cluster services from one cluster node to another, and how to remove cluster resources and services from a running cluster configuration.

Resources for Article:

Further resources on this subject:
- Replication [article]
- Managing public and private groups [article]
- Installing CentOS [article]