
How-To Tutorials - Programming


Creating Dynamic Maps

Packt
27 Jan 2017
15 min read
In this article by Joel Lawhead, author of the book, QGIS Python Programming Cookbook - Second Edition, we will cover the following recipes: Setting a transparent layer fill Using a filled marker symbol Rendering a single band raster using a color ramp algorithm Setting a feature's color using a column in a CSV file Creating a complex vector layer symbol Using an outline for font markers Using arrow symbols (For more resources related to this topic, see here.) Setting a transparent layer fill Sometimes, you may just want to display the outline of a polygon in a layer and have the insides of the polygon render transparently, so you can see the other features and background layers inside that space. For example, this technique is common with political boundaries. In this recipe, we will load a polygon layer onto the map, and then interactively change it to just an outline of the polygon. Getting ready Download the zipped shapefile and extract it to your qgis_data directory into a folder named ms from https://github.com/GeospatialPython/Learn/raw/master/Mississippi.zip. How to do it… In the following steps, we'll load a vector polygon layer, set up a properties dictionary to define the color and style, apply the properties to the layer's symbol, and repaint the layer. In Python Console, execute the following: Create the polygon layer: lyr = QgsVectorLayer("/qgis_data/ms/mississippi.shp", "Mississippi", "ogr") Load the layer onto the map: QgsMapLayerRegistry.instance().addMapLayer(lyr) Now, we’ll create the properties dictionary: properties = {} Next, set each property for the fill color, border color, border width, and a style of no meaning no-brush. Note that we’ll still set a fill color; we are just making it transparent: properties["color"] = '#289e26' properties["color_border"] = '#289e26' properties["width_border"] = '2' properties["style"] = 'no' Now, we create a new symbol and set its new property: sym = QgsFillSymbolV2.createSimple(properties) Next, we access the layer's renderer: renderer = lyr.rendererV2() Then, we set the renderer's symbol to the new symbol we created: renderer.setSymbol(sym) Finally, we repaint the layer to show the style updates: lyr.triggerRepaint() How it works… In this recipe, we used a simple dictionary to define our properties combined with the createSimple method of the QgsFillSymbolV2 class. Note that we could have changed the symbology of the layer before adding it to the canvas, but adding it first allows you to see the change take place interactively. Using a filled marker symbol A newer feature of QGIS is filled marker symbols. Filled marker symbols are powerful features that allow you to use other symbols, such as point markers, lines, and shapebursts as a fill pattern for a polygon. Filled marker symbols allow for an endless set of options for rendering a polygon. In this recipe, we'll do a very simple filled marker symbol that paints a polygon with stars. Getting ready Download the zipped shapefile and extract it to your qgis_data directory into a folder named ms from https://github.com/GeospatialPython/Learn/raw/master/Mississippi.zip. How to do it… A filled marker symbol requires us to first create the representative star point marker symbol. Then, we'll add that symbol to the filled marker symbol and change it with the layer's default symbol. 
Finally, we'll repaint the layer to update the symbology: First, create the layer with our polygon shapefile: lyr = QgsVectorLayer("/qgis_data/ms/mississippi.shp", "Mississippi", "ogr") Next, load the layer onto the map: QgsMapLayerRegistry.instance().addMapLayer(lyr) Now, set up the dictionary with the properties of the star marker symbol: marker_props = {} marker_props["color"] = 'red' marker_props["color_border"] = 'black' marker_props["name"] = 'star' marker_props["size"] = '3' Now, create the star marker symbol: marker = QgsMarkerSymbolV2.createSimple(marker_props) Then, we create our filled marker symbol: filled_marker = QgsPointPatternFillSymbolLayer() We need to set the horizontal and vertical spacing of the filled markers in millimeters: filled_marker.setDistanceX(4.0) filled_marker.setDistanceY(4.0) Now, we can add the simple star marker to the filled marker symbol: filled_marker.setSubSymbol(marker) Next, access the layer's renderer: renderer = lyr.rendererV2() Now, we swap the first symbol layer of the first symbol with our filled marker using zero indexes to reference them: renderer.symbols()[0].changeSymbolLayer(0, filled_marker) Finally, we repaint the layer to see the changes: lyr.triggerRepaint() Verify that the result looks similar to the following screenshot: Rendering a single band raster using a color ramp algorithm A color ramp allows you to render a raster using just a few colors to represent different ranges of cell values that have a similar meaning in order to group them. The approach that will be used in this recipe is the most common way to render elevation data. Getting ready You can download a sample DEM from https://github.com/GeospatialPython/Learn/raw/master/dem.zip, which you can unzip in a directory named rasters in your qgis_data directory. How to do it... In the following steps, we will set up objects to color a raster, create a list establishing the color ramp ranges, apply the ramp to the layer renderer, and finally, add the layer to the map. To do this, we need to perform the following: First, we import the QtGui library for color objects in Python Console: from PyQt4 import QtGui Next, we load the raster layer, as follows: lyr = QgsRasterLayer("/qgis_data/rasters/dem.asc", "DEM") Now, we create a generic raster shader object: s = QgsRasterShader() Then, we instantiate the specialized ramp shader object: c = QgsColorRampShader() We must name a type for the ramp shader. 
In this case, we use an INTERPOLATED shader: c.setColorRampType(QgsColorRampShader.INTERPOLATED) Now, we'll create a list of our color ramp definitions: i = [] Then, we populate the list with the color ramp values that correspond to the elevation value ranges: i.append(QgsColorRampShader.ColorRampItem(400, QtGui.QColor('#d7191c'), '400')) i.append(QgsColorRampShader.ColorRampItem(900, QtGui.QColor('#fdae61'), '900')) i.append(QgsColorRampShader.ColorRampItem(1500, QtGui.QColor('#ffffbf'), '1500')) i.append(QgsColorRampShader.ColorRampItem(2000, QtGui.QColor('#abdda4'), '2000')) i.append(QgsColorRampShader.ColorRampItem(2500, QtGui.QColor('#2b83ba'), '2500')) Now, we assign the color ramp to our shader: c.setColorRampItemList(i) Now, we tell the generic raster shader to use the color ramp: s.setRasterShaderFunction(c) Next, we create a raster renderer object with the shader: ps = QgsSingleBandPseudoColorRenderer(lyr.dataProvider(), 1, s) We assign the renderer to the raster layer: lyr.setRenderer(ps) Finally, we add the layer to the canvas in order to view it: QgsMapLayerRegistry.instance().addMapLayer(lyr) How it works… While it takes a stack of four objects to create a color ramp, this recipe demonstrates how flexible the PyQGIS API is. Typically, the more number of objects it takes to accomplish an operation in QGIS, the richer the API is, giving you the flexibility to make complex maps. Notice that in each ColorRampItem object, you specify a starting elevation value, the color, and a label as the string. The range for the color ramp ends at any value less than the following item. So, in this case, the first color will be assigned to the cells with a value between 400 and 899. The following screenshot shows the applied color ramp: Setting a feature's color using a column in a CSV file Comma Separated Value (CSV) files are an easy way to store basic geospatial information. But you can also store styling properties alongside the geospatial data for QGIS to use in order to dynamically style the feature data. In this recipe, we'll load some points into QGIS from a CSV file and use one of the columns to determine the color of each point. Getting ready Download the sample zipped CSV file from the following URL: https://github.com/GeospatialPython/Learn/raw/master/point_colors.csv.zip Extract it and place it in your qgis_data directory in a directory named shapes. How to do it… We'll load the CSV file into QGIS as a vector layer and create a default point symbol. Then we'll specify the property and the CSV column we want to control. Finally we'll assign the symbol to the layer and add the layer to the map: First, create the URI string needed to load the CSV: uri = "file:///qgis_data/shapes/point_colors.csv?" 
uri += "type=csv&" uri += "xField=X&yField=Y&" uri += "spatialIndex=no&" uri += "subsetIndex=no&" uri += "watchFile=no&" uri += "crs=epsg:4326" Next, create the layer using the URI string: lyr = QgsVectorLayer(uri,"Points","delimitedtext") Now, create a default symbol for the layer's geometry type: sym = QgsSymbolV2.defaultSymbol(lyr.geometryType()) Then, we access the layer's symbol layer: symLyr = sym.symbolLayer(0) Now, we perform the key step, which is to assign a symbol layer property to a CSV column: symLyr.setDataDefinedProperty("color", '"COLOR"') Then, we change the existing symbol layer with our data-driven symbol layer: lyr.rendererV2().symbols()[0].changeSymbolLayer(0, symLyr) Finally, we add the layer to the map and verify that each point has the correct color, as defined in the CSV: QgsMapLayerRegistry.instance().addMapLayers([lyr]) How it works… In this example, we pulled feature colors from the CSV, but you could control any symbol layer property in this manner. CSV files can be a simple alternative to databases for lightweight applications or for testing key parts of a large application before investing the overhead to set up a database. Creating a complex vector layer symbol The true power of QGIS symbology lies in its ability to stack multiple symbols in order to create a single complex symbol. This ability makes it possible to create virtually any type of map symbol you can imagine. In this recipe, we'll merge two symbols to create a single symbol and begin unlocking the potential of complex symbols. Getting ready For this recipe, we will need a line shapefile, which you can download and extract from https://github.com/GeospatialPython/Learn/raw/master/paths.zip. Add this shapefile to a directory named shapes in your qgis_data directory. How to do it… Using Python Console, we will create a classic railroad line symbol by placing a series of short, rotated line markers along a regular line symbol. 
To do this, we need to perform the following steps:

First, we load our line shapefile:
lyr = QgsVectorLayer("/qgis_data/shapes/paths.shp", "Route", "ogr")

Next, we get the symbol list and reference the default symbol:
symbolList = lyr.rendererV2().symbols()
symbol = symbolList[0]

Then, we create a shorter variable name for the symbol layer registry:
symLyrReg = QgsSymbolLayerV2Registry

Now, we set up the line style for a simple line using a Python dictionary:
lineStyle = {'width':'0.26', 'color':'0,0,0'}

Then, we create an abstract symbol layer for a simple line:
symLyr1Meta = symLyrReg.instance().symbolLayerMetadata("SimpleLine")

We instantiate a symbol layer from the abstract layer using the line style properties:
symLyr1 = symLyr1Meta.createSymbolLayer(lineStyle)

Now, we add the symbol layer to the layer's symbol:
symbol.appendSymbolLayer(symLyr1)

Now, in order to create the rails on the railroad, we begin building a marker line style with another Python dictionary, as follows:
markerStyle = {}
markerStyle['width'] = '0.26'
markerStyle['color'] = '0,0,0'
markerStyle['interval'] = '3'
markerStyle['interval_unit'] = 'MM'
markerStyle['placement'] = 'interval'
markerStyle['rotate'] = '1'

Then, we create the marker line abstract symbol layer for the second symbol:
symLyr2Meta = symLyrReg.instance().symbolLayerMetadata("MarkerLine")

We instantiate the symbol layer, as shown here:
symLyr2 = symLyr2Meta.createSymbolLayer(markerStyle)

Now, we must work with a subsymbol that defines the markers along the marker line:
sybSym = symLyr2.subSymbol()

We must delete the default subsymbol:
sybSym.deleteSymbolLayer(0)

Now, we set up the style for our rail marker using a dictionary:
railStyle = {'size':'2', 'color':'0,0,0', 'name':'line', 'angle':'0'}

Now, we repeat the process of building a symbol layer and add it to the subsymbol:
railMeta = symLyrReg.instance().symbolLayerMetadata("SimpleMarker")
rail = railMeta.createSymbolLayer(railStyle)
sybSym.appendSymbolLayer(rail)

Then, we add the second symbol layer, with its marker subsymbol, to the layer's symbol:
symbol.appendSymbolLayer(symLyr2)

Finally, we add the layer to the map:
QgsMapLayerRegistry.instance().addMapLayer(lyr)

How it works…
First, we must create a simple line symbol. The marker line, by itself, will render correctly, but the underlying simple line will be a randomly chosen color. We must also change the subsymbol of the marker line because the default subsymbol is a simple circle.

Using an outline for font markers
Font markers open up broad possibilities for icons, but a single-color shape can be hard to see across a varied map background. Recently, QGIS added the ability to place outlines around font marker symbols. In this recipe, we'll use font marker symbol methods to place an outline around the symbol to give it contrast and, therefore, visibility on any type of background.

Getting ready
Download the following zipped shapefile. Extract it and place it in a directory named ms in your qgis_data directory: https://github.com/GeospatialPython/Learn/raw/master/tourism_points.zip

How to do it…
This recipe will load a layer from a shapefile, set up a font marker symbol, put an outline on it, and then add it to the layer.
We'll use a simple text character, an @ sign, as our font marker to keep things simple:

First, we need to import the QtGui library, so we can work with color objects:
from PyQt4.QtGui import *

Now, we create a path string to our shapefile:
src = "/qgis_data/ms/tourism_points.shp"

Next, we can create the layer:
lyr = QgsVectorLayer(src, "Points of Interest", "ogr")

Then, we can create the font marker symbol, specifying the font size and color in the constructor:
symLyr = QgsFontMarkerSymbolLayerV2(pointSize=16, color=QColor("cyan"))

Now, we can set the font family, character, outline width, and outline color:
symLyr.setFontFamily("'Arial'")
symLyr.setCharacter("@")
symLyr.setOutlineWidth(.5)
symLyr.setOutlineColor(QColor("black"))

We are now ready to assign the symbol to the layer:
lyr.rendererV2().symbols()[0].changeSymbolLayer(0, symLyr)

Finally, we add the layer to the map and verify the result on the map canvas:
QgsMapLayerRegistry.instance().addMapLayer(lyr)

How it works…
We used class methods to set this symbol up, but we could have used a property dictionary just as easily. Note that the font size and color were set in the object constructor for the font marker symbol instead of using setter methods. QgsFontMarkerSymbolLayerV2 doesn't have methods for these two properties.

Using arrow symbols
Line features convey location, but sometimes you also need to convey a direction along a line. QGIS recently added a symbol that does just that by turning lines into arrows. In this recipe, we'll symbolize some line features showing historical human migration routes around the world. This data requires directional arrows for us to understand it.

Getting ready
We will use two shapefiles in this example. One is a world boundaries shapefile and the other is a route shapefile. You can download the countries shapefile here: https://github.com/GeospatialPython/Learn/raw/master/countries.zip You can download the routes shapefile here: https://github.com/GeospatialPython/Learn/raw/master/human_migration_routes.zip Download these ZIP files and unzip the shapefiles into your qgis_data directory.

How to do it…
We will load the countries shapefile as a background reference layer and then the routes shapefile. Before we display the layers on the map, we'll create the arrow symbol layer, configure it, and then add it to the routes layer. Finally, we'll add the layers to the map.

First, we'll create the path strings for the two shapefiles:
countries_shp = "/qgis_data/countries.shp"
routes_shp = "/qgis_data/human_migration_routes.shp"

Next, we'll create our countries and routes layers:
countries = QgsVectorLayer(countries_shp, "Countries", "ogr")
routes = QgsVectorLayer(routes_shp, "Human Migration Routes", "ogr")

Now, we'll create the arrow symbol layer:
symLyr = QgsArrowSymbolLayer()

Then, we'll configure the layer. We'll use the default configuration except for two parameters: curving the arrow and not repeating the arrow symbol for each line segment:
symLyr.setIsCurved(True)
symLyr.setIsRepeated(False)

Next, we add the symbol layer to the map layer:
routes.rendererV2().symbols()[0].changeSymbolLayer(0, symLyr)

Finally, we add the layers to the map and verify the result on the map canvas:
QgsMapLayerRegistry.instance().addMapLayers([routes, countries])

How it works…
The symbol calculates the arrow's direction based on the order of the feature's points.
You may find that you need to edit the underlying feature data to produce the desired visual effect, especially when using curved arrows. You have limited control over the arc of the curve using the end points plus an optional third vertex. This symbol is one of several new powerful visual effects added to QGIS that would previously have been done in a vector illustration program after you produced a map.

Summary
In this article, we programmatically created dynamic maps using Python to control every aspect of the QGIS map canvas. We learnt to dynamically apply symbology from data in a CSV file. We also learnt how to use some newer QGIS custom symbology, including font markers, arrow symbols, null symbols, and the powerful new 2.5D renderer for buildings. We saw that every aspect of QGIS is up for grabs with Python when you write your own application. Sometimes, the PyQGIS API may not directly support our application goal, but there is nearly always a way to accomplish what you set out to do with QGIS.

Resources for Article:
Further resources on this subject: Normal maps [article] Putting the Fun in Functional Python [article] Revisiting Linux Network Basics [article]
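As a quick reference, here is the first recipe (the transparent layer fill) collected into a single block that can be pasted into the QGIS 2.x Python Console. It repeats exactly the steps walked through above and assumes the same /qgis_data/ms path used there:

# Transparent layer fill, consolidated from the steps in this article (QGIS 2.x API)
lyr = QgsVectorLayer("/qgis_data/ms/mississippi.shp", "Mississippi", "ogr")
QgsMapLayerRegistry.instance().addMapLayer(lyr)

# Define the fill, border, and the 'no' brush style that makes the fill transparent
properties = {}
properties["color"] = '#289e26'
properties["color_border"] = '#289e26'
properties["width_border"] = '2'
properties["style"] = 'no'

# Build the symbol, hand it to the layer's renderer, and repaint
sym = QgsFillSymbolV2.createSimple(properties)
renderer = lyr.rendererV2()
renderer.setSymbol(sym)
lyr.triggerRepaint()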

Building Scalable Microservices

Packt
18 Jan 2017
33 min read
In this article by Vikram Murugesan, the author of the book Microservices Deployment Cookbook, we will see a brief introduction to concept of the microservices. (For more resources related to this topic, see here.) Writing microservices with Spring Boot Now that our project is ready, let's look at how to write our microservice. There are several Java-based frameworks that let you create microservices. One of the most popular frameworks from the Spring ecosystem is the Spring Boot framework. In this article, we will look at how to create a simple microservice application using Spring Boot. Getting ready Any application requires an entry point to start the application. For Java-based applications, you can write a class that has the main method and run that class as a Java application. Similarly, Spring Boot requires a simple Java class with the main method to run it as a Spring Boot application (microservice). Before you start writing your Spring Boot microservice, you will also require some Maven dependencies in your pom.xml file. How to do it… Create a Java class called com.packt.microservices.geolocation.GeoLocationApplication.java and give it an empty main method: package com.packt.microservices.geolocation; public class GeoLocationApplication { public static void main(String[] args) { // left empty intentionally } } Now that we have our basic template project, let's make our project a child project of Spring Boot's spring-boot-starter-parent pom module. This module has a lot of prerequisite configurations in its pom.xml file, thereby reducing the amount of boilerplate code in our pom.xml file. At the time of writing this, 1.3.6.RELEASE was the most recent version: <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.3.6.RELEASE</version> </parent> After this step, you might want to run a Maven update on your project as you have added a new parent module. If you see any warnings about the version of the maven-compiler plugin, you can either ignore it or just remove the <version>3.5.1</version> element. If you remove the version element, please perform a Maven update afterward. Spring Boot has the ability to enable or disable Spring modules such as Spring MVC, Spring Data, and Spring Caching. In our use case, we will be creating some REST APIs to consume the geolocation information of the users. So we will need Spring MVC. Add the following dependencies to your pom.xml file: <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> </dependencies> We also need to expose the APIs using web servers such as Tomcat, Jetty, or Undertow. Spring Boot has an in-memory Tomcat server that starts up as soon as you start your Spring Boot application. So we already have an in-memory Tomcat server that we could utilize. Now let's modify the GeoLocationApplication.java class to make it a Spring Boot application: package com.packt.microservices.geolocation; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class GeoLocationApplication { public static void main(String[] args) { SpringApplication.run(GeoLocationApplication.class, args); } } As you can see, we have added an annotation, @SpringBootApplication, to our class. 
The @SpringBootApplication annotation reduces the number of lines of code written by adding the following three annotations implicitly: @Configuration @ComponentScan @EnableAutoConfiguration If you are familiar with Spring, you will already know what the first two annotations do. @EnableAutoConfiguration is the only annotation that is part of Spring Boot. The AutoConfiguration package has an intelligent mechanism that guesses the configuration of your application and automatically configures the beans that you will likely need in your code. You can also see that we have added one more line to the main method, which actually tells Spring Boot the class that will be used to start this application. In our case, it is GeoLocationApplication.class. If you would like to add more initialization logic to your application, such as setting up the database or setting up your cache, feel free to add it here. Now that our Spring Boot application is all set to run, let's see how to run our microservice. Right-click on GeoLocationApplication.java from Package Explorer, select Run As, and then select Spring Boot App. You can also choose Java Application instead of Spring Boot App. Both the options ultimately do the same thing. You should see something like this on your STS console: If you look closely at the console logs, you will notice that Tomcat is being started on port number 8080. In order to make sure our Tomcat server is listening, let's run a simple curl command. cURL is a command-line utility available on most Unix and Mac systems. For Windows, use tools such as Cygwin or even Postman. Postman is a Google Chrome extension that gives you the ability to send and receive HTTP requests. For simplicity, we will use cURL. Execute the following command on your terminal: curl http://localhost:8080 This should give us an output like this: {"timestamp":1467420963000,"status":404,"error":"Not Found","message":"No message available","path":"/"} This error message is being produced by Spring. This verifies that our Spring Boot microservice is ready to start building on with more features. There are more configurations that are needed for Spring Boot, which we will perform later in this article along with Spring MVC. Writing microservices with WildFly Swarm WildFly Swarm is a J2EE application packaging framework from RedHat that utilizes the in-memory Undertow server to deploy microservices. In this article, we will create the same GeoLocation API using WildFly Swarm and JAX-RS. To avoid confusion and dependency conflicts in our project, we will create the WildFly Swarm microservice as its own Maven project. This article is just here to help you get started on WildFly Swarm. When you are building your production-level application, it is your choice to either use Spring Boot, WildFly Swarm, Dropwizard, or SparkJava based on your needs. Getting ready Similar to how we created the Spring Boot Maven project, create a Maven WAR module with the groupId com.packt.microservices and name/artifactId geolocation-wildfly. Feel free to use either your IDE or the command line. Be aware that some IDEs complain about a missing web.xml file. We will see how to fix that in the next section. How to do it… Before we set up the WildFly Swarm project, we have to fix the missing web.xml error. The error message says that Maven expects to see a web.xml file in your project as it is a WAR module, but this file is missing in your project. In order to fix this, we have to add and configure maven-war-plugin. 
Add the following code snippet to your pom.xml file's project section: <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>2.6</version> <configuration> <failOnMissingWebXml>false</failOnMissingWebXml> </configuration> </plugin> </plugins> </build> After adding the snippet, save your pom.xml file and perform a Maven update. Also, if you see that your project is using a Java version other than 1.8. Again, perform a Maven update for the changes to take effect. Now, let's add the dependencies required for this project. As we know that we will be exposing our APIs, we have to add the JAX-RS library. JAX-RS is the standard JSR-compliant API for creating RESTful web services. JBoss has its own version of JAX-RS. So let's  add that dependency to the pom.xml file: <dependencies> <dependency> <groupId>org.jboss.spec.javax.ws.rs</groupId> <artifactId>jboss-jaxrs-api_2.0_spec</artifactId> <version>1.0.0.Final</version> <scope>provided</scope> </dependency> </dependencies> The one thing that you have to note here is the provided scope. The provide scope in general means that this JAR need not be bundled with the final artifact when it is built. Usually, the dependencies with provided scope will be available to your application either via your web server or application server. In this case, when Wildfly Swarm bundles your app and runs it on the in-memory Undertow server, your server will already have this dependency. The next step toward creating the GeoLocation API using Wildfly Swarm is creating the domain object. Use the com.packt.microservices.geolocation.GeoLocation.java file. Now that we have the domain object, there are two classes that you need to create in order to write your first JAX-RS web service. The first of those is the Application class. The Application class in JAX-RS is used to define the various components that you will be using in your application. It can also hold some metadata about your application, such as your basePath (or ApplicationPath) to all resources listed in this Application class. In this case, we are going to use /geolocation as our basePath. Let's see how that looks: package com.packt.microservices.geolocation; import javax.ws.rs.ApplicationPath; import javax.ws.rs.core.Application; @ApplicationPath("/geolocation") public class GeoLocationApplication extends Application { public GeoLocationApplication() {} } There are two things to note in this class; one is the Application class and the other is the @ApplicationPath annotation—both of which we've already talked about. Now let's move on to the resource class, which is responsible for exposing the APIs. If you are familiar with Spring MVC, you can compare Resource classes to Controllers. They are responsible for defining the API for any specific resource. The annotations are slightly different from that of Spring MVC. Let's create a new resource class called com.packt.microservices.geolocation.GeoLocationResource.java that exposes a simple GET API: package com.packt.microservices.geolocation; import java.util.ArrayList; import java.util.List; import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.Produces; @Path("/") public class GeoLocationResource { @GET @Produces("application/json") public List<GeoLocation> findAll() { return new ArrayList<>(); } } All the three annotations, @GET, @Path, and @Produces, are pretty self explanatory. 
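One thing the steps above take for granted is the GeoLocation domain object itself; the book provides it, but it is not listed in this article. A minimal sketch that is consistent with the JSON payloads used later (userId, timestamp, latitude, longitude) could look like the following; the field names and types are assumptions, not the book's actual class:

package com.packt.microservices.geolocation;

// Hypothetical stand-in for the GeoLocation domain object referenced throughout
// this article: a plain POJO so the JSON provider can serialize it.
public class GeoLocation {

    private String userId;    // UUID of the user reporting the position
    private long timestamp;   // epoch seconds of the reading
    private double latitude;
    private double longitude;

    public String getUserId() { return userId; }
    public void setUserId(String userId) { this.userId = userId; }

    public long getTimestamp() { return timestamp; }
    public void setTimestamp(long timestamp) { this.timestamp = timestamp; }

    public double getLatitude() { return latitude; }
    public void setLatitude(double latitude) { this.latitude = latitude; }

    public double getLongitude() { return longitude; }
    public void setLongitude(double longitude) { this.longitude = longitude; }
}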
Before we start writing the APIs and the service class, let's test the application from the command line to make sure it works as expected. With the current implementation, any GET request sent to the /geolocation URL should return an empty JSON array.

So far, we have created the RESTful APIs using JAX-RS, and it is still just another JAX-RS project. In order to make it a microservice using WildFly Swarm, all you have to do is add the wildfly-swarm-plugin to the Maven pom.xml file. This plugin is tied to the package phase of the build, so whenever the package goal is triggered, the plugin will create an uber JAR with all required dependencies. An uber JAR is just a fat JAR that has all of its dependencies bundled inside itself. It also deploys our application in an in-memory Undertow server. Add the following snippet to the plugins section of the pom.xml file:

<plugin>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>wildfly-swarm-plugin</artifactId>
  <version>1.0.0.Final</version>
  <executions>
    <execution>
      <id>package</id>
      <goals>
        <goal>package</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Now execute the mvn clean package command from the project's root directory, and wait for the Maven build to succeed. If you look at the console logs, you can see that wildfly-swarm-plugin creates the uber JAR with all of its dependencies. After the build is successful, you will find two artifacts in the target directory of your project. The geolocation-wildfly-0.0.1-SNAPSHOT.war file is the final WAR created by the maven-war-plugin. The geolocation-wildfly-0.0.1-SNAPSHOT-swarm.jar file is the uber JAR created by the wildfly-swarm-plugin. Execute the following command in the same terminal to start your microservice:

java -jar target/geolocation-wildfly-0.0.1-SNAPSHOT-swarm.jar

After executing this command, you will see that Undertow has started on port number 8080, exposing the geolocation resource we created. Execute the following cURL command in a separate terminal window to make sure our API is exposed. The response of the command should be [], indicating there are no geolocations yet:

curl http://localhost:8080/geolocation

Now let's build the service class and finish the APIs that we started. For simplicity, we are going to store the geolocations in a collection in the service class itself. In a real-world scenario, you would write repository classes or DAOs that talk to the database holding your geolocations. Get the com.packt.microservices.geolocation.GeoLocationService.java interface. We'll use the same interface here. Create a new class called com.packt.microservices.geolocation.GeoLocationServiceImpl.java that implements the GeoLocationService interface:

package com.packt.microservices.geolocation;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class GeoLocationServiceImpl implements GeoLocationService {

    private static List<GeoLocation> geolocations = new ArrayList<>();

    @Override
    public GeoLocation create(GeoLocation geolocation) {
        geolocations.add(geolocation);
        return geolocation;
    }

    @Override
    public List<GeoLocation> findAll() {
        return Collections.unmodifiableList(geolocations);
    }
}

Now that our service class is implemented, let's finish building the APIs. We already have a very basic stubbed-out GET API. Let's introduce the service class to the resource class and call the findAll method.
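The GeoLocationService interface itself is carried over from earlier in the book and is not listed in this article. A minimal version consistent with the implementation above would simply declare the two methods; treat this as a sketch rather than the book's exact source:

package com.packt.microservices.geolocation;

import java.util.List;

// Assumed shape of the GeoLocationService interface implemented above.
public interface GeoLocationService {

    GeoLocation create(GeoLocation geolocation);

    List<GeoLocation> findAll();
}

With the interface in view, we can now finish the resource class.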
Similarly, let's use the service's create method for POST API calls. Add the following snippet to GeoLocationResource.java: private GeoLocationService service = new GeoLocationServiceImpl(); @GET @Produces("application/json") public List<GeoLocation> findAll() { return service.findAll(); } @POST @Produces("application/json") @Consumes("application/json") public GeoLocation create(GeoLocation geolocation) { return service.create(geolocation); } We are now ready to test our application. Go ahead and build your application. After the build is successful, run your microservice: let's try to create two geolocations using the POST API and later try to retrieve them using the GET method. Execute the following cURL commands in your terminal one by one: curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 41.803488, "longitude": -88.144040}' http://localhost:8080/geolocation This should give you something like the following output (pretty-printed for readability): { "latitude": 41.803488, "longitude": -88.14404, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 9.568012, "longitude": 77.962444}' http://localhost:8080/geolocation This command should give you an output similar to the following (pretty-printed for readability): { "latitude": 9.568012, "longitude": 77.962444, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } To verify whether your entities were stored correctly, execute the following cURL command: curl http://localhost:8080/geolocation This should give you an output like this (pretty-printed for readability): [ { "latitude": 41.803488, "longitude": -88.14404, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 }, { "latitude": 9.568012, "longitude": 77.962444, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } ] Whatever we have seen so far will give you a head start in building microservices with WildFly Swarm. Of course, there are tons of features that WildFly Swarm offers. Feel free to try them out based on your application needs. I strongly recommend going through the WildFly Swarm documentation for any advanced usages. Writing microservices with Dropwizard Dropwizard is a collection of libraries that help you build powerful applications quickly and easily. The libraries vary from Jackson, Jersey, Jetty, and so on. You can take a look at the full list of libraries on their website. This ecosystem of libraries that help you build powerful applications could be utilized to create microservices as well. As we saw earlier, it utilizes Jetty to expose its services. In this article, we will create the same GeoLocation API using Dropwizard and Jersey. To avoid confusion and dependency conflicts in our project, we will create the Dropwizard microservice as its own Maven project. This article is just here to help you get started with Dropwizard. When you are building your production-level application, it is your choice to either use Spring Boot, WildFly Swarm, Dropwizard, or SparkJava based on your needs. Getting ready Similar to how we created other Maven projects,  create a Maven JAR module with the groupId com.packt.microservices and name/artifactId geolocation-dropwizard. Feel free to use either your IDE or the command line. 
After the project is created, if you see that your project is using a Java version other than 1.8. Perform a Maven update for the change to take effect. How to do it… The first thing that you will need is the dropwizard-core Maven dependency. Add the following snippet to your project's pom.xml file: <dependencies> <dependency> <groupId>io.dropwizard</groupId> <artifactId>dropwizard-core</artifactId> <version>0.9.3</version> </dependency> </dependencies> Guess what? This is the only dependency you will need to spin up a simple Jersey-based Dropwizard microservice. Before we start configuring Dropwizard, we have to create the domain object, service class, and resource class: com.packt.microservices.geolocation.GeoLocation.java com.packt.microservices.geolocation.GeoLocationService.java com.packt.microservices.geolocation.GeoLocationImpl.java com.packt.microservices.geolocation.GeoLocationResource.java Let's see what each of these classes does. The GeoLocation.java class is our domain object that holds the geolocation information. The GeoLocationService.java class defines our interface, which is then implemented by the GeoLocationServiceImpl.java class. If you take a look at the GeoLocationServiceImpl.java class, we are using a simple collection to store the GeoLocation domain objects. In a real-time scenario, you will be persisting these objects in a database. But to keep it simple, we will not go that far. To be consistent with the previous, let's change the path of GeoLocationResource to /geolocation. To do so, replace @Path("/") with @Path("/geolocation") on line number 11 of the GeoLocationResource.java class. We have now created the service classes, domain object, and resource class. Let's configure Dropwizard. In order to make your project a microservice, you have to do two things: Create a Dropwizard configuration class. This is used to store any meta-information or resource information that your application will need during runtime, such as DB connection, Jetty server, logging, and metrics configurations. These configurations are ideally stored in a YAML file, which will them be mapped to your Configuration class using Jackson. In this application, we are not going to use the YAML configuration as it is out of scope for this article. If you would like to know more about configuring Dropwizard, refer to their Getting Started documentation page at http://www.dropwizard.io/0.7.1/docs/getting-started.html. Let's  create an empty Configuration class called GeoLocationConfiguration.java: package com.packt.microservices.geolocation; import io.dropwizard.Configuration; public class GeoLocationConfiguration extends Configuration { } The YAML configuration file has a lot to offer. Take a look at a sample YAML file from Dropwizard's Getting Started documentation page to learn more. The name of the YAML file is usually derived from the name of your microservice. The microservice name is usually identified by the return value of the overridden method public String getName() in your Application class. 
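For illustration only, surfacing that name would be a one-line override in the Application class we are about to write; this recipe does not actually need it, and the method body shown here is an assumption rather than code from the book:

@Override
public String getName() {
    // the conventional service name, which the matching YAML file is usually named after
    return "geolocation";
}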
Now let's create the GeoLocationApplication.java application class: package com.packt.microservices.geolocation; import io.dropwizard.Application; import io.dropwizard.setup.Environment; public class GeoLocationApplication extends Application<GeoLocationConfiguration> { public static void main(String[] args) throws Exception { new GeoLocationApplication().run(args); } @Override public void run(GeoLocationConfiguration config, Environment env) throws Exception { env.jersey().register(new GeoLocationResource()); } } There are a lot of things going on here. Let's look at them one by one. Firstly, this class extends Application with the GeoLocationConfiguration generic. This clearly makes an instance of your GeoLocationConfiguraiton.java class available so that you have access to all the properties you have defined in your YAML file at the same time mapped in the Configuration class. The next one is the run method. The run method takes two arguments: your configuration and environment. The Environment instance is a wrapper to other library-specific objects such as MetricsRegistry, HealthCheckRegistry, and JerseyEnvironment. For example, we could register our Jersey resources using the JerseyEnvironment instance. The env.jersey().register(new GeoLocationResource())line does exactly that. The main method is pretty straight-forward. All it does is call the run method. Before we can start the microservice, we have to configure this project to create a runnable uber JAR. Uber JARs are just fat JARs that bundle their dependencies in themselves. For this purpose, we will be using the maven-shade-plugin. Add the following snippet to the build section of the pom.xml file. If this is your first plugin, you might want to wrap it in a <plugins> element under <build>: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> <version>2.3</version> <configuration> <createDependencyReducedPom>true</createDependencyReducedPom> <filters> <filter> <artifact>*:*</artifact> <excludes> <exclude>META-INF/*.SF</exclude> <exclude>META-INF/*.DSA</exclude> <exclude>META-INF/*.RSA</exclude> </excludes> </filter> </filters> </configuration> <executions> <execution> <phase>package</phase> <goals> <goal>shade</goal> </goals> <configuration> <transformers> <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer" /> <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"> <mainClass>com.packt.microservices.geolocation.GeoLocationApplication</mainClass> </transformer> </transformers> </configuration> </execution> </executions> </plugin> The previous snippet does the following: It creates a runnable uber JAR that has a reduced pom.xml file that does not include the dependencies that are added to the uber JAR. To learn more about this property, take a look at the documentation of maven-shade-plugin. It utilizes com.packt.microservices.geolocation.GeoLocationApplication as the class whose main method will be invoked when this JAR is executed. This is done by updating the MANIFEST file. It excludes all signatures from signed JARs. This is required to avoid security errors. Now that our project is properly configured, let's try to build and run it from the command line. To build the project, execute mvn clean package from the project's root directory in your terminal. This will create your final JAR in the target directory. 
Execute the following command to start your microservice: java -jar target/geolocation-dropwizard-0.0.1-SNAPSHOT.jar server The server argument instructs Dropwizard to start the Jetty server. After you issue the command, you should be able to see that Dropwizard has started the in-memory Jetty server on port 8080. If you see any warnings about health checks, ignore them. Your console logs should look something like this: We are now ready to test our application. Let's try to create two geolocations using the POST API and later try to retrieve them using the GET method. Execute the following cURL commands in your terminal one by one: curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 41.803488, "longitude": -88.144040}' http://localhost:8080/geolocation This should give you an output similar to the following (pretty-printed for readability): { "latitude": 41.803488, "longitude": -88.14404, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 9.568012, "longitude": 77.962444}' http://localhost:8080/geolocation This should give you an output like this (pretty-printed for readability): { "latitude": 9.568012, "longitude": 77.962444, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } To verify whether your entities were stored correctly, execute the following cURL command: curl http://localhost:8080/geolocation It should give you an output similar to the following (pretty-printed for readability): [ { "latitude": 41.803488, "longitude": -88.14404, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 }, { "latitude": 9.568012, "longitude": 77.962444, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 } ] Excellent! You have created your first microservice with Dropwizard. Dropwizard offers more than what we have seen so far. Some of it is out of scope for this article. I believe the metrics API that Dropwizard uses could be used in any type of application. Writing your Dockerfile So far in this article, we have seen how to package our application and how to install Docker. Now that we have our JAR artifact and Docker set up, let's see how to Dockerize our microservice application using Docker. Getting ready In order to Dockerize our application, we will have to tell Docker how our image is going to look. This is exactly the purpose of a Dockerfile. A Dockerfile has its own syntax (or Dockerfile instructions) and will be used by Docker to create images. Throughout this article, we will try to understand some of the most commonly used Dockerfile instructions as we write our Dockerfile for the geolocation tracker microservice. How to do it… First, open your STS IDE and create a new file called Dockerfile in the geolocation project. The first line of the Dockerfile is always the FROM instruction followed by the base image that you would like to create your image from. There are thousands of images on Docker Hub to choose from. In our case, we would need something that already has Java installed on it. There are some images that are official, meaning they are well documented and maintained. Docker Official Repositories are very well documented, and they follow best practices and standards. Docker has its own team to maintain these repositories. 
This is essential in order to keep the repositories clear and trustworthy, thus helping the user make the right choice of repository. To read more about Docker Official Repositories, take a look at https://docs.docker.com/docker-hub/official_repos/ We will be using the Java official repository. To find the official repository, go to hub.docker.com and search for java. You have to choose the one that says official. At the time of writing this, the Java image documentation says it will soon be deprecated in favor of the openjdk image. So the first line of our Dockerfile will look like this:

FROM openjdk:8

As you can see, we have used version (or tag) 8 for our image. If you are wondering what type of operating system this image uses, take a look at the Dockerfile of this image, which you can get from the Docker Hub page. Docker images are usually tagged with the version of the software they are written for. That way, it is easy for users to pick from. The next step is creating a directory for our project where we will store our JAR artifact. Add this as your next line:

RUN mkdir -p /opt/packt/geolocation

This is a simple Unix command that creates the /opt/packt/geolocation directory. The -p flag instructs it to create the intermediate directories if they don't exist. Now let's create an instruction that will add the JAR file that was created on your local machine into the container at /opt/packt/geolocation:

ADD target/geolocation-0.0.1-SNAPSHOT.jar /opt/packt/geolocation/

As you can see, we are picking up the uber JAR from the target directory and dropping it into the /opt/packt/geolocation directory of the container. Take a look at the / at the end of the target path. It indicates that the JAR has to be copied into the directory. Before we can start the application, there is one thing we have to do, that is, expose the ports that we would like to be mapped to the Docker host ports. In our case, the in-memory Tomcat instance is running on port 8080. In order to be able to map port 8080 of our container to any port on our Docker host, we have to expose it first. For that, we will use the EXPOSE instruction. Add the following line to your Dockerfile:

EXPOSE 8080

Now that we are ready to start the app, let's go ahead and tell Docker how to start a container for this image. For that, we will use the CMD instruction:

CMD ["java", "-jar", "/opt/packt/geolocation/geolocation-0.0.1-SNAPSHOT.jar"]

There are two things we have to note here. One is the way we are starting the application, and the other is how the command is broken down into comma-separated strings. First, let's talk about how we start the application. You might be wondering why we haven't used the mvn spring-boot:run command to start the application. Keep in mind that this command will be executed inside the container, and our container does not have Maven installed, only OpenJDK 8. If you would like to use the Maven command, take that as an exercise, and try to install Maven on your container and use the mvn command to start the application. Now that we know we have Java installed, we are issuing a very simple java -jar command to run the JAR. In fact, the Spring Boot Maven plugin internally issues the same command. The next thing is how the command has been broken down into comma-separated strings. This is a standard that the CMD instruction follows. To keep it simple, keep in mind that whatever command you would like to run when the container starts, just break it down into comma-separated strings, split on whitespace.
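To make that concrete, here are the two equivalent ways of writing the same instruction; the exec form used in this article is the first one, while the shell form below it is shown only for comparison:

# exec form (used here): Docker runs the java binary directly
CMD ["java", "-jar", "/opt/packt/geolocation/geolocation-0.0.1-SNAPSHOT.jar"]

# shell form: the same command, run through /bin/sh -c
CMD java -jar /opt/packt/geolocation/geolocation-0.0.1-SNAPSHOT.jar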
Your final Dockerfile should look something like this:

FROM openjdk:8
RUN mkdir -p /opt/packt/geolocation
ADD target/geolocation-0.0.1-SNAPSHOT.jar /opt/packt/geolocation/
EXPOSE 8080
CMD ["java", "-jar", "/opt/packt/geolocation/geolocation-0.0.1-SNAPSHOT.jar"]

This Dockerfile is one of the simplest implementations. Dockerfiles can sometimes get bigger due to the fact that you need a lot of customizations to your image. In such cases, it is a good idea to break it down into multiple images that can be reused and maintained separately. There are some best practices to follow whenever you create your own Dockerfile and image. Though we haven't covered them here, as they are out of the scope of this article, you should still take a look at them and follow them. To learn more about the various Dockerfile instructions, go to https://docs.docker.com/engine/reference/builder/.

Building your Docker image
We created the Dockerfile that will be used in this article to create an image for our microservice. If you are wondering why we need an image, it is the only way we can ship our software to any system. Once you have your image created and uploaded to a common repository, it will be easier to pull your image from any location.

Getting ready
Before you jump right into it, it might be a good idea to get yourself familiar with some of the most commonly used Docker commands. In this article, we will use the build command. Take a look at this URL to understand the other commands: https://docs.docker.com/engine/reference/commandline/#/image-commands. After familiarizing yourself with the commands, open up a new terminal, and change your directory to the root of the geolocation project. Make sure your docker-machine instance is running. If it is not running, use the docker-machine start command to run your docker-machine instance:

docker-machine start default

If you have to configure your shell for the default Docker machine, go ahead and execute the following command:

eval $(docker-machine env default)

How to do it…
From the terminal, issue the following docker build command:

docker build -t packt/geolocation .

We'll try to understand the command later. For now, let's see what happens after you issue the preceding command. You should see Docker downloading the openjdk image from Docker Hub. Once the image has been downloaded, you will see that Docker tries to validate each and every instruction provided in the Dockerfile. When the last instruction has been processed, you will see a message saying Successfully built. This says that your image has been successfully built. Now let's try to understand the command. There are three things to note here: The first thing is the docker build command itself. The docker build command is used to build a Docker image from a Dockerfile. It needs at least one input, which is usually the location of the Dockerfile. Dockerfiles can be renamed to something other than Dockerfile and can be referred to using the -f option of the docker build command. An instance of this being used is when teams have different Dockerfiles for different build environments, for example, using DockerfileDev for the dev environment, DockerfileStaging for the staging environment, and DockerfileProd for the production environment. It is still encouraged as best practice to use other Docker options in order to keep the same Dockerfile for all environments. The second thing is the -t option, which takes the name of the repo and a tag.
In our case, we have not mentioned the tag, so by default, it will pick up latest as the tag. If you look at the repo name, it is different from the official openjdk image name. It has two parts: packt and geolocation. It is always a good practice to put the Docker Hub account name followed by the actual image name as the name of your repo. For now, we will use packt as our account name, we will see how to create our own Docker Hub account and use that account name here. The third thing is the dot at the end. The dot operator says that the Dockerfile is located in the current directory, or the present working directory to be more precise. Let's go ahead and verify whether our image was created. In order to do that, issue the following command on your terminal: docker images The docker images command is used to list down all images available in your Docker host. After issuing the command, you should see something like this: As you can see, the newly built image is listed as packt/geolocation in your Docker host. The tag for this image is latest as we did not specify any. The image ID uniquely identifies your image. Note the size of the image. It is a few megabytes bigger than the openjdk:8 image. That is most probably because of the size of our executable uber JAR inside the container. Now that we know how to build an image using an existing Dockerfile, we are at the end of this article. This is just a very quick intro to the docker build command. There are more options that you can provide to the command, such as CPUs and memory. To learn more about the docker build command, take a look at this page: https://docs.docker.com/engine/reference/commandline/build/ Running your microservice as a Docker container We successfully created our Docker image in the Docker host. Keep in mind that if you are using Windows or Mac, your Docker host is the VirtualBox VM and not your local computer. In this article, we will look at how to spin off a container for the newly created image. Getting ready To spin off a new container for our packt/geolocation image, we will use the docker run command. This command is used to run any command inside your container, given the image. Open your terminal and go to the root of the geolocation project. If you have to start your Docker machine instance, do so using the docker-machine start command, and set the environment using the docker-machine env command. How to do it… Go ahead and issue the following command on your terminal: docker run packt/geolocation Right after you run the command, you should see something like this: Yay! We can see that our microservice is running as a Docker container. But wait—there is more to it. Let's see how we can access our microservice's in-memory Tomcat instance. Try to run a curl command to see if our app is up and running: Open a new terminal instance and execute the following cURL command in that shell: curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 41.803488, "longitude": -88.144040}' http://localhost:8080/geolocation Did you get an error message like this? curl: (7) Failed to connect to localhost port 8080: Connection refused Let's try to understand what happened here. Why would we get a connection refused error when our microservice logs clearly say that it is running on port 8080? 
Yes, you guessed it right: the microservice is not running on your local computer; it is actually running inside the container, which in turn is running inside your Docker host. Here, your Docker host is the VirtualBox VM called default. So we have to replace localhost with the IP of the container. But getting the IP of the container is not straightforward. That is the reason we are going to map port 8080 of the container to the same port on the VM. This mapping will make sure that any request made to port 8080 on the VM is forwarded to port 8080 of the container.

Now go to the shell that is currently running your container and stop the container; usually, Ctrl + C will do the job. After your container is stopped, issue the following command:

docker run -p 8080:8080 packt/geolocation

The -p option does the port mapping from Docker host to container. The port number to the left of the colon indicates the port number of the Docker host, and the port number to the right of the colon indicates that of the container. In our case, both of them are the same. After you execute the previous command, you should see the same logs that you saw before.

We are not done yet. We still have to find the IP that we have to use to hit our RESTful endpoint. The IP that we have to use is the IP of our Docker Machine VM. To find the IP of the docker-machine instance, execute the following command in a new terminal instance:

docker-machine ip default

This should give you the IP of the VM. Let's say the IP that you received was 192.168.99.100. Now, replace localhost in your cURL command with this IP, and execute the cURL command again:

curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 41.803488, "longitude": -88.144040}' http://192.168.99.100:8080/geolocation

This should give you an output similar to the following (pretty-printed for readability):

{ "latitude": 41.803488, "longitude": -88.14404, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "timestamp": 1468203975 }

This confirms that you are able to access your microservice from the outside. Take a moment to understand how the port mapping is done: a request from your machine goes to port 8080 of the Docker Machine VM, which forwards it to port 8080 of the container.

Summary

We looked at an example of a geolocation tracker application to see how it can be broken down into smaller, more manageable services. Next, we saw how to create the GeoLocationTracker service using the Spring Boot framework.

Resources for Article: Further resources on this subject: Domain-Driven Design [article] Breaking into Microservices Architecture [article] A capability model for microservices [article]
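Putting the two recipes together, here is a minimal end-to-end sketch of the commands covered above, assuming the default Docker Machine setup described earlier; the explicit 1.0 tag and the $(docker-machine ip default) substitution are illustrative choices rather than requirements:

# Point the shell at the Docker host VM (matches the earlier setup)
docker-machine start default
eval $(docker-machine env default)

# Build the image from the Dockerfile in the current directory, with an explicit tag
docker build -t packt/geolocation:1.0 .

# Run the container, mapping port 8080 of the Docker host VM to port 8080 of the container
docker run -p 8080:8080 packt/geolocation:1.0

# From another terminal, call the endpoint using the VM's IP instead of localhost
curl -H "Content-Type: application/json" -X POST -d '{"timestamp": 1468203975, "userId": "f1196aac-470e-11e6-beb8-9e71128cae77", "latitude": 41.803488, "longitude": -88.144040}' http://$(docker-machine ip default):8080/geolocation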

Web Framework Behavior Tuning

Packt
12 Jan 2017
8 min read
In this article by Alex Antonov, the author of the book Spring Boot Cookbook – Second Edition, learn to use and configure spring resources and build your own Spring-based application using Spring Boot. In this article, you will learn about the following topics: Configuring route matching patterns Configuring custom static path mappings Adding custom connectors (For more resources related to this topic, see here.) Introduction We will look into enhancing our web application by doing behavior tuning, configuring the custom routing rules and patterns, adding additional static asset paths, and adding and modifying servlet container connectors and other properties, such as enabling SSL. Configuring route matching patterns When we build web applications, it is not always the case that a default, out-of-the-box, mapping configuration is applicable. At times, we want to create our RESTful URLs that contain characters such as . (dot), which Spring treats as a delimiter defining format, like path.xml, or we might not want to recognize a trailing slash, and so on. Conveniently, Spring provides us with a way to get this accomplished with ease. Let's imagine that the ISBN format does allow the use of dots to separate the book number from the revision with a pattern looking like [isbn-number].[revision]. How to do it… We will configure our application to not use the suffix pattern match of .* and not to strip the values after the dot when parsing the parameters. Let's perform the following steps: Let's add the necessary configuration to our WebConfiguration class with the following content: @Override public void configurePathMatch(PathMatchConfigurer configurer) { configurer.setUseSuffixPatternMatch(false). setUseTrailingSlashMatch(true); } Start the application by running ./gradlew clean bootRun. Let's open http://localhost:8080/books/978-1-78528-415-1.1 in the browser to see the following results: If we enter the correct ISBN, we will see a different result, as shown in the following screenshot: How it works… Let's look at what we did in detail. The configurePathMatch(PathMatchConfigurer configurer) method gives us an ability to set our own behavior in how we want Spring to match the request URL path to the controller parameters: configurer.setUseSuffixPatternMatch(false): This method indicates that we don't want to use the .* suffix so as to strip the trailing characters after the last dot. This translates into Spring parsing out 978-1-78528-415-1.1 as an {isbn} parameter for BookController. So, http://localhost:8080/books/978-1-78528-415-1.1 and http://localhost:8080/books/978-1-78528-415-1 will become different URLs. configurer.setUseTrailingSlashMatch(true): This method indicates that we want to use the trailing / in the URL as a match, as if it were not there. This effectively makes http://localhost:8080/books/978-1-78528-415-1 the same as http://localhost:8080/books/978-1-78528-415-1/. If you want to do further configuration on how the path matching takes place, you can provide your own implementation of PathMatcher and UrlPathHelper, but these will be required in the most extreme and custom-tailored situations and are not generally recommended. Configuring custom static path mappings It is possible to control how our web application deals with static assets and the files that exist on the filesystem or are bundled in the deployable archive. 
Let's say that we want to expose our internal application.properties file via the static web URL of http://localhost:8080/internal/application.properties from our application. To get started with this, proceed with the steps in the next section. How to do it… Let's add a new method, addResourceHandlers, to the WebConfiguration class with the following content: @Override public void addResourceHandlers(ResourceHandlerRegistry registry) { registry.addResourceHandler("/internal/**").addResourceLocations("classpath:/"); } Start the application by running ./gradlew clean bootRun. Let's open http://localhost:8080/internal/application.properties in the browser to see the following results: How it works… The method that we overrode, addResourceHandlers(ResourceHandlerRegistry registry), is another configuration method from WebMvcConfigurer, which gives us an ability to define custom mappings for static resource URLs and connect them with the resources on the filesystem or application classpath. In our case, we defined a mapping of anything that is being accessed via the / internal URL to be looked for in classpath:/ of our application. (For production environment, you probably don't want to expose the entire classpath as a static resource!) So, let's take a detailed look at what we did, as follows: registry.addResourceHandler("/internal/**"): This method adds a resource handler to the registry to handle our static resources, and it returns ResourceHandlerRegistration to us, which can be used to further configure the mapping in a chained fashion. /internal/** is a path pattern that will be used to match against the request URL using PathMatcher. We have seen how PathMatcher can be configured in the previous example but, by default, an AntPathMatcher implementation is used. We can configure more than one URL pattern to be matched to a particular resource location. addResourceLocations("classpath:/"):This method is called on the newly created instance of ResourceHandlerRegistration, and it defines the directories where the resources should be loaded from. These should be valid filesystems or classpath directories, and there can be more than one entered. If multiple locations are provided, they will be checked in the order in which they were entered. setCachePeriod (Integer cachePeriod): Using this method, we can also configure a caching interval for the given resource by adding custom connectors. Another very common scenario in the enterprise application development and deployment is to run the application with two separate HTTP port connectors: one for HTTP and the other for HTTPS. Adding custom connectors Another very common scenario in the enterprise application development and deployment is to run the application with two separate HTTP port connectors: one for HTTP and the other for HTTPS. Getting ready For this recipe, we will undo the changes that we implemented in the previous example. In order to create an HTTPS connector, we will need a few things; but, most importantly, we will need to generate a certificate keystore that is used to encrypt and decrypt the SSL communication with the browser. If you are using Unix or Mac, you can do it by running the following command: $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA On Windows, this can be achieved via the following code: "%JAVA_HOME%binkeytool" -genkey -alias tomcat -keyalg RSA During the creation of the keystore, you should enter the information that is appropriate to you, including passwords, name, and so on. 
For the purpose of this book, we will use the default password: changeit. Once the execution is complete, a newly generated keystore file will appear in your home directory under the name .keystore. You can find more information about preparing the certificate keystore at https://tomcat.apache.org/tomcat-8.0-doc/ssl-howto.html#Prepare_the_Certificate_Keystore. How to do it… With the keystore creation complete, we will need to create a separate properties file in order to store our configuration for the HTTPS connector, such as port and others. After that, we will create a configuration property binding object and use it to configure our new connector. Perform the following steps: First, we will create a new properties file named tomcat.https.properties in the src/main/resources directory from the root of our project with the following content: custom.tomcat.https.port=8443 custom.tomcat.https.secure=true custom.tomcat.https.scheme=https custom.tomcat.https.ssl=true custom.tomcat.https.keystore=${user.home}/.keystore custom.tomcat.https.keystore-password=changeit Next, we will create a nested static class named TomcatSslConnectorProperties in our WebConfiguration, with the following content: @ConfigurationProperties(prefix = "custom.tomcat.https") public static class TomcatSslConnectorProperties { private Integer port; private Boolean ssl= true; private Boolean secure = true; private String scheme = "https"; private File keystore; private String keystorePassword; //Skipping getters and setters to save space, but we do need them public void configureConnector(Connector connector) { if (port != null) connector.setPort(port); if (secure != null) connector.setSecure(secure); if (scheme != null) connector.setScheme(scheme); if (ssl!= null) connector.setProperty("SSLEnabled", ssl.toString()); if (keystore!= null &&keystore.exists()) { connector.setProperty("keystoreFile", keystore.getAbsolutePath()); connector.setProperty("keystorePassword", keystorePassword); } } } Now, we will need to add our newly created tomcat.http.properties file as a Spring Boot property source and enable TomcatSslConnectorProperties to be bound. This can be done by adding the following code right above the class declaration of the WebConfiguration class: @Configuration @PropertySource("classpath:/tomcat.https.properties") @EnableConfigurationProperties(WebConfiguration.TomcatSslConnectorProperties.class) public class WebConfiguration extends WebMvcConfigurerAdapter {...} Finally, we will need to create an EmbeddedServletContainerFactory Spring bean where we will add our HTTPS connector. We will do that by adding the following code to the WebConfiguration class: @Bean public EmbeddedServletContainerFactory servletContainer(TomcatSslConnectorProperties properties) { TomcatEmbeddedServletContainerFactory tomcat = new TomcatEmbeddedServletContainerFactory(); tomcat.addAdditionalTomcatConnectors( createSslConnector(properties)); return tomcat; } private Connector createSslConnector(TomcatSslConnectorProperties properties) { Connector connector = new Connector(); properties.configureConnector(connector); return connector; } Start the application by running ./gradlew clean bootRun. Let's open https://localhost:8443/internal/tomcat.https.properties in the browser to see the following results: Summary In this article, you learned how to fine-tune the behavior of a web application. This article has given a small gist about custom routes, asset paths, and amending routing patterns. 
You also learned how to add more connectors to the servlet container. Resources for Article: Further resources on this subject: Introduction to Spring Framework [article] Setting up Microsoft Bot Framework Dev Environment [article] Creating our first bot, WebBot [article]
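As a quick consolidated reference, the following is a hedged sketch of the two WebConfiguration hooks discussed in this article, written in the same Spring Boot 1.x style (extending WebMvcConfigurerAdapter); the import statements shown are the usual Spring MVC ones and may need adjusting for your Spring Boot version:

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.PathMatchConfigurer;
import org.springframework.web.servlet.config.annotation.ResourceHandlerRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
public class WebConfiguration extends WebMvcConfigurerAdapter {

    @Override
    public void configurePathMatch(PathMatchConfigurer configurer) {
        // Keep dots in path variables (e.g. ISBN revisions) and tolerate a trailing slash
        configurer.setUseSuffixPatternMatch(false)
                  .setUseTrailingSlashMatch(true);
    }

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        // Expose classpath resources under /internal/** (not recommended for production)
        registry.addResourceHandler("/internal/**")
                .addResourceLocations("classpath:/");
    }
}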

Hello, C#! Welcome, .NET Core!

Packt
11 Jan 2017
10 min read
In this article by Mark J. Price, author of the book C# 7 and .NET Core: Modern Cross-Platform Development-Second Edition, we will discuss about setting up your development environment; understanding the similarities and differences between .NET Core, .NET Framework, .NET Standard Library, and .NET Native. Most people learn complex topics by imitation and repetition rather than reading a detailed explanation of theory. So, I will not explain every keyword and step. This article covers the following topics: Setting up your development environment Understanding .NET (For more resources related to this topic, see here.) Setting up your development environment Before you start programming, you will need to set up your Interactive Development Environment (IDE) that includes a code editor for C#. The best IDE to choose is Microsoft Visual Studio 2017, but it only runs on the Windows operating system. To develop on alternative operating systems such as macOS and Linux, a good choice of IDE is Microsoft Visual Studio Code. Using alternative C# IDEs There are alternative IDEs for C#, for example, MonoDevelop and JetBrains Project Rider. They each have versions available for Windows, Linux, and macOS, allowing you to write code on one operating system and deploy to the same or a different one. For MonoDevelop IDE, visit http://www.monodevelop.com/ For JetBrains Project Rider, visit https://www.jetbrains.com/rider/ Cloud9 is a web browser-based IDE, so it's even more cross-platform than the others. Here is the link: https://c9.io/web/sign-up/free Linux and Docker are popular server host platforms because they are relatively lightweight and more cost-effectively scalable when compared to operating system platforms that are more for end users, such as Windows and macOS. Using Visual Studio 2017 on Windows 10 You can use Windows 7 or later to run code, but you will have a better experience if you use Windows 10. If you don't have Windows 10, and then you can create a virtual machine (VM) to use for development. You can choose any cloud provider, but Microsoft Azure has preconfigured VMs that include properly licensed Windows 10 and Visual Studio 2017. You only pay for the minutes your VM is running, so it is a way for users of Linux, macOS, and older Windows versions to have all the benefits of using Visual Studio 2017. Since October 2014, Microsoft has made a professional-quality edition of Visual Studio available to everyone for free. It is called the Community Edition. Microsoft has combined all its free developer offerings in a program called Visual Studio Dev Essentials. This includes the Community Edition, the free level of Visual Studio Team Services, Azure credits for test and development, and free training from Pluralsight, Wintellect, and Xamarin. Installing Microsoft Visual Studio 2017 Download and install Microsoft Visual Studio Community 2017 or later: https://www.visualstudio.com/vs/visual-studio-2017/. Choosing workloads On the Workloads tab, choose the following: Universal Windows Platform development .NET desktop development Web development Azure development .NET Core and Docker development On the Individual components tab, choose the following: Git for Windows GitHub extension for Visual Studio Click Install. You can choose to install everything if you want support for languages such as C++, Python, and R. Completing the installation Wait for the software to download and install. When the installation is complete, click Launch. 
While you wait for Visual Studio 2017 to install, you can jump to the Understanding .NET section in this article. Signing in to Visual Studio The first time that you run Visual Studio 2017, you will be prompted to sign in. If you have a Microsoft account, for example, a Hotmail, MSN, Live, or Outlook e-mail address, you can use that account. If you don't, then register for a new one at the following link: https://signup.live.com/ You will see the Visual Studio user interface with the Start Page open in the central area. Like most Windows desktop applications, Visual Studio has a menu bar, a toolbar for common commands, and a status bar at the bottom. On the right is the Solution Explorer window that will list your open projects: To have quick access to Visual Studio in the future, right-click on its entry in the Windows taskbar and select Pin this program to taskbar. Using older versions of Visual Studio The free Community Edition has been available since Visual Studio 2013 with Update 4. If you want to use a free version of Visual Studio older than 2013, then you can use one of the more limited Express editions. A lot of the code in this book will work with older versions if you bear in mind when the following features were introduced: Year C# Features 2005 2 Generics with <T> 2008 3 Lambda expressions with => and manipulating sequences with LINQ (from, in, where, orderby, ascending, descending, select, group, into) 2010 4 Dynamic typing with dynamic and multithreading with Task 2012 5 Simplifying multithreading with async and await 2015 6 string interpolation with $"" and importing static types with using static 2017 7 Tuples (with deconstruction), patterns, out variables, literal improvements Understanding .NET .NET Framework, .NET Core, .NET Standard Library, and .NET Native are related and overlapping platforms for developers to build applications and services upon. Understanding the .NET Framework platform Microsoft's .NET Framework is a development platform that includes a Common Language Runtime (CLR) that manages the execution of code and a rich library of classes for building applications. Microsoft designed the .NET Framework to have the possibility of being cross-platform, but Microsoft put their implementation effort into making it work best with Windows. Practically speaking, the .NET Framework is Windows-only. Understanding the Mono and Xamarin projects Third parties developed a cross-platform .NET implementation named the Mono project (http://www.mono-project.com/). Mono is cross-platform, but it fell well behind the official implementation of .NET Framework. It has found a niche as the foundation of the Xamarin mobile platform. Microsoft purchased Xamarin and now includes what used to be an expensive product for free with Visual Studio 2017. Microsoft has renamed the Xamarin Studio development tool to Visual Studio for the Mac. Xamarin is targeted at mobile development and building cloud services to support mobile apps. Understanding the .NET Core platform Today, we live in a truly cross-platform world. Modern mobile and cloud development have made Windows a much less important operating system. So, Microsoft has been working on an effort to decouple the .NET Framework from its close ties with Windows. While rewriting .NET to be truly cross-platform, Microsoft has taken the opportunity to refactor .NET to remove major parts that are no longer considered core. 
This new product is branded as the .NET Core, which includes a cross-platform implementation of the CLR known as CoreCLR and a streamlined library of classes known as CoreFX. Streamlining .NET .NET Core is much smaller than the current version of the .NET Framework because a lot has been removed. For example, Windows Forms and Windows Presentation Foundation (WPF) can be used to build graphical user interface (GUI) applications, but they are tightly bound to Windows, so they have been removed from the .NET Core. The latest technology for building Windows apps is the Universal Windows Platform (UWP). ASP.NET Web Forms and Windows Communication Foundation (WCF) are old web applications and service technologies that fewer developers choose to use today, so they have also been removed from the .NET Core. Instead, developers prefer to use ASP.NET MVC and ASP.NET Web API. These two technologies have been refactored and combined into a new product that runs on the .NET Core named ASP.NET Core. The Entity Framework (EF) 6.x is an object-relational mapping technology for working with data stored in relational databases such as Oracle and Microsoft SQL Server. It has gained baggage over the years, so the cross-platform version has been slimmed down and named Entity Framework Core. Some data types in .NET that are included with both the .NET Framework and the .NET Core have been simplified by removing some members. For example, in the .NET Framework, the File class has both a Close and Dispose method, and either can be used to release the file resources. In .NET Core, there is only the Dispose method. This reduces the memory footprint of the assembly and simplifies the API you have to learn. The .NET Framework 4.6 is about 200 MB. The .NET Core is about 11 MB. Eventually, the .NET Core may grow to a similar larger size. Microsoft's goal is not to make the .NET Core smaller than the .NET Framework. The goal is to componentize .NET Core to support modern technologies and to have fewer dependencies so that deployment requires only those components that your application really needs. Understanding the .NET Standard The situation with .NET today is that there are three forked .NET platforms, all controlled by Microsoft: .NET Framework, Xamarin, and .NET Core. Each have different strengths and weaknesses. This has led to the problem that a developer must learn three platforms, each with annoying quirks and limitations. So, Microsoft is working on defining the .NET Standard 2.0: a set of APIs that all .NET platforms must implement. Today, in 2016, there is the .NET Standard 1.6, but only .NET Core 1.0 supports it; .NET Framework and Xamarin do not! .NET Standard 2.0 will be implemented by .NET Framework, .NET Core, and Xamarin. For .NET Core, this will add many of the missing APIs that developers need to port old code written for .NET Framework to the new cross-platform .NET Core. .NET Standard 2.0 will probably be released towards the end of 2017, so I hope to write a third edition of this book for when that's finally released. The future of .NET The .NET Standard 2.0 is the near future of .NET, and it will make it much easier for developers to share code between any flavor of .NET, but we are not there yet. For cross-platform development, .NET Core is a great start, but it will take another version or two to become as mature as the current version of the .NET Framework. 
This book will focus on the .NET Core, but will use the .NET Framework when important or useful features have not (yet) been implemented in the .NET Core. Understanding the .NET Native compiler Another .NET initiative is the .NET Native compiler. This compiles C# code to native CPU instructions ahead-of-time (AoT) rather than using the CLR to compile IL just-in-time (JIT) to native code later. The .NET Native compiler improves execution speed and reduces the memory footprint for applications. It supports the following: UWP apps for Windows 10, Windows 10 Mobile, Xbox One, HoloLens, and Internet of Things (IoT) devices such as Raspberry Pi Server-side web development with ASP.NET Core Console applications for use on the command line Comparing .NET technologies The following table summarizes and compares the .NET technologies: Platform Feature set C# compiles to Host OSes .NET Framework Mature and extensive IL executed by a runtime Windows only Xamarin Mature and limited to mobile features iOS, Android, Windows Mobile .NET Core Brand-new and somewhat limited Windows, Linux, macOS, Docker .NET Native Brand-new and very limited Native code Summary In this article, we have learned how to set up the development environment, and discussed in detail about .NET technologies. Resources for Article: Further resources on this subject: Introduction to C# and .NET [article] Reactive Programming with C# [article] Functional Programming in C# [article]
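To make the C# feature table above a little more concrete, here is a small, purely illustrative console snippet that exercises a few of the listed features: lambda expressions and LINQ from C# 3, string interpolation from C# 6, and tuple deconstruction from C# 7. It is not taken from the book's projects and assumes C# 7 tooling such as Visual Studio 2017:

using System;
using System.Linq;

class Program
{
    static void Main()
    {
        // C# 3: lambda expressions and LINQ over a sequence
        var squares = Enumerable.Range(1, 5).Select(n => n * n);

        // C# 6: string interpolation with $""
        Console.WriteLine($"Squares: {string.Join(", ", squares)}");

        // C# 7: tuples with deconstruction
        var (min, max) = (1, 25);
        Console.WriteLine($"Range: {min}..{max}");
    }
}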

Data Storage in Force.com

Packt
09 Jan 2017
14 min read
In this article by Andrew Fawcett, author of the book Force.com Enterprise Architecture - Second Edition, we will discuss how it is important to consider your customers' storage needs and use cases around their data creation and consumption patterns early in the application design phase. This ensures that your object schema is the most optimum one with respect to large data volumes, data migration processes (inbound and outbound), and storage cost. In this article, we will extend the Custom Objects in the FormulaForce application as we explore how the platform stores and manages data. We will also explore the difference between your applications operational data and configuration data and the benefits of using Custom Metadata Types for configuration management and deployment. (For more resources related to this topic, see here.) You will obtain a good understanding of the types of storage provided and how the costs associated with each are calculated. It is also important to understand the options that are available when it comes to reusing or attempting to mirror the Standard Objects such as Account, Opportunity, or Product, which extend the discussion further into license cost considerations. You will also become aware of the options for standard and custom indexes over your application data. Finally, we will have some insight into new platform features for consuming external data storage from within the platform. In this article, we will cover the following topics: Mapping out end user storage requirements Understanding the different storage types Reusing existing Standard Objects Importing and exporting application data Options for replicating and archiving data External data sources Mapping out end user storage requirements During the initial requirements and design phase of your application, the best practice is to create user categorizations known as personas. Personas consider the users' typical skills, needs, and objectives. From this information, you should also start to extrapolate their data requirements, such as the data they are responsible for creating (either directly or indirectly, by running processes) and what data they need to consume (reporting). Once you have done this, try to provide an estimate of the number of records that they will create and/or consume per month. Share these personas and their data requirements with your executive sponsors, your market researchers, early adopters, and finally the whole development team so that they can keep them in mind and test against them as the application is developed. For example, in our FormulaForce application, it is likely that managers will create and consume data, whereas race strategists will mostly consume a lot of data. Administrators will also want to manage your applications configuration data. Finally, there will likely be a background process in the application, generating a lot of data, such as the process that records Race Data from the cars and drivers during the qualification stages and the race itself, such as sector (a designated portion of the track) times. You may want to capture your conclusions regarding personas and data requirements in a spreadsheet along with some formulas that help predict data storage requirements. This will help in the future as you discuss your application with Salesforce during the AppExchange Listing process and will be a useful tool during the sales cycle as prospective customers wish to know how to budget their storage costs with your application installed. 
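As a simple illustration of the kind of formula such a spreadsheet might contain, here is a hypothetical back-of-the-envelope estimate for the Race Data generated by the background process described above; the record counts are invented for the example, and the 2 KB-per-record figure is explained in the next section:

    Race Data records per race weekend:   ~50,000 (sector times and laps)   -- assumed
    Race weekends per season:             ~20                               -- assumed
    Records per season:                   50,000 x 20 = 1,000,000
    Approximate data storage per season:  1,000,000 x 2 KB ≈ 2 GB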
Understanding the different storage types The storage used by your application records contributes to the most important part of the overall data storage allocation on the platform. There is also another type of storage used by the files uploaded or created on the platform. From the Storage Usage page under the Setup menu, you can see a summary of the records used, including those that reside in the Salesforce Standard Objects. Later in this article, we will create a Custom Metadata Type object to store configuration data. Storage consumed by this type of object is not reflected on the Storage Usage page and is managed and limited in a different way. The preceding page also shows which users are using the most amount of storage. In addition to the individual's User details page, you can also locate the Used Data Space and Used File Space fields; next to these are the links to view the users' data and file storage usage. The limit shown for each is based on a calculation between the minimum allocated data storage depending on the type of organization or the number of users multiplied by a certain number of MBs, which also depends on the organization type; whichever is greater becomes the limit. For full details of this, click on the Help for this Page link shown on the page. Data storage Unlike other database platforms, Salesforce typically uses a fixed 2 KB per record size as part of its storage usage calculations, regardless of the actual number of fields or the size of the data within them on each record. There are some exceptions to this rule, such as Campaigns that take up 8 KB and stored Email Messages use up the size of the contained e-mail, though all Custom Object records take up 2 KB. Note that this record size also applies even if the Custom Object uses large text area fields. File storage Salesforce has a growing number of ways to store file-based data, ranging from the historic Document tab, to the more sophisticated Content tab, to using the Files tab, and not to mention Attachments, which can be applied to your Custom Object records if enabled. Each has its own pros and cons for end users and file size limits that are well defined in the Salesforce documentation. From the perspective of application development, as with data storage, be aware of how much your application is generating on behalf of the user and give them a means to control and delete that information. In some cases, consider if the end user would be happy to have the option to recreate the file on demand (perhaps as a PDF) rather than always having the application to store it. Reusing the existing Standard Objects When designing your object model, a good knowledge of the existing Standard Objects and their features is the key to knowing when and when not to reference them. Keep in mind the following points when considering the use of Standard Objects: From a data storage perspective: Ignoring Standard Objects creates a potential data duplication and integration effort for your end users if they are already using similar Standard Objects as pre-existing Salesforce customers. Remember that adding additional custom fields to the Standard Objects via your package will not increase the data storage consumption for those objects. From a license cost perspective: Conversely, referencing some Standard Objects might cause additional license costs for your users, since not all are available to the users without additional licenses from Salesforce. 
Make sure that you understand the differences between Salesforce (CRM) and Salesforce Platform licenses with respect to the Standard Objects available. Currently, the Salesforce Platform license provides Accounts and Contacts; however, to use the Opportunity or Product objects, a Salesforce (CRM) license is needed by the user. Refer to the Salesforce documentation for the latest details on these. Use your user personas to define what Standard Objects your users use and reference them via lookups, Apex code, and Visualforce accordingly. You may wish to use extension packages and/or dynamic Apex and SOQL to make these kind of references optional. Since Developer Edition orgs have all these licenses and objects available (although in a limited quantity), make sure that you review your Package dependencies before clicking on the Upload button each time to check for unintentional references. Importing and exporting data Salesforce provides a number of its own tools for importing and exporting data as well as a number of third-party options based on the Salesforce APIs; these are listed on AppExchange. When importing records with other record relationships, it is not possible to predict and include the IDs of related records, such as the Season record ID when importing Race records; in this section, we will present a solution to this. Salesforce provides Data Import Wizard, which is available under the Setup menu. This tool only supports Custom Objects and Custom Settings. Custom Metadata Type records are essentially considered metadata by the platform, and as such, you can use packages, developer tools, and Change Sets to migrate these records between orgs. There is an open source CSV data loader for Custom Metadata Types at https://github.com/haripriyamurthy/CustomMetadataLoader. It is straightforward to import a CSV file with a list of race Season since this is a top-level object and has no other object dependencies. However, to import the Race information (which is a child object related to Season), the Season and Fasted Lap By record IDs are required, which will typically not be present in a Race import CSV file by default. Note that IDs are unique across the platform and cannot be shared between orgs. External ID fields help address this problem by allowing Salesforce to use the existing values of such fields as a secondary means to associate records being imported that need to reference parent or related records. All that is required is that the related record Name or, ideally, a unique external ID be included in the import data file. This CSV file includes three columns: Year, Name, and Fastest Lap By (of the driver who performed the fastest lap of that race, indicated by their Twitter handle). You may remember that a Driver record can also be identified by this since the field has been defined as an External ID field. Both the 2014 Season record and the Lewis Hamilton Driver record should already be present in your packaging org. Now, run Data Import Wizard and complete the settings as shown in the following screenshot: Next, complete the field mappings as shown in the following screenshot: Click on Start Import and then on OK to review the results once the data import has completed. You should find that four new Race records have been created under 2014 Season, with the Fasted Lap By field correctly associated with the Lewis Hamilton Driver record. 
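For reference, the import file described above might look something like the following sketch. The race names and the Twitter handle value are invented placeholders, but the three columns match those described (Year, Name, and Fastest Lap By, where the last column holds the driver's Twitter handle used as the External ID):

    Year,Name,Fastest Lap By
    2014,Spain,@lewishamilton
    2014,Monaco,@lewishamilton
    2014,Canada,@lewishamilton
    2014,Austria,@lewishamilton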
Note that these tools will also stress your Apex Trigger code for volumes, as they typically have the bulk mode enabled and insert records in chunks of 200 records. Thus, it is recommended that you test your triggers to at least this level of record volumes. Options for replicating and archiving data Enterprise customers often have legacy and/or external systems that are still being used or that they wish to phase out in the future. As such, they may have requirements to replicate aspects of the data stored in the Salesforce platform to another. Likewise, in order to move unwanted data off the platform and manage their data storage costs, there is a need to archive data. The following lists some platform and API facilities that can help you and/or your customers build solutions to replicate or archive data. There are, of course, a number of AppExchange solutions listed that provide applications that use these APIs already: Replication API: This API exists in both the web service SOAP and Apex form. It allows you to develop a scheduled process to query the platform for any new, updated, or deleted records between a given time period for a specific object. The getUpdated and getDeleted API methods return only the IDs of the records, requiring you to use the conventional Salesforce APIs to query the remaining data for the replication. The frequency in which this API is called is important to avoid gaps. Refer to the Salesforce documentation for more details. Outbound Messaging: This feature offers a more real-time alternative to the replication API. An outbound message event can be configured using the standard workflow feature of the platform. This event, once configured against a given object, provides a Web Service Definition Language (WSDL) file that describes a web service endpoint to be called when records are created and updated. It is the responsibility of a web service developer to create the end point based on this definition. Note that there is no provision for deletion with this option. Bulk API: This API provides a means to move up to 5000 chunks of Salesforce data (up to 10 MB or 10,000 records per chunk) per rolling 24-hour period. Salesforce and third-party data loader tools, including the Salesforce Data Loader tool, offer this as an option. It can also be used to delete records without them going into the recycle bin. This API is ideal for building solutions to archive data. Heroku Connect is a seamless data synchronization solution between Salesforce and Heroku Postgres. For further information, refer to https://www.heroku.com/connect. External data sources One of the downsides of moving data off the platform in an archive use case or with not being able to replicate data onto the platform is that the end users have to move between applications and logins to view data; this causes an overhead as the process and data is not connected. The Salesforce Connect (previously known as Lightning Connect) is a chargeable add-on feature of the platform is the ability to surface external data within the Salesforce user interface via the so-called External Objects and External Data Sources configurations under Setup. They offer a similar functionality to Custom Objects, such as List views, Layouts, and Custom Buttons. Currently, Reports and Dashboards are not supported, though it is possible to build custom report solutions via Apex, Visualforce or Lightning Components. 
External Data Sources can be connected to existing OData-based end points and secured through OAuth or Basic Authentication. Alternatively, Apex provides a Connector API whereby developers can implement adapters to connect to other HTTP-based APIs. Depending on the capabilities of the associated External Data Source, users accessing External Objects using the data source can read and even update records through the standard Salesforce UIs such as Salesforce Mobile and desktop interfaces. Summary This article explored the declarative aspects of developing an application on the platform that applies to how an application is stored and how relational data integrity is enforced through the use of the lookup field deletion constraints and applying unique fields. Upload the latest version of the FormulaForce package and install it into your test org. The summary page during the installation of new and upgraded components should look something like the following screenshot. Note that the permission sets are upgraded during the install. Once you have installed the package in your testing org, visit the Custom Metadata Types page under Setup and click on Manage Records next to the object. You will see that the records are shown as managed and cannot be deleted. Click on one of the records to see that the field values themselves cannot also be edited. This is the effect of the Field Manageability checkbox when defining the fields. The Namespace Prefix shown here will differ from yours. Try changing or adding the Track Lap Time records in your packaging org, for example, update a track time on an existing record. Upload the package again then upgrade your test org. You will see the records are automatically updated. Conversely, any records you created in your test org will be retained between upgrades. In this article, we have now covered some major aspects of the platform with respect to packaging, platform alignment, and how your application data is stored as well as the key aspects of your application's architecture. Resources for Article: Further resources on this subject: Process Builder to the Rescue [article] Custom Coding with Apex [article] Building, Publishing, and Supporting Your Force.com Application [article]

Test-Driven Development

Packt
05 Jan 2017
19 min read
In this article by Md. Ziaul Haq, the author of the book Angular 2 Test-Driven Development, introduces you to the fundamentals of test-driven development with AngularJS, including: An overview of test-driven development (TDD) The TDD life cycle: test first, make it run, and make it better Common testing techniques (For more resources related to this topic, see here.) Angular2 is at the forefront of client-side JavaScript testing. Every Angular2 tutorial includes an accompanying test, and event test modules are a part of the core AngularJS package. The Angular2 team is focused on making testing fundamental to web development. An overview of TDD Test-driven development (TDD) is an evolutionary approach to development, where you write a test before you write just enough production code to fulfill that test and its refactoring. The following section will explore the fundamentals of TDD and how they are applied by a tailor. Fundamentals of TDD Get the idea of what to write in your code before you start writing it. This may sound cliched, but this is essentially what TDD gives you. TDD begins by defining expectations, then makes you meet the expectations, and finally, forces you to refine the changes after the expectations are met. Some of the clear benefits that can be gained by practicing TDD are as follows: No change is small: Small changes can cause a hell lot of breaking issues in the entire project. Only practicing TDD can help out, as after any change, test suit will catch the breaking points and save the project and the life of developers. Specifically identify the tasks: A test suit provides a clear vision of the tasks specifically and provides the workflow step-by-step in order to be successful. Setting up the tests first allows you to focus on only the components that have been defined in the tests. Confidence in refactoring: Refactoring involves moving, fixing, and changing a project. Tests protect the core logic from refactoring by ensuring that the logic behaves independently of the code structure. Upfront investment, benefits in future: Initially, it looks like testing kills the extra time, but it actually pays off later, when the project becomes bigger, it gives confidence to extend the feature as just running the test will get the breaking issues, if any. QA resource might be limited: In most cases, there are some limitations on QA resources as it always takes extra time for everything to be manually checked by the QA team, but writing some test case and by running them successfully will save some QA time definitely. Documentation: Tests define the expectations that a particular object or function must meet. An expectation acts as a contract and can be used to see how a method should or can be used. This makes the code readable and easier to understand. Measuring the success with different eyes TDD is not just a software development practice. The fundamental principles are shared by other craftsmen as well. One of these craftsmen is a tailor, whose success depends on precise measurements and careful planning. 
Breaking down the steps Here are the high-level steps a tailor takes to make a suit: Test first: Determining the measurements for the suit Having the customer determine the style and material they want for their suit Measuring the customer's arms, shoulders, torso, waist, and legs Making the cuts: Measuring the fabric and cutting it Selecting the fabric based on the desired style Measuring the fabric based on the customer's waist and legs Cutting the fabric based on the measurements Refactoring: Comparing the resulting product to the expected style, reviewing, and making changes Comparing the cut and look to the customer's desired style Making adjustments to meet the desired style Repeating: Test first: Determining the measurements for the pants Making the cuts: Measuring the fabric and making the cuts Refactor: Making changes based on the reviews The preceding steps are an example of a TDD approach. The measurements must be taken before the tailor can start cutting up the raw material. Imagine, for a moment, that the tailor didn't use a test-driven approach and didn't use a measuring tape (testing tool). It would be ridiculous if the tailor started cutting before measuring. As a developer, do you "cut before measuring"? Would you trust a tailor without a measuring tape? How would you feel about a developer who doesn't test? Measure twice, cut once The tailor always starts with measurements. What would happen if the tailor made cuts before measuring? What would happen if the fabric was cut too short? How much extra time would go into the tailoring? Measure twice, cut once. Software developers can choose from an endless amount of approaches to use before starting developing. One common approach is to work off a specification. A documented approach may help in defining what needs to be built; however, without tangible criteria for how to meet a specification, the actual application that gets developed may be completely different from the specification. With a TDD approach (test first, make it run, and make it better), every stage of the process verifies that the result meets the specification. Think about how a tailor continues to use a measuring tape to verify the suit throughout the process. TDD embodies a test-first methodology. TDD gives developers the ability to start with a clear goal and write code that will directly meet a specification. Develop like a professional and follow the practices that will help you write quality software. Practical TDD with JavaScript Let's dive into practical TDD in the context of JavaScript. This walk through will take you through the process of adding the multiplication functionality to a calculator. Just keep the TDD life cycle, as follows, in mind: Test first Make it run Make it better Point out the development to-do list A development to-do list helps to organize and focus on tasks specifically. It also helps to provide a surface to list down the ideas during the development process, which could be a single feature later on. Let's add the first feature in the development to-do list—add multiplication functionality: 3 * 3 = 9. The preceding list describes what needs to be done. It also provides a clear example of how to verify multiplication—3 * 3 = 9. Setting up the test suit To set up the test, let's create the initial calculator in a file, called calculator.js, and is initialized as an object as follows: var calculator = {}; The test will be run through a web browser as a simple HTML page. 
So, for that, let's create an HTML page and import calculator.js to test it and save the page as testRunner.html. To run the test, open the testRunner.html file in your web browser. The testRunner.html file will look as follows: <!DOCTYPE html> <html> <head> <title>Test Runner</title> </head> <body> <script src="calculator.js"></script> </body> </html> The test suit is ready for the project and the development to-do list for feature is ready as well. The next step is to dive into the TDD life cycle based on the feature list one by one. Test first Though it's easy to write a multiplication function and it will work as its pretty simple feature, as a part of practicing TDD, it's time to follow the TDD life cycle. The first phase of the life cycle is to write a test based on the development to-do list. Here are the steps for the first test: Open calculator.js. Create a new function to test multiplying 3 * 3: function multipleTest1() { // Test var result = calculator.multiply(3, 3); // Assert Result is expected if (result === 9) { console.log('Test Passed'); } else { console.log('Test Failed'); } }; The test calls a multiply function, which still needs to be defined. It then asserts that the results are as expected, by displaying a pass or fail message. Keep in mind that in TDD, you are looking at the use of the method and explicitly writing how it should be used. This allows you to define the interface through a use case, as opposed to only looking at the limited scope of the function being developed. The next step in the TDD life cycle is focused on making the test run. Make it run In this step, we will run the test, just as the tailor did with the suit. The measurements were taken during the test step, and now the application can be molded to fit the measurements. The following are the steps to run the test: Open testRunner.html on a web browser. Open the JavaScript developer Console window in the browser. Test will throw an error, which will be visible in the browser's developer console, as shown in the following screenshot: The thrown error is about the undefined function, which is expected as the calculator application calls a function that hasn't been created yet—calculator.multiply. In TDD, the focus is on adding the easiest change to get a test to pass. There is no need to actually implement the multiplication logic. This may seem unintuitive. The point is that once a passing test exists, it should always pass. When a method contains fairly complex logic, it is easier to run a passing test against it to ensure that it meets the expectations. What is the easiest change that can be made to make the test pass? By returning the expected value of 9, the test should pass. Although this won't add the multiply function, it will confirm the application wiring. In addition, after you have passed the test, making future changes will be easy as you have to simply keep the test passing! Now, add the multiply function and have it return the required value of 9, as illustrated: var calculator = { multiply : function() { return 9; } }; Now, let's refresh the page to rerun the test and look at the JavaScript console. The result should be as shown in the following screenshot: Yes! No more errors, there's a message showing that test has been passed. Now that there is a passing test, the next step will be to remove the hardcoded value in the multiply function. 
Make it better The refactoring step needs to remove the hardcoded return value of the multiply function that we added as the easiest solution to pass the test and will add the required logic to get the expected result. The required logic is as follows: var calculator = { multiply : function(amount1, amount2) { return amount1 * amount2; } }; Now, let's refresh the browser to rerun the tests, it will pass the test as it did before. Excellent! Now the multiply function is complete. The full code of the calculator.js file for the calculator object with its test will look as follows: var calculator = { multiply : function(amount1, amount2) { return amount1 * amount2; } }; function multipleTest1() { // Test var result = calculator.multiply(3, 3); // Assert Result is expected if (result === 9) { console.log('Test Passed'); } else { console.log('Test Failed'); } }; multipleTest1(); Mechanism of testing To be a proper TDD following developer, it is important to understand some fundamental mechanisms of testing, techniques, and approaches to testing. In this section, we will walk you through a couple of examples of techniques and mechanisms of the tests that will be leveraged in this article. This will mostly include the following points: Testing doubles with Jasmine spies Refactoring the existing tests Building patterns In addition, here are the additional terms that will be used: Function under test: This is the function being tested. It is also referred to as system under test, object under test, and so on. The 3 A's (Arrange, Act, and Assert): This is a technique used to set up tests, first described by Bill Wake (http://xp123.com/articles/3a-arrange-act-assert/). Testing with a framework We have already seen a quick and simple way to perform tests on calculator application, where we have set the test for the multiply method. But in real life, it will be more complex and a way larger application, where the earlier technique will be too complex to manage and perform. In that case, it will be very handy and easier to use a testing framework. A testing framework provides methods and structures to test. This includes a standard structure to create and run tests, the ability to create assertions/expectations, the ability to use test doubles, and more. The following example code is not exactly how it runs with the Jasmine test/spec runner, it's just about the idea of how the doubles work, or how these doubles return the expected result. Testing doubles with Jasmine spies A test double is an object that acts and is used in place of another object. Jasmine has a test double function that is known as spies. Jasmine spy is used with the spyOn()method. Take a look at the following testableObject object that needs to be tested. Using a test double, you can determine the number of times testableFunction gets called. The following is an example of Test double: var testableObject = { testableFunction : function() { } }; jasmine.spyOn(testableObject, 'testableFunction'); testableObject.testableFunction(); testableObject.testableFunction(); testableObject.testableFunction(); console.log(testableObject.testableFunction.count); The preceding code creates a test double using a Jasmine spy (jasmine.spyOn). The test double is then used to determine the number of times testableFunction gets called. 
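The snippet above is deliberately schematic; as noted, it is about the idea rather than the exact runner syntax. For comparison, here is a hedged sketch of how the same call-counting check is typically written with Jasmine's actual spec syntax, where spyOn is a global function and call tracking lives on the spy's calls property:

describe('testableObject', function() {
  it('tracks how many times testableFunction is called', function() {
    var testableObject = {
      testableFunction: function() {}
    };

    // Create the test double; spyOn is provided globally by Jasmine
    spyOn(testableObject, 'testableFunction');

    testableObject.testableFunction();
    testableObject.testableFunction();
    testableObject.testableFunction();

    // Call tracking is exposed via the spy's calls property
    expect(testableObject.testableFunction.calls.count()).toBe(3);
  });
});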
The following are some of the features that a Jasmine test double offers: The count of calls on a function The ability to specify a return value (stub a return value) The ability to pass a call to the underlying function (pass through) Stubbing return value The great thing about using a test double is that the underlying code of a method does not have to be called. With a test double, you can specify exactly what a method should return for a given test. Consider the following example of an object and a function, where the function returns a string: var testableObject = { testableFunction : function() { return 'stub me'; } }; The preceding object (testableObject) has a function (testableFunction) that needs to be stubbed. So, to stub the single return value, it will need to chain the and.returnValuemethod and will pass the expected value as param. Here is how to spy chain the single return value to stub it: jasmine.spyOn(testableObject, 'testableFunction') .and .returnValue('stubbed value'); Now, when testableObject.testableFunction is called, a stubbed value will be returned. Consider the following example of the preceding single stubbed value: var testableObject = { testableFunction : function() { return 'stub me'; } }; //before the return value is stubbed Console.log(testableObject.testableFunction()); //displays 'stub me' jasmine.spyOn(testableObject,'testableFunction') .and .returnValue('stubbed value'); //After the return value is stubbed Console.log(testableObject.testableFunction()); //displays 'stubbed value' Similarly, we can pass multiple retuned values as the preceding example. To do so, it will chain the and.returnValuesmethod with the expected values as param, where the values will be separated by commas. Here is how to spy chain the multiple return values to stub them one by one: jasmine.spyOn(testableObject, 'testableFunction') .and .returnValues('first stubbed value', 'second stubbed value', 'third stubbed value'); So, for every call of testableObject.testableFunction, it will return the stubbedvalue in order until reaches the end of the return value list. Consider the given example of the preceding multiple stubbed values: jasmine.spyOn(testableObject, 'testableFunction') .and .returnValue('first stubbed value', 'second stubbed value', 'third stubbed value'); //After the is stubbed return values Console.log(testableObject.testableFunction()); //displays 'first stubbed value' Console.log(testableObject.testableFunction()); //displays 'second stubbed value' Console.log(testableObject.testableFunction()); //displays 'third stubbed value' Testing arguments A test double provides insights into how a method is used in an application. As an example, a test might want to assert what arguments a method was called with or the number of times a method was called. 
Here is an example function: var testableObject = { testableFunction : function(arg1, arg2) {} }; The following are the steps to test the arguments with which the preceding function is called: Create a spy so that the arguments called can be captured: jasmine.spyOn(testableObject, 'testableFunction'); Then, to access the arguments, do the following: //Get the arguments for the first call of the function var callArgs = testableObject.testableFunction.call.argsFor(0); console.log(callArgs); //displays ['param1', 'param2'] Here is how the arguments can be displayed using console.log: var testableObject = { testableFunction : function(arg1, arg2) {} }; //create the spy jasmine.spyOn(testableObject, 'testableFunction'); //Call the method with specific arguments testableObject.testableFunction('param1', 'param2'); //Get the arguments for the first call of the function var callArgs = testableObject.testableFunction.call.argsFor(0); console.log(callArgs); //displays ['param1', 'param2'] Refactoring Refactoring is the act of restructuring, rewriting, renaming, and removing code in order to improve the design, readability, maintainability, and overall aesthetics of a piece of code. The TDD life cycle step of "making it better" is primarily concerned with refactoring. This section will walk you through a refactoring example. Take a look at the following example of a function that needs to be refactored: var abc = function(z) { var x = false; if(z > 10) return true; return x; } This function works fine and does not contain any syntactical or logical issues. The problem is that the function is difficult to read and understand. Refactoring this function will improve the naming, structure, and definition. The exercise will remove the masquerading complexity and reveal the function's true meaning and intention. Here are the steps: Rename the function and variable names to be more meaningful, that is, rename x and z so that they make sense, as shown: var isTenOrGreater = function(value) { var falseValue = false; if(value > 10) return true; return falseValue; } Now, the function can easily be read and the naming makes sense. Remove unnecessary complexity. In this case, the if conditional statement can be removed completely, as follows: var isTenOrGreater = function(value) { return value > 10; }; Reflect on the result. At this point, the refactoring is complete, and the function's purpose should jump out at you. The next question that should be asked is "why does this method exist in the first place?". This example only provided a brief walk-through of the steps that can be taken to identify issues in code and how to improve them. Building with a builder These days, design pattern is almost a kind of common practice, and we follow design pattern to make life easier. For the same reason, the builder pattern will be followed here. The builder pattern uses a builder object to create another object. Imagine an object with 10 properties. How will test data be created for every property? Will the object have to be recreated in every test? A builder object defines an object to be reused across multiple tests. The following code snippet provides an example of the use of this pattern. This example will use the builder object in the validate method: var book = { id : null, author : null, dateTime : null }; The book object has three properties: id, author, and dateTime. From a testing perspective, you would want the ability to create a valid object, that is, one that has all the fields defined. 
You may also want to create an invalid object with missing properties, or you may want to set certain values in the object to test the validation logic, that is, whether dateTime is an actual date. Here are the steps to create a builder for the book object:

Create a builder function, as shown:

var bookBuilder = function() {};

Create a valid object within the builder, as follows:

var bookBuilder = function() {
  var _resultBook = {
    id: 1,
    author: 'Any Author',
    dateTime: new Date()
  };
};

Create a function to return the built object, as given:

var bookBuilder = function() {
  var _resultBook = {
    id: 1,
    author: 'Any Author',
    dateTime: new Date()
  };
  this.build = function() {
    return _resultBook;
  };
};

As illustrated, create another function to set the _resultBook author field:

var bookBuilder = function() {
  var _resultBook = {
    id: 1,
    author: 'Any Author',
    dateTime: new Date()
  };
  this.build = function() {
    return _resultBook;
  };
  this.setAuthor = function(author) {
    _resultBook.author = author;
  };
};

Make the function fluent, as follows, so that calls can be chained:

this.setAuthor = function(author) {
  _resultBook.author = author;
  return this;
};

A setter function will also be created for dateTime, as shown:

this.setDateTime = function(dateTime) {
  _resultBook.dateTime = dateTime;
  return this;
};

Now, bookBuilder can be used to create a new book. Note that the new instance is assigned to a differently named variable (builder) so that it does not shadow the bookBuilder constructor:

var builder = new bookBuilder();
var builtBook = builder.setAuthor('Ziaul Haq')
  .setDateTime(new Date())
  .build();
console.log(builtBook.author); // Ziaul Haq

The preceding builder can now be used throughout your tests to create a single consistent object. Here is the complete builder for your reference:

var bookBuilder = function() {
  var _resultBook = {
    id: 1,
    author: 'Any Author',
    dateTime: new Date()
  };
  this.build = function() {
    return _resultBook;
  };
  this.setAuthor = function(author) {
    _resultBook.author = author;
    return this;
  };
  this.setDateTime = function(dateTime) {
    _resultBook.dateTime = dateTime;
    return this;
  };
};

Let's create the validate method to validate the book object created by the builder:

var validate = function(builtBookToValidate) {
  if (!builtBookToValidate.author) {
    return false;
  }
  if (!builtBookToValidate.dateTime) {
    return false;
  }
  return true;
};

First, let's create a valid book object with the builder by passing all the required information; when it is passed to the validate method, it should report that a valid book was created:

var validBook = new bookBuilder().setAuthor('Ziaul Haq')
  .setDateTime(new Date())
  .build();

// Validate the object with the validate() method
if (validate(validBook)) {
  console.log('Valid Book created');
}

In the same way, let's create invalid book objects via the builder by passing a null value for the required information. Passing each object to the validate method should report why it is invalid:

var invalidBook1 = new bookBuilder().setAuthor(null).build();
if (!validate(invalidBook1)) {
  console.log('Invalid Book created as author is null');
}

var invalidBook2 = new bookBuilder().setDateTime(null).build();
if (!validate(invalidBook2)) {
  console.log('Invalid Book created as dateTime is null');
}

Self-test questions
Q1. A test double is another name for a duplicate test.
True
False
Q2. TDD stands for test-driven development.
True
False
Q3. The purpose of refactoring is to improve code quality.
True
False
Q4. A test object builder consolidates the creation of objects for testing.
True
False
Q5. The 3 A's are a sports team.
True
False

Summary
This article provided an introduction to TDD. It discussed the TDD life cycle (test first, make it run, and make it better) and showed how the same steps are used by a tailor. Finally, it looked over some of the testing techniques such as test doubles, refactoring, and building patterns. Although TDD is a huge topic, this article is solely focused on the TDD principles and practices to be used with AngularJS.

Resources for Article:
Further resources on this subject:
Angular 2.0 [Article]
Writing a Blog Application with Node.js and AngularJS [Article]
Integrating a D3.js visualization into a simple AngularJS application [Article]

Data Types – Foundational Structures

Packt
05 Jan 2017
17 min read
This article by William Smith, author of the book Everyday Data Structures reviews the most common and most important fundamental data types from the 10,000-foot view. Calling data types foundational structures may seem like a bit of a misnomer but not when you consider that developers use data types to build their classes and collections. So, before we dive into examining proper data structures, it's a good idea to quickly review data types, as these are the foundation of what comes next. In this article, we will briefly explain the following topics: Numeric data types Casting,Narrowing, and Widening 32-bit and 64-bit architecture concerns Boolean data types Logic operations Order of operations Nesting operations Short-circuiting String data types Mutability of strings (For more resources related to this topic, see here.) Numeric data types A detailed description of all the numeric data types in each of these four languages namely, C#, Java, Objective C, and Swift, could easily encompass a book of its own. The simplest way to evaluate these types is based on the underlying size of the data, using examples from each language as a framework for the discussion. When you are developing applications for multiple mobile platforms, you should be aware that the languages you use could share a data type identifier or keyword, but under the hood, those identifiers may not be equal in value. Likewise, the same data type in one language may have a different identifier in another. For example, examine the case of the 16 bit unsigned integer, sometimes referred to as an unsigned short. Well, it's called an unsigned short in Objective-C. In C#, we are talking about a ushort, while Swift calls it a UInt16. Java, on the other hand, uses a char for this data type. Each of these data types represents a 16 bit unsigned integer; they just use different names. This may seem like a small point, but if you are developing apps for multiple devices using each platform's native language, for the sake of consistency, you will need to be aware of these differences. Otherwise, you may risk introducing platform-specific bugs that are extremely difficult to detect and diagnose. Integer types The integer data types are defined as representing whole numbers and can be either signed (negative, zero, or positive values) or unsigned (zero or positive values). Each language uses its own identifiers and keywords for the integer types, so it is easiest to think in terms of memory length. For our purpose, we will only discuss the integer types representing 8, 16, 32, and 64 bit memory objects. 8 bit data types, or bytes as they are more commonly referred to, are the smallest data types that we will examine. If you have brushed up on your binary math, you will know that an 8 bit memory block can represent 28, or 256 values. Signed bytes can range in values from -128 to 127, or -(27) to (27) - 1. Unsigned bytes can range in values from 0 to 255, or 0 to (28) -1. A 16 bit data type is often referred to as a short, although that is not always the case. These types can represent 216, or 65,536 values. Signed shorts can range in values from -32,768 to 32,767, or -(215) to (215) - 1. Unsigned shorts can range in values from 0 to 65,535, or 0 to (216) - 1. A 32 bit data type is most commonly identified as an int, although it is sometimes identified as a long. Integer types can represent 232, or 4,294,967,296 values. Signed ints can range in values from -2,147,483,648 to 2,147,483,647, or -(231) to (231) - 1. 
Unsigned ints can range in values from 0 to 4,294,967,295, or 0 to (232) - 1. Finally, a 64 bit data type is most commonly identified as a long, although Objective-C identifies it as a long long. Long types can represent 264, or 18,446,744,073,709,551,616 values. Signed longs can range in values from −9,223,372,036,854,775,808 to 9,223,372,036,854,775,807, or -(263) to (263) - 1. Unsigned longs can range in values from 0 to 18,446,744,073,709,551,615, or 0 to (263) - 1. Note that these values happen to be consistent across the four languages we will work with, but some languages will introduce slight variations. It is always a good idea to become familiar with the details of a language's numeric identifiers. This is especially true if you expect to be working with cases that involve the identifier's extreme values. Single precision float Single precision floating point numbers, or floats as they are more commonly referred to, are 32 bit floating point containers that allow for storing values with much greater precision than the integer types, typically 6 or 7 significant digits. Many languages use the float keyword or identifier for single precision float values, and that is the case for each of the four languages we are discussing. You should be aware that floating point values are subject to rounding errors because they cannot represent base-10 numbers exactly. The arithmetic of floating point types is a fairly complex topic, the details of which will not be pertinent to the majority of developers on any given day. However, it is still a good practice to familiarize yourself with the particulars of the underlying science as well as the implementation in each language. Double precision float Double precision floating point numbers, or doubles as they are more commonly referred to, are 64 bit floating point values that allow for storing values with much greater precision than the integer types, typically to 15 significant digits. Many languages use the double identifier for double precision float values and that is also the case for each of the four languages: C#, Objective C, Java, and Swift. In most circumstances, it will not matter whether you choose float over double, unless memory space is a concern in which case you will want to choose float whenever possible. Many argue that float is more performant than double under most conditions, and generally speaking, this is the case. However, there are other conditions where double will be more performant than float. The reality is the efficiency of each type is going to vary from case to case, based on a number of criteria that are too numerous to detail in the context of this discussion. Therefore, if your particular application requires truly peak efficiency, you should research the requirements and environmental factors carefully and decide what is best for your situation. Otherwise, just use whichever container will get the job done and move on. Currency Due to the inherent inaccuracy found in floating point arithmetic, grounded in the fact that they are based on binary arithmetic, floats, and doubles cannot accurately represent the base-10 multiples we use for currency. Representing currency as a float or double may seem like a good idea at first as the software will round off the tiny errors in your arithmetic. However, as you begin to perform more and complex arithmetic operations on these inexact results, your precision errors will begin to add up and result in serious inaccuracies and bugs that can be very difficult to track down. 
This makes float and double data types insufficient for working with currency where perfect accuracy for multiples of 10 is essential. Typecasting In the realm of computer science, type conversion or typecasting means to convert an instance of one object or data type into another. This can be done through either implicit conversion, sometimes called coercion, or explicit conversion, otherwise known as casting. To fully appreciate casting, we also need to understand the difference between static and dynamic languages. Statically versus dynamically typed languages A statically typed language will perform its type checking at compile time. This means that when you try to build your solution, the compiler will verify and enforce each of the constraints that apply to the types in your application. If they are not enforced, you will receive an error and the application will not build. C#, Java, and Swift are all statically typed languages. Dynamically typed languages, on the other hand, do the most or all of their type checking at run time. This means that the application could build just fine, but experience a problem while it is actually running if the developer wasn't careful in how he wrote the code. Objective-C is a dynamically typed language because it uses a mixture of statically typed objects and dynamically typed objects. The Objective-C classes NSNumber and NSDecimalNumber are both examples of dynamically typed objects. Consider the following code example in Objective-C: double myDouble = @"chicken"; NSNumber *myNumber = @"salad"; The compiler will throw an error on the first line, stating Initializing 'double' with an expression of incompatible type 'NSString *'. That's because double is a plain C object, and it is statically typed. The compiler knows what to do with this statically typed object before we even get to the build, so your build will fail. However, the compiler will only throw a warning on the second line, stating Incompatible pointer types initializing 'NSNumber *' with an expression of type 'NSString *'. That's because NSNumber is an Objective-C class, and it is dynamically typed. The compiler is smart enough to catch your mistake, but it will allow the build to succeed (unless you have instructed the compiler to treat warnings as errors in your build settings). Although the forthcoming crash at runtime is obvious in the previous example, there are cases where your app will function perfectly fine despite the warnings. However, no matter what type of language you are working with, it is always a good idea to consistently clean up your code warnings before moving on to new code. This helps keep your code clean and avoids any bugs that can be difficult to diagnose. On those rare occasions where it is not prudent to address the warning immediately, you should clearly document your code and explain the source of the warning so that other developers will understand your reasoning. As a last resort, you can take advantage of macros or pre-processor (pre-compiler) directives that can suppress warnings on a line by line basis. Implicit and explicit casting Implicit casting does not require any special syntax in your source code. This makes implicit casting somewhat convenient. However, since implicit casts do not define their types manually, the compiler cannot always determine which constraints apply to the conversion and therefore will not be able to check these constraints until runtime. This makes the implicit cast also somewhat dangerous. 
Consider the following code example in C#: double x = "54"; This is an implicit conversion because you have not told the compiler how to treat the string value. In this case, the conversion will fail when you try to build the application, and the compiler will throw an error for this line, stating Cannot implicitly convert type 'string' to 'double'. Now, consider the explicitly cast version of this example: double x = double.Parse("42"); Console.WriteLine("40 + 2 = {0}", x); /* Output 40 + 2 = 42 */ This conversion is explicit and therefore type safe, assuming that the string value is parsable. Widening and narrowing When casting between two types, an important consideration is whether the result of the change is within the range of the target data type. If your source data type supports more bytes than your target data type, the cast is considered to be a narrowing conversion. Narrowing conversions are either casts that cannot be proven to always succeed or casts that are known to possibly lose information. For example, casting from a float to an integer will result in loss of information (precision in this case), as the result will be rounded off to the nearest whole number. In most statically typed languages, narrowing casts cannot be performed implicitly. Here is an example by borrowing from the C# single precision: //C# piFloat = piDouble; In this example, the compiler will throw an error, stating Cannot implicitly convert type 'double' to 'float'. And explicit conversion exists (Are you missing a cast?). The compiler sees this as a narrowing conversion and treats the loss of precision as an error. The error message itself is helpful and suggests an explicit cast as a potential solution for our problem: //C# piFloat = (float)piDouble; We have now explicitly cast the double value piDouble to a float, and the compiler no longer concerns itself with loss of precision. If your source data type supports fewer bytes than your target data type, the cast is considered to be a widening conversion. Widening conversions will preserve the source object's value, but may change its representation in some way. Most statically typed languages will permit implicit widening casts. Let's borrow again from our previous C# example: //C# piDouble = piFloat; In this example, the compiler is completely satisfied with the implicit conversion and the app will build. Let's expand the example further: //C# piDouble = (double)piFloat; This explicit cast improves readability, but does not change the nature of the statement in any way. The compiler also finds this format to be completely acceptable, even if it is somewhat more verbose. Beyond improved readability, explicit casting when widening adds nothing to your application. Therefore, it is your preference if you want to use explicit casting when widening is a matter of personal preference. Boolean data type Boolean data types are intended to symbolize binary values, usually denoted by 1 and 0, true and false, or even YES and NO. Boolean types are used to represent truth logic, which is based on Boolean algebra. This is just a way of saying that Boolean values are used in conditional statements, such as if or while, to evaluate logic or repeat an execution conditionally. Equality operations include any operations that compare the value of any two entities. The equality operators are: == implies equal to != implies not equal to Relational operations include any operations that test a relation between two entities. 
The relational operators are:

> implies greater than
>= implies greater than or equal to
< implies less than
<= implies less than or equal to

Logic operations include any operations in your program that evaluate and manipulate Boolean values. There are three primary logic operators, namely AND, OR, and NOT. Another, slightly less commonly used operator is the exclusive or, or XOR, operator. All Boolean functions and statements can be built with these four basic operators.

The AND operator is the most exclusive comparator. Given two Boolean variables A and B, AND will return true if and only if both A and B are true. Boolean variables are often visualized using tools called truth tables. Consider the following truth table for the AND operator:

A | B | A ^ B
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1

This table demonstrates the AND operator. When evaluating a conditional statement, 0 is considered to be false, while any other value is considered to be true. Only when the value of both A and B is true is the resulting comparison of A ^ B also true.

The OR operator is the inclusive operator. Given two Boolean variables A and B, OR will return true if either A or B is true, including the case when both A and B are true. Consider the following truth table for the OR operator:

A | B | A v B
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 1

Next, the NOT A operator is true when A is false, and false when A is true. Consider the following truth table for the NOT operator:

A | !A
0 | 1
1 | 0

Finally, the XOR operator is true when either A or B is true, but not both. Another way to say it is that XOR is true when A and B are different. There are many occasions where it is useful to evaluate an expression in this manner, so most computer architectures include it. Consider the following truth table for XOR:

A | B | A xor B
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0

Operator precedence
Just as with arithmetic, comparison and Boolean operations have operator precedence. This means the architecture will give a higher precedence to one operator over another. Generally speaking, the Boolean order of operations for all languages is as follows:

Parentheses
Relational operators
Equality operators
Bitwise operators (not discussed)
NOT
AND
OR
XOR
Ternary operator
Assignment operators

It is extremely important to understand operator precedence when working with Boolean values, because mistaking how the architecture will evaluate complex logical operations will introduce bugs in your code that you will not understand how to sort out. When in doubt, remember that, as in arithmetic, parentheses take the highest precedence, and anything defined within them will be evaluated first.

Short-circuiting
As you recall, AND only returns true when both of the operands are true, and OR returns true as soon as one operand is true. These characteristics sometimes make it possible to determine the outcome of an expression by evaluating only one of the operands. When your application stops evaluation immediately upon determining the overall outcome of an expression, it is called short-circuiting. There are three main reasons why you would want to use short-circuiting in your code. First, short-circuiting can improve your application's performance by limiting the number of operations your code must perform. Second, when later operands could potentially generate errors based on the value of a previous operand, short-circuiting can halt execution before the higher-risk operand is reached.
Finally, short-circuiting can improve the readability and complexity of your code by eliminating the need for nested logical statements. Strings Strings data types are simply objects whose value is text. Under the hood, strings contain a sequential collection of read-only char objects. This read-only nature of a string object makes strings immutable, which means the objects cannot be changed once they have been created in memory. It is important to understand that changing any immutable object, not just a string, means your program is actually creating a new object in memory and discarding the old one. This is a more intensive operation than simply changing the value of an address in memory and requires more processing. Merging two strings together is called concatenation, and this is an even more costly procedure as you are disposing of two objects before creating a new one. If you find that you are editing your string values frequently, or frequently concatenating strings together, be aware that your program is not as efficient as it could be. Strings are strictly immutable in C#, Java, and Objective-C. It is interesting to note that the Swift documentation refers to strings as mutable. However, the behavior is similar to Java, in that, when a string is modified, it gets copied on assignment to another object. Therefore, although the documentation says otherwise, strings are effectively immutable in Swift as well. Summary In this article, you learned about the basic data types available to a programmer in each of the four most common mobile development languages. Numeric and floating point data type characteristics and operations are as much dependent on the underlying architecture as on the specifications of the language. You also learned about casting objects from one type to another and how the type of cast is defined as either a widening cast or a narrowing cast depending on the size of the source and target data types in the conversion. Next, we discussed Boolean types and how they are used in comparators to affect program flow and execution. In this, we discussed order of precedence of operator and nested operations. You also learned how to use short-circuiting to improve your code's performance. Finally, we examined the String data type and what it means to work with mutable objects. Resources for Article: Further resources on this subject: Why Bother? – Basic [article] Introducing Algorithm Design Paradigms [article] Algorithm Analysis [article]

Testing and Quality Control

Packt
04 Jan 2017
19 min read
In this article by Pablo Solar Vilariño and Carlos Pérez Sánchez, the author of the book, PHP Microservices, we will see the following topics: (For more resources related to this topic, see here.) Test-driven development Behavior-driven development Acceptance test-driven development Tools Test-driven development Test-Driven Development (TDD) is part of Agile philosophy, and it appears to solve the common developer's problem that shows when an application is evolving and growing, and the code is getting sick, so the developers fix the problems to make it run but every single line that we add can be a new bug or it can even break other functions. Test-driven development is a learning technique that helps the developer to learn about the domain problem of the application they are going to build, doing it in an iterative, incremental, and constructivist way: Iterative because the technique always repeats the same process to get the value Incremental because for each iteration, we have more unit tests to be used Constructivist because it is possible to test all we are developing during the process straight away, so we can get immediate feedback Also, when we finish developing each unit test or iteration, we can forget it because it will be kept from now on throughout the entire development process, helping us to remember the domain problem through the unit test; this is a good approach for forgetful developers. It is very important to understand that TDD includes four things: analysis, design, development, and testing; in other words, doing TDD is understanding the domain problem and correctly analyzing the problem, designing the application well, developing well, and testing it. It needs to be clear; TDD is not just about implementing unit tests, it is the whole process of software development. TDD perfectly matches projects based on microservices because using microservices in a large project is dividing it into little microservices or functionalities, and it is like an aggrupation of little projects connected by a communication channel. The project size is independent of using TDD because in this technique, you divide each functionality into little examples, and to do this, it does not matter if the project is big or small, and even less when our project is divided by microservices. Also, microservices are still better than a monolithic project because the functionalities for the unit tests are organized in microservices, and it will help the developers to know where they can begin using TDD. How to do TDD? Doing TDD is not difficult; we just need to follow some steps and repeat them by improving our code and checking that we did not break anything. TDD involves the following steps: Write the unit test: It needs to be the simplest and clearest test possible, and once it is done, it has to fail; this is mandatory. If it does not fail, there is something that we are not doing properly. Run the tests: If it has errors (it fails), this is the moment to develop the minimum code to pass the test, just what is necessary, do not code additional things. Once you develop the minimum code to pass the test, run the test again (step two); if it passes, go to the next step, if not then fix it and run the test again. Improve the test: If you think it is possible to improve the code you wrote, do it and run the tests again (step two). If you think it is perfect then write a new unit test (step one). 
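To make these three steps concrete, here is a minimal sketch of what the first step (writing a failing unit test) could look like with PHPUnit, the test framework installed later in this article. The Battle class and its attack() method are hypothetical examples, not part of the original text, and the base class shown is the one used by PHPUnit 6 and later (older releases use PHPUnit_Framework_TestCase):

<?php
use PHPUnit\Framework\TestCase;

// Hypothetical test for a battle microservice: the Battle class does not
// exist yet, so running this test first fails (red); the minimum code is
// then written to make it pass (green), and finally it is refactored.
class BattleTest extends TestCase
{
    public function testAttackReducesDefenderHealth()
    {
        // Arrange
        $battle = new Battle();
        $defender = ['health' => 100];

        // Act
        $result = $battle->attack($defender, 30);

        // Assert
        $this->assertEquals(70, $result['health']);
    }
}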
To do TDD, it is necessary to write the tests before implementing the function; if the tests are written after the implementation has started, it is not TDD; it is just testing. If we start implementing the application without testing and it is finished, or if we start creating unit tests during the process, we are doing the classic testing and we are not approaching the TDD benefits. Developing the functions without prior testing, the abstract idea of the domain problem in your mind can be wrong or may even be clear at the start but during the development process it can change or the concepts can be mixed. Writing the tests after that, we are checking if all the ideas in our main were correct after we finished the implementation, so probably we have to change some methods or even whole functionalities after spend time coding. Obviously, testing is always better than not testing, but doing TDD is still better than just classic testing. Why should I use TDD? TDD is the answer to questions such as: Where shall I begin? How can I do it? How can I write code that can be modified without breaking anything? How can I know what I have to implement? The goal is not to write many unit tests without sense but to design it properly following the requirements. In TDD, we do not to think about implementing functions, but we think about good examples of functions related with the domain problem in order to remove the ambiguity created by the domain problem. In other words, by doing TDD, we should reproduce a specific function or case of use in X examples until we get the necessary examples to describe the function or task without ambiguity or misinterpretations. TDD can be the best way to document your application. Using other methodologies of software development, we start thinking about how the architecture is going to be, what pattern is going to be used, how the communication between microservices is going to be, and so on, but what happens if once we have all this planned, we realize that this is not necessary? How much time is going to pass until we realize that? How much effort and money are we going to spend? TDD defines the architecture of our application by creating little examples in many iterations until we realize what the architecture is; the examples will slowly show us the steps to follow in order to define what the best structures, patterns, or tools to use are, avoiding expenditure of resources during the firsts stages of our application. This does not mean that we are working without an architecture; obviously, we have to know if our application is going to be a website or a mobile app and use a proper framework. What is going to be the interoperability in the application? In our case it will be an application based on microservices, so it will give us support to start creating the first unit tests. The architectures that we remove are the architectures on top of the architecture, in other words, the guidelines to develop an application as always. TDD will produce an architecture without ambiguity from unit testing. TDD is not cure-all: In other words, it does not give the same results to a senior developer as to a junior developer, but it is useful for the entire team. 
Let's look at some advantages of using TDD: Code reuse: Creates every functionality with only the necessary code to pass the tests in the second stage (Green) and allows you to see if there are more functions using the same code structure or parts of a specific function, so it helps you to reuse the previous code you wrote. Teamwork is easier: It allows you to be confident with your team colleagues. Some architects or senior developers do not trust developers with poor experience, and they need to check their code before committing the changes, creating a bottleneck at that point, so TDD helps to trust developers with less experience. Increases communication between team colleagues: The communication is more fluent, so the team share their knowledge about the project reflected on the unit tests. Avoid overdesigning application in the first stages: As we said before, doing TDD allows you to have an overview of the application little by little, avoiding the creation of useless structures or patterns in your project, which, maybe, you will trash in the future stages. Unit tests are the best documentation: The best way to give a good point of view of a specific functionality is reading its unit test. It will help to understand how it works instead of human words. Allows discovering more use cases in the design stage: In every test you have to create, you will understand how the functionality should work better and all the possible stages that a functionality can have. Increases the feeling of a job well done: In every commit of your code, you will have the feeling that it was done properly because the rest of the unit tests passes without errors, so you will not be worried about other broken functionalities. Increases the software quality: During the step of refactoring, we spend our efforts on making the code more efficient and maintainable, checking that the whole project still works properly after the changes. TDD algorithm The technical concepts and steps to follow the TDD algorithm are easy and clear, and the proper way to make it happen improves by practicing it. There are only three steps, called red, green, and refactor: Red – Writing the unit tests It is possible to write a test even when the code is not written yet; you just need to think about whether it is possible to write a specification before implementing it. So, in this first step you should consider that the unit test you start writing is not like a unit test, but it is like an example or specification of the functionality. In TDD, this first example or specification is not immovable; in other words, the unit test can be modified in the future. Before starting to write the first unit test, it is necessary to think about how the Software Under Test (SUT) is going to be. We need to think about how the SUT code is going to be and how we would check that it works they way we want it to. The way that TDD works drives us to firstly design what is more comfortable and clear if it fits the requirements. Green – Make the code work Once the example is written, we have to code the minimum to make it pass the test; in other words, set the unit test to green. It does not matter if the code is ugly and not optimized, it will be our task in the next step and iterations. In this step, the important thing is only to write the necessary code for the requirements without unnecessary things. It does not mean writing without thinking about the functionality, but thinking about it to be efficient. 
It looks easy but you will realize that you will write extra code the first time. If you concentrate on this step, new questions will appear about the SUT behavior with different entries, but you should be strong and avoid writing extra code about other functionalities related to the current one. Instead of coding them, take notes to convert them into functionalities in the next iterations. Refactor – Eliminate redundancy Refactoring is not the same as rewriting code. You should be able to change the design without changing the behavior. In this step, you should remove the duplicity in your code and check if the code matches the principles of good practices, thinking about the efficiency, clarity, and future maintainability of the code. This part depends on the experience of each developer. The key to good refactoring is making it in small steps To refactor a functionality, the best way is to change a little part and then execute all the available tests; if they pass, continue with another little part, until you are happy with the obtained result. Behavior-driven development Behavior-Driven Development (BDD) is a process that broadens the TDD technique and mixes it with other design ideas and business analyses provided to the developers, in order to improve the software development. In BDD, we test the scenarios and classes’ behavior in order to meet the scenarios, which can be composed by many classes. It is very useful to use a DSL in order to have a common language to be used by the customer, project owner, business analyst, or developers. The goal is to have a ubiquitous language. What is BDD? As we said before, BDD is an AGILE technique based on TDD and ATDD, promoting the collaboration between the entire team of a project. The goal of BDD is that the entire team understands what the customer wants, and the customer knows what the rest of the team understood from their specifications. Most of the times, when a project starts, the developers don't have the same point of view as the customer, and during the development process the customer realizes that, maybe, they did not explain it or the developer did not understand it properly, so it adds more time to changing the code to meet the customer's needs. So, BDD is writing test cases in human language, using rules, or in a ubiquitous language, so the customer and developers can understand it. It also defines a DSL for the tests. How does it work? It is necessary to define the features as user stories (we will explain what this is in the ATDD section of this article) and their acceptance criteria. Once the user story is defined, we have to focus on the possible scenarios, which describe the project behavior for a concrete user or a situation using DSL. The steps are: Given [context], When [event occurs], Then [Outcome]. To sum up, the defined scenario for a user story gives the acceptance criteria to check if the feature is done. Acceptance Test-Driven Development Perhaps, the most important methodology in a project is the Acceptance Test-Driven Development (ATDD) or Story Test-Driven Development (STDD); it is TDD but on a different level. The acceptance (or customer) tests are the written criteria for a project meeting the business requirements that the customer demands. They are examples (like the examples in TDD) written by the project owner. It is the start of development for each iteration, the bridge between Scrum and agile development. 
In ATDD, we start the implementation of our project in a way different from the traditional methodologies. The business requirements written in human language are replaced by executables agreed upon by some team members and also the customer. It is not about replacing the whole documentation, but only a part of the requirements. The advantages of using ATDD are the following: Real examples and a common language for the entire team to understand the domain It allows identifying the domain rules properly It is possible to know if a user story is finished in each iteration The workflow works from the first steps The development does not start until the tests are defined and accepted by the team ATDD algorithm The algorithm of ATDD is like that of TDD but reaches more people than only the developers; in other words, doing ATDD, the tests of each story are written in a meeting that includes the project owners, developers, and QA technicians because the entire team must understand what is necessary to do and why it is necessary, so they can see if it is what the code should do. The ATDD cycle is depicted in the following diagram: Discuss The starting point of the ATDD algorithm is the discussion. In this first step, the business has a meeting with the customer to clarify how the application should work, and the analyst should create the user stories from that conversation. Also, they should be able to explain the conditions of satisfaction of every user story in order to be translated into examples. By the end of the meeting, the examples should be clear and concise, so we can get a list of examples of user stories in order to cover all the needs of the customer, reviewed and understood for him. Also, the entire team will have a project overview in order to understand the business value of the user story, and in case the user story is too big, it could be divided into little user stories, getting the first one for the first iteration of this process. Distill High-level acceptance tests are written by the customer and the development team. In this step, the writing of the test cases that we got from the examples in the discussion step begins, and the entire team can take part in the discussion and help clarify the information or specify the real needs of that. The tests should cover all the examples that were discovered in the discussion step, and extra tests could be added during this process bit by bit till we understand the functionality better. At the end of this step, we will obtain the necessary tests written in human language, so the entire team (including the customer) can understand what they are going to do in the next step. These tests can be used like a documentation. Develop In this step, the development of acceptance test cases is begun by the development team and the project owner. The methodology to follow in this step is the same as TDD, the developers should create a test and watch it fail (Red) and then develop the minimum amount of lines to pass (Green). Once the acceptance tests are green, this should be verified and tested to be ready to be delivered. During this process, the developers may find new scenarios that need to be added into the tests or even if it needs a large amount of work, it could be pushed to the user story. At the end of this step, we will have software that passes the acceptance tests and maybe more comprehensive tests. 
Demo The created functionality is shown by running the acceptance test cases and manually exploring the features of the new functionality. After the demonstration, the team discusses whether the user story was done properly and it meets the product owner's needs and decides if it can continue with the next story. Tools After knowing more about TDD and BDD, it is time to explain a few tools you can use in your development workflow. There are a lot of tools available, but we will only explain the most used ones. Composer Composer is a PHP tool used to manage software dependencies. You only need to declare the libraries needed by your project and the composer will manage them, installing and updating when necessary. This tool has only a few requirements: if you have PHP 5.3.2+, you are ready to go. In the case of a missing requirement, the composer will warn you. You could install this dependency manager on your development machine, but since we are using Docker, we are going to install it directly on our PHP-FPM containers. The installation of composer in Docker is very easy; you only need to add the following rule to the Dockerfile: RUN curl -sS https://getcomposer.org/installer | php -- --install-"dir=/usr/bin/ --filename=composer PHPUnit Another tool we need for our project is PHPUnit, a unit test framework. As before, we will be adding this tool to our PHP-FPM containers to keep our development machine clean. If you are wondering why we are not installing anything on our development machine except for Docker, the response is clear. Having everything in the containers will help you avoid any conflict with other projects and gives you the flexibility of changing versions without being too worried. Add the following RUN command to your PHP-FPM Dockerfile, and you will have the latest PHPUnit version installed and ready to use: RUN curl -sSL https://phar.phpunit.de/phpunit.phar -o "/usr/bin/phpunit && chmod +x /usr/bin/phpunit Now that we have all our requirements too, it is time to install our PHP framework and start doing some TDD stuff. Later, we will continue updating our Docker environment with new tools. We choose Lumen for our example. Please feel free to adapt all the examples to your favorite framework. Our source code will be living inside our containers, but at this point of development, we do not want immutable containers. We want every change we make to our code to be available instantaneously in our containers, so we will be using a container as a storage volume. To create a container with our source and use it as a storage volume, we only need to edit our docker-compose.yml and create one source container per each microservice, as follows: source_battle: image: nginx:stable volumes: - ../source/battle:/var/www/html command: "true" The above piece of code creates a container image named source_battle, and it stores our battle source (located at ../source/battle from the docker-compose.yml current path). Once we have our source container available, we can edit each one of our services and assign a volume. For instance, we can add the following line in our microservice_battle_fpm and microservice_battle_nginx container descriptions: volumes_from: - source_battle Our battle source will be available in our source container in the path, /var/www/html, and the remaining step to install Lumen is to do a simple composer execution. 
First, you need to be sure that your infrastructure is up with a simple command, as follows: $ docker-compose up The preceding command spins up our containers and outputs the log to the standard IO. Now that we are sure that everything is up and running, we need to enter in our PHP-FPM containers and install Lumen. If you need to know the names assigned to each one of your containers, you can do a $ docker ps and copy the container name. As an example, we are going to enter the battle PHP-FPM container with the following command: $ docker exec -it docker_microservice_battle_fpm_1 /bin/bash The preceding command opens an interactive shell in your container, so you can do anything you want; let's install Lumen with a single command: # cd /var/www/html && composer create-project --prefer-dist "laravel/lumen . Repeat the preceding commands for each one of your microservices. Now, you have everything ready to start doing Unit tests and coding your application. Summary In this article, you learned about test-driven development, behavior-driven development, acceptance test-driven development, and PHPUnit. Resources for Article: Further resources on this subject: Running Simpletest and PHPUnit [Article] Understanding PHP basics [Article] The Multi-Table Query Generator using phpMyAdmin and MySQL [Article]

Enterprise Architecture Concepts

Packt
02 Jan 2017
8 min read
In this article by Habib Ahmed Qureshi, Ganesan Senthilvel, and Ovais Mehboob Ahmed Khan, author of the book Enterprise Application Architecture with .NET Core, you will learn how to architect and design highly scalable, robust, clean, and highly performant applications in .NET Core 1.0. (For more resources related to this topic, see here.) In this article, we will cover the following topics: Why we need Enterprise Architecture? Knowing the role of an architect Why we need Enterprise Architecture? We will need to define, or at least provide, some basic fixed points to identify enterprise architecture specifically. Sketch Before playing an enterprise architect role, I used to get confused with so many architectural roles and terms, such as architect, solution architect, enterprise architect, data architect, blueprint, system diagram, and so on. In general, the industry perception is that the IT architect role is to draw few boxes with few suggestions; rest is with the development community. They feel that the architect role is quite easy just by drawing the diagram and not doing anything else. Like I said, it is completely a perception of few associates in the industry, and I used to be dragged by this category earlier: However, my enterprise architect job has cleared this perception and understands the true value of an enterprise architect. Definition of Enterprise Architecture In simple terms, enterprise is nothing but human endeavor. The objective of an enterprise is where people are collaborating for a particular purpose supported by a platform. Let me explain with an example of an online e-commerce company. Employees of that company are people who worked together to produce the profit of the firm using their various platforms, such as infrastructure, software, equipment, building, and so on. Enterprise has the structure/arrangements of all these pieces/components to build the complete organization. This is the exact place where enterprise architecture plays its key role. Every enterprise has an enterprise architect. EA is a process of architecting that applies the discipline to produce the prescribed output components. This process needs the experience, skill, discipline, and descriptions. Consider the following image where EA anticipates the system in two key states: Every enterprise needs an enterprise architect, not an optional. Let me give a simple example. When you need a car for business activities, you have two choices, either drive yourself or rent a driver. Still, you will need the driving capability to operate the car. EA is pretty similar to it. As depicted in the preceding diagram, EA anticipates the system in two key states, which are as follows: How it currently is How will it be in the future Basically, they work on options/alternatives to move from current to future state of an enterprise system. In this process, Enterprise Architecture does the following: Creates the frameworks to manage the architect Details the descriptions of the architect Roadmaps to lay the best way to change/improve the architecture Defines constraint/opportunity Anticipates the costs and benefits Evaluates the risks and values In this process of architecting, the system applies the discipline to produce the prescribed output components. Stakeholders of Enterprise Architecture Enterprise Architecture is so special because to its holistic view of management and evolution of an enterprise holistically. 
It has the unique combination of specialist technology, such as architecture frameworks and design pattern practices. Such a special EA has the following key stakeholders/users in its eco system: S.No. Stakeholders Organizational actions 1  Strategic planner Capability planning Set strategic direction Impact analysis 2  Decision makers Investment Divestment Approvals for the project Alignment with strategic direction 3  Analyst Quality assurance Compliance Alignment with business goals 4  Architects, project managers Solution development Investigate the opportunities Analysis of the existing options Business benefits Though many organizations intervened without EAs, every firm has the strong belief that it is better to architect before creating any system. It is integrated in coherent fashion with proactively designed system instead of random ad hoc and inconsistent mode. In terms of business benefits, cost is the key factor in the meaning of Return on Investment (RoI). That is how the industry business is driven in this highly competitive IT world. EA has the opportunity to prove its value for its own stakeholders with three major benefits, ranging from tactical to strategic positions. They are as follows: Cost reduction by technology standardization Business Process Improvement (BPI) Strategic differentiation Gartner's research paper on TCO: The First Justification for Enterprise IT Architecture by Colleen Young is one of the good references to justify the business benefits of an Enterprise Architecture. Check out https://www.gartner.com/doc/388268/enterprise-architecture-benefits-justification for more information. In the grand scheme of cost saving strategy, technology standardization adds a lot of efficiency to make the indirect benefits. Let me share my experience in this space. In one of my earlier legacy organization, it was noticed that the variety of technologies and products were built to server the business purpose due to the historical acquisitions and mergers. The best solution was platform standardization. All businesses have processes; few life examples are credit card processing, employee on-boarding, student enrollment, and so on. In this methodology, there are people involved with few steps for the particular system to get things done. In means of the business growth, the processes become chaotic, which leads to the duplicate efforts across the departments. Here, we miss the cross learning of the mistakes and corrections. BPI is an industry approach that is designed to support the enterprise for the realignment of the existing business operational process into the significant improved process. It helps the enterprise to identify and adopt in a better way using the industry tools and techniques. BPI is originally designed to induce a drastic game changing effect in the enterprise performance instead of bringing the changes in the incremental steps. In the current highly competitive market, strategic differentiation efforts make the firm create the perception in customers minds of receiving something of greater value than offered by the competition. An effective differentiation strategy is the best tool to highlight a business's unique features and make it stand out from the crowd. As the outcome of strategic differentiation, the business should realize the benefits on Enterprise Architecture investment. Also, it makes the business to institute the new ways of thinking to add the new customer segments along with new major competitive strategies. 
Knowing the role of an architect
When I planned to switch my career to the architecture track, I had many questions in mind. People were referring to so many titles in the industry, such as architect, solution architect, enterprise architect, data architect, infra architect, and so on, that I didn't know exactly where I needed to start and end. The industry offered plenty of confusing options. To understand it better, let me offer my own work experiences as use cases. In the IT industry, the two higher-level architect roles are named as follows:
Solution architect (SA)
Enterprise architect (EA)
In my view, Enterprise Architecture is a much broader discipline than Solution Architecture; it is the sum of Business Architecture, Application Architecture, Data Architecture, and Technology Architecture. It will be covered in detail in the subsequent section.
SA is focused on a specific solution and addresses the technological details that comply with the standards, roadmaps, and strategic objectives of the business. Compared with SA, EA operates at a more senior level. In general, EA takes a strategic, inclusive, and long-term view of the goals, opportunities, and challenges facing the company. SA, however, is assigned to a particular project/program in an enterprise to ensure technical integrity and consistency of the solution at every stage of its life cycle.
Role comparison between EA and SA
Let me explain my working experience in the two different roles, EA and SA. When I played the SA role for an Internet-based telephony system, my job was to build tools, such as code generation and automation, around the existing telephony system. It required skills in Microsoft platform technology and the telephony domain to understand the existing system better and then provide better solutions to improve the productivity and performance of the existing ecosystem. I was not really involved in the enterprise-level decision-making process. Basically, it was pretty much an individual contributor role, building effective and efficient solutions to improve the current system. As the second example, let me share my experience of the EA role for a leading financial company. The job was to build an enterprise data hub using emerging big data technology.
Degrees of comparison
If we plot EA versus SA graphically, EA needs a higher degree of strategic focus and technology breadth, as depicted in the following image.
In terms of roles and responsibilities, EA and SA differ in their scope. Basically, the SA's scope is limited to a project team, and the expected delivery is a quality solution for the business. At the same time, the EA's scope goes beyond the SA's by identifying and envisioning the future state of an organization.
Summary
In this article, you understood the fundamental concepts of enterprise architecture and its related business needs and benefits.
Resources for Article: Further resources on this subject: Getting Started with ASP.NET Core and Bootstrap 4 [Article] Setting Up the Environment for ASP.NET MVC 6 [Article] How to Set Up CoreOS Environment [Article]

Building Your First Odoo Application

Packt
02 Jan 2017
22 min read
In this article by Daniel Reis, the author of the book Odoo 10 Development Essentials, we will create our first Odoo application and learn the steps needed to make it available to Odoo and install it. (For more resources related to this topic, see here.)
Inspired by the notable http://todomvc.com/ project, we will build a simple To-Do application. It should allow us to add new tasks, mark them as completed, and finally clear all the already completed tasks from the task list.
Understanding applications and modules
It's common to hear about Odoo modules and applications. But what exactly is the difference between them? Module add-ons are building blocks for Odoo applications. A module can add new features to Odoo, or modify existing ones. It is a directory containing a manifest, or descriptor file, named __manifest__.py, plus the remaining files that implement its features. Applications are the way major features are added to Odoo. They provide the core elements for a functional area, such as Accounting or HR, based on which additional add-on modules modify or extend features. Because of this, they are highlighted in the Odoo Apps menu. If your module is complex, and adds new or major functionality to Odoo, you might consider creating it as an application. If your module just makes changes to existing functionality in Odoo, it is likely not an application. Whether a module is an application or not is defined in the manifest. Technically, it does not have any particular effect on how the add-on module behaves; it is only used to highlight the module in the Apps list.
Creating the module basic skeleton
We should have the Odoo server at ~/odoo-dev/odoo/. To keep things tidy, we will create a new directory alongside it to host our custom modules, at ~/odoo-dev/custom-addons. Odoo includes a scaffold command to automatically create a new module directory, with a basic structure already in place. You can learn more about it with:
$ ~/odoo-dev/odoo/odoo-bin scaffold --help
You might want to keep this in mind when you start working on your next module, but we won't be using it right now, since we will prefer to manually create all the structure for our module.
An Odoo add-on module is a directory containing a __manifest__.py descriptor file. In previous versions, this descriptor file was named __openerp__.py. This name is still supported, but is deprecated. It also needs to be Python-importable, so it must also have an __init__.py file. The module's directory name is its technical name. We will use todo_app for it. The technical name must be a valid Python identifier: it should begin with a letter and can only contain letters, numbers, and the underscore character. The following commands create the module directory and an empty __init__.py file in it, ~/odoo-dev/custom-addons/todo_app/__init__.py. In case you would like to do that directly from the command line, this is what you would use:
$ mkdir ~/odoo-dev/custom-addons/todo_app
$ touch ~/odoo-dev/custom-addons/todo_app/__init__.py
Next, we need to create the descriptor file. It should contain only a Python dictionary with about a dozen possible attributes; of these, only the name attribute is required. A longer description attribute and the author attribute also have some visibility and are advised.
We should now add a __manifest__.py file alongside the __init__.py file with the following content:
{
    'name': 'To-Do Application',
    'description': 'Manage your personal To-Do tasks.',
    'author': 'Daniel Reis',
    'depends': ['base'],
    'application': True,
}
The depends attribute can have a list of other modules that are required. Odoo will have them automatically installed when this module is installed. It's not a mandatory attribute, but it's advised to always have it. If no particular dependencies are needed, we should depend on the core base module. You should be careful to ensure all dependencies are explicitly set here; otherwise, the module may fail to install in a clean database (due to missing dependencies) or have loading errors, if by chance the other required modules are loaded afterwards. For our application, we don't need any specific dependencies, so we depend on the base module only.
To be concise, we chose to use very few descriptor keys, but in a real-world scenario, we recommend that you also use the additional keys, since they are relevant for the Odoo apps store:
summary: This is displayed as a subtitle for the module.
version: By default, this is 1.0. It should follow semantic versioning rules (see http://semver.org/ for details).
license: By default, this is LGPL-3.
website: This is a URL to find more information about the module. This can help people find more documentation or the issue tracker to file bugs and suggestions.
category: This is the functional category of the module, which defaults to Uncategorized. The list of existing categories can be found in the security groups form (Settings | User | Groups), in the Application field drop-down list.
These other descriptor keys are also available:
installable: It is by default True but can be set to False to disable a module.
auto_install: If this is set to True, the module will be automatically installed, provided all its dependencies are already installed. It is used for glue modules.
Since Odoo 8.0, instead of the description key, we can use a README.rst or README.md file in the module's top directory.
A word about licenses
Choosing a license for your work is very important, and you should consider carefully what is the best choice for you, and its implications. The most used licenses for Odoo modules are the GNU Lesser General Public License (LGPL) and the Affero General Public License (AGPL). The LGPL is more permissive and allows commercial derivative work, without the need to share the corresponding source code. The AGPL is a stronger open source license, and requires derivative work and service hosting to share their source code. Learn more about the GNU licenses at https://www.gnu.org/licenses/.
Adding to the add-ons path
Now that we have a minimalistic new module, we want to make it available to the Odoo instance. For that, we need to make sure the directory containing the module is in the add-ons path, and then update the Odoo module list. We will move into our work directory and start the server with the appropriate add-ons path configuration:
$ cd ~/odoo-dev
$ ./odoo/odoo-bin -d todo --addons-path="custom-addons,odoo/addons" --save
The --save option saves the options you used in a config file. This spares us from repeating them every time we restart the server: just run ./odoo-bin and the last saved options will be used. Look closely at the server log. It should have an INFO ? odoo: addons paths: [...] line. It should include our custom-addons directory.
Remember to also include any other add-ons directories you might be using. For instance, if you also have a ~/odoo-dev/extra directory containing additional modules to be used, you might want to include them as well using the option:
--addons-path="custom-addons,extra,odoo/addons"
Now we need the Odoo instance to acknowledge the new module we just added.
Installing the new module
In the Apps top menu, select the Update Apps List option. This will update the module list, adding any modules that may have been added since the last update to the list. Remember that we need the developer mode enabled for this option to be visible. That is done in the Settings dashboard, in the link at the bottom right, below the Odoo version number information.
Make sure your web client session is working with the right database. You can check that at the top right: the database name is shown in parentheses, right after the user name. A way to enforce using the correct database is to start the server instance with the additional option --db-filter=^MYDB$.
The Apps option shows us the list of available modules. By default, it shows only application modules. Since we created an application module, we don't need to remove that filter to see it. Type todo in the search and you should see our new module, ready to be installed. Now click on the module's Install button and we're ready!
The Model layer
Now that Odoo knows about our new module, let's start by adding a simple model to it. Models describe business objects, such as an opportunity, sales order, or partner (customer, supplier, and so on). A model has a list of attributes and can also define its specific business logic. Models are implemented using a Python class derived from an Odoo template class. They translate directly to database objects, and Odoo automatically takes care of this when installing or upgrading the module. The mechanism responsible for this is the Object-Relational Mapping (ORM). Our module will be a very simple application to keep to-do tasks. These tasks will have a single text field for the description and a checkbox to mark them as complete. We should later add a button to clear the old completed tasks from the to-do list.
Creating the data model
The Odoo development guidelines state that the Python files for models should be placed inside a models subdirectory. For simplicity, we won't be following this here, so let's create a todo_model.py file in the main directory of the todo_app module. Add the following content to it:
# -*- coding: utf-8 -*-
from odoo import models, fields

class TodoTask(models.Model):
    _name = 'todo.task'
    _description = 'To-do Task'
    name = fields.Char('Description', required=True)
    is_done = fields.Boolean('Done?')
    active = fields.Boolean('Active?', default=True)
The first line is a special marker telling the Python interpreter that this file uses UTF-8, so that it can expect and handle non-ASCII characters. We won't be using any, but it's a good practice to have it anyway. The second line is a Python import statement, making available the models and fields objects from the Odoo core. The third line declares our new model. It's a class derived from models.Model. The next line sets the _name attribute, defining the identifier that will be used throughout Odoo to refer to this model. Note that the actual Python class name, TodoTask in this case, is meaningless to other Odoo modules. The _name value is what will be used as an identifier. Notice that this and the following lines are indented.
If you're not familiar with Python, you should know that this is important: indentation defines a nested code block, so these four lines should all be equally indented. Then we have the _description model attribute. It is not mandatory, but it provides a user friendly name for the model records, that can be used for better user messages. The last three lines define the model's fields. It's worth noting that name and active are special field names. By default, Odoo will use the name field as the record's title when referencing it from other models. The active field is used to inactivate records, and by default, only active records will be shown. We will use it to clear away completed tasks without actually deleting them from the database. Right now, this file is not yet used by the module. We must tell Python to load it with the module in the __init__.py file. Let's edit it to add the following line: from . import todo_model That's it. For our Python code changes to take effect the server instance needs to be restarted (unless it was using the --dev mode). We won't see any menu option to access this new model, since we didn't add them yet. Still we can inspect the newly created model using the Technical menu. In the Settings top menu, go to Technical | Database Structure | Models, search for the todo.task model on the list and then click on it to see its definition: If everything goes right, it is confirmed that the model and fields were created. If you can't see them here, try a server restart with a module upgrade, as described before. We can also see some additional fields we didn't declare. These are reserved fields Odoo automatically adds to every new model. They are as follows: id: A unique, numeric identifier for each record in the model. create_date and create_uid: These specify when the record was created and who created it, respectively. write_date and write_uid: These confirm when the record was last modified and who modified it, respectively. __last_update: This is a helper that is not actually stored in the database. It is used for concurrency checks. The View layer The View layer describes the user interface. Views are defined using XML, which is used by the web client framework to generate data-aware HTML views. We have menu items that can activate the actions that can render views. For example, the Users menu item processes an action also called Users, that in turn renders a series of views. There are several view types available, such as the list and form views, and the filter options made available are also defined by particular type of view, the search view. The Odoo development guidelines state that the XML files defining the user interface should be placed inside a views/ subdirectory. Let's start creating the user interface for our To-Do application. Adding menu items Now that we have a model to store our data, we should make it available on the user interface. For that we should add a menu option to open the To-do Task model so that it can be used. Create the views/todo_menu.xml file to define a menu item and the action performed by it: <?xml version="1.0"?> <odoo> <!-- Action to open To-do Task list --> <act_window id="action_todo_task" name="To-do Task" res_model="todo.task" view_mode="tree,form" /> <!-- Menu item to open To-do Task list --> <menuitem id="menu_todo_task" name="Todos" action="action_todo_task" /> </odoo> The user interface, including menu options and actions, is stored in database tables. 
The XML file is a data file used to load those definitions into the database when the module is installed or upgraded. The preceding code is an Odoo data file, describing two records to add to Odoo: The <act_window> element defines a client-side window action that will open the todo.task model with the tree and form views enabled, in that order The <menuitem> defines a top menu item calling the action_todo_task action, which was defined before Both elements include an id attribute. This id , also called an XML ID, is very important: it is used to uniquely identify each data element inside the module, and can be used by other elements to reference it. In this case, the <menuitem> element needs to reference the action to process, and needs to make use of the <act_window> id for that. Our module does not know yet about the new XML data file. This is done by adding it to the data attribute in the __manifest__.py file. It holds the list of files to be loaded by the module. Add this attribute to the descriptor's dictionary: 'data': ['views/todo_menu.xml'], Now we need to upgrade the module again for these changes to take effect. Go to the Todos top menu and you should see our new menu option available: Even though we haven't defined our user interface view, clicking on the Todos menu will open an automatically generated form for our model, allowing us to add and edit records. Odoo is nice enough to automatically generate them so that we can start working with our model right away. Odoo supports several types of views, but the three most important ones are: tree (usually called list views), form, and search views. We'll add an example of each to our module. Creating the form view All views are stored in the database, in the ir.ui.view model. To add a view to a module, we declare a <record> element describing the view in an XML file, which is to be loaded into the database when the module is installed. Add this new views/todo_view.xml file to define our form view: <?xml version="1.0"?> <odoo> <record id="view_form_todo_task" model="ir.ui.view"> <field name="name">To-do Task Form</field> <field name="model">todo.task</field> <field name="arch" type="xml"> <form string="To-do Task"> <group> <field name="name"/> <field name="is_done"/> <field name="active" readonly="1"/> </group> </form> </field> </record> </odoo> Remember to add this new file to the data key in manifest file, otherwise our module won't know about it and it won't be loaded. This will add a record to the ir.ui.view model with the identifier view_form_todo_task. The view is for the todo.task model and is named To-do Task Form. The name is just for information; it does not have to be unique, but it should allow one to easily identify which record it refers to. In fact the name can be entirely omitted, in that case it will be automatically generated from the model name and the view type. The most important attribute is arch, and contains the view definition, highlighted in the XML code above. The <form> tag defines the view type, and in this case contains three fields. We also added an attribute to the active field to make it read-only. Adding action buttons Forms can have buttons to perform actions. These buttons are able to trigger workflow actions, run window actions—such as opening another form, or run Python functions defined in the model. They can be placed anywhere inside a form, but for document-style forms, the recommended place for them is the <header> section. 
For our application, we will add two buttons to run the methods of the todo.task model:
<header>
    <button name="do_toggle_done" type="object" string="Toggle Done" class="oe_highlight" />
    <button name="do_clear_done" type="object" string="Clear All Done" />
</header>
The basic attributes of a button comprise the following:
The string attribute that has the text to be displayed on the button
The type attribute referring to the action it performs
The name attribute referring to the identifier for that action
The class attribute, which is an optional attribute to apply CSS styles, like in regular HTML
The complete form view
At this point, our todo.task form view should look like this:
<form>
    <header>
        <button name="do_toggle_done" type="object" string="Toggle Done" class="oe_highlight" />
        <button name="do_clear_done" type="object" string="Clear All Done" />
    </header>
    <sheet>
        <group name="group_top">
            <group name="group_left">
                <field name="name"/>
            </group>
            <group name="group_right">
                <field name="is_done"/>
                <field name="active" readonly="1" />
            </group>
        </group>
    </sheet>
</form>
Remember that for the changes to be loaded to our Odoo database, a module upgrade is needed. To see the changes in the web client, the form needs to be reloaded: either click again on the menu option that opens it or reload the browser page (F5 in most browsers). The action buttons won't work yet, since we still need to add their business logic.
The business logic layer
Now we will add some logic to our buttons. This is done with Python code, using the methods in the model's Python class.
Adding business logic
We should edit the todo_model.py Python file to add the methods called by the buttons to the class. First, we need to import the new API, so add it to the import statement at the top of the Python file:
from odoo import models, fields, api
The action of the Toggle Done button will be very simple: just toggle the Is Done? flag. For logic on records, use the @api.multi decorator. Here, self will represent a recordset, and we should then loop through each record. Inside the TodoTask class, add this:
@api.multi
def do_toggle_done(self):
    for task in self:
        task.is_done = not task.is_done
    return True
The code loops through all the to-do task records, and for each one, modifies the is_done field, inverting its value. The method does not need to return anything, but we should have it return at least a True value. The reason is that clients can use XML-RPC to call these methods, and this protocol does not support server functions returning just a None value.
For the Clear All Done button, we want to go a little further. It should look for all active records that are done and make them inactive. Usually, form buttons are expected to act only on the selected record, but in this case, we will want it to also act on records other than the current one:
@api.model
def do_clear_done(self):
    dones = self.search([('is_done', '=', True)])
    dones.write({'active': False})
    return True
On methods decorated with @api.model, the self variable represents the model with no record in particular. We will build a dones recordset containing all the tasks that are marked as done. Then, we set the active flag to False on them. The search method is an API method that returns the records that meet some conditions. These conditions are written in a domain, which is a list of triplets. The write method sets the values at once on all the elements of a recordset. The values to write are described using a dictionary.
Using write here is more efficient than iterating through the recordset to assign the value to each of them one by one.
Set up access security
You might have noticed that upon loading, our module is getting a warning message in the server log: The model todo.task has no access rules, consider adding one. The message is pretty clear: our new model has no access rules, so it can't be used by anyone other than the admin super user. As a super user, the admin ignores data access rules, and that's why we were able to use the form without errors. But we must fix this before other users can use our model. Another issue yet to address is that we want the to-do tasks to be private to each user. Odoo supports row-level access rules, which we will use to implement that.
Adding access control security
To get a picture of what information is needed to add access rules to a model, use the web client and go to Settings | Technical | Security | Access Controls List. Here we can see the ACL for some models. It indicates, per security group, what actions are allowed on records. This information has to be provided by the module, using a data file to load the lines into the ir.model.access model. We will add full access to the Employee group on the model. Employee is the basic access group nearly everyone belongs to. This is done using a CSV file named security/ir.model.access.csv. Let's add it with the following content:
id,name,model_id:id,group_id:id,perm_read,perm_write,perm_create,perm_unlink
access_todo_task_group_user,todo.task.user,model_todo_task,base.group_user,1,1,1,1
The filename corresponds to the model to load the data into, and the first line of the file has the column names. These are the columns provided by the CSV file:
id: It is the record external identifier (also known as XML ID). It should be unique in our module.
name: This is a description title. It is only informative and it's best if it's kept unique. Official modules usually use a dot-separated string with the model name and the group. Following this convention, we used todo.task.user.
model_id: This is the external identifier for the model we are giving access to. Models have XML IDs automatically generated by the ORM: for todo.task, the identifier is model_todo_task.
group_id: This identifies the security group to give permissions to. The most important ones are provided by the base module. The Employee group is such a case and has the identifier base.group_user.
The last four perm fields flag whether to grant read, write, create, or unlink (delete) access. We must not forget to add the reference to this new file in the __manifest__.py descriptor's data attribute. It should look like this:
'data': [
    'security/ir.model.access.csv',
    'views/todo_view.xml',
    'views/todo_menu.xml',
],
As before, upgrade the module for these additions to take effect. The warning message should be gone, and we can confirm that the permissions are OK by logging in with the user demo (the password is also demo). If we run our tests now, they should only fail the test_record_rule test case.
Summary
We created a new module from the start, covering the most frequently used elements in a module: models, the three basic types of views (form, list, and search), business logic in model methods, and access security. Always remember, when adding model fields, an upgrade is needed. When changing Python code, including the manifest file, a restart is needed.
When changing XML or CSV files, an upgrade is needed; also, when in doubt, do both: restart the server and upgrade the modules. Resources for Article: Further resources on this subject: Getting Started with Odoo Development [Article] Introduction to Odoo [Article] Web Server Development [Article]

Introduction to Creational Patterns using Go Programming

Packt
02 Jan 2017
12 min read
This article by Mario Castro Contreras, author of the book Go Design Patterns, introduces you to the Creational design patterns that are explained in the book. As the title implies, this article groups common practices for creating objects. Creational patterns try to give ready-to-use objects to users instead of asking for their input, which, in some cases, could be complex and would couple your code with the concrete implementations of functionality that should be defined in an interface. (For more resources related to this topic, see here.)
Singleton design pattern – Having a unique instance of an object in the entire program
Have you ever done interviews for software engineers? It's interesting that when you ask them about design patterns, more than 80% will start with the Singleton design pattern. Why is that? Maybe it's because it is one of the most used design patterns out there, or one of the easiest to grasp. We will start our journey into creational design patterns with it for the latter reason.
Description
The Singleton pattern is easy to remember. As the name implies, it will provide you with a single instance of an object, and guarantee that there are no duplicates. At the first call to use the instance, it is created and then reused between all the parts in the application that need to use that particular behavior.
Objective of the Singleton pattern
You'll use the Singleton pattern in many different situations. For example:
When you want to use the same connection to a database to make every query
When you open a Secure Shell (SSH) connection to a server to do a few tasks, and don't want to reopen the connection for each task
If you need to limit the access to some variable or space, you use a Singleton as the door to this variable
If you need to limit the number of calls to some places, you create a Singleton instance to make the calls in the accepted window
The possibilities are endless, and we have just mentioned some of them.
Implementation
Finally, we have to implement the Singleton pattern. You'll usually write a static method and instance to retrieve the Singleton instance. In Go, we don't have the keyword static, but we can achieve the same result by using the scope of the package. First, we create a structure that contains the object which we want to guarantee to be a Singleton during the execution of the program:
package creational

type singleton struct {
    count int
}

var instance *singleton

func GetInstance() *singleton {
    if instance == nil {
        instance = new(singleton)
    }
    return instance
}

func (s *singleton) AddOne() int {
    s.count++
    return s.count
}
We must pay close attention to this piece of code. In languages like Java or C++, the variable instance would be initialized to NULL at the beginning of the program. In Go, you can initialize a pointer to a structure as nil, but you cannot initialize a structure to nil (the equivalent of NULL). So the var instance *singleton line defines a variable called instance as a nil pointer to a structure of type singleton. We created a GetInstance method that checks if the instance has not been initialized already (instance == nil), and creates an instance in the space already allocated in the line instance = new(singleton). Remember, when we use the keyword new, we are creating a pointer to the type between the parentheses. The AddOne method will take the count of the variable instance, raise it by one, and return the current value of the counter.
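Before running the tests, here is a minimal usage sketch of the pattern above (the myapp/creational import path is hypothetical; adjust it to wherever the creational package lives in your project). Because GetInstance always returns the same pointer, the counter keeps incrementing across callers:
package main

import (
	"fmt"

	"myapp/creational" // hypothetical import path for the creational package shown above
)

func main() {
	s1 := creational.GetInstance()
	s2 := creational.GetInstance()

	fmt.Println(s1.AddOne()) // prints 1
	fmt.Println(s2.AddOne()) // prints 2, because s2 is the same instance as s1
	fmt.Println(s1 == s2)    // prints true, both variables hold the same pointer
}
Note that this implementation is not safe for concurrent use; if GetInstance can be called from multiple goroutines, you would typically guard the initialization with sync.Once.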
Let's now run our unit tests again:
$ go test -v -run=GetInstance
=== RUN TestGetInstance
--- PASS: TestGetInstance (0.00s)
PASS
ok
Factory method – Delegating the creation of different types of payments
The Factory method pattern (or simply, Factory) is probably the second-best known and used design pattern in the industry. Its purpose is to abstract the user from the knowledge of the structure it needs to achieve a specific purpose. By delegating this decision to a Factory, the Factory can provide the object that best fits the user's needs or the most updated version. It can also ease the process of downgrading or upgrading the implementation of an object if needed.
Description
When using the Factory method design pattern, we gain an extra layer of encapsulation so that our program can grow in a controlled environment. With the Factory method, we delegate the creation of families of objects to a different package or object to abstract us from the knowledge of the pool of possible objects we could use. Imagine that you have two ways to access some specific resource: by HTTP or FTP. For us, the specific implementation of this access should be invisible. Maybe we just know that the resource is available over HTTP or FTP, and we just want a connection that uses one of these protocols. Instead of implementing the connection ourselves, we can use the Factory method to ask for the specific connection. With this approach, we can grow easily in the future if we need to add an HTTPS object.
Objective of the Factory method
After the previous description, the following objectives of the Factory method design pattern must be clear to you:
Delegating the creation of new instances of structures to a different part of the program
Working at the interface level instead of with concrete implementations
Grouping families of objects to obtain a family object creator
Implementation
We will start with the GetPaymentMethod method. It must receive an integer that matches one of the defined constants of the same file to know which implementation it should return.
package creational

import (
    "errors"
    "fmt"
)

type PaymentMethod interface {
    Pay(amount float32) string
}

const (
    Cash      = 1
    DebitCard = 2
)

func GetPaymentMethod(m int) (PaymentMethod, error) {
    switch m {
    case Cash:
        return new(CashPM), nil
    case DebitCard:
        return new(DebitCardPM), nil
    default:
        return nil, errors.New(fmt.Sprintf("Payment method %d not recognized\n", m))
    }
}
We use a plain switch to check the contents of the argument m (method). If it matches any of the known methods (cash or debit card), it returns a new instance of the corresponding type. Otherwise, it will return nil and an error indicating that the payment method has not been recognized. Now we can run our tests again to check the second part of the unit tests:
$ go test -v -run=GetPaymentMethod .
=== RUN TestGetPaymentMethodCash
--- FAIL: TestGetPaymentMethodCash (0.00s)
	factory_test.go:16: The cash payment method message wasn't correct
	factory_test.go:18: LOG:
=== RUN TestGetPaymentMethodDebitCard
--- FAIL: TestGetPaymentMethodDebitCard (0.00s)
	factory_test.go:28: The debit card payment method message wasn't correct
	factory_test.go:30: LOG:
=== RUN TestGetPaymentMethodNonExistent
--- PASS: TestGetPaymentMethodNonExistent (0.00s)
	factory_test.go:38: LOG: Payment method 20 not recognized
FAIL
exit status 1
FAIL
Now we do not get errors saying that it couldn't find the type of payment method. Instead, we receive a "message wasn't correct" failure when it tries to use any of the methods that it covers.
We also got rid of the Not implemented message that was being returned when we asked for an unknown payment method. Let's implement the structures now:
type CashPM struct{}
type DebitCardPM struct{}

func (c *CashPM) Pay(amount float32) string {
    return fmt.Sprintf("%0.2f paid using cash\n", amount)
}

func (c *DebitCardPM) Pay(amount float32) string {
    return fmt.Sprintf("%#0.2f paid using debit card\n", amount)
}
We just take the amount and print it in a nicely formatted message. With this implementation, the tests will all pass now:
$ go test -v -run=GetPaymentMethod .
=== RUN TestGetPaymentMethodCash
--- PASS: TestGetPaymentMethodCash (0.00s)
	factory_test.go:18: LOG: 10.30 paid using cash
=== RUN TestGetPaymentMethodDebitCard
--- PASS: TestGetPaymentMethodDebitCard (0.00s)
	factory_test.go:30: LOG: 22.30 paid using debit card
=== RUN TestGetPaymentMethodNonExistent
--- PASS: TestGetPaymentMethodNonExistent (0.00s)
	factory_test.go:38: LOG: Payment method 20 not recognized
PASS
ok
Do you see the LOG: messages? They aren't errors; we just print some information that we receive when using the package under test. These messages can be omitted unless you pass the -v flag to the test command:
$ go test -run=GetPaymentMethod .
ok
Abstract Factory – A factory of factories
After learning about the Factory design pattern, where we grouped a family of related objects (in our case, payment methods), one can be quick to think: what if I group families of objects in a more structured hierarchy of families?
Description
The Abstract Factory design pattern is a new layer of grouping to achieve a bigger (and more complex) composite object, which is used through its interfaces. The idea behind grouping objects in families and grouping families is to have big factories that can be interchangeable and can grow more easily. In the early stages of development, it is also easier to work with factories and abstract factories than to wait until all concrete implementations are done to start your code. Also, you won't write an Abstract Factory from the beginning unless you know that your object's inventory for a particular field is going to be very large and could easily be grouped into families.
The objective
Grouping related families of objects is very convenient when the number of objects is growing so much that creating a unique point to get them all seems the only way to gain flexibility in runtime object creation. The following objectives of the Abstract Factory method must be clear to you:
Provide a new layer of encapsulation for Factory methods that return a common interface for all factories
Group common factories into a super Factory (also called a factory of factories)
Implementation
The implementation of every factory is already done, for the sake of brevity. They are very similar to the Factory method, with the only difference being that in the Factory method, we don't use an instance of the factory, because we use the package functions directly. The implementation of the vehicle factory is as follows:
func GetVehicleFactory(f int) (VehicleFactory, error) {
    switch f {
    case CarFactoryType:
        return new(CarFactory), nil
    case MotorbikeFactoryType:
        return new(MotorbikeFactory), nil
    default:
        return nil, errors.New(fmt.Sprintf("Factory with id %d not recognized\n", f))
    }
}
Like in any factory, we switch between the factory possibilities to return the one that was demanded. As we have already implemented all concrete vehicles, the tests must be run too; the command and its output follow the sketch below.
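For context, the VehicleFactory interface and the concrete factories referenced above are implemented elsewhere in the book and are not reproduced in this article. The following is only a hedged sketch of what they might look like: the constants CarFactoryType and MotorbikeFactoryType appear in GetVehicleFactory above, but the interface methods and the minimal Car and Motorbike product types are illustrative guesses inferred from the test output, not the book's actual code.
// In package creational, alongside GetVehicleFactory.

// Vehicle is the interface shared by every product the factories build.
type Vehicle interface {
	NumWheels() int
	NumSeats() int
}

// VehicleFactory is the interface implemented by each concrete factory.
type VehicleFactory interface {
	NewVehicle(v int) (Vehicle, error)
}

// Factory identifiers used by GetVehicleFactory.
const (
	CarFactoryType       = 1
	MotorbikeFactoryType = 2
)

// Minimal product types so that the sketch compiles.
type Car struct{}

func (c *Car) NumWheels() int { return 4 }
func (c *Car) NumSeats() int  { return 4 }

type Motorbike struct{}

func (m *Motorbike) NumWheels() int { return 2 }
func (m *Motorbike) NumSeats() int  { return 1 }

// Concrete factories; the book's versions switch on v to build the specific
// models (luxury car, family car, sport motorbike, cruise motorbike, and so on).
type CarFactory struct{}

func (f *CarFactory) NewVehicle(v int) (Vehicle, error) { return &Car{}, nil }

type MotorbikeFactory struct{}

func (f *MotorbikeFactory) NewVehicle(v int) (Vehicle, error) { return &Motorbike{}, nil }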
go test -v -run=Factory -cover .
=== RUN TestMotorbikeFactory
--- PASS: TestMotorbikeFactory (0.00s)
	vehicle_factory_test.go:16: Motorbike vehicle has 2 wheels
	vehicle_factory_test.go:22: Sport motorbike has type 1
=== RUN TestCarFactory
--- PASS: TestCarFactory (0.00s)
	vehicle_factory_test.go:36: Car vehicle has 4 seats
	vehicle_factory_test.go:42: Luxury car has 4 doors.
PASS
coverage: 45.8% of statements
ok
All of them passed. Take a close look and note that we have used the -cover flag when running the tests to return the coverage percentage of the package: 45.8%. What this tells us is that 45.8% of the lines are covered by the tests we have written, but 54.2% is still not under test. This is because we haven't covered the cruise motorbike and the family car with tests. If you write those tests, the result should rise to around 70.8%.
Prototype design pattern
The last pattern we will see in this article is the Prototype pattern. Like all creational patterns, this too comes in handy when creating objects, and it is very common to see the Prototype pattern surrounded by more patterns.
Description
The aim of the Prototype pattern is to have an object or a set of objects that are already created at compilation time, but which you can clone as many times as you want at runtime. This is useful, for example, as a default template for a user who has just registered with your webpage or a default pricing plan in some service. The key difference between this and the Builder pattern is that objects are cloned for the user instead of being built at runtime. You can also build a cache-like solution, storing information using a prototype.
Objective
Maintain a set of objects that will be cloned to create new instances
Free the CPU from complex object initialization, using more memory resources instead
We will start with the GetClone method. This method should return an item of the specified type:
type ShirtsCache struct{}

func (s *ShirtsCache) GetClone(m int) (ItemInfoGetter, error) {
    switch m {
    case White:
        newItem := *whitePrototype
        return &newItem, nil
    case Black:
        newItem := *blackPrototype
        return &newItem, nil
    case Blue:
        newItem := *bluePrototype
        return &newItem, nil
    default:
        return nil, errors.New("Shirt model not recognized")
    }
}
The Shirt structure also needs a GetInfo implementation to print the contents of the instances.
type ShirtColor byte

type Shirt struct {
    Price float32
    SKU   string
    Color ShirtColor
}

func (s *Shirt) GetInfo() string {
    return fmt.Sprintf("Shirt with SKU '%s' and Color id %d that costs %f\n", s.SKU, s.Color, s.Price)
}
Finally, let's run the tests to see that everything is now working:
go test -run=TestClone -v .
=== RUN TestClone
--- PASS: TestClone (0.00s)
	prototype_test.go:41: LOG: Shirt with SKU 'abbcc' and Color id 1 that costs 15.000000
	prototype_test.go:42: LOG: Shirt with SKU 'empty' and Color id 1 that costs 15.000000
	prototype_test.go:44: LOG: The memory positions of the shirts are different 0xc42002c038 != 0xc42002c040
PASS
ok
In the log (remember to set the -v flag when running the tests), you can check that shirt1 and shirt2 have different SKUs. Also, we can see the memory positions of both objects. Take into account that the positions shown on your computer will probably be different.
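For reference, the prototype instances (whitePrototype, blackPrototype, and bluePrototype), the color constants, and the ItemInfoGetter interface used by GetClone are defined elsewhere in the book and are not shown in this article. Here is a hedged sketch of what they might look like; the constant values, SKUs, and prices are illustrative assumptions loosely matched to the test output above, not the book's actual definitions.
// ItemInfoGetter is the interface returned by GetClone.
type ItemInfoGetter interface {
	GetInfo() string
}

// Color identifiers passed as the argument to GetClone.
const (
	White = 1
	Black = 2
	Blue  = 3
)

// Prototype instances that GetClone copies; every clone starts from one of
// these values and is then customized by the caller (for example, by setting
// a real SKU).
var (
	whitePrototype = &Shirt{Price: 15.00, SKU: "empty", Color: White}
	blackPrototype = &Shirt{Price: 16.00, SKU: "empty", Color: Black}
	bluePrototype  = &Shirt{Price: 17.00, SKU: "empty", Color: Blue}
)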
Summary
We have seen the creational design patterns commonly used in the software industry. Their purpose is to abstract the user from the creation of objects for handling complexity or maintainability purposes. Design patterns have been the foundation of thousands of applications and libraries since the nineties, and most of the software we use today has many of these creational patterns under the hood.
Resources for Article: Further resources on this subject: Getting Started [article] Thinking Functionally [article] Auditing and E-discovery [article]

What is Lightning?

Packt
30 Dec 2016
18 min read
In this article by Mike Topalovich, author of the book Salesforce Lightning Application Development Essentials, we will discuss Salesforce Lightning. As Salesforce developers, we know that since Dreamforce '15, Salesforce has been all Lightning, all the time. The flagship CRM products, Sales Cloud and Service Cloud, have been rebranded as Sales Cloud Lightning and Service Cloud Lightning. In fact, many Salesforce products have undergone the Lightning treatment, the word Lightning being added to their product names overnight with few noticeable changes to the products themselves. This has led many in the Salesforce ecosystem to step back and ask, What is Lightning? (For more resources related to this topic, see here.)
Lightning changes everything
Lightning is not just the new Salesforce UI; it is a complete re-imagining of the entire Salesforce application and platform. Lightning represents a grand vision of unifying the Salesforce experience for everyone who interacts with Salesforce products or the Salesforce platform, on any device. In no uncertain terms, Lightning is the most important product update in the history of Salesforce as a company. Lightning represents a completely new vision for both the flagship CRM products and the platform. Salesforce is betting the company on it. Lightning changes not only how we interact with Salesforce, but how we design and develop solutions for the platform. Developers and ISV partners now have the ability to create rich, responsive applications using the same framework and design tools that Salesforce uses internally, ensuring that the user experience maintains a consistent look and feel across all applications. While the initial overuse of the term Lightning may be a source of confusion for many in the Salesforce ecosystem, don't let the noise drown out the vision. Lightning changes everything we know about Salesforce, but to understand how, we need to focus on the three key pillars of Lightning:
Lightning Experience
Salesforce Lightning Design System
Lightning Component framework
If we think about Lightning as the unified Salesforce experience across all our devices, it makes our mission much clearer. If we think about designing and developing responsive, reusable components for this new unified experience, Lightning makes a lot more sense.
A brief history of Lightning
The unified Lightning vision as we know it today has been rolled out in fits and starts. To understand how we arrived at the current vision for Lightning, we can look back on prior Dreamforce events as milestones in the history of Lightning.
Dreamforce 2013
With Dreamforce '13, Salesforce recognized that the world was moving to a mobile-first mindset and rebranded their mobile application as Salesforce1. With Salesforce1, they also tried to sell the vision of a unified customer experience for the first time. According to a press release for Dreamforce '13, "Salesforce1 is a new social, mobile and cloud customer platform built to transform sales, service and marketing apps for the Internet of Customers." The vision was too ambitious at the time, the messaging was too confusing, and the platform was nowhere close to being ready to support any type of unified experience.
Dreamforce 2014
Lightning emerged as a platform with Dreamforce '14.
Branded as Salesforce1 Lightning, Salesforce opened up the underlying Aura JavaScript UI framework to developers for the first time, enabling the development of custom components for the Salesforce1 mobile application using the same technology that Salesforce had used to develop that application. The press release for Dreamforce '14 hinted at what was in store for Lightning: "Now customers, developers and Salesforce partners can take advantage of the new Lightning Framework to quickly build engaging apps across every device. The same framework Salesforce's internal development team uses to build apps can now be used to build custom Lightning components and create any user experience." At this point, Salesforce was still using the Salesforce1 branding for the unified end-to-end experience across Salesforce products and platforms, but we now officially had the Lightning Framework to work with.
Dreamforce 2015
Dreamforce '15 may have been the official coming out party for Lightning, but in an unprecedented move for Salesforce, the company held a special pre-Dreamforce Meet the New Salesforce event on August 25, 2015, to announce the new Lightning Experience user interface as well as a complete rebranding of the end-to-end experience as Lightning. The Dreamforce event focused on strengthening the branding and educating developers, admins, and end users on what this unified experience meant for the future of Salesforce. Since then, Salesforce has been all Lightning, all the time.
Dreamforce 2016
With Dreamforce '16 and the Winter '17 release of Salesforce, Lightning had finally arrived as a stable, optimized, enterprise-ready platform. Dreamforce '16 was less about hype and more about driving Lightning adoption. Sessions focused on design patterns and best practices rather than selling the platform to developers. New tooling was introduced to make the Lightning development experience something that Salesforce developers could get excited about. With Winter '17, Lightning Experience felt like a true unified end-to-end experience instead of a patchwork of functionality. The Winter '17 release notes were packed with enhancements that would get many organizations off the fence about Lightning and shift the adoption bell curve toward an early majority, away from the early-adopter state it had been lingering in while Salesforce filled in the gaps in the platform. This is the Lightning we had been waiting for.
Lightning Experience
If someone were to ask you what the Lightning Experience was all about, your first reaction might be, "It's the new Salesforce UI." While that is technically correct, as Lightning Experience is the brand name given to the user interface that replaces what we now refer to as Salesforce Classic, the implementation of this user interface is part of the larger vision of a unified Salesforce experience across all devices. Lightning Experience takes the old way of doing things in Salesforce Classic (long, scrolling pages) and blows it up into tiny pieces. You then reassemble those pieces in a way that makes the most sense for your business users. You can even create your own custom pieces, called components, and include those alongside components that Salesforce gives you out of the box, or components built by third parties that you download from the AppExchange. It is all completely seamless.
Focusing on getting things done While the initial release of Lightning Experience focused on making salespeople more productive, the interface is rapidly evolving to improve on all areas of CRM. The ability to seamlessly transition work between desktop and mobile devices will enable every Salesforce user to find new ways to connect with customers and increase the effectiveness of business processes across the enterprise. While the initial release of Lightning Experience wasn't quite complete, it did include over 25 new features and a total redesign of many pages. Some of the notable improvements include: Component-based record pages that focused on getting work done in the context of the Salesforce object record, rather than having to scroll to find pertinent related information Completely redesigned reports and dashboards that enable visualizing customer data in new ways, with flexible layouts and additional filtering options An opportunity workspace designed to help salespeople get to closed won faster by focusing on actions instead of raw data Kanban boards for visualizing opportunities in their various deal stages and enabling salespeople to drag and drop multiple opportunities to new stages rather than having to edit and save individual records An enhanced Notes feature that enables users to create notes with rich text and attach them to multiple records at the same time Unified Salesforce Experience Following the mobile-first mindset adopted with the launch of the Salesforce1 platform, the same component framework that is used to power Salesforce1 is what underlies Lightning Experience. The incorporation of design principles from the Salesforce Lightning Design System ensures that users get the same responsive experience whether they access Salesforce from desktop browsers, mobile devices, tablets, wearables, or anything else that comes along in the near future. Developers and ISV partners can now build custom components and applications that plug right into Lightning Experience, rather than having to build custom pages and standalone applications. Blurring the lines between clicks and code Experienced Salesforce developers know that a key consideration when designing Salesforce solutions is to find the right balance between using declarative, out-of-the-box functionality and the programmatic capabilities of the platform. In the world of Salesforce development, we lovingly refer to this dichotomy as clicks versus code. With Lightning Experience and the introduction of Lightning App Builder, the discussion shifts from clicks or code to clicks and code, as developers can now build custom components and expose them to the Lightning App Builder interface, allowing admins to drag and drop these reusable components onto a canvas and assemble new Lightning pages. While this sentiment may strike fear, uncertainty, and doubt into the hearts of developers, any time we can move from code to clicks, or enable admins to maintain Salesforce customizations, it is a good thing. Lightning Experience enables a closer relationship between admins and developers. Developers can focus on building reusable components, admins can focus on maintaining the user experience. Salesforce Lightning Design System A design system is a collection of design principles, style guides, and elements that enable developers to focus on how components and applications work, while designers can focus on how the application looks and feels. 
The Salesforce Lightning Design System (SLDS) is a trusted, living, platform-agnostic design system that was built from the ground up to provide developers with everything needed to implement the look and feel of Lightning Experience and the Salesforce1 mobile application. SLDS ensures consistency across all components and applications, whether they are written by Salesforce developers, ISV partners, or even Salesforce itself when designing and implementing product features. Salesforce developed SLDS with four key design principles in mind:
Clarity
Efficiency
Consistency
Beauty
These principles are applied to colors, typography, layout, icons, and more, throughout the accompanying CSS framework. Developers can implement the design system by including SLDS Cascading Style Sheets (CSS) and icon libraries in components and applications, and applying the appropriate CSS classes to component markup. SLDS also includes CSS style sheets for applying the design system to Visualforce components, Heroku, and native iOS applications. When SLDS was first introduced, adding the style sheets or icon libraries to a Salesforce org required installing an unmanaged package or uploading files as static resources and manually upgrading to new versions of the design system as they were released. As of the Winter '17 release of Salesforce, SLDS is included out of the box with all Salesforce orgs and no longer requires an explicit reference to the static resources from Lightning components or applications. You simply reference the appropriate SLDS class names in your component markup and the styling will be applied automatically, or you can use Lightning Base Components, which apply SLDS implicitly without additional markup.
Lightning Component framework
Traditionally, Salesforce UI design came down to one question: do I want to recreate the look and feel of Salesforce with custom Visualforce pages, or do I want to install a third-party framework to create rich application interfaces? The prevailing design pattern since the fall of Adobe Flash has been to use Visualforce to simply render the output from JavaScript frameworks such as Backbone, Angular, Ember, Sencha, and others. Developers could continue to follow MVC patterns by using Apex controllers to retrieve data from the Salesforce database. While this may have enabled a rich, responsive experience for certain applications developed for Salesforce, users still had to deal with a mixed experience across all of Salesforce, especially if multiple JavaScript frameworks were in use. Lightning Experience and the Lightning Component framework solve a problem that has long been a barrier to a truly unified experience for all Salesforce users: providing a single, integrated framework that lets developers create rich, responsive applications that can be seamlessly plugged in anywhere in the UI rather than having to stand alone in separate pages or standalone applications. Because the Lightning Component framework is what underlies Lightning Experience and the Salesforce1 mobile application, we no longer have to choose a JavaScript framework developed and maintained outside of Salesforce. We can now create components and applications using a rich JavaScript framework that is provided and maintained by Salesforce.
What is the Lightning Component framework?
The Lightning Component framework is a client-side UI framework that was built by Salesforce using the open source Aura UI framework.
The framework uses JavaScript for client-side operations and exposes an API to access Salesforce data using Apex controllers. The Lightning Component framework was initially created to support development for the Salesforce1 mobile application, but is now the standard for responsive, client-side single-page applications for the end-to-end Salesforce user and developer experience across all browsers and devices. The framework provides a number of reusable out-of-the-box components for you to get started building your own Lightning components, and the platform is fully maintained and supported by Salesforce. The problem Salesforce solved with the Lightning Component framework was to give Salesforce developers a single standardized and supported JavaScript framework to move beyond the limitations of Visualforce and build rich applications with a common design system without having to select, install, and maintain a third-party framework. Eliminating the JavaScript framework sprawl within the Salesforce development ecosystem enables developers and admins to deliver customized business solutions with a standardized look and feel without having to learn and maintain yet another framework from another vendor that wasn't built specifically for the Salesforce platform. What is JavaScript? Along with HTML and CSS, JavaScript is one of the three core languages of web development. If you do not have a background in JavaScript, don't worry. Even though it will be the most difficult thing you will have to learn when coming up to speed on Lightning component-development, JavaScript has been around for decades and there are countless resources available for learning the language. JavaScript was introduced in the mid-1990s as a scripting language for the Netscape Navigator browser, with Microsoft subsequently releasing a version for Internet Explorer. By the late 1990s, JavaScript was standardized across browsers with the introduction of what is called ECMAScript. Today, all modern browsers include JavaScript engines. JavaScript will not be completely foreign to Apex developers, or anyone with an object-oriented programming (OOP) background, as it follows object-oriented principles. What will throw you off if you have an Apex background is the fact that JavaScript is loosely typed, whereas Apex is a strongly typed language. This will take some getting used to, conceptually. At its core, JavaScript is a language that is used to read and manipulate what we call the Document Object Model (DOM). The DOM is a programmatic interface that gets built when a browser reads an HTML document and converts each HTML element into what are called node objects. These HTML nodes form a tree-like structure in the DOM, and we can use JavaScript to find nodes, add nodes, remove nodes, and modify nodes to make our web applications dynamic. The other core function that JavaScript performs is listening for and handling events that occur in the browser. HTML itself is a static markup language and was not built to be dynamic in its rendering, which is why JavaScript was created to handle events such as clicking a mouse, changing a picklist value, or putting a form element into focus. You can write JavaScript functions to handle DOM events, and your JavaScript functions can in turn manipulate the DOM by adding, removing, or modifying DOM elements. 
Many of us have only had to learn JavaScript at a cursory level, because JavaScript libraries such as jQuery and JavaScript frameworks such as Sencha ExtJS, Angular, Node, and Backbone take care of a lot of the heavy lifting for us when it comes to actual JavaScript programming. Lightning requires more direct JavaScript programming than many frameworks do, which gives you greater control over the functions in your Lightning components and applications, but unfortunately, you're going to have to bone up on your JavaScript knowledge before you can take advantage of that level of control.

What are JavaScript frameworks?

JavaScript frameworks handle much of the behind-the-scenes complexity of JavaScript coding and DOM manipulation, and give developers a simplified, template-based approach to building web applications. While JavaScript itself does not follow the Model-View-Controller (MVC) design pattern, many JavaScript frameworks implement MVC to provide developers with a familiar architecture for separating the data model and view of an application with a logic, or controller, layer. Each component of the MVC architecture can be maintained separately and be brought together in a cohesive application. Some frameworks, such as Sencha ExtJS, may implement MVC but are more focused on enabling rich user interfaces by giving developers pre-built UI widgets that can be configured declaratively. Other frameworks are designed for touch-driven, responsive mobile applications. There are dozens of JavaScript frameworks out there, the most common examples being Backbone.js, AngularJS, Ember.js, Knockout.js, React.js, and ExtJS, among others.

What is Aura?

Aura is an open source JavaScript UI framework that is maintained by Salesforce. Aura is component-based, and uses JavaScript on the client-side frontend, with Java on the server-side backend. Aura is the framework that underpins the Lightning Component framework. While the Lightning Component framework and Aura have many similarities on the surface, do not try to use Aura components or functions that are not explicitly supported in the Lightning Component framework documentation. Many developers have already found that these undocumented features may work at first, but unless they are explicitly supported by Salesforce, they can be taken away at any time.

Why should I use the Lightning Component framework?

Why should Salesforce developers consider moving to Lightning component development? For starters, Lightning is the future of Salesforce development. There is no way around it: Salesforce is betting the company on Lightning. If you ignore that and simply focus on the value that Lightning can provide, you will find that there are many compelling reasons for making the jump to Lightning.

Responsive design

From a single code base, you can create reusable components that will give your users a unified experience across all form factors. You no longer have to maintain separate desktop applications, tablet applications, and mobile applications.

Reusable components

You can create components that can be consumed by other Lightning components and applications, allowing your component to be reused many times in many different places. Admins can use your components in the Lightning App Builder, allowing them to declaratively create new Lightning Pages with your components. This is where the line between declarative and programmatic development starts to blur!
Better performance

Because Lightning components render on the client and do not require expensive round trips to a server-side controller to update data and context, you can build components that are lightning fast. Have you ever used the AJAX components in Visualforce to do something as simple as create a type-ahead function for an input field? There was always a lag between the time an event was handled and the target component was re-rendered. With Lightning components, any change to an attribute value automatically re-renders the component, giving you the ability to create high-performing, client-side Salesforce applications. Rendering on the client side also reduces the need for mobile applications to make expensive calls to a server API, significantly improving performance in low-bandwidth situations.

JavaScript + HTML5 + CSS

If you have at least a cursory knowledge of web development, Lightning follows open standards and practices that you are already familiar with. While there may be a learning curve for Visualforce developers who do not have experience with JavaScript, HTML5, or CSS, the use of web standards rather than a proprietary framework means there is a wealth of information and resources available to quickly get up to speed on the basics and transition to Lightning component development. For new developers coming onto the Salesforce platform, Lightning provides an opportunity to quickly apply existing web-application development skills and hit the ground running, without having to first learn Apex or Visualforce.

Event-driven architecture

With Visualforce development, we are constrained to a fairly rigid architecture that executes server-side controller actions when user-driven events occur, such as clicking a link or a button. Granted, you can use specialized Visualforce tags or component-specific event handlers to handle supported events, but this requires a significant amount of hard-coding to server-side controller methods. With Lightning, you can listen for and handle just about any DOM event or custom component event and determine what action you want to take when that event occurs. For example, you can handle the onchange event when a picklist value is selected and immediately call a function in your client-side controller to take action based on the changed value. You even have the ability to define and raise custom events, and to determine how those events should be handled within your component hierarchy.

Component encapsulation

Encapsulation simply means that we can wall off the inner workings of a Lightning component and not expose the secret sauce behind it to other components or applications that reference it. This allows us to update the code within a Lightning component without breaking any upstream components or applications. Encapsulation enables us to write reusable, efficient code that is easily maintainable.

Summary

In this article, we learned about Salesforce, the history of Lightning, and the need for the Lightning Component framework.

Resources for Article:

Further resources on this subject:

Introducing Salesforce Chatter [article]
Subscribing to a report [article]
Learning How to Manage Records in Visualforce [article]
Introduction to Functional Programming in PHP

Packt
30 Dec 2016
12 min read
This article by Gilles Crettenand, author of the book Functional PHP, covers some of the concepts explained in the book in a concise manner. We will look at the following:

- Declarative programming
- Functions
- Recursion
- Composing functions
- Benefits of functional programming

(For more resources related to this topic, see here.)

Functional programming has gained a lot of traction in the last few years. Various big tech companies have started using functional languages, for example:

- Twitter on Scala (http://www.artima.com/scalazine/articles/twitter_on_scala.html)
- WhatsApp being written in Erlang (http://www.fastcompany.com/3026758/inside-erlang-the-rare-programming-language-behind-whatsapps-success)
- Facebook using Haskell (https://code.facebook.com/posts/302060973291128/open-sourcing-haxl-a-library-for-haskell/1)

There is some really wonderful and successful work being done on functional languages that compile to JavaScript, the Elm and PureScript languages to name a few. There are also efforts to create new languages that either extend or compile to more traditional languages, such as Hy and Coconut for Python. Even Apple's new language for iOS development, Swift, has multiple concepts from functional programming integrated into its core.

However, this article is not about using a new language or learning a whole new technology; it is about benefiting from functional techniques without having to change our whole stack. By just applying some principles to our everyday PHP, we can greatly improve the quality of our life and code.

Declarative programming

Functional programming is also sometimes called declarative programming, in contrast to imperative programming. These styles of programming are called programming paradigms. Object-oriented programming is also a paradigm, but it is one that is strongly tied to imperative programming. Instead of explaining the difference at length, let's demonstrate it with an example. First, an imperative version using PHP:

    <?php

    function getPrices(array $products) {
        // let's assume the $products parameter is an array of products.
        $prices = [];
        foreach ($products as $p) {
            if ($p->stock > 0) {
                $prices[] = $p->price;
            }
        }
        return $prices;
    }

Now let's see how you can do the same with SQL, which, among other things, is a declarative language:

    SELECT price FROM products WHERE stock > 0;

Notice the difference? In the first example, you tell the computer what to do step by step, taking care of storing intermediary results yourself. In the second example, you only describe what you want, and it is then the role of the database engine to return the results. In a way, functional programming looks a lot more like SQL than the PHP code we just saw.

Functions

Functional programming, as its name suggests, revolves around functions. In order to apply functional techniques effectively, a language must support functions as first-class citizens, also called first-class functions. This means that functions are considered like any other value: they can be created and passed around as parameters to other functions, and they can be used as return values. Luckily, PHP is such a language; you can create functions at will, pass them around as parameters, and even return them.

Another fundamental concept is the idea of a pure function or, in other words, a function that only uses its input to produce a result. This means that you cannot use any kind of external or internal state to perform your computation. Another way to look at this is from the angle of dependencies.
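As a minimal sketch of that angle (the VAT example and function names are invented for illustration), the first version below silently depends on global state, while the pure version receives everything it needs as parameters:

    <?php

    $vatRate = 0.08; // external, mutable state

    // Impure: the result depends on a global that is invisible in the signature.
    function priceWithVatImpure(float $price): float
    {
        global $vatRate;
        return $price * (1 + $vatRate);
    }

    // Pure: the same input always produces the same output,
    // and every dependency is an explicit parameter.
    function priceWithVatPure(float $price, float $vatRate): float
    {
        return $price * (1 + $vatRate);
    }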
All of the dependencies of your functions need to be clearly declared in the signature. This helps a lot when someone tries to understand how and what your function is doing.

Higher-order functions

PHP functions can take functions as parameters and return functions as return values. A function that does either of those is called a higher-order function. It is as simple as that. There are a few of them that are commonly used in any functional code base.

Map

The map, or array_map, method in PHP is a higher-order function that applies a given callback to all the elements of a collection. The return value is a collection in the same order. A simple example is as follows:

    <?php

    function square(int $x): int {
        return $x * $x;
    }

    $squared = array_map('square', [1, 2, 3, 4]);
    /* $squared contains [1, 4, 9, 16] */

Filter

The filter, or array_filter, method in PHP is a higher-order function that keeps only certain elements of a collection based on a Boolean predicate. The return value is a collection that contains only the elements returning true for the predicate function. A simple example is as follows:

    <?php

    function odd(int $a): bool {
        return $a % 2 === 1;
    }

    $filtered = array_filter([1, 2, 3, 4, 5, 6], 'odd');
    /* $filtered contains [1, 3, 5] */

Fold or reduce

Folding refers to a process where you reduce a collection to a return value using a combining function. Depending on the language, this operation can have multiple names: fold, reduce, accumulate, aggregate, or compress. As with other functions related to arrays, the PHP version is the array_reduce function. You may be familiar with the array_sum function, which calculates the sum of all the values in an array. This is in fact a fold and can be easily written using the array_reduce function:

    <?php

    function sum(int $carry, int $i): int {
        return $carry + $i;
    }

    $summed = array_reduce([1, 2, 3, 4], 'sum', 0);
    /* $summed contains 10 */

You don't necessarily need to use the elements to produce a value. You could, for example, implement a naive replacement for the in_array method using fold:

    <?php

    function in_array2(string $needle, array $haystack): bool {
        $search = function(bool $contains, string $i) use ($needle): bool {
            return $needle == $i ? true : $contains;
        };
        return array_reduce($haystack, $search, false);
    }

    var_dump(in_array2('two', ['one', 'two', 'three']));
    // bool(true)

Recursion

In the academic sense, recursion is the idea of dividing a problem into smaller instances of the same problem. For example, if you need to scan a directory recursively, you first scan the starting directory and then scan its children and grandchildren. Most programming languages support recursion by allowing a function to call itself, and this idea is often what is described as recursion. Let's see how we can scan a directory using recursion:

    <?php

    function searchDirectory($dir, $accumulator = []) {
        foreach (scandir($dir) as $path) {
            // Ignore hidden files, the current directory, and the parent directory
            if (strpos($path, '.') === 0) {
                continue;
            }

            $fullPath = $dir.DIRECTORY_SEPARATOR.$path;

            if (is_dir($fullPath)) {
                // Recurse using the full path so that nested directories resolve correctly
                $accumulator = searchDirectory($fullPath, $accumulator);
            } else {
                $accumulator[] = $fullPath;
            }
        }
        return $accumulator;
    }

We start by using the scandir method to obtain all files and directories. Then, if we encounter a child directory, we call the function on it again. Otherwise, we simply add the file to the accumulator. This function is recursive because it calls itself.
You can write this using control structures, but since you don't know in advance how deep your folder hierarchy is, the code will probably be a lot messier and harder to understand.

Trampolines

Each time you call a function, information gets added to the stack in memory. This can be an issue when doing recursion, as you only have a limited amount of memory available. Until the last recursive call, memory usage keeps growing and a stack overflow can happen. The only way we can avoid stack growth is to return a value instead of calling a new function. This value can hold the information that is needed to perform a new function call, which will continue the computation. This also means that we need some cooperation from the caller of the function. This helpful caller is called a trampoline, and here is how it works:

1. The trampoline calls our f function.
2. Instead of making a recursive call, the f function returns the next call encapsulated inside a data structure with all the arguments.
3. The trampoline extracts the information and performs a new call to the f function.
4. The last two steps repeat until the f function returns a real value.
5. The trampoline receives a value and returns it to the real caller.

If you want to use trampolines in your own project, I invite you to install the following library, which offers some helpers compared to our crude implementation:

    composer require functional-php/trampoline

Here is an example taken from the documentation:

    <?php

    use FunctionalPHP\Trampoline as t;

    function factorial($n, $acc = 1) {
        return $n <= 1 ? $acc : t\bounce('factorial', $n - 1, $n * $acc);
    };

Composing functions

Previously, we discussed the idea of building blocks and small pure functions. But, so far, we haven't even hinted at how those can be used to build something bigger. What good is a building block if you cannot use it? The answer partly lies in function composition. As is often the case in functional programming, the concept is borrowed from mathematics. If you have two functions, f and g, you can create a third function by composing them. The usual notation in mathematics is (f ∘ g)(x), which is equivalent to calling them one after the other as f(g(x)).

You can compose any two given functions really easily with PHP using a wrapper function. Say you want to display a title in all caps and with only safe HTML characters:

    <?php

    function safe_title2(string $s) {
        return strtoupper(htmlspecialchars($s));
    }

Functional libraries for PHP often come with a helper that can create new functions out of multiple subparts easily. For example, using Lars Strojny's Functional PHP library, you can write the following:

    <?php

    $titles4 = array_map(compose('htmlspecialchars', 'strtoupper', 'trim'), $titles);

Partial application

You might want to set some parameters of a function but leave some of them unassigned for later. For example, we might want to create a function that returns an excerpt of a blog post. The dedicated term for setting such a value is "to bind a parameter" or "to bind an argument". The process itself is called partial application, and the new function is said to be partially applied.
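As a minimal hand-rolled sketch of that idea (the excerpt length and function name are invented for illustration), a closure can bind some arguments of substr now and take the remaining one later:

    <?php

    // Returns a new function with the offset and length already bound.
    function makeExcerpt(int $length): callable
    {
        return function (string $text) use ($length): string {
            return substr($text, 0, $length);
        };
    }

    $excerpt = makeExcerpt(5);
    echo $excerpt('Lorem ipsum dolor sit amet.'); // Lorem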
The Functional PHP library also comes with helpers to partially apply a function:

    <?php

    use function Functional\partial_right;
    use function Functional\partial_left;
    use function Functional\partial_any;
    use const Functional\…;

    $excerpt = partial_right('substr', 0, 5);
    echo $excerpt('Lorem ipsum dolor si amet.'); // Lorem

    $fixed_string = partial_left('substr', 'Lorem ipsum dolor si amet.');
    echo $fixed_string(6, 5); // ipsum

    $start_placeholder = partial_any('substr', 'Lorem ipsum dolor si amet.', …(), 5);
    echo $start_placeholder(12); // dolor

Currying

Currying is often used as a synonym for partial application. Although both concepts allow us to bind some parameters of a function, the core ideas are a bit different. The idea behind currying is to transform a function that takes multiple arguments into a sequence of functions that each take one argument. As this might be a bit hard to grasp, let's try to curry a function. The result is called a curried function. Again, a helper to create such functions is available in the Functional PHP library:

    <?php

    use function Functional\curry;

    function add($a, $b, $c, $d) {
        return $a + $b + $c + $d;
    }

    $curryedAdd = curry('add');

    $add10 = $curryedAdd(10);
    $add15 = $add10(5);
    $add42 = $add15(27);
    $add42(10); // -> 52

Benefits of functional programming

As we just saw, the functional world is moving, adoption by the enterprise world is growing, and even new imperative languages are taking inspiration from functional languages. But why is that so?

Reduce the cognitive burden on developers

You've probably often read or heard that a programmer should not be interrupted, because even a small interruption can lead to literally tens of minutes being lost. This is partly due to the cognitive burden or, in other words, the amount of information you have to keep in memory in order to understand the problem or function at hand. By forcing you to clearly state the dependencies of your functions and avoiding any kind of external data, functional programming helps a lot in writing self-contained code that can be readily understood, and thus reduces the cognitive burden a lot.

Software with fewer bugs

We just saw that functional programming reduces the cognitive burden and makes your code easier to reason about. This is already a huge win when it comes to bugs, because it allows you to spot issues quickly: you spend less time understanding how the code works and can focus on what it should do. But all the benefits we've just seen have another advantage: they make testing a lot easier too! If you have a pure function and you test it with a given set of values, you have the absolute certainty that it will always return exactly the same thing in production.

Easier refactoring

Refactoring is never easy. However, since the only inputs of a pure function are its parameters and its sole output is the returned value, things are simpler. If your refactored function continues to return the same output for a given input, you have the guarantee that your software will continue to work. You cannot forget to set some state somewhere in an object, because your functions are side-effect free.

Enforcing good practices

This article and the related book are proof that functional programming is more about the way we do things than about a particular language. You can use functional techniques in nearly any language that has functions. Your language still needs to have certain properties, but not that many. I like to talk about having a functional mindset.
If that is so, why do companies move to functional languages? Because those languages enforce the best practices that we will learn in this book. In PHP, you will always have to remember to use functional techniques. In Haskell, you cannot do anything else; the language forces you to write pure functions.

Summary

This small article is by no means a complete introduction to functional programming; that is what the Functional PHP book is for. However, I hope I have convinced you that it is a set of techniques worth learning. We have only brushed the surface here; all topics are covered in more depth in the various chapters. You will also learn about more advanced topics such as the following:

- Functors, applicatives, and monads
- Type systems
- Pattern matching
- Functional reactive programming
- Property-based testing
- Parallel execution of functional code

There is also a whole chapter about using functional programming in conjunction with various frameworks such as Symfony, Drupal, Laravel, and WordPress.

Resources for Article:

Further resources on this subject:

Understanding PHP basics [article]
Developing Middleware [article]
Continuous Integration [article]
Introduction to Spring Framework

Packt
30 Dec 2016
10 min read
In this article by Tejaswini Mandar Jog, author of the book Learning Spring 5.0, we will cover the following topics:

- Introduction to the Spring framework
- Problems addressed by Spring in enterprise application development
- Spring architecture
- What's new in Spring 5.0
- Container

Spring, the fresh new start after the winter of traditional J2EE, is what the Spring framework really is: a complete solution to most of the problems that occur when handling the development of numerous complex modules collaborating with each other in a Java enterprise application. Spring is not a replacement for traditional Java development, but it is a reliable solution that helps companies withstand today's competitive and fast-growing market without forcing developers to be tightly coupled to Spring APIs.

Problems addressed by Spring

The Java platform is a long-term, complex, scalable, aggressive, and rapidly developing platform. Application development takes place on a particular version, and applications need to keep upgrading to the latest version in order to maintain recent standards and cope with them. These applications have numerous classes that interact with each other and reuse APIs to take full advantage of them, so that the application runs smoothly. But this leads to some very common problems.

Scalability

The growth and development of technology in the market is pretty fast, both in hardware as well as software. An application developed a couple of years back may become outdated because of this growth. The market is so demanding that developers need to keep changing the application on a frequent basis. That means whatever application we develop today should be capable of handling upcoming demands and growth without affecting the working application. The scalability of an application is its ability to handle, or support the handling of, an increased workload in order to adapt to a growing environment instead of being replaced. An application that supports increasing website traffic as the number of users grows is a very simple example of a scalable application. When code is tightly coupled, making it scalable becomes a problem.

Plumbing code

Let's take the example of configuring a DataSource in the Tomcat environment. Now the developers want to use this configured DataSource in the application. What do we do? We perform a JNDI lookup to get the DataSource. In order to handle JDBC, we acquire and then release the resources in a try-catch block. Code like the try-catch we discuss here, inter-computer communication, and collections is necessary but not application-specific; this is plumbing code. Plumbing code increases the length of the code and makes debugging complex.

Boilerplate code

How do we get a connection when doing JDBC? We need to register the driver class and invoke the getConnection() method on DriverManager to obtain the connection object. Is there any alternative to these steps? Actually, no! Whenever and wherever we do JDBC, these same steps have to be repeated every time. This kind of repetitive code, a block of code that developers write in many places with little or no modification to achieve some task, is called boilerplate code. Boilerplate code makes Java development unnecessarily lengthy and complex.

Unavoidable non-functional code

Whenever application development happens, developers concentrate on the business logic, the look and feel, and the persistence to be achieved.
But along with these things, developers also have to give rigorous thought to how to manage transactions, how to handle the increasing load on the site, how to make the application secure, and much more. If we look closely, these things are not core concerns of the application, but they are still unavoidable. Code that does not handle the business logic (functional) requirements but is important for maintenance, troubleshooting, and managing the security of an application is called non-functional code. In most Java applications, along with the core concerns, developers have to write non-functional code quite frequently. This leads to divided concentration during business logic development.

Unit testing of the application

Let's take an example. We want to test code that saves data to a table in the database. Here, testing the database is not our motive; we just want to be sure whether the code we have written is working fine or not. An enterprise Java application consists of many classes that are interdependent. As dependencies exist between the objects, it becomes difficult to carry out the testing.

POJO-based development

The class is a very basic structure in application development. If a class extends or implements an interface of the framework, reusing it becomes difficult, as it is tightly coupled to that API. The Plain Old Java Object (POJO) is a very famous and regularly used term in Java application development. Unlike Struts and EJB, Spring doesn't force developers to write code that imports or extends Spring APIs. The best thing about Spring is that developers can write code that generally doesn't have any dependency on the framework, and for this, POJOs are the favorite choice. POJOs support loosely coupled modules that are reusable and easy to test. The Spring framework is called non-invasive, as it doesn't force the developer to use its API classes or interfaces and allows the development of loosely coupled applications.

Loose coupling through DI

Coupling is the degree of knowledge one class has about another. When a class is less dependent on the design of any other class, that class is said to be loosely coupled. Loose coupling is best achieved by programming to interfaces. In the Spring framework, we can keep the dependencies of a class separated from the code, in a separate configuration file. Using interfaces and the dependency injection techniques provided by Spring, developers can write loosely coupled code (don't worry, we will discuss dependency injection and how to achieve it very soon). With the help of loose coupling, one can write code that copes with frequent changes in its dependencies. This makes the application more flexible and maintainable.

Declarative programming

In declarative programming, the code states what it is going to perform, but not how it will be performed. This is the total opposite of imperative programming, where we need to state step by step what we will execute. Declarative programming can be achieved using XML and annotations. The Spring framework keeps all configuration in XML, from where it can be used by the framework to maintain the lifecycle of a bean. As the Spring framework developed, version 2.0 onward gave an alternative to XML configuration with a wide range of annotations.

Boilerplate code reduction using aspects and templates

We discussed a couple of pages back that repetitive code is boilerplate code.
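As a reminder of what that boilerplate looks like in practice, here is a minimal sketch of the plain JDBC ritual described earlier; the connection URL, credentials, and table name are invented for illustration, and on modern JDBC drivers the explicit driver registration step is optional:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class AccountDao {

        public int countAccounts() throws SQLException {
            // The same acquire-use-release ritual repeats for every single query.
            try (Connection con = DriverManager.getConnection(
                     "jdbc:h2:mem:demo", "sa", "");        // hypothetical database
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT COUNT(*) FROM accounts");      // hypothetical table
                 ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getInt(1) : 0;
            }
        }
    }

Templates such as JdbcTemplate, mentioned below, exist precisely to absorb this acquire-use-release ritual so that only the query and the row handling remain in your code.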
This boilerplate code is essential: without it, providing transactions, security, logging, and so on would become difficult. The framework's solution is to write aspects that deal with such cross-cutting concerns, so there is no need to write them along with the business logic code. The use of aspects helps reduce boilerplate code, while developers can still achieve the same end effect. One more thing the framework provides is templates for different requirements. JdbcTemplate and HibernateTemplate are a couple more useful concepts provided by Spring that reduce boilerplate code. But as a matter of fact, you need to wait a little to understand and discover their actual potential.

Layered architecture

Unlike Struts and Hibernate, which provide web and persistence solutions respectively, Spring has a wide range of modules for numerous enterprise development problems. This layered architecture helps the developer choose any one or more of the modules to write a solution for his application in a coherent way. For example, one can choose the Web MVC module to handle web requests efficiently without even knowing that there are many other modules available in the framework.

Spring architecture

Spring provides more than 20 different modules, which can be broadly summarized under 7 main modules, as follows:

(Figure: Spring modules)

What more does Spring support underneath? The following sections cover the additional features of Spring.

Security module

Nowadays, applications along with their basic functionality also need to provide sound ways to handle security at different levels. Spring 5 supports a declarative security mechanism using Spring AOP.

Batch module

Java enterprise applications need to perform bulk processing, handling large amounts of data, in many business solutions without user interaction. Handling such things in batches is the best solution available, and Spring provides integration of batch processing to develop robust applications.

Spring integration

In enterprise development, an application may need to interact with other enterprise applications. Spring Integration is an extension of the core Spring framework that provides integration with other enterprise applications with the help of declarative adapters. Messaging is one such integration that is extensively supported by Spring.

Mobile module

The extensive use of mobile devices opens new doors in development. This module is an extension of Spring MVC that helps in developing mobile web applications, known as the Spring Android Project. It also provides detection of the type of device making the request and renders the views accordingly.

LDAP module

The basic aim of Spring is to simplify development and reduce boilerplate code. The Spring LDAP module supports easy LDAP integration using template-based development.

.NET module

This module has been introduced to support the .NET platform. It includes modules such as ADO.NET, NHibernate, and ASP.NET to simplify .NET development, taking advantage of features such as DI, AOP, and loose coupling.

Container – the heart of Spring

POJO development is the backbone of the Spring framework. A POJO that is configured in the container, and whose object instantiation, object assembly, and object management is done by the Spring IoC container, is called a bean or Spring bean. We use the term Spring IoC because the container works on the pattern of Inversion of Control.

Inversion of Control (IoC)

In every Java application, the first important thing each developer does is to get an object that he can use in the application.
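To preview what that inversion looks like, here is a minimal, hypothetical sketch using annotation-based Java configuration (the class and bean names are invented for illustration): instead of constructing its dependency with new, the Greeter class declares it, and the container supplies it:

    import org.springframework.context.annotation.AnnotationConfigApplicationContext;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    class GreetingService {
        String greet() { return "Hello from a Spring bean"; }
    }

    class Greeter {
        private final GreetingService service;

        // The dependency arrives from outside; Greeter never calls new GreetingService().
        Greeter(GreetingService service) { this.service = service; }

        void run() { System.out.println(service.greet()); }
    }

    @Configuration
    class AppConfig {
        @Bean GreetingService greetingService() { return new GreetingService(); }
        @Bean Greeter greeter(GreetingService service) { return new Greeter(service); }
    }

    public class IocDemo {
        public static void main(String[] args) {
            AnnotationConfigApplicationContext ctx =
                new AnnotationConfigApplicationContext(AppConfig.class);
            ctx.getBean(Greeter.class).run(); // the container assembled Greeter for us
            ctx.close();
        }
    }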
The state of an object can be obtained at runtime, or it may be set at compile time. Developers create objects themselves, writing boilerplate code in a number of places. When the same developer uses Spring, instead of creating the object himself, he depends on the framework to obtain the object from. The term inversion of control comes from the fact that the Spring container inverts the responsibility of object creation away from the developer. Spring IoC container is just terminology; the Spring framework provides two containers:

- The BeanFactory
- The ApplicationContext

Summary

In this article, we discussed the general problems faced in Java enterprise application development and how they are addressed by the Spring framework. We have also seen the overall major changes that happened in each version of Spring since its first introduction to the market.

Enabling Spring Faces support
Design with Spring AOP
Getting Started with Spring Security
Asynchronous Programming with Futures and Promises

Packt
30 Dec 2016
18 min read
This article by Aleksandar Prokopec, author of the book Learning Concurrent Programming in Scala - Second Edition, explains the concepts of asynchronous programming in Scala. Asynchronous programming helps you eliminate blocking; instead of suspending the thread whenever a resource is not available, a separate computation is scheduled to proceed once the resource becomes available. In a way, many of the concurrency patterns seen so far support asynchronous programming; thread creation and scheduling execution context tasks can be used to start executing a computation concurrent to the main program flow. Still, it is not straightforward to use these facilities directly when avoiding blocking or composing asynchronous computations. In this article, we will focus on two abstractions in Scala that are specifically tailored for this task: futures and promises. More specifically, we will study the following topics:

- Starting asynchronous computations and using Future objects
- Using Promise objects to interface
- Blocking threads inside asynchronous computations
- Alternative future frameworks

(For more resources related to this topic, see here.)

Futures

The parallel executions in a concurrent program proceed on entities called threads. At any point, the execution of a thread can be temporarily suspended until a specific condition is fulfilled. When this happens, we say that the thread is blocked. Why do we block threads in the first place in concurrent programming? One of the reasons is that we have a finite amount of resources, so multiple computations that share these resources sometimes need to wait. In other situations, a computation needs specific data to proceed, and if that data is not yet available, the threads responsible for producing the data could be slow or the source of the data could be external to the program. A classic example is waiting for data to arrive over the network. Let's assume that we have a getWebpage method that returns a webpage's contents when given a url string with the location of the webpage:

    def getWebpage(url: String): String

The return type of the getWebpage method is String; the method must return a string with the webpage's contents. Upon sending an HTTP request, though, the webpage's contents are not available immediately. It takes some time for the request to travel over the network to the server and back before the program can access the document. The only way for the method to return the contents of the webpage as a string value is to wait for the HTTP response to arrive. However, this can take a relatively long period of time from the program's point of view; even with a high-speed Internet connection, the getWebpage method needs to wait. Since the thread that called the getWebpage method cannot proceed without the contents of the webpage, it needs to pause its execution; therefore, the only way to correctly implement the getWebpage method is to block. We already know that blocking can have negative side effects, so can we change the return value of the getWebpage method to some special value that can be returned immediately? The answer is yes. In Scala, this special value is called a future. The future is a placeholder, that is, a memory location for the value. This placeholder does not need to contain a value when the future is created; the value can be placed into the future eventually by getWebpage.
We can change the signature of the getWebpage method to return a future as follows:

    def getWebpage(url: String): Future[String]

Here, the Future[String] type means that the future object can eventually contain a String value. We can now implement getWebpage without blocking: we can start the HTTP request asynchronously and place the webpage's contents into the future when they become available. When this happens, we say that the getWebpage method completes the future. Importantly, after the future is completed with some value, that value can no longer change. The Future[T] type encodes latency in the program; use it to encode values that will become available later during execution.

This removes blocking from the getWebpage method, but it is not clear how the calling thread can extract the content of the future. Polling is one way of extracting the content. In the polling approach, the calling thread calls a special method to wait until the value becomes available. While this approach does not eliminate waiting, it transfers the responsibility for it from the getWebpage method to the caller thread.

Java defines its own Future type to encode values that will become available later. However, as a Scala developer, you should use Scala's futures instead; they allow additional ways of handling future values and avoid blocking, as we will soon see.

When programming with futures in Scala, we need to distinguish between future values and future computations. A future value of the Future[T] type denotes some value of the T type in the program that might not be currently available, but could become available later. Usually, when we say a future, we really mean a future value. In the scala.concurrent package, futures are represented with the Future[T] trait:

    trait Future[T]

By contrast, a future computation is an asynchronous computation that produces a future value. A future computation can be started by calling the apply method on the Future companion object. This method has the following signature in the scala.concurrent package:

    def apply[T](b: =>T)(implicit e: ExecutionContext): Future[T]

This method takes a by-name parameter of the T type. This is the body of the asynchronous computation that results in some value of type T. It also takes an implicit ExecutionContext parameter, which abstracts over where and when the computation is executed. Recall that Scala's implicit parameters can either be specified when calling a method, in the same way as normal parameters, or they can be left out; in this case, the Scala compiler searches for a value of the ExecutionContext type in the surrounding scope. Most Future methods take an implicit execution context. Finally, the Future.apply method returns a future of the T type. This future is completed with the value resulting from the asynchronous computation, b.

Starting future computations

Let's see how to start a future computation in an example. We first import the contents of the scala.concurrent package. We then import the global execution context from the Implicits object. This makes sure that the future computations execute on the global context, the default execution context you can use in most cases:

    import scala.concurrent._
    import ExecutionContext.Implicits.global

    object FuturesCreate extends App {
      Future { log("the future is here") }
      log("the future is coming")
      Thread.sleep(1000)
    }

The order in which the log method calls (in the future computation and the main thread) execute is nondeterministic.
The Future singleton object followed by a block is syntactic sugar for calling the Future.apply method. In the following example, we can use the scala.io.Source object to read the contents of our build.sbt file in a future computation. The main thread calls the isCompleted method on the future value, buildFile, returned from the future computation. Chances are that the build file was not read so fast, so isCompleted returns false. After 250 milliseconds, the main thread calls isCompleted again, and this time, isCompleted returns true. Finally, the main thread calls the value method, which returns the contents of the build file:

    import scala.io.Source

    object FuturesDataType extends App {
      val buildFile: Future[String] = Future {
        val f = Source.fromFile("build.sbt")
        try f.getLines.mkString("\n") finally f.close()
      }
      log(s"started reading the build file asynchronously")
      log(s"status: ${buildFile.isCompleted}")
      Thread.sleep(250)
      log(s"status: ${buildFile.isCompleted}")
      log(s"value: ${buildFile.value}")
    }

In this example, we used polling to obtain the value of the future. The Future singleton object's polling methods are non-blocking, but they are also nondeterministic; the isCompleted method will repeatedly return false until the future is completed. Importantly, completion of the future is in a happens-before relationship with the polling calls. If the future completes before the invocation of the polling method, then its effects are visible to the thread after the polling completes. Shown graphically, polling looks as shown in the following figure:

(Figure: Polling diagram)

Polling is like calling your potential employer every five minutes to ask if you're hired. What you really want to do is hand in a job application and then apply for other jobs instead of busy-waiting for the employer's response. Once your employer decides to hire you, he will give you a call on the phone number you left him. We want futures to do the same; when they are completed, they should call a specific function that we left for them.

Promises

Promises are objects that can be assigned a value or an exception only once. This is why promises are sometimes also called single-assignment variables. A promise is represented with the Promise[T] type in Scala. To create a promise instance, we use the Promise.apply method on the Promise companion object:

    def apply[T](): Promise[T]

This method returns a new promise instance. Like the Future.apply method, the Promise.apply method returns immediately; it is non-blocking. However, the Promise.apply method does not start an asynchronous computation, it just creates a fresh Promise object. When the Promise object is created, it does not contain a value or an exception. To assign a value or an exception to a promise, we use the success or failure method, respectively.

Perhaps you have noticed that promises are very similar to futures. Both futures and promises are initially empty and can be completed with either a value or an exception. This is intentional; every promise object corresponds to exactly one future object. To obtain the future associated with a promise, we can call the future method on the promise. Calling this method multiple times always returns the same future object. A promise and a future represent two aspects of a single-assignment variable. The promise allows you to assign a value to the future object, whereas the future allows you to read that value.

In the following code snippet, we create two promises, p and q, that can hold string values.
We then install a foreach callback on the future associated with the p promise and wait for 1 second. The callback is not invoked until the p promise is completed by calling the success method. We then fail the q promise in the same way and install a failed.foreach callback:

    object PromisesCreate extends App {
      val p = Promise[String]
      val q = Promise[String]
      p.future foreach { case x => log(s"p succeeded with '$x'") }
      Thread.sleep(1000)
      p success "assigned"
      q failure new Exception("not kept")
      q.future.failed foreach { case t => log(s"q failed with $t") }
      Thread.sleep(1000)
    }

Alternatively, we can use the complete method and specify a Try[T] object to complete the promise. Depending on whether the Try[T] object is a success or a failure, the promise is successfully completed or failed. Importantly, after a promise is either successfully completed or failed, it cannot be assigned an exception or a value again in any way. Trying to do so results in an exception. Note that this is true even when there are multiple threads simultaneously calling success or complete. Only one thread completes the promise, and the rest throw an exception.

Assigning a value or an exception to an already completed promise is not allowed and throws an exception. We can also use the trySuccess, tryFailure, and tryComplete methods that correspond to success, failure, and complete, respectively, but return a Boolean value to indicate whether the assignment was successful. Recall that using the Future.apply method and callback methods with referentially transparent functions results in deterministic concurrent programs. As long as we do not use the trySuccess, tryFailure, and tryComplete methods, and none of the success, failure, and complete methods ever throws an exception, we can use promises and retain determinism in our programs.

We now have everything we need to implement our custom Future.apply method. We call it the myFuture method in the following example. The myFuture method takes a b by-name parameter that is the asynchronous computation. First, it creates a p promise. Then, it starts an asynchronous computation on the global execution context. This computation tries to evaluate b and complete the promise. However, if the b body throws a nonfatal exception, the asynchronous computation fails the promise with that exception. In the meanwhile, the myFuture method returns the future immediately after starting the asynchronous computation:

    import scala.util.control.NonFatal

    object PromisesCustomAsync extends App {
      def myFuture[T](b: =>T): Future[T] = {
        val p = Promise[T]
        global.execute(new Runnable {
          def run() = try {
            p.success(b)
          } catch {
            case NonFatal(e) => p.failure(e)
          }
        })
        p.future
      }
      val f = myFuture { "naa" + "na" * 8 + " Katamari Damacy!" }
      f foreach { case text => log(text) }
    }

This is a common pattern when producing futures. We create a promise, let some other computation complete that promise, and return the corresponding future. However, promises were not invented just for our custom myFuture computation method. In the following sections, we will study use cases in which promises are useful.

Futures and blocking

Futures and asynchronous computations mainly exist to avoid blocking, but in some cases, we cannot live without it. It is, therefore, valid to ask how blocking interacts with futures. There are two ways to block with futures. The first way is to wait until a future is completed. The second way is by blocking from within an asynchronous computation.
We will study both in this section.

Awaiting futures

In rare situations, we cannot use callbacks or future combinators to avoid blocking. For example, the main thread that starts multiple asynchronous computations has to wait for these computations to finish. If an execution context uses daemon threads, as is the case with the global execution context, the main thread needs to block to prevent the JVM process from terminating. In these exceptional circumstances, we use the ready and result methods on the Await object from the scala.concurrent package.

The ready method blocks the caller thread until the specified future is completed. The result method also blocks the caller thread, but returns the value of the future if it was completed successfully, or throws the exception in the future if the future failed. Both methods require a timeout parameter: the longest duration that the caller should wait for the completion of the future before a TimeoutException is thrown. To specify a timeout, we import the scala.concurrent.duration package. This allows us to write expressions such as 10.seconds:

    import scala.concurrent.duration._

    object BlockingAwait extends App {
      val urlSpecSizeFuture = Future {
        val specUrl = "http://www.w3.org/Addressing/URL/url-spec.txt"
        Source.fromURL(specUrl).size
      }
      val urlSpecSize = Await.result(urlSpecSizeFuture, 10.seconds)
      log(s"url spec contains $urlSpecSize characters")
    }

In this example, the main thread starts a computation that retrieves the URL specification and then awaits. By this time, the World Wide Web Consortium (W3C) is worried that a DOS attack is under way, so this is the last time we download the URL specification.

Blocking in asynchronous computations

Waiting for the completion of a future is not the only way to block. Some legacy APIs do not use callbacks to asynchronously return results. Instead, such APIs expose blocking methods. After we call a blocking method, we lose control over the thread; it is up to the blocking method to unblock the thread and return control back.

Execution contexts are often implemented using thread pools. By starting future computations that block, it is possible to reduce parallelism and even cause deadlocks. This is illustrated in the following example, in which 16 separate future computations call the sleep method, and the main thread waits until they complete for an unbounded amount of time:

    val startTime = System.nanoTime
    val futures = for (_ <- 0 until 16) yield Future {
      Thread.sleep(1000)
    }
    for (f <- futures) Await.ready(f, Duration.Inf)
    val endTime = System.nanoTime
    log(s"Total time = ${(endTime - startTime) / 1000000} ms")
    log(s"Total CPUs = ${Runtime.getRuntime.availableProcessors}")

Assume that you have eight cores in your processor. This program does not end in one second. Instead, a first batch of eight futures started by Future.apply will block all the worker threads for one second, and then another batch of eight futures will block for another second. As a result, none of our eight processor cores can do any useful work for one second.

Avoid blocking in asynchronous computations, as it can cause thread starvation. If you absolutely must block, then the part of the code that blocks should be enclosed within the blocking call.
This signals to the execution context that the worker thread is blocked and allows it to temporarily spawn additional worker threads if necessary:

    val futures = for (_ <- 0 until 16) yield Future {
      blocking {
        Thread.sleep(1000)
      }
    }

With the blocking call around the sleep call, the global execution context spawns additional threads when it detects that there is more work than there are worker threads. All 16 future computations can execute concurrently, and the program terminates after one second.

The Await.ready and Await.result statements block the caller thread until the future is completed and are in most cases used outside asynchronous computations. They are blocking operations. The blocking statement is used inside asynchronous code to designate that the enclosed block of code contains a blocking call. It is not a blocking operation by itself.

Alternative future frameworks

The Scala futures and promises API resulted from an attempt to consolidate several different APIs for asynchronous programming, among them legacy Scala futures, Akka futures, Scalaz futures, and Twitter's Finagle futures. Legacy Scala futures and Akka futures have already converged to the futures and promises API that you've learned about so far in this article. Finagle's com.twitter.util.Future type is planned to eventually implement the same interface as the scala.concurrent.Future type, while the Scalaz scalaz.concurrent.Future type implements a slightly different interface. In this section, we give a brief overview of Scalaz futures.

To use Scalaz, we add the following dependency to the build.sbt file:

    libraryDependencies += "org.scalaz" %% "scalaz-concurrent" % "7.0.6"

We now encode an asynchronous tombola program using Scalaz. The Future type in Scalaz does not have the foreach method. Instead, we use its runAsync method, which asynchronously runs the future computation to obtain its value and then calls the specified callback:

    import scalaz.concurrent._

    object Scalaz extends App {
      val tombola = Future {
        scala.util.Random.shuffle((0 until 10000).toVector)
      }
      tombola.runAsync { numbers =>
        log(s"And the winner is: ${numbers.head}")
      }
      tombola.runAsync { numbers =>
        log(s"... ahem, winner is: ${numbers.head}")
      }
    }

Unless you are terribly lucky and draw the same permutation twice, running this program reveals that the two runAsync calls print different numbers. Each runAsync call separately computes the permutation of the random numbers. This is not surprising, as Scalaz futures have pull semantics, in which the value is computed each time some callback requests it, in contrast to the push semantics of Finagle and Scala futures, in which the callback is stored and applied if and when the asynchronously computed value becomes available. To achieve the same semantics as we would have with Scala futures, we need to use the start combinator, which runs the asynchronous computation once and caches its result:

    val tombola = Future {
      scala.util.Random.shuffle((0 until 10000).toVector)
    } start

With this change, the two runAsync calls use the same permutation of random numbers in the tombola variable and print the same values. We will not dive further into the internals of alternative frameworks. The fundamentals about futures and promises that you learned about in this article should be sufficient to easily familiarize yourself with other asynchronous programming libraries, should the need arise.

Summary

This article presented some powerful abstractions for asynchronous programming. We saw how to encode latency with the Future type.
You learned that futures and promises are closely tied together and that promises allow interfacing with legacy callback-based systems. In cases where blocking was unavoidable, you learned how to use the Await object and the blocking statement. Finally, you learned that the Scala Async library is a powerful alternative for expressing future computations more concisely.

Resources for Article:

Further resources on this subject:

Introduction to Scala [article]
Concurrency in Practice [article]
Integrating Scala, Groovy, and Flex Development with Apache Maven [article]