How-To Tutorials


Setting up Logistic Regression model using TensorFlow

Packt Editorial Staff
25 Apr 2018
8 min read
TensorFlow is another open source library developed by the Google Brain Team to build numerical computation models using data flow graphs. The core of TensorFlow was developed in C++ with the wrapper in Python. The tensorflow package in R gives you access to the TensorFlow API, composed of Python modules, to execute computation models. TensorFlow supports both CPU- and GPU-based computations.

In this article, we will cover the application of TensorFlow in setting up a logistic regression model. The example will use a similar dataset to that used in the H2O model setup. The tensorflow package in R calls the Python TensorFlow API for execution, so it is essential to install TensorFlow in both R and Python for the R package to work. The following are the dependencies for tensorflow: Python 2.7 / 3.x; R (> 3.2); the devtools package in R for installing TensorFlow from GitHub; TensorFlow in Python; pip.

Getting ready

The code for this section was created on Linux but can be run on any operating system. To start modeling, load the tensorflow package in the environment. R loads the default TensorFlow environment variable and also the NumPy library from Python in the np variable:

library("tensorflow") # Load TensorFlow
np <- import("numpy") # Load numpy library

How to do it...

The data is imported from a CSV file using a standard R function and transformed into matrix format, and the features used for modeling are then selected as defined in xFeatures and yFeatures. The next step in TensorFlow is to set up a graph to run the optimization:

# Loading input and test data
xFeatures = c("Temperature", "Humidity", "Light", "CO2", "HumidityRatio")
yFeatures = "Occupancy"
occupancy_train <- as.matrix(read.csv("datatraining.txt", stringsAsFactors = T))
occupancy_test <- as.matrix(read.csv("datatest.txt", stringsAsFactors = T))
# subset features for modeling and transform to numeric values
occupancy_train <- apply(occupancy_train[, c(xFeatures, yFeatures)], 2, FUN=as.numeric)
occupancy_test <- apply(occupancy_test[, c(xFeatures, yFeatures)], 2, FUN=as.numeric)
# Data dimensions
nFeatures <- length(xFeatures)
nRow <- nrow(occupancy_train)

Before setting up the graph, let's reset the graph using the following command:

# Reset the graph
tf$reset_default_graph()

Additionally, let's start an interactive session as it will allow us to execute variables without referring back to the session object:

# Starting session as interactive session
sess <- tf$InteractiveSession()

Define the logistic regression model in TensorFlow:

# Setting-up Logistic regression graph
x <- tf$constant(unlist(occupancy_train[, xFeatures]), shape=c(nRow, nFeatures), dtype=np$float32)
W <- tf$Variable(tf$random_uniform(shape(nFeatures, 1L)))
b <- tf$Variable(tf$zeros(shape(1L)))
y <- tf$matmul(x, W) + b

The input feature x is defined as a constant as it will be an input to the system. The weight W and bias b are defined as variables that will be optimized during the optimization process. y is set up as a symbolic representation of the relationship between x, W, and b. The weight W is initialized from a random uniform distribution and b is assigned the value zero.
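Before moving on to the cost function, it can help to confirm that the graph variables look as expected. The following quick check is not part of the original recipe; it is a minimal sketch that assumes only the objects defined above (the variables are initialized again later in the recipe, which is harmless):

# Hypothetical sanity check of the freshly defined variables
sess$run(tf$global_variables_initializer()) # initialize W and b
sess$run(W) # nFeatures x 1 matrix of random uniform values
sess$run(b) # a single zero
dim(occupancy_train) # rows and columns of the training matrix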
The next step is to set up the cost function for logistic regression:

# Setting-up cost function and optimizer
y_ <- tf$constant(unlist(occupancy_train[, yFeatures]), dtype="float32", shape=c(nRow, 1L))
cross_entropy <- tf$reduce_mean(tf$nn$sigmoid_cross_entropy_with_logits(labels=y_, logits=y, name="cross_entropy"))
optimizer <- tf$train$GradientDescentOptimizer(0.15)$minimize(cross_entropy)
# Start a session
init <- tf$global_variables_initializer()
sess$run(init)

Execute the gradient descent algorithm for the optimization of weights using cross entropy as the loss function:

# Running optimization
for (step in 1:5000) {
  sess$run(optimizer)
  if (step %% 20 == 0)
    cat(step, "-", sess$run(W), sess$run(b), "==>", sess$run(cross_entropy), "\n")
}

How it works...

The performance of the model can be evaluated using AUC:

# Performance on Train
library(pROC)
ypred <- sess$run(tf$nn$sigmoid(tf$matmul(x, W) + b))
roc_obj <- roc(occupancy_train[, yFeatures], as.numeric(ypred))
# Performance on test
nRowt <- nrow(occupancy_test)
xt <- tf$constant(unlist(occupancy_test[, xFeatures]), shape=c(nRowt, nFeatures), dtype=np$float32)
ypredt <- sess$run(tf$nn$sigmoid(tf$matmul(xt, W) + b))
roc_objt <- roc(occupancy_test[, yFeatures], as.numeric(ypredt))

AUC can be visualized using the plot.roc function from the pROC package, as shown in the screenshot following this command. The performance for training and testing (hold-out) is very similar.

plot.roc(roc_obj, col = "green", lty=2, lwd=2)
plot.roc(roc_objt, add=T, col="red", lty=4, lwd=2)

Performance of logistic regression using TensorFlow

Visualizing TensorFlow graphs

TensorFlow graphs can be visualized using TensorBoard. It is a service that utilizes TensorFlow event files to visualize TensorFlow models as graphs. Graph model visualization in TensorBoard is also used to debug TensorFlow models.

Getting ready

TensorBoard can be started using the following command in the terminal:

$ tensorboard --logdir home/log --port 6006

The following are the major parameters for TensorBoard:
--logdir: to map to the directory from which to load TensorFlow events
--debug: to increase log verbosity
--host: to define the host to listen on; localhost (127.0.0.1) by default
--port: to define the port on which TensorBoard will serve

The preceding command will launch the TensorBoard service on localhost at port 6006, as shown in the following screenshot: TensorBoard

The tabs on the TensorBoard capture relevant data generated during graph execution.

How to do it...

This section covers how to visualize TensorFlow models and output in TensorBoard. To visualize summaries and graphs, data from TensorFlow can be exported using the FileWriter command from the summary module. A default session graph can be added using the following command:

# Create Writer Obj for log
log_writer = tf$summary$FileWriter('c:/log', sess$graph)

The graph for logistic regression developed using the preceding code is shown in the following screenshot: Visualization of the logistic regression graph in TensorBoard

Details about symbol descriptions on TensorBoard can be found at https://www.tensorflow.org/get_started/graph_viz.

Similarly, other variable summaries can be added to TensorBoard using the appropriate summary operations, as shown in the following code:

# Adding histogram summary to weight and bias variable
w_hist = tf$histogram_summary("weights", W)
b_hist = tf$histogram_summary("biases", b)

Create a cross entropy evaluation for test.
An example script to generate the cross entropy cost function for test and train is shown in the following command:

# Set-up cross entropy for test
nRowt <- nrow(occupancy_test)
xt <- tf$constant(unlist(occupancy_test[, xFeatures]), shape=c(nRowt, nFeatures), dtype=np$float32)
ypredt <- tf$nn$sigmoid(tf$matmul(xt, W) + b)
yt_ <- tf$constant(unlist(occupancy_test[, yFeatures]), dtype="float32", shape=c(nRowt, 1L))
cross_entropy_tst <- tf$reduce_mean(tf$nn$sigmoid_cross_entropy_with_logits(labels=yt_, logits=tf$matmul(xt, W) + b, name="cross_entropy_tst"))

Note that the logits passed to sigmoid_cross_entropy_with_logits are the raw linear values tf$matmul(xt, W) + b, not the sigmoid output; ypredt holds the sigmoid-transformed predictions used for evaluation.

Add summary variables to be collected:

# Add summary ops to collect data
w_hist = tf$summary$histogram("weights", W)
b_hist = tf$summary$histogram("biases", b)
crossEntropySummary <- tf$summary$scalar("costFunction", cross_entropy)
crossEntropyTstSummary <- tf$summary$scalar("costFunction_test", cross_entropy_tst)

Open the writing object, log_writer. It writes the default graph to the location c:/log:

# Create Writer Obj for log
log_writer = tf$summary$FileWriter('c:/log', sess$graph)

Run the optimization and collect the summaries:

for (step in 1:2500) {
  sess$run(optimizer)
  # Evaluate performance on training and test data every 50 iterations
  if (step %% 50 == 0) {
    ### Performance on Train
    ypred <- sess$run(tf$nn$sigmoid(tf$matmul(x, W) + b))
    roc_obj <- roc(occupancy_train[, yFeatures], as.numeric(ypred))
    ### Performance on Test
    ypredt <- sess$run(tf$nn$sigmoid(tf$matmul(xt, W) + b))
    roc_objt <- roc(occupancy_test[, yFeatures], as.numeric(ypredt))
    cat("train AUC: ", auc(roc_obj), " Test AUC: ", auc(roc_objt), "\n")
    # Save summary of Bias and weights
    log_writer$add_summary(sess$run(b_hist), global_step=step)
    log_writer$add_summary(sess$run(w_hist), global_step=step)
    log_writer$add_summary(sess$run(crossEntropySummary), global_step=step)
    log_writer$add_summary(sess$run(crossEntropyTstSummary), global_step=step)
  }
}

Collect all the summaries into a single tensor using the merge_all command from the summary module (a short sketch of folding this into the training loop appears at the end of this article):

summary = tf$summary$merge_all()

Write the summaries to the log file using the log_writer object:

log_writer = tf$summary$FileWriter('c:/log', sess$graph)
summary_str = sess$run(summary)
log_writer$add_summary(summary_str, step)
log_writer$close()

We have learned how to perform logistic regression using TensorFlow, covering how to set up, train, evaluate, and visualize a logistic regression model with the tensorflow package in R.

This article is an excerpt from R Deep Learning Cookbook, co-authored by PKS Prakash & Achyutuni Sri Krishna Rao. This book contains powerful and independent recipes to build deep learning models in different application areas using R libraries.

Read More
Getting started with Linear and logistic regression
Healthcare Analytics: Logistic Regression to Reduce Patient Readmissions
Using Logistic regression to predict market direction in algorithmic trading
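The recipe above evaluates the individual summary operations inside the loop and only merges them afterwards. As a small, hypothetical variation (not from the book), the merged summary can be evaluated and written on each logging step instead, so TensorBoard shows the full cost curves; the sketch below reuses only objects already defined in this article:

# Hypothetical variation: write the merged summary every 50 steps
summary <- tf$summary$merge_all()
log_writer <- tf$summary$FileWriter('c:/log', sess$graph)
for (step in 1:2500) {
  sess$run(optimizer)
  if (step %% 50 == 0) {
    summary_str <- sess$run(summary) # evaluates all collected summaries at once
    log_writer$add_summary(summary_str, global_step=step)
  }
}
log_writer$close()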


Building a real-time dashboard with Meteor and Vue.js

Kunal Chaudhari
25 Apr 2018
14 min read
In this article, we will use Vue.js with an entirely different stack--Meteor! We will discover this full-stack JavaScript framework and build a real-time dashboard with Meteor to monitor the production of some products. We will cover the following topics: installing Meteor and setting up a project; storing data in a Meteor collection with a Meteor method; subscribing to the collection and using the data in our Vue components.

The app will have a main page with some indicators, such as the ones shown later in this article. It will also have another page with buttons to generate fake measures, since we won't have real sensors available.

Setting up the project

In this first part, we will cover Meteor and get a simple app up and running on this platform.

What is Meteor?

Meteor is a full-stack JavaScript framework for building web applications. The main elements of the Meteor stack are as follows:
A web client (which can use any frontend library, such as React or Vue); it has a client-side database called Minimongo
A server based on Node.js; it supports the modern ES2015+ features, including the import/export syntax
A real-time database on the server using MongoDB
Abstracted communication between clients and the server; the client-side and server-side databases can be easily synchronized in real time
An optional hybrid mobile app (Android and iOS), built in one command
Integrated developer tools, such as a powerful command-line utility and an easy-to-use build tool
Meteor-specific packages (but you can also use npm packages)

As you can see, JavaScript is used everywhere. Meteor also encourages you to share code between the client and the server. Since Meteor manages the entire stack, it offers very powerful systems that are easy to use. For example, the entire stack is fully reactive and real-time--if a client sends an update to the server, all the other clients will receive the new data and their UI will automatically be up to date. Meteor has its own build system called "IsoBuild" and doesn't use Webpack. It focuses on ease of use (no configuration), but is, as a result, also less flexible.

Installing Meteor

If you don't have Meteor on your system, you need to open the Installation Guide on the official Meteor website. Follow the instructions there for your OS to install Meteor. When you are done, you can check whether Meteor was correctly installed with the following command:

meteor --version

The current version of Meteor should be displayed.

Creating the project

Now that Meteor is installed, let's set up a new project. Let's create our first Meteor project with the meteor create command:

meteor create --bare <folder>
cd <folder>

The --bare argument tells Meteor we want an empty project. By default, Meteor will generate some boilerplate files we don't need, so this keeps us from having to delete them. Then, we need two Meteor-specific packages--one for compiling the Vue components, and one for compiling Stylus inside those components. Install them with the meteor add command:

meteor add akryum:vue-component akryum:vue-stylus

We will also install the vue and vue-router packages from npm:

meteor npm i -S vue vue-router

Note that we use the meteor npm command instead of just npm. This is to have the same environment as Meteor (nodejs and npm versions). To start our Meteor app in development mode, just run the meteor command:

meteor

Meteor should start an HTTP proxy, a MongoDB instance, and the nodejs server. It also shows the URL where the app is available; however, if you open it right now, it will be blank.
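For orientation, here is roughly the folder layout that the following sections will build up inside the project. The structure mirrors the files created below; the exact names are this article's choices rather than anything imposed by Meteor:

index.html                  # head/body sections injected by Meteor
client/
  main.js                   # starts the Vue app inside Meteor.startup
  router.js                 # vue-router configuration
  style.styl                # Stylus styles imported by App.vue
  components/
    App.vue
    ProductionDashboard.vue
    ProductionGenerator.vue
    ProductionIndicator.vue
lib/
  collections.js            # Measures collection (runs on client and server)
  methods.js                # measure.add Meteor method
server/
  publications.js           # measures publication (server only)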
Our first Vue Meteor app

In this section, we will display a simple Vue component in our app. Create a new index.html file inside the project directory and tell Meteor we want a div in the page body with the app id:

<head>
  <title>Production Dashboard</title>
</head>
<body>
  <div id="app"></div>
</body>

This is not a real HTML file. It is a special format where we can inject additional elements into the head or body section of the final HTML page. Here, Meteor will add a title into the head section and the <div> into the body section.

Create a new client folder, a new components subfolder inside it, and a new App.vue component with a simple template:

<!-- client/components/App.vue -->
<template>
  <div id="app">
    <h1>Meteor</h1>
  </div>
</template>

Download this stylus file (https://github.com/Akryum/packt-vue-project-guide/tree/master/chapter8-full/client) into the client folder and add it to the main App.vue component:

<style lang="stylus" src="../style.styl" />

Create a main.js file in the client folder that starts the Vue application inside the Meteor.startup hook:

import { Meteor } from 'meteor/meteor'
import Vue from 'vue'
import App from './components/App.vue'

Meteor.startup(() => {
  new Vue({
    el: '#app',
    ...App,
  })
})

In a Meteor app, it is recommended that you create the Vue app inside the Meteor.startup hook to ensure that all the Meteor systems are ready before starting the frontend. This code will only be run on the client because it is located in a client folder. You should now have a simple app displayed in your browser. You can also open the Vue devtools and check whether you have the App component present on the page.

Routing

Let's add some routing to the app; we will have two pages--the dashboard with indicators and a page with buttons to generate fake data. In the client/components folder, create two new components--ProductionGenerator.vue and ProductionDashboard.vue. Next to the main.js file, create the router in a router.js file:

import Vue from 'vue'
import VueRouter from 'vue-router'
import ProductionDashboard from './components/ProductionDashboard.vue'
import ProductionGenerator from './components/ProductionGenerator.vue'

Vue.use(VueRouter)

const routes = [
  { path: '/', name: 'dashboard', component: ProductionDashboard },
  { path: '/generate', name: 'generate', component: ProductionGenerator },
]

const router = new VueRouter({
  mode: 'history',
  routes,
})

export default router

Then, import the router in the main.js file and inject it into the app (a minimal sketch of the updated main.js appears at the end of this section). In the App.vue main component, add the navigation menu and the router view:

<nav>
  <router-link :to="{ name: 'dashboard' }" exact>Dashboard</router-link>
  <router-link :to="{ name: 'generate' }">Measure</router-link>
</nav>
<router-view />

The basic structure of our app is now done.

Production measures

The first page we will make is the Measures page, where we will have two buttons: the first one will generate a fake production measure with the current date and a random value; the second one will also generate a measure, but with the error property set to true. All these measures will be stored in a collection called "Measures".

Meteor collections integration

A Meteor collection is a reactive list of objects, similar to a MongoDB collection (in fact, it uses MongoDB under the hood).
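As promised in the routing step above, here is a minimal sketch of what client/main.js might look like once the router has been imported and injected. It simply combines snippets already shown in this article and is not reproduced from the book:

// client/main.js -- combined sketch with the router injected
import { Meteor } from 'meteor/meteor'
import Vue from 'vue'
import App from './components/App.vue'
import router from './router'

Meteor.startup(() => {
  new Vue({
    el: '#app',
    router, // makes <router-view /> and <router-link> work
    ...App,
  })
})

With the router wired up, let's get back to integrating the Meteor collections.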
We need to use a Vue plugin to integrate the Meteor collections into our Vue app in order to update it automatically: Add the vue-meteor-tracker npm package: meteor npm i -S vue-meteor-tracker    Then, install the library into Vue: import VueMeteorTracker from 'vue-meteor-tracker' Vue.use(VueMeteorTracker)    Restart Meteor with the meteor command. The app is now aware of the Meteor collection and we can use them in our components, as we will do in a moment. Setting up data The next step is setting up the Meteor collection where we will store our measures data Adding a collection We will store our measures into a Measures Meteor collection. Create a new lib folder in the project directory. All the code in this folder will be executed first, both on the client and the server. Create a collections.js file, where we will declare our Measures collection: import { Mongo } from 'meteor/mongo' export const Measures = new Mongo.Collection('measures') Adding a Meteor method A Meteor method is a special function that will be called both on the client and the server. This is very useful for updating collection data and will improve the perceived speed of the app--the client will execute on minimongo without waiting for the server to receive and process it. This technique is called "Optimistic Update" and is very effective when the network quality is poor.  Next to the collections.js file in the lib folder, create a new methods.js file. Then, add a measure.add method that inserts a new measure into the Measures collection: import { Meteor } from 'meteor/meteor' import { Measures } from './collections' Meteor.methods({ 'measure.add' (measure) { Measures.insert({ ...measure, date: new Date(), }) }, }) We can now call this method with the Meteor.call function: Meteor.call('measure.add', someMeasure) The method will be run on both the client (using the client-side database called minimongo) and on the server. That way, the update will be instant for the client. Simulating measures Without further delay, let's build the simple component that will call this measure.add Meteor method: Add two buttons in the template of ProductionGenerator.vue: <template> <div class="production-generator"> <h1>Measure production</h1> <section class="actions"> <button @click="generateMeasure(false)">Generate Measure</button> <button @click="generateMeasure(true)">Generate Error</button> </section> </div> </template> Then, in the component script, create the generateMeasure method that generates some dummy data and then call the measure.add Meteor method: <script> import { Meteor } from 'meteor/meteor' export default { methods: { generateMeasure (error) { const value = Math.round(Math.random() * 100) const measure = { value, error, } Meteor.call('measure.add', measure) }, }, } </script> The component should look like this: If you click on the buttons, nothing visible should happen. Inspecting the data There is an easy way to check whether our code works and to verify that you can add items in the Measures collection. We can connect to the MongoDB database in a single command. In another terminal, run the following command to connect to the app's database: meteor mongo Then, enter this MongoDB query to fetch the documents of the measures collection (the argument used when creating the Measures Meteor collection): db.measures.find({}) If you clicked on the buttons, a list of measure documents should be displayed This means that our Meteor method worked and objects were inserted in our MongoDB database. 
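One optional note before moving on: the measure.add method above inserts whatever object the client sends. A small, hypothetical variant using Meteor's check package (not part of the original example) could validate the payload first:

// lib/methods.js -- hypothetical variant with basic validation
import { Meteor } from 'meteor/meteor'
import { check } from 'meteor/check'
import { Measures } from './collections'

Meteor.methods({
  'measure.add' (measure) {
    // Reject anything that is not of the shape { value: Number, error: Boolean }
    check(measure, { value: Number, error: Boolean })
    Measures.insert({
      ...measure,
      date: new Date(),
    })
  },
})

Validation aside, the unvalidated version shown earlier works fine for this demo.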
Dashboard and reporting Now that our first page is done, we can continue with the real-time dashboard. Progress bars library To display some pretty indicators, let's install another Vue library that allows drawing progress bars along SVG paths; that way, we can have semi-circular bars: Add the vue-progress-path npm package to the project: meteor npm i -S vue-progress-path We need to tell the Vue compiler for Meteor not to process the files in node_modules where the package is installed. Create a new .vueignore file in the project root directory. This file works like a .gitignore: each line is a rule to ignore some paths. If it ends with a slash /, it will ignore only corresponding folders. So, the content of .vueignore should be as follows: node_modules/ Finally, install the vue-progress-path plugin in the client/main.js file: import 'vue-progress-path/dist/vue-progress-path.css' import VueProgress from 'vue-progress-path' Vue.use(VueProgress, { defaultShape: 'semicircle', }) Meteor publication To synchronize data, the client must subscribe to a publication declared on the server. A Meteor publication is a function that returns a Meteor collection query. It can take arguments to filter the data that will be synchronized. For our app, we will only need a simple measures publication that sends all the documents of the Measures collection: This code should only be run on the server. So, create a new server in the project folder and a new publications.js file inside that folder: import { Meteor } from 'meteor/meteor' import { Measures } from '../lib/collections' Meteor.publish('measures', function () { return Measures.find({}) }) This code will only run on the server because it is located in a folder called server. Creating the Dashboard component We are ready to build our ProductionDashboard component. Thanks to the vue- meteor-tracker we installed earlier, we have a new component definition option-- meteor. This is an object that describes the publications that need to be subscribed to and the collection data that needs to be retrieved for that component.    Add the following script section with the meteor definition option: <script> export default { meteor: { // Subscriptions and Collections queries here }, } </script> Inside the meteor option, subscribe to the measures publication with the $subscribe object: meteor: { $subscribe: { 'measures': [], }, }, Retrieve the measures with a query on the Measures Meteor collection inside the meteor option: meteor: { // ... measures () { return Measures.find({}, { sort: { date: -1 }, }) }, }, The second parameter of the find method is an options object very similar to the MongoDB JavaScript API. Here, we are sorting the documents by their date in descending order, thanks to the sort property of the options object. Finally, create the measures data property and initialize it to an empty array. The script of the component should now look like this: <script> import { Measures } from '../../lib/collections' export default { data () { return { measures: [], } }, meteor: { $subscribe: { 'measures': [], }, measures () { return Measures.find({}, { sort: { date: -1 }, }) }, }, } </script> In the browser devtools, you can now check whether the component has retrieved the items from the collection. Indicators We will create a separate component for the dashboard indicators, as follows: In the components folder, create a new ProductionIndicator.vue component. 
Declare a template that displays a progress bar, a title, and additional info text: <template> <div class="production-indicator"> <loading-progress :progress="value" /> <div class="title">{{ title }}</div> <div class="info">{{ info }}</div> </div> </template> Add the value, title, and info props: <script> export default { props: { value: { type: Number, required: true, }, title: String, info: [String, Number], }, } </script> Back in our ProductionDashboard component, let's compute the average of the values and the rate of errors: computed: { length () { return this.measures.length }, average () { if (!this.length) return 0 let total = this.measures.reduce( (total, measure) => total += measure.value, 0 ) return total / this.length }, errorRate () { if (!this.length) return 0 let total = this.measures.reduce( (total, measure) => total += measure.error ? 1 : 0, 0 ) return total / this.length }, }, 5. Add two indicators in the templates - one for the average value and one for the error rate: <template> <div class="production-dashboard"> <h1>Production Dashboard</h1> <section class="indicators"> <ProductionIndicator :value="average / 100" title="Average" :info="Math.round(average)" /> <ProductionIndicator class="danger" :value="errorRate" title="Errors" :info="`${Math.round(errorRate * 100)}%`" /> </section> </div> </template> The indicators should look like this: Listing the measures Finally, we will display a list of the measures below the indicators:  Add a simple list of <div> elements for each measure, displaying the date if it has an error and the value: <section class="list"> <div v-for="item of measures" :key="item._id" > <div class="date">{{ item.date.toLocaleString() }}</div> <div class="error">{{ item.error ? 'Error' : '' }}</div> <div class="value">{{ item.value }}</div> </div> </section> The app should now look as follows, with a navigation toolbar, two indicators, and the measures list: If you open the app in another window and put your windows side by side, you can see the full-stack reactivity of Meteor in action. Open the dashboard in one window and the generator page in the other window. Then, add fake measures and watch the data update on the other window in real time. If you want to learn more about Meteor, check out the official website and the Vue integration repository. To summarize, we created a project using Meteor. We integrated Vue into the app and set up a Meteor reactive collection. Using a Meteor method, we inserted documents into the collection and displayed in real-time the data in a dashboard component. You read an excerpt from a book written by Guillaume Chau, titled Vue.js 2 Web Development Projects. This book will help you build exciting real world web projects from scratch and become proficient with Vue.js Web Development. Read More Building your first Vue.js 2 Web application Why has Vue.js become so popular? Installing and Using Vue.js    


Getting started with building an ARCore application for Android

Sugandha Lahoti
24 Apr 2018
9 min read
Google developed ARCore to be accessible from multiple development platforms (Android [Java], Web [JavaScript], Unreal [C++], and Unity [C#]), thus giving developers plenty of flexibility and options to build applications on various platforms. While each platform has its strengths and weaknesses, all the platforms essentially extend from the native Android SDK that was originally built as Tango. This means that regardless of your choice of platform, you will need to install and be somewhat comfortable working with the Android development tools. In this article, we will focus on setting up the Android development tools and building an ARCore application for Android. The following is a summary of the major topics we will cover in this post: Installing Android Studio Installing ARCore Build and deploy Exploring the code Installing Android Studio Android Studio is a development environment for coding and deploying Android applications. As such, it contains the core set of tools we will need for building and deploying our applications to an Android device. After all, ARCore needs to be installed to a physical device in order to test. Follow the given instructions to install Android Studio for your development environment: Open a browser on your development computer to https://developer.android.com/studio. Click on the green DOWNLOAD ANDROID STUDIO button. Agree to the Terms and Conditions and follow the instructions to download. After the file has finished downloading, run the installer for your system. Follow the instructions on the installation dialog to proceed. If you are installing on Windows, ensure that you set a memorable installation path that you can easily find later, as shown in the following example: Click through the remaining dialogs to complete the installation. When the installation is complete, you will have the option to launch the program. Ensure that the option to launch Android Studio is selected and click on Finish. Android Studio comes embedded with OpenJDK. This means we can omit the steps to installing Java, on Windows at least. If you are doing any serious Android development, again on Windows, then you should go through the steps on your own to install the full Java JDK 1.7 and/or 1.8, especially if you plan to work with older versions of Android. On Windows, we will install everything to C:Android; that way, we can have all the Android tools in one place. If you are using another OS, use a similar well-known path. Now that we have Android Studio installed, we are not quite done. We still need to install the SDK tools that will be essential for building and deployment. Follow the instructions in the next exercise to complete the installation: If you have not installed the Android SDK before, you will be prompted to install the SDK when Android Studio first launches, as shown: Select the SDK components and ensure that you set the installation path to a well-known location, again, as shown in the preceding screenshot. Leave the Welcome to Android Studio dialog open for now. We will come back to it in a later exercise. That completes the installation of Android Studio. In the next section, we will get into installing ARCore. Installing ARCore Of course, in order to work with or build any ARCore applications, we will need to install the SDK for our chosen platform. Follow the given instructions to install the ARCore SDK: We will use Git to pull down the code we need directly from the source. 
You can learn more about Git and how to install it on your platform at https://git-scm.com/book/en/v2/Getting-Started-Installing-Git or use Google to search: getting started installing Git. Ensure that when you install on Windows, you select the defaults and let the installer set the PATH environment variables.

Open Command Prompt or Windows shell and navigate to the Android (C:\Android on Windows) installation folder. Enter the following command:

git clone https://github.com/google-ar/arcore-android-sdk.git

This will download and install the ARCore SDK into a new folder called arcore-android-sdk, as illustrated in the following screenshot. Ensure that you leave the command window open. We will be using it again later.

Installing the ARCore service on a device

Now, with the ARCore SDK installed on our development environment, we can proceed with installing the ARCore service on our test device. Use the following steps to install the ARCore service on your device:

NOTE: this step is only required when working with the Preview SDK of ARCore. When Google ARCore 1.0 is released you will not need to perform this step.

Grab your mobile device and enable the developer and debugging options by doing the following:
Opening the Settings app
Selecting System
Scrolling to the bottom and selecting About phone
Scrolling again to the bottom and tapping on Build number seven times
Going back to the previous screen and selecting Developer options near the bottom
Selecting USB debugging

Download the ARCore service APK from https://github.com/google-ar/arcore-android-sdk/releases/download/sdk-preview/arcore-preview.apk to the Android installation folder (C:\Android). Also note that this URL will likely change in the future.

Connect your mobile device with a USB cable. If this is your first time connecting, you may have to wait several minutes for drivers to install. You will then be prompted on the device to allow the connection. Select Allow to enable the connection.

Go back to your Command Prompt or Windows shell and run the following command:

adb install -r -d arcore-preview.apk
// ON WINDOWS USE: sdk\platform-tools\adb install -r -d arcore-preview.apk

After the command is run, you will see the word Success. This completes the installation of ARCore for the Android platform. In the next section, we will build our first sample ARCore application.

Build and deploy

Now that we have all the tedious installation stuff out of the way, it's time to build and deploy a sample app to your Android device. Let's begin by jumping back to Android Studio and following the given steps:

Select the Open an existing Android Studio project option from the Welcome to Android Studio window. If you accidentally closed Android Studio, just launch it again. Navigate to and select the Android\arcore-android-sdk\samples\java_arcore_hello_ar folder, as follows. Click on OK.

If this is your first time running this project, you will encounter some dependency errors, such as the one here. In order to resolve the errors, just click on the link at the bottom of the error message. This will open a dialog, and you will be prompted to accept and then download the required dependencies. Keep clicking on the links until you see no more errors.

Ensure that your mobile device is connected and then, from the menu, choose Run - Run. This should start the app on your device, but you may still need to resolve some dependency errors. Just remember to click on the links to resolve the errors. This will open a small dialog. Select the app option.
If you do not see the app option, select Build - Make Project from the menu. Again, resolve any dependency errors by clicking on the links. "Your patience will be rewarded." - Alton Brown Select your device from the next dialog and click on OK. This will launch the app on your device. Ensure that you allow the app to access the device's camera. The following is a screenshot showing the app in action: Great, we have built and deployed our first Android ARCore app together. In the next section, we will take a quick look at the Java source code. Exploring the code Now, let's take a closer look at the main pieces of the app by digging into the source code. Follow the given steps to open the app's code in Android Studio: From the Project window, find and double-click on the HelloArActivity, as shown: After the source is loaded, scroll through the code to the following section: private void showLoadingMessage() { runOnUiThread(new Runnable() { @Override public void run() { mLoadingMessageSnackbar = Snackbar.make( HelloArActivity.this.findViewById(android.R.id.content), "Searching for surfaces...", Snackbar.LENGTH_INDEFINITE); mLoadingMessageSnackbar.getView().setBackgroundColor(0xbf323232); mLoadingMessageSnackbar.show(); } }); } Note the highlighted text—"Searching for surfaces..". Select this text and change it to "Searching for ARCore surfaces..". The showLoadingMessage function is a helper for displaying the loading message. Internally, this function calls runOnUIThread, which in turn creates a new instance of Runnable and then adds an internal run function. We do this to avoid thread blocking on the UI, a major no-no. Inside the run function is where the messaging is set and the message Snackbar is displayed. From the menu, select Run - Run 'app' to start the app on your device. Of course, ensure that your device is connected by USB. Run the app on your device and confirm that the message has changed. Great, now we have a working app with some of our own code. This certainly isn't a leap, but it's helpful to walk before we run. In this article, we started exploring ARCore by building and deploying an AR app for the Android platform. We did this by first installing Android Studio. Then, we installed the ARCore SDK and ARCore service onto our test mobile device. Next, we loaded up the sample ARCore app and patiently installed the various required build and deploy dependencies. After a successful build, we deployed the app to our device and tested. Finally, we tested making a minor code change and then deployed another version of the app. You read an excerpt from the book, Learn ARCore - Fundamentals of Google ARCore, written by Micheal Lanham. This book will help you will create next-generation Augmented Reality and Mixed Reality apps with the latest version of Google ARCore. Read More Google ARCore is pushing immersive computing forward Types of Augmented Reality targets
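To recap the small edit described above, the modified showLoadingMessage helper ends up looking roughly like this; it is a sketch based directly on the snippet shown earlier, with only the message string changed:

// HelloArActivity.java -- showLoadingMessage after the text change
private void showLoadingMessage() {
    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            mLoadingMessageSnackbar = Snackbar.make(
                HelloArActivity.this.findViewById(android.R.id.content),
                "Searching for ARCore surfaces...",   // updated message
                Snackbar.LENGTH_INDEFINITE);
            mLoadingMessageSnackbar.getView().setBackgroundColor(0xbf323232);
            mLoadingMessageSnackbar.show();
        }
    });
}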


Building a Web Service with Laravel 5

Kunal Chaudhari
24 Apr 2018
15 min read
A web service is an application that runs on a server and allows a client (such as a browser) to remotely write/retrieve data to/from the server over HTTP. In this article we will be covering the following set of topics: using Laravel to create a web service; writing database migrations and seed files; creating API endpoints to make data publicly accessible; serving images from Laravel.

The interface of a web service will be one or more API endpoints, sometimes protected with authentication, that will return data in an XML or JSON payload. Web services are a speciality of Laravel, so it won't be hard to create one for Vuebnb. We'll use routes for our API endpoints and represent the listings with Eloquent models that Laravel will seamlessly synchronize with the database. Laravel also has inbuilt features to add API architectures such as REST, though we won't need this for our simple use case.

Mock data

The mock listing data is in the file database/data.json. This file includes a JSON-encoded array of 30 objects, with each object representing a different listing. Having built the listing page prototype, you'll no doubt recognize a lot of the same properties on these objects, including the title, address, and description.

database/data.json:

[
  {
    "id": 1,
    "title": "Central Downtown Apartment with Amenities",
    "address": "...",
    "about": "...",
    "amenity_wifi": true,
    "amenity_pets_allowed": true,
    "amenity_tv": true,
    "amenity_kitchen": true,
    "amenity_breakfast": true,
    "amenity_laptop": true,
    "price_per_night": "$89",
    "price_extra_people": "No charge",
    "price_weekly_discount": "18%",
    "price_monthly_discount": "50%"
  },
  {
    "id": 2,
    ...
  },
  ...
]

Each mock listing includes several images of the room as well. Images aren't really part of a web service, but they will be stored in a public folder in our app to be served as needed.

Database

Our web service will require a database table for storing the mock listing data. To set this up we'll need to create a schema and migration. We'll then create a seeder that will load and parse our mock data file and insert it into the database, ready for use in the app.

Migration

A migration is a special class that contains a set of actions to run against the database, such as creating or modifying a database table. Migrations ensure your database gets set up identically every time you create a new instance of your app, for example, installing in production or on a teammate's machine. To create a new migration, use the make:migration Artisan CLI command. The argument of the command should be a snake-cased description of what the migration will do:

$ php artisan make:migration create_listings_table

You'll now see your new migration in the database/migrations directory. You'll notice the filename has a prefixed timestamp, such as 2017_06_20_133317_create_listings_table.php. The timestamp allows Laravel to determine the proper order of the migrations, in case it needs to run more than one at a time.
Your new migration declares a class that extends Migration. It overrides two methods: up, which is used to add new tables, columns, or indexes to your database; and down, which is used to delete them. We'll implement these methods shortly. 2017_06_20_133317_create_listings_table.php: <?php use Illuminate\Support\Facades\Schema; use Illuminate\Database\Schema\Blueprint; use Illuminate\Database\Migrations\Migration; class CreateListingsTable extends Migration { public function up() { // } public function down() { // } } Schema A schema is a blueprint for the structure of a database. For a relational database such as MySQL, the schema will organize data into tables and columns. In Laravel, schemas are declared by using the Schema facade's create method. We'll now make a schema for a table to hold Vuebnb listings. The columns of the table will match the structure of our mock listing data. Note that we set a default false value for the amenities and allow the prices to have a NULL value. All other columns require a value. The schema will go inside our migration's up method. We'll also fill out the down with a call to Schema::drop. 2017_06_20_133317_create_listings_table.php: public function up() { Schema::create('listings', function (Blueprint $table) { $table->primary('id'); $table->unsignedInteger('id'); $table->string('title'); $table->string('address'); $table->longText('about'); // Amenities $table->boolean('amenity_wifi')->default(false); $table->boolean('amenity_pets_allowed')->default(false); $table->boolean('amenity_tv')->default(false); $table->boolean('amenity_kitchen')->default(false); $table->boolean('amenity_breakfast')->default(false); $table->boolean('amenity_laptop')->default(false); // Prices $table->string('price_per_night')->nullable(); $table->string('price_extra_people')->nullable(); $table->string('price_weekly_discount')->nullable(); $table->string('price_monthly_discount')->nullable(); }); } public function down() { Schema::drop('listings'); } A facade is an object-oriented design pattern for creating a static proxy to an underlying class in the service container. The facade is not meant to provide any new functionality; its only purpose is to provide a more memorable and easily readable way of performing a common action. Think of it as an object-oriented helper function. Execution Now that we've set up our new migration, let's run it with this Artisan command: $ php artisan migrate You should see an output like this in the Terminal: Migrating: 2017_06_20_133317_create_listings_table Migrated:            2017_06_20_133317_create_listings_table To confirm the migration worked, let's use Tinker to show the new table structure. If you've never used Tinker, it's a REPL tool that allows you to interact with a Laravel app on the command line. When you enter a command into Tinker it will be evaluated as if it were a line in your app code. Firstly, open the Tinker shell: $ php artisan tinker Now enter a PHP statement for evaluation. Let's use the DB facade's select method to run an SQL DESCRIBE query to show the table structure: >>>> DB::select('DESCRIBE listings;'); The output is quite verbose so I won't reproduce it here, but you should see an object with all your table details, confirming the migration worked. Seeding mock listings Now that we have a database table for our listings, let's seed it with the mock data. 
To do so we're going to have to do the following:  Load the database/data.json file  Parse the file  Insert the data into the listings table Creating a seeder Laravel includes a seeder class that we can extend called Seeder. Use this Artisan command to implement it: $ php artisan make:seeder ListingsTableSeeder When we run the seeder, any code in the run method is executed. database/ListingsTableSeeder.php: <?php use Illuminate\Database\Seeder; class ListingsTableSeeder extends Seeder { public function run() { // } } Loading the mock data Laravel provides a File facade that allows us to open files from disk as simply as File::get($path). To get the full path to our mock data file we can use the base_path() helper function, which returns the path to the root of our application directory as a string. It's then trivial to convert this JSON file to a PHP array using the built-in json_decode method. Once the data is an array, it can be directly inserted into the database given that the column names of the table are the same as the array keys. database/ListingsTableSeeder.php: public  function  run() { $path  = base_path()  . '/database/data.json'; $file  = File::get($path); $data  = json_decode($file,  true); } Inserting the data In order to insert the data, we'll use the DB facade again. This time we'll call the table method, which returns an instance of Builder. The Builder class is a fluent query builder that allows us to query the database by chaining constraints, for example, DB::table(...)->where(...)->join(...) and so on. Let's use the insert method of the builder, which accepts an array of column names and values. database/seeds/ListingsTableSeeder.php: public  function  run() { $path  = base_path()  . '/database/data.json'; $file  = File::get($path); $data  = json_decode($file,  true); DB::table('listings')->insert($data); } Executing the seeder To execute the seeder we must call it from the DatabaseSeeder.php file, which is in the same directory. database/seeds/DatabaseSeeder.php: <?php use Illuminate\Database\Seeder; class DatabaseSeeder extends Seeder { public function run() { $this->call(ListingsTableSeeder::class); } } With that done, we can use the Artisan CLI to execute the seeder: $ php artisan db:seed You should see the following output in your Terminal: Seeding: ListingsTableSeeder We'll again use Tinker to check our work. There are 30 listings in the mock data, so to confirm the seed was successful, let's check for 30 rows in the database: $ php artisan tinker >>>> DB::table('listings')->count(); # Output: 30 Finally, let's inspect the first row of the table just to be sure its content is what we expect: >>>> DB::table('listings')->get()->first(); Here is the output: => {#732 +"id": 1, +"title": "Central Downtown Apartment with Amenities", +"address": "No. 11, Song-Sho Road, Taipei City, Taiwan 105", +"about": "...", +"amenity_wifi": 1, +"amenity_pets_allowed": 1, +"amenity_tv": 1, +"amenity_kitchen": 1, +"amenity_breakfast": 1, +"amenity_laptop": 1, +"price_per_night": "$89", +"price_extra_people": "No charge", +"price_weekly_discount": "18%", +"price_monthly_discount": "50%" } If yours looks like that you're ready to move on! Listing model We've now successfully created a database table for our listings and seeded it with mock listing data. How do we access this data now from the Laravel app? We saw how the DB facade lets us execute queries on our database directly. But Laravel provides a more powerful way to access data via the Eloquent ORM. 
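One last note on seeding before moving on: if you change the mock data or the schema later, the table can be rebuilt and re-seeded in one go. The command below is standard Artisan tooling rather than something from the original article, so treat it as an optional aside:

$ php artisan migrate:refresh --seed

This rolls back and re-runs all migrations, then executes the seeders again, leaving you with a freshly populated listings table.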
Eloquent ORM Object-Relational Mapping (ORM) is a technique for converting data between incompatible systems in object-oriented programming languages. Relational databases such as MySQL can only store scalar values such as integers and strings, organized within tables. We want to make use of rich objects in our app, though, so we need a means of robust conversion. Eloquent is the ORM implementation used in Laravel. It uses the active record design pattern, where a model is tied to a single database table, and an instance of the model is tied to a single row. To create a model in Laravel using Eloquent ORM, simply extend the Illuminate\Database\Eloquent\Model class using Artisan: $ php artisan make:model Listing This generates a new file. app/Listing.php: <?php namespace App; use Illuminate\Database\Eloquent\Model; class Listing extends Model { // } How do we tell the ORM what table to map to, and what columns to include? By default, the Model class uses the class name (Listing) in lowercase (listing) as the table name to use. And, by default, it uses all the fields from the table. Now, any time we want to load our listings we can use code such as this, anywhere in our app: <?php // Load all listings $listings = \App\Listing::all(); // Iterate listings, echo the address foreach ($listings as $listing) { echo $listing->address . '\n' ; } /* * Output: * * No. 11, Song-Sho Road, Taipei City, Taiwan 105 * 110, Taiwan, Taipei City, Xinyi District, Section 5, Xinyi Road, 7 * No. 51, Hanzhong Street, Wanhua District, Taipei City, Taiwan 108 * ... */ Casting The data types in a MySQL database don't completely match up to those in PHP. For example, how does an ORM know if a database value of 0 is meant to be the number 0, or the Boolean value of false? An Eloquent model can be given a $casts property to declare the data type of any specific attribute. $casts is an array of key/values where the key is the name of the attribute being cast, and the value is the data type we want to cast to. For the listings table, we will cast the amenities attributes as Booleans. app/Listing.php: <?php namespace App; use Illuminate\Database\Eloquent\Model; class Listing extends Model { protected $casts = [ 'amenity_wifi' => 'boolean', 'amenity_pets_allowed' => 'boolean', 'amenity_tv' => 'boolean', 'amenity_kitchen' => 'boolean', 'amenity_breakfast' => 'boolean', 'amenity_laptop' => 'boolean' ]; } Now these attributes will have the correct type, making our model more robust: echo  gettype($listing->amenity_wifi()); //  boolean Public interface The final piece of our web service is the public interface that will allow a client app to request the listing data. Since the Vuebnb listing page is designed to display one listing at a time, we'll at least need an endpoint to retrieve a single listing. Let's now create a route that will match any incoming GET requests to the URI /api/listing/{listing} where {listing} is an ID. We'll put this in the routes/api.php file, where routes are automatically given the /api/ prefix and have middleware optimized for use in a web service by default. We'll use a closure function to handle the route. The function will have a $listing argument, which we'll type hint as an instance of the Listing class, that is, our model. Laravel's service container will resolve this as an instance with the ID matching {listing}. We can then encode the model as JSON and return it as a response. 
routes/api.php: <?php use App\Listing; Route::get('listing/{listing}', function(Listing $listing) { return $listing->toJson(); }); We can test this works by using the curl command from the Terminal: $ curl http://vuebnb.test/api/listing/1 The response will be the listing with ID 1: Controller We'll be adding more routes to retrieve the listing data as the project progresses. It's a best practice to use a controller class for this functionality to keep a separation of concerns. Let's create one with Artisan CLI: $ php artisan make:controller ListingController We'll then move the functionality from the route into a new method, get_listing_api. app/Http/Controllers/ListingController.php: <?php namespace App\Http\Controllers; use Illuminate\Http\Request; use App\Listing; class ListingController extends Controller { public function get_listing_api(Listing $listing) { return $listing->toJson(); } } For the Route::get method we can pass a string as the second argument instead of a closure function. The string should be in the form [controller]@[method], for example, ListingController@get_listing_web. Laravel will correctly resolve this at runtime. routes/api.php: <?php Route::get('/listing/{listing}', 'ListingController@get_listing_api'); Images As stated at the beginning of the article, each mock listing comes with several images of the room. These images are not in the project code and must be copied from a parallel directory in the code base called images. Copy the contents of this directory into the public/images folder: $ cp -a ../images/. ./public/images Once you've copied these files, public/images will have 30 sub-folders, one for each mock listing. Each of these folders will contain exactly four main images and a thumbnail image: Accessing images Files in the public directory can be directly requested by appending their relative path to the site URL. For example, the default CSS file, public/css/app.css, can be requested at http://vuebnb.test/css/app.css. The advantage of using the public folder, and the reason we've put our images there, is to avoid having to create any logic for accessing them. A frontend app can then directly call the images in an img tag. You may think it's inefficient for our web server to serve images like this, and you'd be right. Let's try to open one of the mock listing images in our browser to test this thesis: http://vuebnb.test/images/1/Image_1.jpg: Image links The payload for each listing in the web service should include links to these new images so a client app knows where to find them. Let's add the image paths to our listing API payload so it looks like this: { "id": 1, "title": "...", "description": "...", ... "image_1": "http://vuebnb.test/app/image/1/Image_1.jpg", "image_2": "http://vuebnb.test/app/image/1/Image_2.jpg", "image_3": "http://vuebnb.test/app/image/1/Image_3.jpg", "image_4": "http://vuebnb.test/app/image/1/Image_4.jpg" } To implement this, we'll use our model's toArray method to make an array representation of the model. We'll then easily be able to add new fields. Each mock listing has exactly four images, numbered 1 to 4, so we can use a for loop and the asset helper to generate fully- qualified URLs to files in the public folder. We finish by creating an instance of the Response class by calling the response helper. We use the json; method and pass in our array of fields, returning the result. 
app/Http/Controllers/ListingController.php: public function get_listing_api(Listing $listing) { $model = $listing->toArray(); for($i = 1; $i <=4; $i++) { $model['image_' . $i] = asset( 'images/' . $listing->id . '/Image_' . $i . '.jpg' ); } return response()->json($model); } The /api/listing/{listing} endpoint is now ready for consumption by a client app. To summarize, we built a web service with Laravel to make the data publicly accessible. This involved setting up a database table using a migration and schema, then seeding the database with mock listing data. We then created a public interface for the web service using routes. You enjoyed an excerpt from a book written by Anthony Gore, titled Full-Stack Vue.js 2 and Laravel 5 which would help you bring the frontend and backend together with Vue, Vuex, and Laravel. Read More Testing RESTful Web Services with Postman How to develop RESTful web services in Spring        
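As a closing aside on the Eloquent model from earlier in this article, here is a small, hypothetical Tinker session (not from the book) showing how the $casts property affects what you get back when loading a single listing:

$ php artisan tinker
>>> $listing = \App\Listing::find(1);   // load the row with id 1
>>> $listing->amenity_wifi;             // true (cast to a PHP boolean)
>>> gettype($listing->amenity_wifi);    // "boolean"
>>> $listing->title;                    // "Central Downtown Apartment with Amenities"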


Perform Advanced Programming with Rust

Packt Editorial Staff
23 Apr 2018
7 min read
Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety. In today’s tutorial we are focusing on equipping you with recipes to programming with Rust and also help you define expressions, constants, and variable bindings. Let us get started: Defining an expression An expression, in simple words, is a statement in Rust by using which we can create logic and workflows in the program and applications. We will deep dive into understanding expressions and blocks in Rust. Getting ready We will require the Rust compiler and any text editor for coding. How to do it... Follow the ensuing steps: Create a file named expression.rs with the next code snippet. Declare the main function and create the variables x_val, y_val, and z_val: //  main  point  of  execution fn  main()  { //  expression let  x_val  =  5u32; //  y  block let  y_val  =  { let  x_squared  =  x_val  *  x_val; let  x_cube  =  x_squared  *  x_val; //  This  expression  will  be  assigned  to  `y_val` x_cube  +  x_squared  +  x_val }; //  z  block let  z_val  =  { //  The  semicolon  suppresses  this  expression  and  `()`  is assigned  to  `z` 2  *  x_val; }; //  printing  the  final  outcomes println!("x  is  {:?}",  x_val); println!("y  is  {:?}",  y_val); println!("z  is  {:?}",  z_val); } You should get the ensuing output upon running the code. Please refer to the following screenshot: How it works... All the statements that end in a semicolon (;) are expressions. A block is a statement that has a set of statements and variables inside the {} scope. The last statement of a block is the value that will be assigned to the variable. When we close the last statement with a semicolon, it returns () to the variable. In the preceding recipe, the first statement which is a variable named x_val , is assigned to the value 5. Second, y_val is a block that performs certain operations on the variable x_val and a few more variables, which are x_squared and x_cube that contain the squared and cubic values of the variable x_val , respectively. The variables x_squared and x_cube , will be deleted soon after the scope of the block. The block where we declare the z_val variable has a semicolon at the last statement which assigns it to the value of (), suppressing the expression. We print out all the values in the end. We print all the declared variables values in the end. Defining constants Rust provides the ability to assign and maintain constant values across the code in Rust. These values are very useful when we want to maintain a global count, such as a timer-- threshold--for example. Rust provides two const keywords to perform this activity. You will learn how to deliver constant values globally in this recipe. Getting ready We will require the Rust compiler and any text editor for coding. How to do it... Follow these steps: Create a file named constant.rs with the next code snippet. 
Declare the global UPPERLIMIT using constant: //  Global  variables  are  declared  outside  scopes  of  other function const  UPPERLIMIT:  i32  =  12; Create the is_big function by accepting a single integer as input: //  function  to  check  if  bunber fn  is_big(n:  i32)  ->  bool  { //  Access  constant  in  some  function n  >  UPPERLIMIT } In the main function, call the is_big function and perform the decision-making statement: fn  main()  { let  random_number  =  15; //  Access  constant  in  the  main  thread println!("The  threshold  is  {}",  UPPERLIMIT); println!("{}  is  {}",  random_number,  if is_big(random_number)  {  "big"  }  else  {  "small" }); //  Error!  Cannot  modify  a  `const`. //  UPPERLIMIT  =  5; } You should get the following screenshot as output upon running the preceding code: How it works... The workflow of the recipe is fairly simple, where we have a function to check whether an integer is greater than a fixed threshold or not. The UPPERLIMIT variable defines the fixed threshold for the function, which is a constant whose value will not change in the code and is accessible throughout the program. We assigned 15 to random_number and passed it via is_big  (integer  value); and we then get a boolean output, either true or false, as the return type of the function is a bool type. The answer to our situation is false as 15 is not bigger than 12, which the UPPERLIMIT value set as the constant. We performed this condition checking using the if...else statement in Rust. We cannot change the UPPERLIMIT value; when attempted, it will throw an error, which is commented in the code section. Constants declare constant values. They represent a value, not a memory address: type  =  value; Performing variable bindings Variable binding refers to how a variable in the Rust code is bound to a type. We will cover pattern, mutability, scope, and shadow concepts in this recipe. Getting ready We will require the Rust compiler and any text editor for coding. How to do it... Perform the following step: Create a file named binding.rs and enter a code snippet that includes declaring the main function and different variables: fn  main()  { //  Simplest  variable  binding let  a  =  5; //  pattern let  (b,  c)  =  (1,  2); //  type  annotation let  x_val:  i32  =  5; //  shadow  example let  y_val:  i32  =  8; { println!("Value  assigned  when  entering  the scope  :  {}",  y_val);  //  Prints  "8". let  y_val  =  12; println!("Value  modified  within  scope  :{}",  y_val); //  Prints  "12". } println!("Value  which  was  assigned  first  :  {}",  y_val); //  Prints  "8". let  y_val  =  42; println!("New  value  assigned  :  {}",  y_val); //Prints  "42". } You should get the following screenshot as output upon running the preceding code: How it works... The let statement is the simplest way to create a binding, where we bind a variable to a value, which is the case with variable a. To create a pattern with the let statement, we assign the pattern values to b and c values in the same pattern. Rust is a statically typed language. This means that we have to specify our types during an assignment, and at compile time, it is checked to see if it is compatible. Rust also has the type reference feature that identifies the variable type automatically at compile time. The variable_name  : type is the format we use to explicitly mention the type in Rust. We read the assignment in the following format: x_val is a binding with the type i32 and the value 5. 
Here, we declared x_val as a 32-bit signed integer. However, Rust has many different primitive integer types that begin with i for signed integers and u for unsigned integers, and the possible integer sizes are 8, 16, 32, and 64 bits. Variable bindings have a scope that makes the variable alive only in the scope. Once it goes out of the scope, the resources are freed. A block is a collection of statements enclosed by {}. Function definitions are also blocks! We use a block to illustrate the feature in Rust that allows variable bindings to be shadowed. This means that a later variable binding can be done with the same name, which in our case is y_val. This goes through a series of value changes, as a new binding that is currently in scope overrides the previous binding. Shadowing enables us to rebind a name to a value of a different type. This is the reason why we are able to assign new values to the immutable y_val variable in and out of the block. [box type="shadow" class="" width=""]This article is an extract taken from Rust Cookbook written by Vigneshwer Dhinakaran. You will find more than 80 practical recipes written in Rust that will allow you to use the code samples right away in your existing applications.[/box] Read More 20 ways to describe programming in 5 words Top 5 programming languages for crunching Big Data effectively    
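The binding.rs recipe shadows y_val with values of the same type. To make the last point concrete, here is a small standalone sketch, not taken from the book, showing a binding being shadowed by a value of a completely different type:

// shadow_types.rs -- standalone sketch illustrating shadowing across types
fn main() {
    // `spaces` starts life as a string slice (&str)...
    let spaces = "   ";
    // ...and is then shadowed by a binding of a different type (usize).
    // Mutating a single binding to a new type would not compile; shadowing does.
    let spaces = spaces.len();
    println!("number of spaces: {}", spaces);
}

Compile and run it with rustc, just like the recipes above, to see the shadowed value printed.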

How data scientists test hypotheses and probability

Richard Gall
23 Apr 2018
4 min read
Why hypotheses are important in statistical analysis

Hypothesis testing allows researchers and statisticians to state hypotheses that are then assessed to determine how likely the observed findings would be. This statistics tutorial has been taken from Basic Statistics and Data Mining for Data Science. Whenever you wish to make an inference about a population from a sample, you must test a specific hypothesis. It's common practice to state 2 different hypotheses: the null hypothesis, which states that there is no effect, and the alternative/research hypothesis, which states that there is an effect. So, the null hypothesis is the one which says that there is no difference. For example, you might be looking at the mean income between males and females, but the null hypothesis you are testing is that there is no difference between the 2 groups. The alternative hypothesis, meanwhile, is generally, although not exclusively, the one that researchers are really interested in. In this example, you might hypothesize that the mean income between males and females is different. Read more: How to predict Bitcoin prices from historical and live data.

Why probability is important in statistical analysis

In statistics, nothing is ever certain because we are always dealing with samples rather than populations. This is why we always have to work in probabilities. The way hypotheses are assessed is by calculating the probability, or the likelihood, of finding our result. A probability value, which can range from zero to one (corresponding to 0% and 100% in percentages), is essentially a way of measuring the likelihood of a particular event occurring. You can use these values to assess whether any differences you have found are likely to be the result of random chance.

How do hypotheses and probability interact?

It starts getting really interesting once we begin looking at how hypotheses and probability interact. Here's an example. Suppose I want to know who is going to win the Super Bowl. I ask a fellow statistician, and he tells me that he's built a predictive model and that he knows which team is going to win. Fine; my next question is how confident he is in that prediction. He says he's 50% confident. Am I going to trust his prediction? Of course not: there are only 2 possible outcomes, and 50% is ultimately just random chance. So, say I ask another statistician. He also tells me that he has a prediction and that he has built a predictive model, and he's 75% confident in the prediction he has made. I'm more likely to trust this prediction; there is a 75% chance of it being right and a 25% chance of it being wrong. But let's say I'm feeling cautious, and a 25% chance of being wrong is too high. So, I ask another statistician for her prediction. She tells me that she's also built a predictive model, one in which she has 90% confidence. The higher the probability attached to a prediction, the more willing we are to act on it, and the same logic applies to hypothesis testing. Having formally stated our hypotheses, we then have to select a criterion for acceptance or rejection of the null hypothesis. With probability tests like the chi-squared test, the t-test, or regression or correlation, you're testing the likelihood that a statistic of the magnitude that you obtained, or greater, would have occurred by chance, assuming that the null hypothesis is true. It's important to remember that you always assess this probability under the assumption that the null hypothesis is true. You only reject the null hypothesis if you can say that the results would have been extremely unlikely under the conditions set by the null hypothesis.
In this case, if you can reject the null hypothesis, you have found support for the alternative/research hypothesis. This doesn’t prove the alternative hypothesis, but it does tell you that the null hypothesis is unlikely to be true. The criterion we typically use is whether the significance level sits above or below 0.05 (5%), indicating that a statistic of the size that we obtained, would only be likely to occur on 5% of occasions. By choosing a 5% criterion you are accepting that you will make a mistake in rejecting the null hypothesis 1 in 20 times. Replication and data mining If in traditional statistics we work with hypotheses and probabilities to deal with the fact that we’re always working with a sample rather than a population, in data mining, we can work in a slightly different way - we can use something called replication instead. In a data mining project we might have 2 data sets - a training data set and a testing data set. We build our model on a training set and once we’ve done that, we take the results of that model and then apply it to a testing data set to see if we find similar results.
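To make the mechanics concrete, here is a short R sketch, not taken from the book, that applies the 5% criterion to the income example mentioned earlier; the figures are invented purely for illustration.

# Hypothetical data: income for two groups (values are made up).
set.seed(42)
income_m <- rnorm(100, mean = 52000, sd = 8000)
income_f <- rnorm(100, mean = 50000, sd = 8000)

# Null hypothesis: no difference in mean income between the groups.
result <- t.test(income_m, income_f)
print(result$p.value)

# Apply the 5% criterion discussed above.
if (result$p.value < 0.05) {
  print("Reject the null hypothesis: a difference this large is unlikely by chance alone.")
} else {
  print("Fail to reject the null hypothesis.")
}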

Data bindings with Knockout.js

Vijin Boricha
23 Apr 2018
7 min read
Today, we will learn about three data binding abilities of Knockout.js. Data bindings are attributes added by the framework for the purpose of data access between elements and view scope. While Observable arrays are efficient in accessing the list of objects with the number of operations on top of the display of the list using the foreach function, Knockout.js has provided three additional data binding abilities: Control-flow bindings Appearance bindings Interactive bindings Let us review these data bindings in detail in the following sections. Control-flow bindings As the name suggests, control-flow bindings help us access the data elements based on a certain condition. The if, if-not, and with are the control-flow bindings available from the Knockout.js. In the following example, we will be using if and with control-flow bindings. We have added a new attribute to the Employee object called age; we are displaying the age value in green only if it is greater than 20. Similarly, we have added another markedEmployee. With control-flow binding, we can limit the scope of access to that specific employee object in the following paragraph. Add the following code snippet to index.html and run the program to see the if and with control-flow bindings working: <!DOCTYPE html> <html> <head> <title>Knockout JS</title> </head> <body> <h1>Welcome to Knockout JS programming</h1> <table border="1" > <tr > <th colspan="2" style="padding:10px;"> <b>Employee Data - Organization : <span style="color:red" data-bind='text: organizationName'> </span> </b> </th> </tr> <tr> <td style="padding:10px;">Employee First Name:</td> <td style="padding:10px;"> <span data-bind='text: empFirstName'></span> </td> </tr> <tr> <td style="padding:10px;">Employee Last Name:</td> <td style="padding:10px;"> <span data-bind='text: empLastName'></span> </td> </tr> </table> <p>Organization Full Name : <span style="color:red" data-bind='text: orgFullName'></span> </p> <!-- Observable Arrays--> <h2>Observable Array Example : </h2> <table border="1"> <thead><tr> <th style="padding:10px;">First Name</th> <th style="padding:10px;">Last Name</th> <th style="padding:10px;">Age</th> </tr></thead> <tbody data-bind='foreach: organization'> <tr> <td style="padding:10px;" data-bind='text: firstName'></td> <td style="padding:10px;" data-bind='text: lastName'></td> <td data-bind="if: age() > 20" style="color: green;padding:10px;"> <span data-bind='text:age'></span> </td> </tr> </tbody> </table> <!-- with control flow bindings --> <p data-bind='with: markedEmployee'> Employee <strong data-bind="text: firstName() + ', ' + lastName()"> </strong> is marked with the age <strong data-bind='text: age'> </strong> </p> <h2>Add New Employee to Observable Array</h2> First Name : <input data-bind="value: newFirstName" /> Last Name : <input data-bind="value: newLastName" /> Age : <input data-bind="value: newEmpAge" /> <button data-bind='click: addEmployee'>Add Employee</button> <!-- JavaScript resources --> <script type='text/javascript' src='js/knockout-3.4.2.js'></script> <script type='text/javascript'> function Employee (firstName, lastName,age) { this.firstName = ko.observable(firstName); this.lastName = ko.observable(lastName); this.age = ko.observable(age); }; this.addEmployee = function() { this.organization.push(new Employee (employeeViewModel.newFirstName(), employeeViewModel.newLastName(), employeeViewModel.newEmpAge())); }; var employeeViewModel = { empFirstName: "Tony", empLastName: "Henry", //Observable organizationName: ko.observable("Sun"), 
newFirstName: ko.observable(""), newLastName: ko.observable(""), newEmpAge: ko.observable(""), //With control flow object markedEmployee: ko.observable(new Employee("Garry", "Parks", "65")), //Observable Arrays organization : ko.observableArray([ new Employee("John", "Kennedy", "24"), new Employee("Peter", "Hennes","18"), new Employee("Richmond", "Smith","54") ]) }; //Computed Observable employeeViewModel.orgFullName = ko.computed(function() { return employeeViewModel.organizationName() + " Limited"; }); ko.applyBindings(employeeViewModel); employeeViewModel.organizationName("Oracle"); </script> </body> </html> Run the preceding program to see the if control-flow acting on the Age field, and the with control-flow showing a marked employee record with age 65: Appearance bindings Appearance bindings deal with displaying the data from binding elements on view components in formats such as text and HTML, and applying styles with the help of a set of six bindings, as follows: Text: <value>—Sets the value to an element. Example: <td data-bind='text: name'></td> HTML: <value>—Sets the HTML value to an element. Example: //JavaScript: function Employee(firstname, lastname, age) { ... this.formattedName = ko.computed(function() { return "<strong>" + this.firstname() + "</strong>"; }, this); } //Html: <span data-bind='html: markedEmployee().formattedName'></span> Visible: <condition>—An element can be shown or hidden based on the condition. Example: <td data-bind='visible: age() > 20' style='color: green'> span data-bind='text:age'> CSS: <object>—An element can be associated with a CSS class. Example: //CSS: .strongEmployee { font-weight: bold; } //HTML: <span data-bind='text: formattedName, css: {strongEmployee}'> </span> Style: <object>—Associates an inline style to the element. Example: <span data-bind='text: age, style: {color: age() > 20 ? "green" :"red"}'> </span> Attr: <object>—Defines an attribute for the element. Example: <p><a data-bind='attr: {href: featuredEmployee().populatelink}'> View Employee</a></p> Interactive bindings Interactive bindings help the user interact with the form elements to be associated with corresponding viewmodel methods or events to be triggered in the pages. Knockout JS supports the following interactive bindings: Click: <method>—An element click invokes a ViewModel method. Example: <button data-bind='click: addEmployee'>Submit</button> Value:<property>—Associates the form element value to the ViewModel attribute. Example: <td>Age: <input data-bind='value: age' /></td> Event: <object>—With an user-initiated event, it invokes a method. Example: <p data-bind='event: {mouseover: showEmployee, mouseout: hideEmployee}'> Age: <input data-bind='value: Age' /> </p> Submit: <method>—With a form submit event, it can invoke a method. Example: <form data-bind="submit: addEmployee"> <!—Employee form fields --> <button type="submit">Submit</button> </form> Enable: <property>—Conditionally enables the form elements. Example: last name field is enabled only after adding first name field. Disable: <property>—Conditionally disables the form elements. Example: last name field is disabled after adding first name: <p>Last Name: <input data-bind='value: lastName, disable: firstName' /> </p> Checked: <property>—Associates a checkbox or radio element to the ViewModel attribute. Example: <p>Gender: <input data-bind='checked:gender' type='checkbox' /></p> Options: <array>—Defines a ViewModel array for the<select> element. 
Example: //Javascript: this.designations = ko.observableArray(['manager', 'administrator']); //Html: Designation: <select data-bind='options: designations'></select> selectedOptions: <array>—Defines the active/selected element from the <select> element. Example: Designation: <select data-bind='options: designations, optionsText:"Select", selectedOptions:defaultDesignation'> </select> hasfocus: <property>—Associates the focus attribute to the element. Example: First Name: <input data-bind='value: firstName, hasfocus: firstNameHasFocus' /> We learned about data binding abilities of Knockout.js. You can know more about external data access and Hybrid Mobile Application Development from the book Oracle JET for Developers. Read More Text and appearance bindings and form field bindings Getting to know KnockoutJS Templates    
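The fragments above come from a larger page, so here is a minimal self-contained sketch, not from the book, that combines a few of the bindings listed (value, click, visible, style, and text) in one page; it assumes the same js/knockout-3.4.2.js path used in the earlier example.

<!DOCTYPE html>
<html>
<body>
  <p>
    Age: <input data-bind="value: age" />
    <button data-bind="click: grow">Add a year</button>
  </p>
  <!-- visible and style appearance bindings driven by the same observable -->
  <p data-bind="visible: age() > 20, style: { color: age() > 64 ? 'red' : 'green' }">
    Current age: <span data-bind="text: age"></span>
  </p>
  <script type="text/javascript" src="js/knockout-3.4.2.js"></script>
  <script type="text/javascript">
    var viewModel = {
      age: ko.observable(21)
    };
    viewModel.grow = function () {
      // observables are read and written by calling them as functions
      viewModel.age(Number(viewModel.age()) + 1);
    };
    ko.applyBindings(viewModel);
  </script>
</body>
</html>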

Using R6 classes in R to retrieve live data for markets and wallets

Pravin Dhandre
23 Apr 2018
11 min read
In this tutorial, you will learn to create a simple requester to request external information from an API over the internet. You will also learn to develop exchange and wallet infrastructure using R programming. Creating a simple requester to isolate API calls Now, we will focus on how we actually retrieve live data. This functionality will also be implemented using R6 classes, as the interactions can be complex. First of all, we create a simple Requester class that contains the logic to retrieve data from JSON APIs found elsewhere in the internet and that will be used to get our live cryptocurrency data for wallets and markets. We don't want logic that interacts with external APIs spread all over our classes, so we centralize it here to manage it as more specialized needs come into play later. As you can see, all this object does is offer the public request() method, and all it does is use the formJSON() function from the jsonlite package to call a URL that is being passed to it and send the data it got back to the user. Specifically, it sends it as a dataframe when the data received from the external API can be coerced into dataframe-form. library(jsonlite) Requester <- R6Class( "Requester", public = list( request = function(URL) { return(fromJSON(URL)) } ) ) Developing our exchanges infrastructure Our exchanges have multiple markets inside, and that's the abstraction we will define now. A Market has various private attributes, as we saw before when we defined what data is expected from each file, and that's the same data we see in our constructor. It also offers a data() method to send back a list with the data that should be saved to a database. Finally, it provides setters and getters as required. Note that the setter for the price depends on what units are requested, which can be either usd or btc, to get a market's asset price in terms of US Dollars or Bitcoin, respectively: Market <- R6Class( "Market", public = list( initialize = function(timestamp, name, symbol, rank, price_btc, price_usd) { private$timestamp <- timestamp private$name <- name private$symbol <- symbol private$rank <- rank private$price_btc <- price_btc private$price_usd <- price_usd }, data = function() { return(list( timestamp = private$timestamp, name = private$name, symbol = private$symbol, rank = private$rank, price_btc = private$price_btc, price_usd = private$price_usd )) }, set_timestamp = function(timestamp) { private$timestamp <- timestamp }, get_symbol = function() { return(private$symbol) }, get_rank = function() { return(private$rank) }, get_price = function(base) { if (base == 'btc') { return(private$price_btc) } else if (base == 'usd') { return(private$price_usd) } } ), private = list( timestamp = NULL, name = "", symbol = "", rank = NA, price_btc = NA, price_usd = NA ) ) Now that we have our Market definition, we proceed to create our Exchange definition. This class will receive an exchange name as name and will use the exchange_requester_factory() function to get an instance of the corresponding ExchangeRequester. It also offers an update_markets() method that will be used to retrieve market data with the private markets() method and store it to disk using the timestamp and storage objects being passed to it. Note that instead of passing the timestamp through the arguments for the private markets() method, it's saved as a class attribute and used within the private insert_metadata() method. 
This technique provides cleaner code, since the timestamp does not need to be passed through each function and can be retrieved when necessary. The private markets() method calls the public markets() method in the ExchangeRequester instance saved in the private requester attribute (which was assigned to by the factory) and applies the private insert_metadata() method to update the timestamp for such objects with the one sent to the public update_markets() method call before sending them to be written to the database: source("./requesters/exchange-requester-factory.R", chdir = TRUE) Exchange <- R6Class( "Exchange", public = list( initialize = function(name) { private$requester <- exchange_requester_factory(name) }, update_markets = function(timestamp, storage) { private$timestamp <- unclass(timestamp) storage$write_markets(private$markets()) } ), private = list( requester = NULL, timestamp = NULL, markets = function() { return(lapply(private$requester$markets(), private$insert_metadata)) }, insert_metadata = function(market) { market$set_timestamp(private$timestamp) return(market) } ) ) Now, we need to provide a definition for our ExchangeRequester implementations. As in the case of the Database, this ExchangeRequester will act as an interface definition that will be implemented by the CoinMarketCapRequester. We see that the ExchangeRequester specifies that all exchange requester instances should provide a public markets() method, and that a list is expected from such a method. From context, we know that this list should contain Market instances. Also, each ExchangeRequester implementation will contain a Requester object by default, since it's being created and assigned to the requester private attribute upon class instantiation. Finally, each implementation will also have to provide a create_market() private method and will be able to use the request() private method to communicate to the Requester method request() we defined previously: source("../../../utilities/requester.R") KNOWN_ASSETS = list( "BTC" = "Bitcoin", "LTC" = "Litecoin" ) ExchangeRequester <- R6Class( "ExchangeRequester", public = list( markets = function() list() ), private = list( requester = Requester$new(), create_market = function(resp) NULL, request = function(URL) { return(private$requester$request(URL)) } ) ) Now we proceed to provide an implementation for CoinMarketCapRequester. As you can see, it inherits from ExchangeRequester, and it provides the required method implementations. Specifically, the markets() public method calls the private request() method from ExchangeRequester, which in turn calls the request() method from Requester, as we have seen, to retrieve data from the private URL specified. If you request data from CoinMarketCap's API by opening a web browser and navigating to the URL shown (https:/​/​api.​coinmarketcap.​com/​v1/​ticker), you will get a list of market data. That is the data that will be received in our CoinMarketCapRequester instance in the form of a dataframe, thanks to the Requester object, and will be transformed into numeric data where appropriate using the private clean() method, so that it's later used to create Market instances with the apply() function call, which in turn calls the create_market() private method. Note that the timestamp is set to NULL for all markets created this way because, as you may remember from our Exchange class, it's set before writing it to the database. 
There's no need to send the timestamp information all the way down to the CoinMarketCapRequester, since we can simply write at the Exchange level right before we send the data to the database: source("./exchange-requester.R") source("../market.R") CoinMarketCapRequester <- R6Class( "CoinMarketCapRequester", inherit = ExchangeRequester, public = list( markets = function() { data <- private$clean(private$request(private$URL)) return(apply(data, 1, private$create_market)) } ), private = list( URL = "https://api.coinmarketcap.com/v1/ticker", create_market = function(row) { timestamp <- NULL return(Market$new( timestamp, row[["name"]], row[["symbol"]], row[["rank"]], row[["price_btc"]], row[["price_usd"]] )) }, clean = function(data) { data$price_usd <- as.numeric(data$price_usd) data$price_btc <- as.numeric(data$price_btc) data$rank <- as.numeric(data$rank) return(data) } ) ) Finally, here's the code for our exchange_requester_factory(). As you can see, it's basically the same idea we have used for our other factories, and its purpose is to easily let us add more implementations for our ExchangeRequeseter by simply adding else-if statements in it: source("./coinmarketcap-requester.R") exchange_requester_factory <- function(name) { if (name == "CoinMarketCap") { return(CoinMarketCapRequester$new()) } else { stop("Unknown exchange name") } } Developing our wallets infrastructure Now that we are able to retrieve live price data from exchanges, we turn to our Wallet definition. As you can see, it specifies the type of private attributes we expect for the data that it needs to handle, as well as the public data() method to create the list of data that needs to be saved to a database at some point. It also provides getters for email, symbol, and address, and the public pudate_assets() method, which will be used to get and save assets into the database, just as we did in the case of Exchange. As a matter of fact, the techniques followed are exactly the same, so we won't explain them again: source("./requesters/wallet-requester-factory.R", chdir = TRUE) Wallet <- R6Class( "Wallet", public = list( initialize = function(email, symbol, address, note) { private$requester <- wallet_requester_factory(symbol, address) private$email <- email private$symbol <- symbol private$address <- address private$note <- note }, data = function() { return(list( email = private$email, symbol = private$symbol, address = private$address, note = private$note )) }, get_email = function() { return(as.character(private$email)) }, get_symbol = function() { return(as.character(private$symbol)) }, get_address = function() { return(as.character(private$address)) }, update_assets = function(timestamp, storage) { private$timestamp <- timestamp storage$write_assets(private$assets()) } ), private = list( timestamp = NULL, requester = NULL, email = NULL, symbol = NULL, address = NULL, note = NULL, assets = function() { return (lapply ( private$requester$assets(), private$insert_metadata)) }, insert_metadata = function(asset) { timestamp(asset) <- unclass(private$timestamp) email(asset) <- private$email return(asset) } ) ) Implementing our wallet requesters The WalletRequester will be conceptually similar to the ExchangeRequester. It will be an interface, and will be implemented in our BTCRequester and LTCRequester interfaces. As you can see, it requires a public method called assets() to be implemented and to return a list of Asset instances. 
It also requires a private create_asset() method to be implemented, which should return individual Asset instances, and a private url method that will build the URL required for the API call. It offers a request() private method that will be used by implementations to retrieve data from external APIs: source("../../../utilities/requester.R") WalletRequester <- R6Class( "WalletRequester", public = list( assets = function() list() ), private = list( requester = Requester$new(), create_asset = function() NULL, url = function(address) "", request = function(URL) { return(private$requester$request(URL)) } ) ) The BTCRequester and LTCRequester implementations are shown below for completeness, but will not be explained. If you have followed everything so far, they should be easy to understand: source("./wallet-requester.R") source("../../asset.R") BTCRequester <- R6Class( "BTCRequester", inherit = WalletRequester, public = list( initialize = function(address) { private$address <- address }, assets = function() { total <- as.numeric(private$request(private$url())) if (total > 0) { return(list(private$create_asset(total))) } return(list()) } ), private = list( address = "", url = function(address) { return(paste( "https://chainz.cryptoid.info/btc/api.dws", "?q=getbalance", "&a=", private$address, sep = "" )) }, create_asset = function(total) { return(new( "Asset", email = "", timestamp = "", name = "Bitcoin", symbol = "BTC", total = total, address = private$address )) } ) ) source("./wallet-requester.R") source("../../asset.R") LTCRequester <- R6Class( "LTCRequester", inherit = WalletRequester, public = list( initialize = function(address) { private$address <- address }, assets = function() { total <- as.numeric(private$request(private$url())) if (total > 0) { return(list(private$create_asset(total))) } return(list()) } ), private = list( address = "", url = function(address) { return(paste( "https://chainz.cryptoid.info/ltc/api.dws", "?q=getbalance", "&a=", private$address, sep = "" )) }, create_asset = function(total) { return(new( "Asset", email = "", timestamp = "", name = "Litecoin", symbol = "LTC", total = total, address = private$address )) } ) ) The wallet_requester_factory() works just as the other factories; the only difference is that in this case, we have two possible implementations that can be returned, which can be seen in the if statement. If we decided to add a WalletRequester for another cryptocurrency, such as Ether, we could simply add the corresponding branch here, and it should work fine: source("./btc-requester.R") source("./ltc-requester.R") wallet_requester_factory <- function(symbol, address) { if (symbol == "BTC") { return(BTCRequester$new(address)) } else if (symbol == "LTC") { return(LTCRequester$new(address)) } else { stop("Unknown symbol") } } Hope you enjoyed this interesting tutorial and were able to retrieve live data for your application. To know more, do check out the R Programming By Example and start handling data efficiently with modular, maintainable and expressive codes. Read More Introduction to R Programming Language and Statistical Environment 20 ways to describe programming in 5 words  
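As a quick way to exercise these pieces interactively, the sketch below, which is not part of the original code base, calls the Requester and the exchange requester factory directly after sourcing the definitions above; note that the CoinMarketCap v1 endpoint used throughout this tutorial may be rate-limited or retired, so treat the URL as illustrative.

# Standalone sketch: assumes the class and factory definitions above are sourced.
requester <- Requester$new()
markets_df <- requester$request("https://api.coinmarketcap.com/v1/ticker")
print(head(markets_df[, c("name", "symbol", "price_usd")]))

# The same data wrapped into Market objects via the factory:
cmc <- exchange_requester_factory("CoinMarketCap")
markets <- cmc$markets()
print(markets[[1]]$get_price("usd"))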

This week on Packt Hub – 20 April 2018

Aarthi Kumaraswamy
20 Apr 2018
4 min read
It’s been another busy week on the Packt Hub with a lot of tech news developments, hands on tutorials and insights on the latest and trending technological trends like Vue.js, Kotlin, GDPR and more. There has been plenty of interesting stories too from around the world. Here’s what you might have missed in the last 7 days - Tutorials, insights and new on technology… Featured Interview Selenium and data-driven testing: An interview with Carl Cocchiaro Data-driven testing has become a lot easier thanks to tools like Selenium. That’s good news for everyone in software development. We spoke to Carl Cocchiato about data-driven testing and much more. Carl is the author of Selenium Framework Design in Data-Driven Testing. Tech news Bulletins Cloud and networking news bulletin – Friday 20 April Programming news bulletin – Thursday 19 April Security news bulletin – Wednesday 18 April Web development news bulletin – Tuesday 17 April Data science news bulletin – Monday 16 April Data news in depth MongoDB going relational with 4.0 release [Editor’s Pick] JupyterLab v0.32.0 releases TensorFlow 1.8.0-rc0 releases Development & programming news in depth What’s new in ECMAScript 2018 (ES9)? [Editor’s Pick] Understanding the hype behind Magic Leap’s New Augmented Reality Headsets Scrivito launches serverless JavaScript CMS What’s new in Unreal Engine 4.19? [Editor’s Pick] Leap Motion open sources its $100 augmented reality headset, North Star [Editor’s Pick] Unity 2D & 3D game kits simplify Unity game development for beginners Cloud & networking news in depth What to expect from upcoming Ubuntu 18.04 release [Editor’s Pick] Google announces the largest overhaul of their Cloud Speech-to-Text What’s new in Docker Enterprise Edition 2.0? Couchbase mobile 2.0 is released Other news Splunk Industrial Asset Intelligence (Splunk IAI) targets Industrial IoT marketplace [Editor’s Pick] Tutorials This week we bring data and machine learning folks, tutorials on real-time streaming with Azure stream analytics, creating live visual dashboards in Power BI and using Q learning to build an options trading web app. For web developers, this week is all about Vue.js with a little Go on the side. Android rules mobile development while Kotlin is gaining more traction as a serious programming language. 
Data tutorials How to get started with Azure Stream Analytics and 7 reasons to choose it Performing Vehicle Telemetry job analysis with Azure Stream Analytics tools Building a live interactive visual dashboard in Power BI with Azure Stream [Editor’s Pick] Cross-validation in R for predictive models How to build an options trading web app using Q-learning [Editor’s Pick] Development & programming tutorials Web development tutorials How to build a basic server-side chatbot using Go Building your first Vue.js 2 Web application [Editor’s Pick] How to test node applications using Mocha framework Game and Mobile development tutorials How to Secure and Deploy an Android App Packaging and publishing an Oracle JET Hybrid mobile application Build your first Android app with Kotlin Behavior Scripting in C# and Javascript for game developers [Editor’s Pick] Programming tutorials Getting started with Kotlin programming How to boost R codes using C++ and Fortran Other Tutorials Meet the Coolest Raspberry Pi Family Member: Raspberry Pi Zero W Wireless Build your first Raspberry Pi project This week’s opinions, analysis, and insights Some hot topics in focus this week are Chaos engineering, GDPR, AIOps, the reactive Manifesto, cloud security threats and more. See what we learned from the recently concluded IBM Think and VUECONF.US conferences in this week’s special coverage. Data Insights What is AIOps and why is it going to be important? AI on mobile: How AI is taking over the mobile devices marketspace How machine learning as a service is transforming cloud IBM Think 2018: 6 key takeaways for developers AI-powered Robotics: Autonomous machines in the making What your organization needs to know about GDPR [Editor’s Pick] What we learned from IBM Research’s ‘5 in 5’ predictions presented at Think 2018 [Editor’s Pick] What is a support vector machine? [Editor’s Pick] 4 Encryption options for your SQL Server GDPR is good for everyone: businesses, developers, customers Data science on Windows is a big no [Editor’s Pick] Development & Programming Insights What is the Reactive Manifesto? Vue.js developer special: What we learned from VUECONF.US 2018 [Editor’s Pick] Top 7 modern Virtual Reality hardware systems Cloud & Networking Insights AWS Fargate makes Container infrastructure management a piece of cake Top 5 cloud security threats to look out for in 2018 [Editor’s Pick] Other Insights Chaos Engineering: managing complexity by breaking things [Editor’s Pick] 5 reasons to choose AWS IoT Core for your next IoT project  

How to test node applications using Mocha framework

Sunith Shetty
20 Apr 2018
12 min read
In today’s tutorial, you will learn how to create your very first test case that tests whether your code is working as expected. If we make a function that's supposed to add two numbers together, we can automatically verify it's doing that. And if we have a function that's supposed to fetch a user from the database, we can make sure it's doing that as well. Now to get started in this section, we'll look at the very basics of setting up a testing suite inside a Node.js project. We'll be testing a real-world function. Installing the testing module In order to get started, we will make a directory to store our code for this chapter. We'll make one on the desktop using mkdir and we'll call this directory node-tests: mkdir node-tests Then we'll change directory inside it using cd, so we can go ahead and run npm init. We'll be installing modules and this will require a package.json file: cd node-tests npm init We'll run npm init using the default values for everything, simply hitting enter throughout every single step: Now once that package.json file is generated, we can open up the directory inside Atom. It's on the desktop and it's called node-tests. From here, we're ready to actually define a function we want to test. The goal in this section is to learn how to set up testing for a Node project, so the actual functions we'll be testing are going to be pretty trivial, but it will help illustrate exactly how to set up our tests. Testing a Node project To get started, let's make a fake module. This module will have some functions and we'll test those functions. In the root of the project, we'll create a brand new directory and I'll call this directory utils: We can assume this will store some utility functions, such as adding a number to another number, or stripping out whitespaces from a string, anything kind of hodge-podge that doesn't really belong to any specific location. We'll make a new file in the utils folder called utils.js, and this is a similar pattern to what we did when we created the weather and location directories in our weather app: You're probably wondering why we have a folder and a file with the same name. This will be clear when we start testing. Now before we can write our first test case to make sure something works, we need something to test. I'll make a very basic function that takes two numbers and adds them together. We'll create an adder function as shown in the following code block: module.exports.add = () => { } This arrow function (=>) will take two arguments, a and b, and inside the function, we'll return the value a + b. Nothing too complex here: module.exports.add = () => { return a + b; }; Now since we just have one expression inside our arrow function (=>) and we want to return it, we can actually use the arrow function (=>) expression syntax, which lets us add our expression as shown in the following code, a + b, and it'll be implicitly returned: module.exports.add = (a, b) => a + b; There's no need to explicitly add a return keyword on to the function. Now that we have utils.js ready to go, let's explore testing. We'll be using a framework called Mocha in order to set up our test suite. This will let us configure our individual test cases and also run all of our test files. This will be really important for creating and running tests. The goal here is to make testing simple and we'll use Mocha to do just that. Now that we have a file and a function we actually want to test, let's explore how to create and run a test suite. 
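Before wiring up a test framework, you can sanity-check the module by hand with a throwaway script; this is not part of the book's project, just a quick way to confirm that module.exports.add is consumable:

// scratch.js -- throwaway sanity check, run from the project root with: node scratch.js
// (assumes utils/utils.js exists as defined above)
const utils = require('./utils/utils');

console.log(utils.add(33, 11)); // expected output: 44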
Mocha – the testing framework

We'll be doing the testing using the popular testing framework Mocha, which you can find at mochajs.org. This is a fantastic framework for creating and running test suites. It's widely used, and its page has all the information you'd ever want to know about setting it up, configuring it, and all the cool bells and whistles it includes. If you scroll down on this page, you'll be able to see a table of contents. Here you can explore everything Mocha has to offer. We'll be covering most of it in this article, but for anything we don't cover, you can always learn about it on this page. Now that we've explored the Mocha documentation page, let's install it and start using it.

Inside the Terminal, we'll install Mocha. First up, let's clear the Terminal output. Then we'll install it using the npm install command. When you use npm install, you can also use the shortcut npm i; this has the exact same effect. We'll use npm i with mocha, specifying the version @3.0.0. This is the most recent version of the library as of this writing: npm i mocha@3.0.0 Now we do want to save this into the package.json file. Previously, we've used the save flag, but here we'll use a new flag, called save-dev. The save-dev flag will save this package for development purposes only, and that's exactly what Mocha will be for. We don't actually need Mocha to run our app on a service like Heroku; we just need Mocha locally on our machine to test our code. When you use the save-dev flag, it installs the module much the same way: npm i mocha@3.0.0 --save-dev But if you explore package.json, you'll see things are a little different. Inside our package.json file, instead of a dependencies attribute, we have a devDependencies attribute: In there we have Mocha, with the version number as the value. The devDependencies are fantastic because they're not going to be installed on Heroku, but they will be installed locally. This will keep the Heroku boot times really quick, since it won't need to install modules that it's not actually going to need. We'll be installing both devDependencies and dependencies in most of our projects from here on out.

Creating a test file for the add function

Now that we have Mocha installed, we can go ahead and create a test file. In the utils folder, we'll make a new file called utils.test.js. This file will store our test cases. We'll not store our test cases in utils.js; that will be our application code. Instead, we'll make a file called utils.test.js. When we use this test.js extension, we're basically telling our app that this will store our test cases. When Mocha goes through our app looking for tests to run, it should run any file with this extension. Now that we have a test file, the only thing left to do is create a test case. A test case is a function that runs some code; if things go well, the test is considered to have passed, and if things do not go well, the test is considered to have failed. We can create a new test case using it, a function provided by Mocha. We'll be running our project test files through Mocha, so there's no reason to import it or do anything like that. We simply call it just like this: it(); Now, it lets us define a new test case, and it takes two arguments: the first argument is a string, and the second argument is a function. First up, we'll have a string description of what exactly the test is doing.
If we're testing that the adder function works, we might have something like: it('should add two numbers'); Notice here that it plays into the sentence. It should read like "it should add two numbers", describing exactly what the test will verify. This is called behavior-driven development, or BDD, and those are the principles that Mocha was built on. Now that we've set up the test string, the next thing to do is add a function as the second argument: it('should add two numbers', () => { }); Inside this function, we'll add the code that tests that the add function works as expected. This means it will probably call add and check that the value that comes back is the appropriate value given the two numbers passed in. That means we do need to import the utils.js file up at the top. We'll create a constant called utils, setting it equal to the return result from requiring utils. We're using ./ since we will be requiring a local file. It's in the same directory, so we can simply type utils without the js extension, as shown here: const utils = require('./utils'); it('should add two numbers', () => { }); Now that we have the utils library loaded in, inside the callback we can call it. Let's make a variable to store the return result. We'll call this one res. And we'll set it equal to utils.add, passing in two numbers. Let's use something like 33 and 11: const utils = require('./utils'); it('should add two numbers', () => { var res = utils.add(33, 11); }); We would expect to get 44 back. Now at this point, we do have some code inside our test suite, so let's run it. We'll do that by configuring the test script. Currently, the test script simply prints a message to the screen saying that no tests exist. What we'll do instead is call Mocha. As shown in the following code, we'll be calling Mocha, passing in as the one and only argument the actual files we want to test. We can use a globbing pattern to specify multiple files. In this case, we'll be using ** to look in every single directory. We're looking for a file called utils.test.js: "scripts": { "test": "mocha **/utils.test.js" }, Now this is a very specific pattern. It's not going to be particularly useful. Instead, we can swap out the file name with a star as well. Now we're looking for any file in the project that has a file name ending in .test.js: "scripts": { "test": "mocha **/*.test.js" }, And this is exactly what we want. From here, we can run our test suite by saving package.json and moving to the Terminal. We'll use the clear command to clear the Terminal output, and then we can run our test script using the command shown as follows: npm test When we run this, we'll execute that Mocha command: It'll go off, fetch all of our test files, run all of them, and print the results on the screen inside the Terminal, as shown in the preceding screenshot. Here we can see we have a green checkmark next to our test, should add two numbers. Next, we have a little summary: one passing test, and it happened in 8 milliseconds. Now in our case, we don't actually assert anything about the number that comes back. It could be 700 and we wouldn't care. The test will always pass.
To make a test fail what we have to do is throw an error. That means we can throw a new error and we pass into the constructor function whatever message we want to use as the error as shown in the following code block. In this case, I could say something like Value not correct: const utils = require('./utils'); it('should add two numbers', () => { var res = utils.add(33, 11); throw new Error('Value not correct') }); Now with this in place, I can save the test file and rerun things from the Terminal by rerunning npm test, and when we do that now we have 0 tests passing and we have 1 test failing: Next we can see the one test is should add two numbers, and we get our error message, Value not correct. When we throw a new error, the test fails and that's exactly what we want to do for add. Creating the if condition for the test Now, we'll create an if statement for the test. If the response value is not equal to 44, that means we have a problem on our hands and we'll throw an error: const utils = require('./utils'); it('should add two numbers', () => { var res = utils.add(33, 11); if (res != 44){ } }); Inside the if condition, we can throw a new error and we'll use a template string as our message string because I do want to use the value that comes back in the error message. I'll say Expected 44, but got, then I'll inject the actual value, whatever happens to come back: const utils = require('./utils'); it('should add two numbers', () => { var res = utils.add(33, 11); if (res != 44){ throw new Error(`Expected 44, but got ${res}.`); } }); Now in our case, everything will line up great. But what if the add method wasn't working correctly? Let's simulate this by simply tacking on another addition, adding on something like 22 in utils.js: module.exports.add = (a, b) => a + b + 22; I'll save the file, rerun the test suite: Now we get an error message: Expected 44, but got 66. This error message is fantastic. It lets us know that something is going wrong with the test and it even tells us exactly what we got back and what we expected. This will let us go into the add function, look for errors, and hopefully fix them. Creating test cases doesn't need to be something super complex. In this case, we have a simple test case that tests a simple function. To summarize, we looked into basic testing of a node app. We explored the testing framework, Mocha which can be used for creating and running test suites. You read an excerpt from a book written by Andrew Mead, titled Learning Node.js Development. In this book, you will learn how to build, deploy, and test Node apps. Developing Node.js Web Applications How is Node.js Changing Web Development?        
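As a side note to the manual if/throw check built above, a common refinement, not covered in this excerpt, is to lean on Node's built-in assert module so the comparison and the error message are generated for you; a minimal sketch of the same test written that way:

// utils/utils.test.js -- the same test case, using Node's built-in assert module
const assert = require('assert');
const utils = require('./utils');

it('should add two numbers', () => {
  var res = utils.add(33, 11);
  // throws an AssertionError (failing the test) if res is not strictly equal to 44
  assert.strictEqual(res, 44);
});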

Build your first Raspberry Pi project

Gebin George
20 Apr 2018
7 min read
In today's tutorial, we will build a simple Raspberry Pi 3 project. Since our Raspberry Pi now runs Windows 10 IoT Core, .NET Core applications will run on it, including Universal Windows Platform (UWP) applications. From a blank solution, let's create our first Raspberry Pi application. Choose Add and New Project. In the Visual C# category, select Blank App (Universal Windows). Let's call our project FirstApp. Visual Studio will ask us for target and minimum platform versions. Check the screenshot and make sure the version you select is lower than the version installed on your Raspberry Pi. In our case, the Raspberry Pi runs Build 15063. This is the March 2017 release. So, we accept Build 14393 (July 2016) as the target version and Build 10586 (November 2015) as the minimum version. If you want to target the Windows 10 Fall Creators Update, which supports .NET Core 2, you should select Build 16299 for both. In the Solution Explorer, we should now see the files of our new UWP project: New project Adding NuGet packages We proceed by adding functionality to our app from downloadable packages, or NuGets. From the References node, right-click and select Manage NuGet Packages. First, go to the Updates tab and make sure the packages that you already have are updated. Next, go to the Browse tab, type Firmata in the search box, and press Enter. You should see the Windows-Remote-Arduino package. Make sure to install it in your project. In the same way, search for the Waher.Events package and install it. Aggregating capabilities Since we're going to communicate with our Arduino using a USB serial port, we must make a declaration in the Package.appxmanifest file stating this. If we don't do this, the runtime environment will not allow the app to do it. Since this option is not available in the GUI by default, you need to edit the file using the XML editor. Make sure the serialCommunication device capability is added, as follows: <Capabilities> <Capability Name="internetClient" /> <DeviceCapability Name="serialcommunication"> <Device Id="any"> <Function Type="name:serialPort" /> </Device> </DeviceCapability> </Capabilities> Initializing the application Before we do any communication with the Arduino, we need to initialize the application. We do this by finding the OnLaunched method in the App.xml.cs file. After the Window.Current.Activate() call, we make a call to our Init() method where we set up the application. Window.Current.Activate(); Task.Run((Action)this.Init); We execute our initialization method from the thread pool, instead of the standard thread. This is done by calling Task.Run(), defined in the System.Threading.Tasks namespace. The reason for this is that we want to avoid locking the standard thread. Later, there will be a lot of asynchronous calls made during initialization. To avoid problems, we should execute all these from the thread pool, instead of from the standard thread. We'll make the method asynchronous: private async void Init() { try { Log.Informational("Starting application."); ... } catch (Exception ex) { Log.Emergency(ex); MessageDialog Dialog = new MessageDialog(ex.Message, "Error"); await Dialog.ShowAsync(); } IoT Desktop } The static Log class is available in the Waher.Events namespace, belonging to the NuGet we included earlier. (MessageDialog is available in Windows.UI.Popups, which might be a new namespace if you're not familiar with UWP.) Communicating with the Arduino The Arduino is accessed using Firmata. 
To do that, we use the Windows.Devices.Enumeration, Microsoft.Maker.RemoteWiring, and Microsoft.Maker.Serial namespaces, available in the Windows-Remote-Arduino NuGet. We begin by enumerating all the devices it finds: DeviceInformationCollection Devices = await UsbSerial.listAvailableDevicesAsync(); foreach (DeviceInformationDeviceInfo in Devices) { If our Arduino device is found, we will have to connect to it using USB: if (DeviceInfo.IsEnabled&&DeviceInfo.Name.StartsWith("Arduino")) { Log.Informational("Connecting to " + DeviceInfo.Name); this.arduinoUsb = new UsbSerial(DeviceInfo); this.arduinoUsb.ConnectionEstablished += () => Log.Informational("USB connection established."); Attach a remote device to the USB port class: this.arduino = new RemoteDevice(this.arduinoUsb); We need to initialize our hardware, when the remote device is ready: this.arduino.DeviceReady += () => { Log.Informational("Device ready."); this.arduino.pinMode(13, PinMode.OUTPUT); // Onboard LED. this.arduino.digitalWrite(13, PinState.HIGH); this.arduino.pinMode(8, PinMode.INPUT); // PIR sensor. MainPage.Instance.DigitalPinUpdated(8, this.arduino.digitalRead(8)); this.arduino.pinMode(9, PinMode.OUTPUT); // Relay. this.arduino.digitalWrite(9, 0); // Relay set to 0 this.arduino.pinMode("A0", PinMode.ANALOG); // Light sensor. MainPage.Instance.AnalogPinUpdated("A0", this.arduino.analogRead("A0")); }; Important: the analog input must be set to PinMode.ANALOG, not PinMode.INPUT. The latter is for digital pins. If used for analog pins, the Arduino board and Firmata firmware may become unpredictable. Our inputs are then reported automatically by the Firmata firmware. All we need to do to read the corresponding values is to assign the appropriate event handlers. In our case, we forward the values to our main page, for display: this.arduino.AnalogPinUpdated += (pin, value) => { MainPage.Instance.AnalogPinUpdated(pin, value); }; this.arduino.DigitalPinUpdated += (pin, value) => { MainPage.Instance.DigitalPinUpdated(pin, value); }; Communication is now set up. If you want, you can trap communication errors, by providing event handlers for the ConnectionFailed and ConnectionLost events. All we need to do now is to initiate communication. We do this with a simple call: this.arduinoUsb.begin(57600, SerialConfig.SERIAL_8N1); Testing the app Make sure the Arduino is still connected to your PC via USB. If you run the application now (by pressing F5), it will communicate with the Arduino, and display any values read to the event log. In the GitHub project, I've added a couple of GUI components to our main window, that display the most recently read pin values on it. It also displays any event messages logged. We leave the relay for later chapters. For a more generic example, see the Waher.Service.GPIO project at https://github.com/PeterWaher/IoTGateway/tree/master/Services/Waher.Service.GPIO. This project allows the user to read and control all pins on the Arduino, as well as the GPIO pins available on the Raspberry Pi directly. Deploying the app You are now ready to test the app on the Raspberry Pi. You now need to disconnect the Arduino board from your PC and install it on top of the Raspberry Pi. The power of the Raspberry Pi should be turned off when doing this. Also, make sure the serial cable is connected to one of the USB ports of the Raspberry Pi. 
Begin by switching the target platform, from Local Machine to Remote Machine, and from x86 to ARM: Run on a remote machine with an ARM processor Your Raspberry Pi should appear automatically in the following dialog. You should check the address with the IoT Dashboard used earlier, to make sure you're selecting the correct machine: Select your Raspberry Pi You can now run or debug your app directly on the Raspberry Pi, using your local PC. The first deployment might take a while since the target system needs to be properly prepared. Subsequent deployments will be much faster. Open the Device Portal from the IoT Dashboard, and take a Screenshot, to see the results. You can also go to the Apps Manager in the Device Portal, and configure the app to be started automatically at startup: App running on the Raspberry Pi To summarize, we saw how to practically build a simple application using Raspberry Pi 3 and C#. You read an excerpt from the book, Mastering Internet of Things, written by Peter Waher. This book will help you design and implement scalable IoT solutions with ease. Meet the Coolest Raspberry Pi Family Member: Raspberry Pi Zero W Wireless AI and the Raspberry Pi: Machine Learning and IoT, What’s the Impact?    

Chaos Engineering: managing complexity by breaking things

Richard Gall
20 Apr 2018
7 min read
Chaos Engineering is based on a fundamental assertion about software infrastructure today: that it is inherently chaotic. Or, to be more specific, it is chaotic because it is complex. Whereas software infrastructure used to be centralized, owned and licensed by large enterprise vendors, today much of the software that comprises infrastructure is open source. This is where we get back to chaos - because software infrastructure is composed of many different parts, the way these parts interact can be unpredictable. Chaos Engineering is an attempt to acknowledge that fact and develop software accordingly. Who invented Chaos Engineering? Chaos Engineering began at Netflix. That makes sense when you consider the complexity of the Netflix technology stack and the way the company has scaled over the last 5 years or so. Netflix built a number of tools to help adopt this chaos-first approach, the most prominent being Chaos Monkey. First launched in 2011 and open-sourced in 2012, Chaos Monkey was a tool that randomly selected instances in production and pulled them down; a little bit like monkeys pulling off your windscreen wipers in a safari park. However, Chaos Monkey became part of a wider suite of tools - called the Simian Army - that were built by Netflix to cause chaos in different parts of its infrastructure. Here are the other two components used to simulate chaos: Chaos Gorilla causes big trouble by pulling down an entire AWS availability zone, while Latency Monkey delays communication, essentially simulating poor network performance. From that point Chaos Engineering grew. A number of large Silicon Valley organizations have adopted similar approaches. For example, Facebook's Project Storm simulates data center failures on a huge scale, while Uber uses a tool called uDestroy. Slack has recently spoken in detail on the importance of stress testing its software too; the company is looking to build an engineering team simply to perform Chaos Engineering and improve Slack's reliability. One of the most interesting figures in Chaos Engineering is a man called Kolton Andrus. Andrus used to work at Amazon and Google, but today he is the CEO and founder of Gremlin, a startup that "helps engineers build resilient systems". Essentially, Andrus helped to develop the concept of Chaos Engineering while he was working at Netflix. Gremlin is his vehicle for making it accessible to others. Chaos Engineering in practice Now that the conceptual stuff is out of the way, here's how Chaos Engineering works. It's actually quite straightforward: Chaos Engineering simulates all sorts of unpredictable situations and scenarios in order to see how the system responds. It's effectively a form of stress testing. As we've seen, over the past few years companies have built their own tools to allow them to stress test their infrastructure. But Gremlin is taking the approach of offering this as a service. Its product is described as 'resiliency-as-a-service': a whole library of 'attacks' which can replicate different types of outages within a system. These are what it calls 'chaos experiments', which allow you to 'identify weak points in your system and fix them before they become a problem'. In this sense, Chaos Engineering is a bit like taking the principles of penetration testing and applying them to software testing more broadly. By simulating everything that could possibly go wrong, it allows you to make much better optimization decisions. The principles of Chaos Engineering are documented here. 
This is effectively its 'manifesto'. There's a lot in there worth reading, but here are the five principles that any sort of testing or experimentation should aspire to: Base your testing hypothesis on steady state behavior. Consider your infrastructure holistically; making individual parts work is important, but it is not the priority. Simulate a variety of real-world events. These could be hardware or software failures, or simply external changes like spikes in traffic. What's important is that they're all unpredictable. Test in production. Your tests should be authentic. Automate! Testing can be laborious and require a lot of manual work. Make use of automation tools to do lots of different tests without taking up too much of your time. Don't cause unnecessary pain. While it's important that your stress tests are authentic, the impact must be contained and minimized by the engineer. Why Chaos Engineering now? Chaos Engineering isn't particularly new. As you've seen, Netflix has been doing it since 2011. But it does feel more urgent and relevant today. That's because the complexity of the software infrastructure behind many of the biggest Silicon Valley companies is now mainstream. It's normal. Cloud isn't an exotic buzzword any more - it's a reality (a reality that often has failures). Microservices are common - they're a commonsense way of building better applications and websites. Alongside this increased complexity, there is also a growing awareness of how much software outages can cost businesses. In a white paper, Gremlin makes a big deal out of how much money is lost due to outages. It cites BA's system failure in summer 2017, which left passengers stranded all over the world. This outage was estimated to have cost BA $135 million. It also refers to the Amazon S3 outage in March 2017, which is believed to have cost Amazon's customers $150 million. So - outages cost money. Yes, it's marketing spiel from Gremlin, but it's also true. It doesn't take a genius to work out that if your eCommerce site is down for an hour, you're going to have lost a lot of money. Because software performance is so tied up with business performance, it feels incredibly fragile. That's why Chaos Engineering is perhaps more important and popular than ever. It's a way of countering that fragility. The key challenges of Chaos Engineering Chaos Engineering poses many challenges to software engineering teams. First and foremost, it requires a big cultural change. If you're intent on breaking everything, there are no rules about how things should work or what you're trying to build. Instead, you're looking for the best way to build software that performs for the user. More practically, Chaos Engineering isn't that easy to do in a cost-effective manner. Everything Gremlin details in its white paper is very much true - of course outages cost a hell of a lot. But creative destruction and experimentation feel like an expensive route through software projects. It's not hard to see how it might appear self-indulgent, especially to a company or organization where software isn't properly understood. And more to the point, how often do businesses actually do the smart thing when they're building software? Long-term projects are always difficult. So much software evolves pragmatically - often for the worse. Adding in an extra layer of experimentation and detailed testing is a weird mix of bacchanalian and hyper-organized, something that many organizations just couldn't process or properly understand. 
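Before turning to where the discipline goes next, it helps to see how small a chaos experiment can be. The following Kotlin sketch is purely illustrative: the Instance class, the in-memory pool, and the terminate function are hypothetical stand-ins for whatever your platform actually provides, and real tools such as Chaos Monkey or Gremlin's attack library operate against live infrastructure rather than a list in memory:

// Hypothetical model of a running instance; a real tool would query AWS, Kubernetes, and so on.
data class Instance(val id: String, val businessCritical: Boolean)

// Simulated pool of instances so that the sketch runs on its own.
val pool = mutableListOf(
    Instance("web-1", businessCritical = false),
    Instance("web-2", businessCritical = false),
    Instance("payments-1", businessCritical = true)
)

// Stand-in for a real termination call (hypothetical).
fun terminate(instance: Instance) {
    pool.remove(instance)
    println("Terminated ${instance.id}")
}

// One chaos experiment: kill a single, non-critical instance, then observe the system.
fun main() {
    val candidates = pool.filter { !it.businessCritical } // keep the blast radius small
    if (candidates.isEmpty()) return                      // nothing safe to break today
    val victim = candidates.random()
    terminate(victim)
    // The kill is the easy part; the value of the experiment comes from checking
    // that the steady state (user-facing behaviour) still holds while the victim is gone.
    println("Remaining instances: ${pool.map { it.id }}")
}

The experiment itself is trivial; the engineering work lies in defining the steady state you expect to hold while the instance is gone, and in keeping the blast radius small enough that a failed hypothesis doesn't become an outage of its own.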
Chaos engineering and the future of software development Chaos Engineering certainly looks like the future of software development. The only question is whether services like those provided by Gremlin will take off. To understand the true value of stress testing your infrastructure you do need at least a modicum awareness of the complexity of your infrastructure. Indeed, you probably need to have a conversation about what services and dependencies are most business critical. Or rather, which ones most impact the user. That's something this TechCrunch piece addresses: "Testing can... be very political. Finding the points of failure in a system might force deep conversations about a particular software architecture and its robustness in the face of tough situations. A particular company might be deeply invested in a specific technical roadmap (e.g. microservices) that chaos engineering tests show is not as resilient to failures as originally predicted." This means there is going to be a question mark over the extent to which Chaos Engineering ever really enters the mainstream. How many businesses want to have these conversations? It's not just about the inclination - it's also about the time and money. It's an innovative software engineering approach that really calls people's bluff when they talk about innovation. It asks difficult questions about how and why you innovate: do you do new things because you think you should? Is this new thing going to be good for the business? And how well will it work for users? Of course these questions are vital when you're building software. But they rarely make building software easier.

How to Secure and Deploy an Android App

Sugandha Lahoti
20 Apr 2018
17 min read
In this article, we will be covering two extremely important Android-related topics: Android application security and Android application deployment. We will kick off the post by discussing Android application security. Securing an Android application It should come as no surprise that security is an important consideration when building software. Besides the security measures put in place in the Android operating system, it is important that developers pay extra attention to ensure that their applications meet the set security standards. In this section, a number of important security considerations and best practices will be broken down for your understanding. Following these best practices will make your applications less vulnerable to malicious programs that may be installed on a client device. Data storage All things being equal, the privacy of data saved by an application to a device is the most common security concern in developing an Android application. Some simple rules can be followed to make your application data more secure. Securing your data when using internal storage As we saw in the previous chapter, internal storage is a good way to save private data on a device. Every Android application has a corresponding internal storage directory in which private files can be created and written to. These files are private to the creating application, and as such cannot be accessed by other applications on the client device. As a rule of thumb, if data should only be accessible by your application and it is possible to store it in internal storage, do so. Feel free to refer to the previous chapter for a refresher on how to use internal storage. Securing your data when using external storage External storage files are not private to applications, and, as such, can be easily accessed by other applications on the same client device. As a result of this, you should consider encrypting application data before storing it in external storage. There are a number of libraries and packages that can be used to encrypt data prior to saving it to external storage. Facebook's Conceal (http://facebook.github.io/conceal/) library is a good option for external-storage data encryption. In addition to this, as another rule of thumb, do not store sensitive data in external storage. This is because external storage files can be manipulated freely. Validation should also be performed on input retrieved from external storage, because data stored there cannot be trusted. Securing your data when using content providers Content providers can either prevent or enable external access to your application data. Use the android:exported attribute when registering your content provider in the manifest file to specify whether external access to the content provider should be permitted. Set android:exported to true if you wish the content provider to be exported, otherwise set the attribute to false. In addition to this, content provider query methods—for example, query(), update(), and delete()—should be used with parameterized selection arguments to prevent SQL injection (a code injection technique that involves the execution of malicious SQL statements in an entry field by an attacker). Networking security There are a number of best practices that should be followed when performing network transactions via an Android application. These best practices can be split into different categories. 
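Before moving on to those categories, here is a minimal Kotlin sketch of the parameterized query approach described above. The users table, its columns, and the findUserByName function are hypothetical names used purely for illustration; the point is that user input travels through selectionArgs and never through string concatenation:

import android.database.Cursor
import android.database.sqlite.SQLiteDatabase

// Hypothetical lookup: the "users" table and its columns exist only for illustration.
fun findUserByName(db: SQLiteDatabase, userName: String): Cursor {
    // The ? placeholder is bound via selectionArgs, so userName is always treated
    // as data, never as executable SQL - this is what blocks SQL injection.
    return db.query(
        "users",                 // table
        arrayOf("id", "name"),   // columns to return
        "name = ?",              // selection with a placeholder
        arrayOf(userName),       // selectionArgs, bound safely at query time
        null, null, null         // groupBy, having, orderBy
    )
}

The same pattern applies inside a content provider implementation: route externally supplied values through the selection arguments of query(), update(), and delete() rather than building the selection string by hand.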
We shall speak about Internet Protocol (IP) networking and telephony networking best practices in this section. IP networking When communicating with a remote computer via IP, it is important to ensure that your application makes use of HTTPS wherever possible (that is, wherever it is supported by the server). One major reason for doing this is that devices often connect to insecure networks, such as public wireless connections. HTTPS ensures encrypted communication between clients and servers, regardless of the network they are connected to. In Java, an HttpsURLConnection can be used for secure data transfer over a network. It is important to note that data received via an insecure network connection should not be trusted. Telephony networking In instances where data needs to be transferred freely between a server and client applications, Firebase Cloud Messaging (FCM)—along with IP networking—should be utilized instead of other means, such as the Short Messaging Service (SMS) protocol. FCM is a multi-platform messaging solution that facilitates the seamless and reliable transfer of messages between applications. SMS is not a good candidate for transferring data messages, because: it is not encrypted; it is not strongly authenticated; messages sent via SMS are subject to spoofing; and SMS messages are subject to interception. Input validation The validation of user input is extremely important in order to avoid security risks that may arise. One such risk, as explained in the Using content providers section, is SQL injection. The malicious injection of SQL script can be prevented by the use of parameterized queries and the extensive sanitization of inputs used in raw SQL queries. In addition to this, inputs retrieved from external storage must be appropriately validated because external storage is not a trusted data source. Working with user credentials The risk of phishing can be alleviated by reducing the requirement of user credential input in an application. Instead of constantly requesting user credentials, consider using an authorization token. Eliminate the need for storing usernames and passwords on the device. Instead, make use of a refreshable authorization token. Code obfuscation Before publishing an Android application, it is imperative to utilize a code obfuscation tool, such as ProGuard, to prevent individuals from getting unhindered access to your source code by various means, such as decompilation. ProGuard comes prepackaged with the Android SDK, and, as such, no dependency inclusion is required. It is automatically included in the build process if you specify your build type to be a release. You can find out more about ProGuard here: https://www.guardsquare.com/en/proguard. Securing broadcast receivers By default, a broadcast receiver component is exported and, as a result, can be invoked by other applications on the same device. You can control which applications can access your app's broadcast receivers by applying security permissions to them. Permissions can be set for broadcast receivers in an application's manifest file with the <receiver> element. Securing dynamically loaded code In scenarios in which the dynamic loading of code by your application is necessary, you must ensure that the code being loaded comes from a trusted source. In addition to this, you must reduce the risk of code tampering at all costs. Loading and executing code that has been tampered with is a huge security threat. 
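To make the tampering risk concrete, here is a minimal, illustrative Kotlin sketch of one common mitigation: comparing the SHA-256 digest of a downloaded code bundle against a digest obtained over a trusted channel. The expectedSha256 parameter, and the assumption that you have such a known digest to compare against, are illustrative choices for this example rather than a prescription from the book:

import java.io.File
import java.security.MessageDigest

// Returns true only when the downloaded bundle matches the digest we expect.
// expectedSha256 is a hex-encoded digest obtained over a trusted channel (an assumption for this sketch).
fun isUntampered(downloadedBundle: File, expectedSha256: String): Boolean {
    val digest = MessageDigest.getInstance("SHA-256")
    val actualSha256 = digest.digest(downloadedBundle.readBytes())
        .joinToString("") { byte -> "%02x".format(byte) }
    return actualSha256.equals(expectedSha256, ignoreCase = true)
}

Only if a check like this succeeds should the bundle be handed to a class loader; if the digests differ, discard the download.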
When code is being loaded from a remote server, ensure it is transferred over a secure, encrypted network. Keep in mind that code that is dynamically loaded runs with the same security permissions as your application (the permissions you defined in your application's manifest file). Securing services Unlike broadcast receivers, services are not exported by the Android system by default. The default exportation of a service only happens when an intent filter is added to the declaration of a service in the manifest file. The android:exported attribute should be used to ensure services are exported only when you want them to be. Set android:exported to true when you want a service to be exported and false otherwise. Deploying your Android Application So far, we have taken an in-depth look at the Android system, application development in Android, and some other important topics, such as Android application security. It is time for us to cover our final topic for this article pertaining to the Android ecosystem—launching and publishing an Android application. You may be wondering at this juncture what the words launch and publish mean. A launch is an activity that involves the introduction of a new product to the public (end users). Publishing an Android application is simply the act of making an Android application available to users. Various activities and processes must be carried out to ensure the successful launch of an Android application. There are 15 of these activities in all. They are: Understanding the Android developer program policies Preparing your Android developer account Localization planning Planning for simultaneous release Testing against the quality guideline Building a release-ready APK Planning your application's Play Store listing Uploading your application package to the alpha or beta channel Device compatibility definition Pre-launch report assessment Pricing and application distribution setup Distribution option selection In-app products and subscriptions setup Determining your application's content rating Publishing your application Wow! That's a long list. Don't fret if you don't understand everything on the list. Let's look at each item in more detail. Understanding the Android developer program policies There is a set of developer program policies that were created for the sole purpose of making sure that the Play Store remains a trusted source of software for its users. Consequences exist for the violation of these defined policies. As a result, it is important that you peruse and fully understand these developer policies—their purposes and consequences—before continuing with the process of launching your application. Preparing your Android developer account You will need an Android developer account to launch your application on the Play Store. Ensure that you set one up by signing up for a developer account and confirming the accuracy of your account details. If you ever need to sell products on an Android application of yours, you will need to set up a merchant account. Localization planning Sometimes, for the purpose of localization, you may have more than one copy of your application, with each localized to a different language. When this is the case, you will need to plan for localization early on and follow the recommended localization checklist for Android developers. You can view this checklist here: https://developer.android.com/distribute/best-practices/launch/localization-checklist.html. 
Planning for simultaneous release You may want to launch a product on multiple platforms. This has a number of advantages, such as increasing the potential market size of your product, reducing the barrier of access to your product, and maximizing the number of potential installations of your application. Releasing on numerous platforms simultaneously is generally a good idea. If you wish to do this with any product of yours, ensure you plan for this well in advance. In cases where it is not possible to launch an application on multiple platforms at once, ensure you provide a means by which interested potential users can submit their contact details so as to ensure that you can get in touch with them once your product is available on their platform of choice. Testing against the quality guidelines Quality guidelines provide testing templates that you can use to confirm that your application meets the fundamental functional and non-functional requirements that are expected by Android users. Ensure that you run your applications through these quality guides before launch. You can access these application quality guides here: https://developer.android.com/develop/quality-guidelines/index.html. Building a release-ready application package (APK) A release-ready APK is an Android application that has been packaged with optimizations and then built and signed with a release key. Building a release-ready APK is an important step in the launch of an Android application. Pay extra attention to this step. Planning your application's Play Store listing This step involves the collation of all resources necessary for your product's Play Store listing. These resources include, but are not limited to, your application's log, screenshots, descriptions, promotional graphics, and videos, if any. Ensure you include a link to your application's privacy policy along with your application's Play Store listing. It is also important to localize your application's product listing to all languages that your application supports. Uploading your application package to the alpha or beta channel As testing is an efficient and battle-tested way of detecting defects in software and improving software quality, it is a good idea to upload your application package to alpha and beta channels to facilitate carrying out alpha and beta software testing on your product. Alpha testing and beta testing are both types of acceptance testing. Device compatibility definition This step involves the declaration of Android versions and screen sizes that your application was developed to work on. It is important to be as accurate as possible in this step as defining inaccurate Android versions and screen sizes will invariably lead to users experiencing problems with your application. Pre-launch report assessment Pre-launch reports are used to identify issues found after the automatic testing of your application on various Android devices. Pre-launch reports will be delivered to you, if you opt in to them, when you upload an application package to an alpha or beta channel. Pricing and application distribution setup First, determine the means by which you want to monetize you application. After determining this, set up your application as either a free install or a paid download. After you have set up the desired pricing of your application, select the countries you wish to distribute you applications to. 
Distribution option selection This step involves the selection of devices and platforms—for example, Android TV and Android Wear—that you wish to distribute your app on. After doing this, the Google Play team will be able to review your application. If your application is approved after its review, Google Play will make it more discoverable. In-app products and subscriptions setup If you wish to sell products within your application, you will need to set up your in-app products and subscriptions. Here, you will specify the countries that you can sell into and take care of various monetary-related issues, such as tax considerations. In this step, you will also set up your merchant account. Determining your application's content rating It is necessary that you provide an accurate rating for the application you are publishing to the Play Store. This step is mandated by the Android Developer Program Policies for good reason. It aids the appropriate age group you are targeting to discover your application. Publishing your application Once you have catered for the necessary steps prior to this, you are ready to publish your application to the production channel of the Play Store. Firstly, you will need to roll out a release. A release allows you to upload the APK files of your application and roll out your application to a specific track. At the end of the release procedure, you can publish your application by clicking Confirm rollout. So, that was all we need to know to publish a new application on the Play Store. In most cases, you will not need to follow all these steps in a linear manner, you will just need to follow a subset of the steps—more specifically, those pertaining to the type of application you wish to publish. Releasing your Android app Having signed your  application, you can proceed with completing the required application details toward the goal of releasing your app. Firstly, you need to create a suitable store listing for the application. Open the application  in the Google Play Console and navigate to the store-listing page (this can be done by selecting Store Listing on the side navigation bar). You will need to fill out all the required information in the store listing page before we proceed further. This information includes product details, such as a title, short description, full description, as well as graphic assets and categorization information—including the application type, category and content rating, contact details, and privacy policy. The Google Play Console store listing page is shown in the following screenshot: Once the store listing information has been filled in, the next thing to fill in is the pricing and distribution information. Select Pricing & distribution on the left navigation bar to open up its preference selection page. For the sake of this demonstration, we set the pricing of this app to FREE. We also selected five random countries to distribute this application to. These countries are Nigeria, India, the United States of America, the United Kingdom, and Australia: Besides selecting the type of pricing and the available countries for product distribution, you will need to provide additional preference information. The necessary information to be provided includes device category information, user program information, and consent information. It is now time to add our signed APK to our Google Play Console app. Navigate to App releases | MANAGE BETA | EDIT RELEASE. 
In the page that is presented to you, you may be asked whether you want to opt into Google play app signing: For the sake of this example, select OPT-OUT. Once OPT-OUT is selected, you will be able to choose your APK file for upload from your computer's file system. Select your APK for upload by clicking BROWSE FILES, as shown in the following screenshot: After selecting an appropriate APK, it will be uploaded to the Google Play Console. Once the upload is done, the play console will automatically add a suggested release name for your beta release. This release name is based on the version name of the uploaded APK. Modify the release name if you are not comfortable with the suggestion. Next, add a suitable release note in the text field provided. Once you are satisfied with the data you have input, save and continue by clicking the Review button at the bottom of the web page. After reviewing the beta release, you can roll it out if you have added beta testers to your app. Rolling out a beta release is not our focus, so let's divert back to our main goal: publishing the Messenger app. Having uploaded an APK for your application, you can now complete the mandatory content rating questionnaire. Click the Content rating navigation item on the sidebar and follow the instructions to do this. Once the questionnaire is complete, appropriate ratings for your application will be generated: With the completion of the content rating questionnaire, the application is ready to be published to production. Applications that are published to production are made available to all users on the Google Play Store. On the play console, navigate to App releases | Manage Production | Create releases. When prompted to upload an APK, click the ADD APK FROM LIBRARY button to the right of the screen and select the APK we previously uploaded (the APK with a version name of 1.0) and complete the necessary release details similar to how we did when creating a beta release. Click the review button at the bottom of the page once you are ready to proceed. You will be given a brief release summary in the page that follows: Go through the information presented in the summary carefully. Start the roll out to production once you have asserted that you are satisfied with the information presented to you in the summary. Once you start the roll out to production, you will be prompted to confirm your understanding that your app will become available to users of the Play Store: Click Confirm once you are ready for the app to go live on the Play Store. Congratulations! You have now published your first application to the Google Play Store! In this article, we learned how to secure and publish Android applications to the Google Play Store. We identified security threats to Android applications and fully explained ways to alleviate them, we also noted best practices to follow when developing applications for the Android ecosystem.  Finally, we took a deep dive into the process of application publication to the Play Store covering all the necessary steps for the successful publication of an Android application. You enjoyed an excerpt from the book, Kotlin Programming By Example, written by Iyanu Adelekan. This book will take on Android development with Kotlin, from building a classic game Tetris to a messenger app, a level up in terms of complexity. Build your first Android app with Kotlin Creating a custom layout implementation for your Android app  

Getting started with Kotlin programming

Sugandha Lahoti
19 Apr 2018
14 min read
Learning a programming language is a daunting experience for many people and not one that most individuals generally choose to undertake. Regardless of the problem domain that you may wish to build solutions for, be it application development, networking, or distributed systems, Kotlin programming is a good choice for the development of systems to achieve the required solutions. In other words, a developer can't go wrong with learning Kotlin.  In this article, you will learn the following: The fundamentals of the Kotlin programming language The installation of Kotlin Compiling and running Kotlin programs Working with an IDE Kotlin is a strongly-typed, object-oriented language that runs on the Java Virtual Machine (JVM) and can be used to develop applications in numerous problem domains. In addition to running on the JVM, Kotlin can be compiled to JavaScript, and as such, is an equally strong choice for developing client-side web applications. Kotlin can also be compiled directly into native binaries that run on systems without a virtual machine via Kotlin/Native. The Kotlin programming language was primarily developed by JetBrains – a company based in Saint Petersburg, Russia. The developers at JetBrains are the current maintainers of the language. Kotlin was named after Kotlin island – an island near Saint Petersburg. Kotlin was designed for use in developing industrial-strength software in many domains but has seen the majority of its users come from the Android ecosystem. At the time of writing this post, Kotlin is one of the three languages that have been declared by Google as an official language for Android. Kotlin is syntactically similar to Java. As a matter of fact, it was designed to be a better alternative to Java. As a consequence, there are numerous significant advantages to using Kotlin instead of Java in software development.  Getting started with Kotlin In order to develop the Kotlin program, you will first need to install the Java Runtime Environment (JRE) on your computer. The JRE can be downloaded prepackaged along with a Java Development Kit (JDK). For the sake of this installation, we will be using the JDK. The easiest way to install a JDK on a computer is to utilize one of the JDK installers made available by Oracle (the owners of Java). There are different installers available for all major operating systems. Releases of the JDK can be downloaded from http://www.oracle.com/technetwork/java/javase/downloads/index.html: Clicking on the JDK download button takes you to a web page where you can download the appropriate JDK for your operating system and CPU architecture. Download a JDK suitable for your computer and continue to the next section: JDK installation In order to install the JDK on your computer, check out the necessary installation information from the following sections, based on your operating system. Installation on Windows The JDK can be installed on Windows in four easy steps: Double-click the downloaded installation file to launch the JDK installer. Click the Next button in the welcome window. This action will lead you to a window where you can select the components you want to install. Leave the selection at the default and click Next. The following window prompts the selection of the destination folder for the installation. For now, leave this folder as the default (also take note of the location of this folder, as you will need it in a later step). Click Next. Follow the instructions in the upcoming windows and click Next when necessary. 
You may be asked for your administrator's password, enter it when necessary. Java will be installed on your computer. After the JDK installation has concluded, you will need to set the JAVA_HOME environment variable on your computer. To do this: Open your Control Panel. Select Edit environment variable. In the window that has opened, click the New button. You will be prompted to add a new environment variable. Input JAVA_HOME as the variable name and enter the installation path of the JDK as the variable value. Click OK once to add the environment variable. Installation on macOS In order to install the JDK on macOS, perform the following steps: Download your desired JDK .dmg file. Locate the downloaded .dmg file and double-click it. A finder window containing the JDK package icon is opened. Double-click this icon to launch the installer. Click Continue on the introduction window. Click Install on the installation window that appears. Enter the administrator login and password when required and click Install Software. The JDK will be installed and a confirmation window displayed. Installation on Linux Installation of the JDK on Linux is easy and straightforward using apt-get: Update the package index of your computer. From your terminal, run: sudo apt-get update Check whether Java is already installed by running the following: java -version You'll know Java is installed if the version information for a Java install on your system is printed. If no version is currently installed, run: sudo apt-get install default-jdk That's it! The JDK will be installed on your computer. Compiling Kotlin programs Now that we have the JDK set up and ready for action, we need to install a means to actually compile and run our Kotlin programs. Kotlin programs can be either compiled directly with the Kotlin command-line compiler or built and run with the Integrated Development Environment (IDE). Working with the command-line compiler The command-line compiler can be installed via Homebrew, SDKMAN!, and MacPorts. Another option for setting up the command-line compiler is by manual installation. Installing the command-line compiler on macOS The Kotlin command-line compiler can be installed on macOS in various ways. The two most common methods for its installation on macOS are via Homebrew and MacPorts. Homebrew Homebrew is a package manager for the macOS systems. It is used extensively for the installation of packages required for building software projects. To install Homebrew, locate your macOS terminal and run: /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" You will have to wait a few seconds for the download and installation of Homebrew. After installation, check to see whether Homebrew is working properly by running the following command in your terminal: brew -v If the current version of Homebrew installed on your computer is printed out in the terminal, Homebrew has been successfully installed on your computer. After properly installing Homebrew, locate your terminal and execute the following command: brew install kotlin Wait for the installation to finish, after which you are ready to compile Kotlin programs with the command-line compiler. MacPorts Similar to HomeBrew, MacPorts is a package manager for macOS. Installing MacPorts is easy. It can be installed on a system by: Installing Xcode and the Xcode command-line tools. Agreeing to the Xcode license. This can be done in the terminal by running xcodebuild -license. 
Installing the required version of MacPorts. MacPort versions can be downloaded from https://www.macports.org/install.php. Once downloaded, locate your terminal and run port install kotlin as the superuser: sudo port install kotlin Installing the command-line compiler on Linux Linux users can easily install the command-line compiler for Kotlin with SDKMAN! SDKMAN! This can be used to install packages on Unix-based systems such as Linux and its various distributions, for example, Fedora and Solaris. SDKMAN! can be installed in three easy steps: Download the software on to your system with curl. Locate your terminal and run: curl -s "https://get.sdkman.io" | bash After you run the preceding command, a set of instructions will come up in your terminal. Follow these instructions to complete the installation. Upon completing the instructions, run: source "$HOME/.sdkman/bin/sdkman-init.sh" Run the following: sdk version If the version number of SDKMAN! just installed is printed in your terminal window, the installation was successful. Now that we have SDKMAN! successfully installed on our system, we can install the command-line compiler by running: sdk install kotlin Installing the command-line compiler on Windows In order to use the Kotlin command-line compilers on Windows: Download a GitHub release of the software from https://github.com/JetBrains/kotlin/releases/tag/v1.2.30 Locate and unzip the downloaded file Open the extracted kotlincbin folder Start the command prompt with the folder path You can now make use of the Kotlin compiler from your command line. Running your first Kotlin program Now that we have our command-line compiler set up, let's try it out with a simple Kotlin program. Navigate to your home directory and create a new file named Hello.kt. All Kotlin files have a .kt extension appended to the end of the filename. Open the file you just created in a text editor of your choosing and input the following: // The following program prints Hello world to the standard system output. fun main (args: Array<String>) { println("Hello world!") } Save the changes made to the program file. After the changes have been saved, open your terminal window and input the following command: kotlinc hello.kt -include-runtime -d hello.jar The preceding command compiles your program into an executable, hello.jar. The -include- runtime flag is used to specify that you want the compiled JAR to be self-contained. By adding this flag to the command, the Kotlin runtime library will be included in your JAR. The -d flag specifies that, in this case, we want the output of the compiler to be called. Now that we have compiled our first Kotlin program, we need to run it—after all, there's no fun in writing programs if they can't be run later on. Open your terminal, if it's not already open, and navigate to the directory where the JAR was saved to (in this case, the home directory).  To run the compiled JAR, perform the following: java -jar hello.jar After running the preceding command, you should see Hello world! printed on your display. Congratulations, you have just written your first Kotlin program! Writing scripts with Kotlin As previously stated, Kotlin can be used to write scripts. Scripts are programs that are written for specific runtime environments for the common purpose of automating the execution of tasks. In Kotlin, scripts have the .kts file extension appended to the file name. Writing a Kotlin script is similar to writing a Kotlin program. 
In fact, a script written in Kotlin is exactly like a regular Kotlin program! The only significant difference between a Kotlin script and regular Kotlin program is the absence of a main function. Create a file in a directory of your choosing and name it NumberSum.kts. Open the file and input the following program: val x: Int = 1 val y: Int = 2 val z: Int = x + y println(z) As you've most likely guessed, the preceding script will print the sum of 1 and 2 to the standard system output. Save the changes to the file and run the script: kotlinc -script NumberSum.kts A significant thing to take note of is that a Kotlin script does not need to be compiled. Using the REPL REPL is an acronym that stands for Read–Eval–Print Loop. An REPL is an interactive shell environment in which programs can be executed with immediate results given. The interactive shell environment can be invoked by running the kotlinc command without any arguments. The Kotlin REPL can be started by running kotlinc in your terminal. If the REPL is successfully started, a welcome message will be printed in your terminal followed by >>> on the next line, alerting us that the REPL is awaiting input. Now you can type in code within the terminal, as you would in any text editor, and get immediate feedback from the REPL. This is demonstrated in the following screenshot: In the preceding screenshot, the 1 and 2 integers are assigned to x and y, respectively. The sum of x and y is stored in a new z variable and the value held by z is printed to the display with the print() function. Working with an IDE Writing programs with the command line has its uses, but in most cases, it is better to use software built specifically for the purpose of empowering developers to write programs. This is especially true in cases where a large project is being worked on. An IDE is a computer application that hosts a collection of tools and utilities for computer programmers for software development. There are a number of IDEs that can be used for Kotlin development. Out of these IDEs, the one with the most comprehensive set of features for the purpose of developing Kotlin applications is IntelliJ IDEA. As IntelliJ IDEA is built by the creators of Kotlin, there are numerous advantages in using it over other IDEs, such as an unparalleled feature set of tools for writing Kotlin programs, as well as timely updates that cater to the newest advancements and additions to the Kotlin programming language. Installing IntelliJ IDEA IntelliJ IDEA can be downloaded for Windows, macOS, and Linux directly from JetBrains' website: https://www.jetbrains.com/idea/download. On the web page, you are presented with two available editions for download: a paid Ultimate edition and a free Community edition. The Community edition is sufficient if you wish to run the programs in this chapter. Select the edition you wish to download: Once the download is complete, double-click on the downloaded file and install it on your operating system as you would any program. Setting up a Kotlin project with IntelliJ The process of setting up a Kotlin project with IntelliJ is straightforward: Start the IntelliJ IDE application. Click Create New Project. Select Java from the available project options on the left-hand side of the newly opened window. Add Kotlin/JVM as an additional library to the project. Pick a project SDK from the drop-down list in the window. Click Next. Select a template if you wish to use one, then continue to the next screen. 
Provide a project name in the input field provided. Name the project HelloWorld for now. Set a project location in the input field. Click Finish. Your project will be created and you will be presented with the IDE window: To the left of the window, you will immediately see the project view. This view shows the logical structure of your project files. Two folders are present. These are: .idea: This contains IntelliJ's project-specific settings files. src: This is the source folder of your project. You will place your program files in this folder. Now that the project is set up, we will write a simple program. Add a file named hello.kt to the source folder (right-click the src folder, select New | Kotlin File/Class, and name the file hello). Copy and paste the following code into the file: fun main(args: Array<String>) { println("Hello world!") } To run the program, click the Kotlin logo adjacent to the main function and select Run HelloKt: The project will be built and run, after which, Hello world! will be printed to the standard system output. Advantages of Kotlin As previously discussed, Kotlin was designed to be a better Java, and as such, there are a number of advantages to using Kotlin over Java: Null safety: One common occurrence in Java programs is the throwing of NullPointerException. Kotlin alleviates this issue by providing a null-safe type system. Presence of extension functions: Functions can easily be added to classes defined in program files to extend their functionality in various ways. This can be done with extension functions in Kotlin. Singletons: It is easy to implement the singleton pattern in Kotlin programs. The implementation of a singleton in Java takes considerably more effort than when it is done with Kotlin. Data classes: When writing programs, it is a common scenario to have to create a class for the sole purpose of holding data in variables. This often leads to the writing of many lines of code for such a mundane task. Data classes in Kotlin make it extremely easy to create such classes that hold data with a single line of code. Function types: Unlike Java, Kotlin has function types. This enables functions to accept other functions as parameters and the definition of functions that return functions. To summarize, we introduced Kotlin and explored the fundamentals. In the process, we learned how to install, write and run Kotlin scripts on a computer and how to use the REPL and IDE. This tutorial is an excerpt from the book, Kotlin Programming By Example, written by Iyanu Adelekan. This book will help you enhance your Kotlin programming skills by building real-world applications. Build your first Android app with Kotlin How to convert Java code into Kotlin  

How to build a basic server side chatbot using Go

Sunith Shetty
19 Apr 2018
20 min read
It's common nowadays to see chatbots (also known as agents) service the needs of website users for a wide variety of purposes, from deciding what shoes to purchase to providing tips on what stocks would look good on a client's portfolio. In a real-world scenario, this functionality would be an attractive proposition for both product sales and technical support usage scenarios. For instance, if a user has a particular question on a product listed on the website, they can freely browse through the website and have a live conversation with the agent. In today’s tutorial, we will examine the functionality required to implement the live chat feature on the server side chatbot. Let’s look at how to implement a live chat feature on various product related pages. In order to have the chat box present in all sections of the website, we will need to place the chat box div container right below the primary content div container in the web page layout template (layouts/webpage_layout.tmpl): <!doctype html> <html> {{ template "partials/header_partial" . }} <div id="primaryContent" class="pageContent"> {{ template "pagecontent" . }} </div> <div id="chatboxContainer" class="containerPulse"> </div> {{ template "partials/footer_partial" . }} </html> The chat box will be implemented as a partial template in the chatbox_partial.tmpl source file in the shared/templates/partials folder: <div id="chatbox"> <div id="chatboxHeaderBar" class="chatboxHeader"> <div id="chatboxTitle" class="chatboxHeaderTitle"><span>Chat with {{.AgentName}}</span></div> <div id="chatboxCloseControl">X</div> </div> <div class="chatboxAgentInfo"> <div class="chatboxAgentThumbnail"><img src="{{.AgentThumbImagePath}}" height="81px"></div> <div class="chatboxAgentName">{{.AgentName}}</div> <div class="chatboxAgentTitle">{{.AgentTitle}}</div> </div> <div id="chatboxConversationContainer"> </div> <div id="chatboxMsgInputContainer"> <input type="text" id="chatboxInputField" placeholder="Type your message here..."> </input> </div> <div class="chatboxFooter"> <a href="http://www.isomorphicgo.org" target="_blank">Powered by Isomorphic Go</a> </div> </div> This is the HTML markup required to implement the wireframe design of the live chat box. Note that the input textfield having the id "chatboxInputField". This is the input field where the user will be able to type their message. Each message created, both the one that the user writes, as well as the one that the bot writes, will use the livechatmsg_partial.tmpl template: <div class="chatboxMessage"> <div class="chatSenderName">{{.Name}}</div> <div class="chatSenderMsg">{{.Message}}</div> </div> Each message is inside its own div container that has two div containers (shown in bold) housing the name of the sender of the message and the message itself. There are no buttons necessary in the live chat feature, since we will be adding an event listener to listen for the press of the Enter key to submit the user's message to the server over the WebSocket connection. Now that we've implemented the HTML markup that will be used to render the chat box, let's examine the functionality required to implement the live chat feature on the server side. Live chat's server-side functionality When the live chat feature is activated, we will create a persistent, WebSocket connection, between the web client and the web server. The Gorilla Web Toolkit provides an excellent implementation of the WebSocket protocol in their websocket package. 
To fetch the websocket package, you may issue the following command: $ go get github.com/gorilla/websocket The Gorilla web toolkit also provides a useful example web chat application. Rather than reinventing the wheel, we will repurpose Gorilla's example web chat application to fulfill the live chat feature. The source files needed from the web chat example have been copied over to the chat folder. There are three major changes we need to make to realize the live chat feature using the example chat application provided by Gorilla: Replies from the chatbot (the agent) should be targeted to a specific user, and not be sent out to every connected user We need to create the functionality to allow the chatbot to send a message back to the user We need to implement the front-end portion of the chat application in Go Let's consider each of these three points in more detail. First, Gorilla's web chat example is a free-for-all chat room. Any user can come in, type a message, and all other users connected to the chat server will be able to see the message. A major requirement for the live chat feature is that each conversation between the chatbot and the human should be exclusive. Replies from the agent must be targeted to a specific user, and not to all connected users. Second, the example web chat application from the Gorilla web toolkit doesn't send any messages back to the user. This is where the custom chatbot comes into the picture. The agent will communicate directly with the user over the established WebSocket connection. Third, the front-end portion of the example web chat application is implemented as a HTML document containing inline CSS and JavaScript. As you may have guessed already, we will implement the front-end portion for the live chat feature in Go, and the code will reside in the client/chat folder. Now that we have established our plan of action to implement the live chat feature using the Gorilla web chat example as a foundation to start from, let's begin the implementation. The modified web chat application that we will create contains two main types: Hub and Client. The hub type The chat hub is responsible for maintaining a list of client connections and directing the chatbot to broadcast a message to the relevant client. For example, if Alice asked the question "What is Isomorphic Go?", the answer from the chatbot should go to Alice and not to Bob (who may not have even asked a question yet). Here's what the Hub struct looks like: type Hub struct {  chatbot bot.Bot  clients map[*Client]bool  broadcastmsg chan *ClientMessage register chan *Client  unregister chan *Client } The chatbot is a chat bot (agent) that implements the Bot interface. This is the brain that will answer the questions received from clients. The clients map is used to register clients. The key-value pair stored in the map consists of the key, a pointer to a Client instance, and the value consists of a Boolean value set to true to indicate that the client is connected. Clients communicate with the hub over the broadcastmsg, register, and unregister channels. The register channel registers a client with the hub. The unregister channel, unregisters a client with the hub. The client sends the message entered by the user over the broadcastmsg channel, a channel of type ClientMessage. 
Here's the ClientMessage struct that we have introduced: type ClientMessage struct {  client *Client  message []byte } To fulfill the first major change we laid out previously, that is, the exclusivity of the conversation between the agent and the user, we use the ClientMessage struct to store, both the pointer to the Client instance that sent the user's message along with the user's message itself (a byte slice). The constructor function, NewHub, takes in chatbot that implements the Bot interface and returns a new Hub instance: func NewHub(chatbot bot.Bot) *Hub {  return &Hub{    chatbot: chatbot,    broadcastmsg: make(chan *ClientMessage), register: make(chan    *Client), unregister:        make(chan *Client),    clients: make(map[*Client]bool),  } } We implement an exported getter method, ChatBot, so that the chatbot can be accessed from the Hub object: func (h *Hub) ChatBot() bot.Bot {  return h.chatbot } This action will be significant when we implement a Rest API endpoint to send the bot's details (its name, title, and avatar image) to the client. The SendMessage method is responsible for broadcasting a message to a particular client: func (h *Hub) SendMessage(client *Client, message []byte) {  client.send <- message } The method takes in a pointer to Client, and the message, which is a byte slice, that should be sent to that particular client. The message will be sent over the client's send channel. The Run method is called to start the chat hub: func (h *Hub) Run() { for { select { case client := <-h.register: h.clients[client] = true greeting := h.chatbot.Greeting() h.SendMessage(client, []byte(greeting)) case client := <-h.unregister: if _, ok := h.clients[client]; ok { delete(h.clients, client) close(client.send) } case clientmsg := <-h.broadcastmsg: client := clientmsg.client reply := h.chatbot.Reply(string(clientmsg.message)) h.SendMessage(client, []byte(reply)) } } } We use the select statement inside the for loop to wait on multiple client operations. In the case that a pointer to a Client comes in over the hub's register channel, the hub will register the new client by adding the client pointer (as the key) to the clients map and set a value of true for it. We will fetch a greeting message to return to the client by calling the Greeting method on chatbot. Once we get the greeting (a string value), we call the SendMessage method passing in the client and the greeting converted to a byte slice. In the case that a pointer to a Client comes in over the hub's unregister channel, the hub will remove the entry in map for the given client and close the client's send channel, which signifies that the client won't be sending any more messages to the server. In the case that a pointer to a ClientMessage comes in over the hub's broadcastmsg channel, the hub will pass the client's message (as a string value) to the Reply method of the chatbot object. Once we get reply (a string value) from the agent, we call the SendMessage method passing in the client and the reply converted to a byte slice. The client type The Client type acts as a broker between Hub and the websocket connection. Here's what the Client struct looks like: type Client struct {  hub *Hub  conn *websocket.Conn send chan []byte } Each Client value contains a pointer to Hub, a pointer to a websocket connection, and a buffered channel, send, meant for outbound messages. 
The readPump method is responsible for relaying inbound messages coming in over the websocket connection to the hub:

func (c *Client) readPump() {
  defer func() {
    c.hub.unregister <- c
    c.conn.Close()
  }()
  c.conn.SetReadLimit(maxMessageSize)
  c.conn.SetReadDeadline(time.Now().Add(pongWait))
  c.conn.SetPongHandler(func(string) error {
    c.conn.SetReadDeadline(time.Now().Add(pongWait))
    return nil
  })
  for {
    _, message, err := c.conn.ReadMessage()
    if err != nil {
      if websocket.IsUnexpectedCloseError(err, websocket.CloseGoingAway) {
        log.Printf("error: %v", err)
      }
      break
    }
    message = bytes.TrimSpace(bytes.Replace(message, newline, space, -1))
    // c.hub.broadcast <- message
    clientmsg := &ClientMessage{client: c, message: message}
    c.hub.broadcastmsg <- clientmsg
  }
}

We had to make a slight change to this method to fulfill the requirements of the live chat feature. In the Gorilla web chat example, the message alone was relayed over to the Hub. Since we are directing chat bot responses back to the client that sent them, we need to send the hub not only the message, but also the client that sent it (the ClientMessage lines near the end of the loop). We do so by creating a ClientMessage struct:

type ClientMessage struct {
  client  *Client
  message []byte
}

The ClientMessage struct contains fields to hold both the pointer to the client as well as the message, a byte slice.

Going back to the readPump method in the client.go source file, the following two lines are instrumental in allowing the Hub to know which client sent the message:

clientmsg := &ClientMessage{client: c, message: message}
c.hub.broadcastmsg <- clientmsg

The writePump method is responsible for relaying outbound messages from the client's send channel over the websocket connection:

func (c *Client) writePump() {
  ticker := time.NewTicker(pingPeriod)
  defer func() {
    ticker.Stop()
    c.conn.Close()
  }()
  for {
    select {
    case message, ok := <-c.send:
      c.conn.SetWriteDeadline(time.Now().Add(writeWait))
      if !ok {
        // The hub closed the channel.
        c.conn.WriteMessage(websocket.CloseMessage, []byte{})
        return
      }
      w, err := c.conn.NextWriter(websocket.TextMessage)
      if err != nil {
        return
      }
      w.Write(message)
      // Add queued chat messages to the current websocket message.
      n := len(c.send)
      for i := 0; i < n; i++ {
        w.Write(newline)
        w.Write(<-c.send)
      }
      if err := w.Close(); err != nil {
        return
      }
    case <-ticker.C:
      c.conn.SetWriteDeadline(time.Now().Add(writeWait))
      if err := c.conn.WriteMessage(websocket.PingMessage, []byte{}); err != nil {
        return
      }
    }
  }
}

The ServeWs function is meant to be registered as an HTTP handler by the web application:

func ServeWs(hub *Hub) http.Handler {
  return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
      log.Println(err)
      return
    }
    client := &Client{hub: hub, conn: conn, send: make(chan []byte, 256)}
    client.hub.register <- client
    go client.writePump()
    client.readPump()
  })
}

This function performs two important tasks: it upgrades the normal HTTP connection to a websocket connection, and it registers the client with the hub.

Now that we've set up the code for our web chat server, it's time to activate it in our web application.
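As a brief aside before doing that: because the hub itself knows nothing about websockets beyond the Client type, its targeted-delivery behavior can be exercised with a plain Go test. The following hypothetical package-internal test sketch is not part of the book's code; it assumes a stubBot type that satisfies the bot.Bot interface (presented in full in the next section) and verifies that only the client who asked a question receives the reply:

// chat/hub_internal_test.go (hypothetical file, lives in package chat)
package chat

import (
  "testing"
  "time"
)

// stubBot is a minimal stand-in that satisfies the bot.Bot interface.
type stubBot struct{ name, title, thumb string }

func (s *stubBot) Greeting() string             { return "Hello!" }
func (s *stubBot) Reply(q string) string        { return "You asked: " + q }
func (s *stubBot) Name() string                 { return s.name }
func (s *stubBot) Title() string                { return s.title }
func (s *stubBot) ThumbnailPath() string        { return s.thumb }
func (s *stubBot) SetName(n string)             { s.name = n }
func (s *stubBot) SetTitle(t string)            { s.title = t }
func (s *stubBot) SetThumbnailPath(p string)    { s.thumb = p }

func TestRepliesAreTargeted(t *testing.T) {
  hub := NewHub(&stubBot{})
  go hub.Run()

  // Two fake clients with buffered send channels; no websocket connections needed.
  alice := &Client{hub: hub, send: make(chan []byte, 256)}
  bob := &Client{hub: hub, send: make(chan []byte, 256)}
  hub.register <- alice
  hub.register <- bob

  // Drain the greeting each client receives upon registration.
  <-alice.send
  <-bob.send

  // Alice asks a question.
  hub.broadcastmsg <- &ClientMessage{client: alice, message: []byte("What is Isomorphic Go?")}

  // Only Alice should receive the reply.
  select {
  case reply := <-alice.send:
    t.Logf("alice received: %s", reply)
  case <-time.After(time.Second):
    t.Fatal("alice did not receive a reply")
  }
  select {
  case msg := <-bob.send:
    t.Fatalf("bob unexpectedly received: %s", msg)
  default:
    // Bob's send channel is empty, as expected.
  }
}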
Activating the chat server

In the igweb.go source file, we have included a function called startChatHub, which is responsible for starting the Hub:

func startChatHub(hub *chat.Hub) {
  go hub.Run()
}

We add the following code in the main function to create a new chatbot, associate it with the Hub, and start the Hub:

chatbot := bot.NewAgentCase()
hub := chat.NewHub(chatbot)
startChatHub(hub)

When we call the registerRoutes function to register all the routes for the server-side web application, note that we also pass in the hub value to the function:

r := mux.NewRouter()
registerRoutes(&env, r, hub)

In the registerRoutes function, we need the hub to register the route handler for the REST API endpoint that returns the agent's information:

r.Handle("/restapi/get-agent-info", endpoints.GetAgentInfoEndpoint(env, hub.ChatBot()))

The hub is also used to register the route handler for the WebSocket route, /ws. We register the ServeWs handler function, passing in the hub:

r.Handle("/ws", chat.ServeWs(hub))

Now that we have everything in place to activate the chat server, it's time to focus on the star of the live chat feature: the chat agent.

The agent's brain

The chat bot type that we will use for the live chat feature, AgentCase, will implement the following Bot interface:

type Bot interface {
  Greeting() string
  Reply(string) string
  Name() string
  Title() string
  ThumbnailPath() string
  SetName(string)
  SetTitle(string)
  SetThumbnailPath(string)
}

The Greeting method will be used to send an initial greeting to the user, enticing them to interact with the chat bot. The Reply method accepts a question (a string) and returns a reply (also a string) for the given question. The rest of the methods exist purely for psychological reasons, to give humans the illusion that they are communicating with someone, rather than something. The Name method is a getter method that returns the chat bot's name. The Title method is a getter method that returns the chat bot's title. The ThumbnailPath method is a getter method that returns the path to the chat bot's avatar image. Each of the getter methods has a corresponding setter method: SetName, SetTitle, and SetThumbnailPath.

By defining the Bot interface, we are clearly stating the expectations of a chat bot. This allows us to make the chat bot solution extensible in the future. For example, the intelligence that Case exhibits may be too rudimentary and limiting. In the near future, we may want to implement a bot named Molly, whose intelligence may be implemented using a more powerful algorithm. As long as the Molly chat bot implements the Bot interface, the new chat bot can be easily plugged into our web application. In fact, from the perspective of the server-side web application, it would just be a one-line code change. Instead of instantiating an AgentCase instance, we would instantiate an AgentMolly instance instead. Besides the difference in intelligence, the new chat bot, Molly, would come with its own name, title, and avatar image, so humans would be able to differentiate it from Case.

Here's the AgentCase struct:

type AgentCase struct {
  Bot
  name            string
  title           string
  thumbnailPath   string
  knowledgeBase   map[string]string
  knowledgeCorpus []string
  sampleQuestions []string
}

We have embedded the Bot interface into the struct definition, indicating that the AgentCase type will implement the Bot interface. The field name is for the name of the agent. The field title is for the title of the agent.
The field thumbnailPath is used to specify the path to the chat bot's avatar image.

The knowledgeBase field is a map of type map[string]string. This is essentially the agent's brain. Keys in the map are the common terms found in a particular question. Values in the map are the answers to the question.

The knowledgeCorpus field, a string slice, is a knowledge corpus of the terms that may exist in questions that the bot will be asked. We use the keys of the knowledgeBase map to construct the knowledgeCorpus. A corpus is a collection of text that is used to conduct linguistic analysis. In our case, we will conduct the linguistic analysis based on the question (the query) that the human user provided to the bot.

The sampleQuestions field, a string slice, will contain a list of sample questions that the user may ask the chat bot. The chat bot will provide the user with a sample question when it greets them to entice the human user into a conversation. It is understood that the human user is free to paraphrase the sample question or ask an entirely different question depending on their preference.

The initializeIntelligence method is used to initialize Case's brain:

func (a *AgentCase) initializeIntelligence() {

  a.knowledgeBase = map[string]string{
    "isomorphic go isomorphic go web applications": "Isomorphic Go is the methodology to create isomorphic web applications using the Go (Golang) programming language. An isomorphic web application is a web application that contains code which can run on both the web client and the web server.",
    "kick recompile code restart web server instance instant kickstart lightweight mechanism": "Kick is a lightweight mechanism to provide an instant kickstart to a Go web server instance, upon the modification of a Go source file within a particular project directory (including any subdirectories). An instant kickstart consists of a recompilation of the Go code and a restart of the web server instance. Kick comes with the ability to take both the go and gopherjs commands into consideration when performing the instant kickstart. This makes it a really handy tool for isomorphic golang projects.",
    "starter code starter kit": "The isogoapp is a basic, barebones web app, intended to be used as a starting point for developing an Isomorphic Go application. Here's the link to the github page: https://github.com/isomorphicgo/isogoapp",
    "lack intelligence idiot stupid dumb dummy don't know anything": "Please don't question my intelligence, it's artificial after all!",
    "find talk topic presentation lecture subject": "Watch the Isomorphic Go talk by Kamesh Balasubramanian at GopherCon India: https://youtu.be/zrsuxZEoTcs",
    "benefits of the technology significance of the technology importance of the technology": "Here are some benefits of Isomorphic Go: Unlike JavaScript, Go provides type safety, allowing us to find and eliminate many bugs at compile time itself. Eliminates mental context-shifts between back-end and front-end coding. Page loading prompts are not necessary.",
    "perform routing web app register routes define routes": "You can implement client-side routing in your web application using the isokit Router preventing the dreaded full page reload.",
    "render templates perform template rendering": "Use template sets, a set of project templates that are persisted in memory and are available on both the server-side and the client-side",
    "cogs reusable components react-like react": "Cogs are reusable components in an Isomorphic Go web application.",
  }

  a.knowledgeCorpus = make([]string, 1)
  for k, _ := range a.knowledgeBase {
    a.knowledgeCorpus = append(a.knowledgeCorpus, k)
  }

  a.sampleQuestions = []string{
    "What is isomorphic go?",
    "What are the benefits of this technology?",
    "Does isomorphic go offer anything react-like?",
    "How can I recompile code instantly?",
    "How can I perform routing in my web app?",
    "Where can I get starter code?",
    "Where can I find a talk on this topic?",
  }

}

There are three important tasks that occur within this method: First, we set Case's knowledge base. Second, we set Case's knowledge corpus. Third, we set the sample questions, which Case will utilize when greeting the human user.

The first task we must take care of is to set Case's knowledge base. This consists of setting the knowledgeBase property of the AgentCase instance. As mentioned earlier, the keys in the map refer to terms found in the question, and the values in the map are the answers to the question. For example, the "isomorphic go isomorphic go web applications" key could service the following questions:

What is Isomorphic Go?
What can you tell me about Isomorphic Go?

Due to the large amount of text contained within the map literal declaration for the knowledgeBase map, I encourage you to view the source file, agentcase.go, on a computer.

The second task we must take care of is to set Case's corpus, the collection of text used for linguistic analysis of the user's question. The corpus is constructed from the keys of the knowledgeBase map. We set the knowledgeCorpus field of the AgentCase instance to a newly created string slice using the built-in make function. Using a for loop, we iterate through all the entries in the knowledgeBase map and append each key to the knowledgeCorpus slice.

The third and last task we must take care of is to set the sample questions that Case will present to the human user. We simply populate the sampleQuestions property of the AgentCase instance, using a string slice literal containing all the sample questions.

Here are the getter and setter methods of the AgentCase type:

func (a *AgentCase) Name() string {
  return a.name
}

func (a *AgentCase) Title() string {
  return a.title
}

func (a *AgentCase) ThumbnailPath() string {
  return a.thumbnailPath
}

func (a *AgentCase) SetName(name string) {
  a.name = name
}

func (a *AgentCase) SetTitle(title string) {
  a.title = title
}

func (a *AgentCase) SetThumbnailPath(thumbnailPath string) {
  a.thumbnailPath = thumbnailPath
}

These methods are used to get and set the name, title, and thumbnailPath fields of the AgentCase object.
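The Reply method, the part of the agent that actually selects an answer, is not shown in this excerpt. Purely as an illustration, and not the book's actual implementation (which may use a more sophisticated text-analysis approach), a naive Reply could score each knowledgeCorpus entry by how many words of the question it contains and answer from the best-matching knowledge base entry. This sketch assumes the standard library strings package is imported:

func (a *AgentCase) Reply(question string) string {
  // Naive term-overlap matching; illustrative only.
  qWords := strings.Fields(strings.ToLower(question))
  bestEntry := ""
  bestScore := 0
  for _, entry := range a.knowledgeCorpus {
    score := 0
    for _, w := range qWords {
      if strings.Contains(entry, w) {
        score++
      }
    }
    if score > bestScore {
      bestScore = score
      bestEntry = entry
    }
  }
  if bestEntry == "" {
    return "I'm not sure about that. Try asking me: What is Isomorphic Go?"
  }
  // The corpus entries are the knowledgeBase keys, so the answer is a direct lookup.
  return a.knowledgeBase[bestEntry]
}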
Here's the constructor function used to create a new AgentCase instance:

func NewAgentCase() *AgentCase {
  agentCase := &AgentCase{
    name:          "Case",
    title:         "Resident Isomorphic Gopher Agent",
    thumbnailPath: "/static/images/chat/Case.png",
  }
  agentCase.initializeIntelligence()
  return agentCase
}

We declare and initialize the agentCase variable with a new AgentCase instance, setting the fields for name, title, and thumbnailPath. We then call the initializeIntelligence method to initialize Case's brain. Finally, we return the newly created and initialized AgentCase instance.

To summarize, we introduced you to the websocket package from the Gorilla toolkit project. We learned how to establish a persistent connection between the web server and the web client to create a server-side chatbot using WebSocket functionality.

You read an excerpt from a book written by Kamesh Balasubramanian titled Isomorphic Go. In this book, you will learn how to build and deploy Isomorphic Go web applications.