How to Build TensorFlow Models for Mobile and Embedded devices

Savia Lobo
15 May 2018
12 min read
TensorFlow models can be used in applications running on mobile and embedded platforms. TensorFlow Lite and TensorFlow Mobile are two flavors of TensorFlow for resource-constrained mobile devices. TensorFlow Lite supports a subset of the functionality compared to TensorFlow Mobile, and it delivers better performance thanks to a smaller binary size and fewer dependencies. This article covers training a model, saving it, and then using it for inference and prediction in a mobile application.

Note: This article is an excerpt from the book Mastering TensorFlow 1.x written by Armando Fandango. This book will help you leverage the power of TensorFlow and Keras to build deep learning models, using concepts such as transfer learning, generative adversarial networks, and deep reinforcement learning.

To learn how to use TensorFlow models on mobile devices, the following topics are covered:

- TensorFlow on mobile platforms
- TF Mobile in Android apps
- TF Mobile demo on Android
- TF Mobile demo on iOS
- TensorFlow Lite
- TF Lite demo on Android
- TF Lite demo on iOS

TensorFlow on mobile platforms

TensorFlow can be integrated into mobile apps for many use cases that involve one or more of the following machine learning tasks:

- Speech recognition
- Image recognition
- Gesture recognition
- Optical character recognition
- Image or text classification
- Image, text, or speech synthesis
- Object identification

To run TensorFlow in mobile apps, we need two major ingredients:

- A trained and saved model that can be used for predictions
- A TensorFlow binary that can receive the inputs, apply the model, produce the predictions, and send the predictions as output

The high-level architecture looks like the following figure: the mobile application code sends the inputs to the TensorFlow binary, which uses the trained model to compute predictions and sends the predictions back.

TF Mobile in Android apps

The TensorFlow ecosystem can be used in Android apps through the interface class TensorFlowInferenceInterface and the TensorFlow Java API in the jar file libandroid_tensorflow_inference_java.jar. You can either use the jar file from JCenter, download a precompiled jar from ci.tensorflow.org, or build it yourself. The inference interface is available as a JCenter package and can be included in an Android project by adding the following code to the build.gradle file:

    allprojects {
        repositories {
            jcenter()
        }
    }

    dependencies {
        compile 'org.tensorflow:tensorflow-android:+'
    }

Note: Instead of using the pre-built binaries from JCenter, you can also build them yourself using Bazel or CMake by following the instructions at this link: https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/contrib/android/README.md

Once the TF library is configured in your Android project, you can call the TF model with the following four steps (pulled together in the sketch that follows):

1. Load the model:

    TensorFlowInferenceInterface inferenceInterface =
        new TensorFlowInferenceInterface(assetManager, modelFilename);

2. Send the input data to the TensorFlow binary:

    inferenceInterface.feed(inputName, floatValues, 1, inputSize, inputSize, 3);

3. Run the prediction or inference:

    inferenceInterface.run(outputNames, logStats);

4. Receive the output from the TensorFlow binary:

    inferenceInterface.fetch(outputName, outputs);
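Since Android projects increasingly use Kotlin, here is a minimal Kotlin sketch that strings the four calls together. The model file name, tensor names, and shapes are illustrative assumptions rather than values taken from the book, so adjust them to match your own graph.

    import android.content.res.AssetManager
    import org.tensorflow.contrib.android.TensorFlowInferenceInterface

    // Illustrative values only -- substitute the names and shapes of your own graph.
    const val MODEL_FILE = "file:///android_asset/frozen_model.pb"
    const val INPUT_NAME = "input"
    const val OUTPUT_NAME = "output"
    const val INPUT_SIZE = 224
    const val NUM_CLASSES = 1001

    fun classify(assetManager: AssetManager, pixels: FloatArray): FloatArray {
        // 1. Load the model from the app's assets
        val inferenceInterface = TensorFlowInferenceInterface(assetManager, MODEL_FILE)
        // 2. Feed the input tensor (a 1 x height x width x 3 image)
        inferenceInterface.feed(INPUT_NAME, pixels, 1L, INPUT_SIZE.toLong(), INPUT_SIZE.toLong(), 3L)
        // 3. Run inference for the requested output node
        inferenceInterface.run(arrayOf(OUTPUT_NAME), false)
        // 4. Fetch the output scores into a pre-allocated array
        val outputs = FloatArray(NUM_CLASSES)
        inferenceInterface.fetch(OUTPUT_NAME, outputs)
        return outputs
    }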
TF Mobile demo on Android

In this section, we shall learn about recreating the Android demo app provided by the TensorFlow team in their official repo. The Android demo will install the following four apps on your Android device:

- TF Classify: This is an object identification app that identifies the images in the input from the device camera and classifies them into one of the pre-defined classes. It does not learn new types of pictures but tries to classify them into one of the categories that it has already learned. The app is built using the Inception model pre-trained by Google.
- TF Detect: This is an object detection app that detects multiple objects in the input from the device camera. It continues to identify the objects as you move the camera around in continuous picture feed mode.
- TF Stylize: This is a style transfer app that transfers one of the selected predefined styles to the input from the device camera.
- TF Speech: This is a speech recognition app that identifies your speech and, if it matches one of the predefined commands in the app, highlights that specific command on the device screen.

Note: The sample demo only works for Android devices with an API level greater than 21, and the device must have a modern camera that supports FOCUS_MODE_CONTINUOUS_PICTURE. If your device camera does not support this feature, then you have to apply the patch submitted to TensorFlow by the author: https://github.com/tensorflow/tensorflow/pull/15489/files

The easiest way to build and deploy the demo app on your device is using Android Studio. To build it this way, follow these steps:

1. Install Android Studio. We installed Android Studio on Ubuntu 16.04 from the instructions at the following link: https://developer.android.com/studio/install.html
2. Check out the TensorFlow repository, and apply the patch mentioned in the previous tip. Let's assume you checked out the code in the tensorflow folder in your home directory.
3. Using Android Studio, open the Android project in the path ~/tensorflow/tensorflow/examples/android. Your screen will look similar to this.
4. Expand the Gradle Scripts option from the left bar and then open the build.gradle file.
5. In the build.gradle file, locate the def nativeBuildSystem definition and set it to 'none'. In the version of the code we checked out, this definition is at line 43: def nativeBuildSystem = 'none'
6. Build the demo and run it on either a real or simulated device. We tested the app on these devices.
7. You can also build the apk and install the apk file on the virtual or actual connected device. Once the app installs on the device, you will see the four apps we discussed earlier.

You can also build the whole demo app from the source using Bazel or CMake by following the instructions at this link: https://github.com/tensorflow/tensorflow/tree/r1.4/tensorflow/examples/android

TF Mobile in iOS apps

TensorFlow enables support for iOS apps by following these steps:

1. Include TF Mobile in your app by adding a file named Podfile in the root directory of your project. Add the following content to the Podfile:

    target 'Name-Of-Your-Project'
    pod 'TensorFlow-experimental'

2. Run the pod install command to download and install the TensorFlow Experimental pod.
3. Run the open myproject.xcworkspace command to open the workspace so you can add the prediction code to your application logic.
Note: To create your own TensorFlow binaries for iOS projects, follow the instructions at this link: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/ios

Once the TF library is configured in your iOS project, you can call the TF model with the following four steps:

1. Load the model:

    PortableReadFileToProto(file_path, &tensorflow_graph);

2. Create a session:

    tensorflow::Status s = session->Create(tensorflow_graph);

3. Run the prediction or inference and get the outputs:

    std::string input_layer = "input";
    std::string output_layer = "output";
    std::vector<tensorflow::Tensor> outputs;
    tensorflow::Status run_status = session->Run({{input_layer, image_tensor}},
        {output_layer}, {}, &outputs);

4. Fetch the output data:

    tensorflow::Tensor* output = &outputs[0];

TF Mobile demo on iOS

In order to build the demo on iOS, you need Xcode 7.3 or later. Follow these steps to build the iOS demo apps:

1. Check out the TensorFlow code in a tensorflow folder in your home directory.
2. Open a terminal window and execute the following commands from your home folder to download the Inception V1 model, extract the label and graph files, and move these files into the data folders inside the sample app code:

    $ mkdir -p ~/Downloads
    $ curl -o ~/Downloads/inception5h.zip https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip ~/Downloads/inception5h.zip -d ~/Downloads/inception5h
    $ cp ~/Downloads/inception5h/* ~/tensorflow/tensorflow/examples/ios/benchmark/data/
    $ cp ~/Downloads/inception5h/* ~/tensorflow/tensorflow/examples/ios/camera/data/
    $ cp ~/Downloads/inception5h/* ~/tensorflow/tensorflow/examples/ios/simple/data/

3. Navigate to one of the sample folders and download the experimental pod:

    $ cd ~/tensorflow/tensorflow/examples/ios/camera
    $ pod install

4. Open the Xcode workspace:

    $ open tf_simple_example.xcworkspace

5. Run the sample app in the device simulator. The sample app will appear with a Run Model button. The camera app requires an Apple device to be connected, while the other two can run in a simulator too.

TensorFlow Lite

TF Lite is the new kid on the block and was still in developer preview at the time of writing this book. TF Lite is a very small subset of TensorFlow Mobile and TensorFlow, so the binaries compiled with TF Lite are very small in size and deliver superior performance. Apart from reducing the size of binaries, TensorFlow employs various other techniques, such as:

- The kernels are optimized for various device and mobile architectures
- The values used in the computations are quantized
- The activation functions are pre-fused
- It leverages specialized machine learning software or hardware available on the device, such as the Android NN API

The workflow for using the models in TF Lite is as follows:

1. Get the model: You can train your own model or pick a pre-trained model available from different sources, and use the pre-trained model as is, retrain it with your own data, or retrain it after modifying some parts of the model. As long as you have a trained model in a file with the extension .pb or .pbtxt, you are good to proceed to the next step. We learned how to save models in the previous chapters.
2. Checkpoint the model: The model file only contains the structure of the graph, so you also need to save a checkpoint file. The checkpoint file contains the serialized variables of the model, such as weights and biases. We learned how to save a checkpoint in the previous chapters.
3. Freeze the model: The checkpoint and the model files are merged in a step also known as freezing the graph. TensorFlow provides the freeze_graph tool for this step, which can be executed as follows:

    $ freeze_graph --input_graph=mymodel.pb --input_checkpoint=mycheckpoint.ckpt --input_binary=true --output_graph=frozen_model.pb --output_node_name=mymodel_nodes

4. Convert the model: The frozen model from step 3 needs to be converted to TF Lite format with the toco tool provided by TensorFlow:

    $ toco --input_file=frozen_model.pb --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE --input_type=FLOAT --input_arrays=input_nodes --output_arrays=mymodel_nodes --input_shapes=n,h,w,c

5. The .tflite model saved in step 4 can now be used inside an Android or iOS app that employs the TF Lite binary for inference. The process of including the TF Lite binary in your app is continuously evolving, so we recommend that you follow the information at this link to include the TF Lite binary in your Android or iOS app: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/g3doc

Generally, you would use the graph_transforms:summarize_graph tool to prune the model obtained in step 1. The pruned model will only have the paths that lead from input to output at the time of inference or prediction. Any other nodes and paths that are required only for training or for debugging purposes, such as saving checkpoints, are removed, thus making the size of the final model very small.

The official TensorFlow repository comes with a TF Lite demo that uses a pre-trained MobileNet to classify the input from the device camera into 1001 categories. The demo app displays the probabilities of the top three categories.

TF Lite demo on Android

To build a TF Lite demo on Android, follow these steps:

1. Install Android Studio. We installed Android Studio on Ubuntu 16.04 from the instructions at the following link: https://developer.android.com/studio/install.html
2. Check out the TensorFlow repository, and apply the patch mentioned in the previous tip. Let's assume you checked out the code in the tensorflow folder in your home directory.
3. Using Android Studio, open the Android project from the path ~/tensorflow/tensorflow/contrib/lite/java/demo. If it complains about a missing SDK or Gradle components, please install those components and sync Gradle.
4. Build the project and run it on a virtual device with API > 21.

We received the following warnings, but the build succeeded. You may want to resolve the warnings if the build fails:

    Warning: The Jack toolchain is deprecated and will not run. To enable support for Java 8 language features built into the plugin, remove 'jackOptions { ... }' from your build.gradle file, and add
    android.compileOptions.sourceCompatibility 1.8
    android.compileOptions.targetCompatibility 1.8
    Note: Future versions of the plugin will not support usage of 'jackOptions' in build.gradle. To learn more, go to https://d.android.com/r/tools/java-8-support-message.html

    Warning: The specified Android SDK Build Tools version (26.0.1) is ignored, as it is below the minimum supported version (26.0.2) for Android Gradle Plugin 3.0.1. Android SDK Build Tools 26.0.2 will be used. To suppress this warning, remove "buildToolsVersion '26.0.1'" from your build.gradle file, as each version of the Android Gradle Plugin now has a default version of the build tools.
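Once the demo builds and runs, the classification it performs amounts to loading the bundled .tflite model and calling the TF Lite Interpreter. The following Kotlin sketch shows roughly what that looks like; it assumes a float (non-quantized) MobileNet and illustrative file names and shapes, so treat it as a sketch rather than the demo's actual code.

    import android.content.res.AssetManager
    import org.tensorflow.lite.Interpreter
    import java.io.FileInputStream
    import java.nio.MappedByteBuffer
    import java.nio.channels.FileChannel

    // Memory-map a .tflite file from the app's assets.
    // The asset must be stored uncompressed for openFd() to work.
    fun loadModelFile(assets: AssetManager, path: String): MappedByteBuffer {
        val fd = assets.openFd(path)
        FileInputStream(fd.fileDescriptor).use { stream ->
            return stream.channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
        }
    }

    fun classify(assets: AssetManager): FloatArray {
        // "mobilenet_float_224.tflite" is a hypothetical file name
        val interpreter = Interpreter(loadModelFile(assets, "mobilenet_float_224.tflite"))
        val input = Array(1) { Array(224) { Array(224) { FloatArray(3) } } }  // one 224x224 RGB image
        val output = Array(1) { FloatArray(1001) }                            // scores for 1001 categories
        interpreter.run(input, output)
        return output[0]
    }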
TF Lite demo on iOS

In order to build the demo on iOS, you need Xcode 7.3 or later. Follow these steps to build the iOS demo apps:

1. Check out the TensorFlow code in a tensorflow folder in your home directory.
2. Build the TF Lite binary for iOS from the instructions at this link: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite
3. Navigate to the sample folder and download the pod:

    $ cd ~/tensorflow/tensorflow/contrib/lite/examples/ios/camera
    $ pod install

4. Open the Xcode workspace:

    $ open tflite_camera_example.xcworkspace

5. Run the sample app in the device simulator.

We learned about using TensorFlow models in mobile applications and devices. TensorFlow provides two ways to run on mobile devices: TF Mobile and TF Lite. We learned how to build TF Mobile and TF Lite apps for iOS and Android, using the TensorFlow demo apps as examples.

If you found this post useful, do check out the book Mastering TensorFlow 1.x to skill up for building smarter, faster, and more efficient machine learning and deep learning systems.

The 5 biggest announcements from TensorFlow Developer Summit 2018
Getting started with Q-learning using TensorFlow
Implement Long-short Term Memory (LSTM) with TensorFlow

Building chat application with Kotlin using Node.js, the powerful Server-side JavaScript platform

Sugandha Lahoti
14 May 2018
30 min read
When one mentions server-side JavaScript technology, Node.js is what comes to mind first. Node.js is an extremely powerful and robust platform, and with it we can build server-side applications very easily. In today's tutorial, we will focus on creating a chat application that uses Kotlin on top of Node.js. So, basically, we will transpile Kotlin code to JavaScript on the server side.

This article is an excerpt from the book Kotlin Blueprints, written by Ashish Belagali, Hardik Trivedi, and Akshay Chordiya. With this book, you will get to know the building blocks of Kotlin and best practices to follow when building quality, world-class applications.

Kotlin is a modern language and is gaining popularity in the JavaScript community day by day. The Kotlin language, with its modern features and static typing, is superior to JavaScript. As with JavaScript, developers who know Kotlin can use the same language skills on both sides, but they also have the advantage of using a better language. The Kotlin code gets transpiled to JavaScript, which in turn works with Node.js. This is the mechanism that lets you use Kotlin code with a server-side technology such as Node.js.

Creating a chat application

Our chat app will have the following functionalities:

- User can log in by entering their nickname
- User can see the list of online users
- User will get notified when a new user joins
- User can receive chats from anyone
- User can perform a group chat in a chat room
- User will receive a notification when any user leaves the chat

To visualize the app that we will develop, take a look at the following screenshots. The first screenshot is a page where the user will enter a nickname and gain an entry into our chat app. The second screen shows a chat window and a list of online users.

We have configured this application in a slightly different way: we have kept the backend code module and the frontend code module separate, using the following method:

1. Create a new project named kotlin_node_chat_app.
2. Now, create a new Gradle module named backend and select Kotlin (JavaScript) under the libraries and additional information window, and follow the remaining steps.
3. Similarly, also create a Gradle module named webapp.

The backend module will contain all the Kotlin code that will be converted into Node.js code later, and the webapp module will contain all the Kotlin code that will later be converted into JavaScript code. We have referred to the directory structure from GitHub. After performing the previous steps correctly, your project will have three build.gradle files. We have highlighted all three files in the project explorer section, as shown in the following screenshot.

Setting up the Node.js server

We need to initialize our root directory for the node. Execute npm init and it will create package.json. Now our login page is created. To run it, we need to set up the Node.js server. We want to create the server in such a way that executing npm start will start the server.
To achieve it, our package.json file should look like the following piece of code: { "name": "kotlin_node_chat_app", "version": "1.0.0", "description": "", "main": "backend/server/app.js", "scripts": { "start": "node backend/server/app.js" }, "author": "Hardik Trivedi", "license": "ISC", "dependencies": { "ejs": "^2.5.7", "express": "^4.16.2", "kotlin": "^1.1.60", "socket.io": "^2.0.4" } } We have specified a few dependencies here as well: EJS to render HTML pages Express.JS as its framework, which makes it easier to deal with Node.js Kotlin, because, ultimately, we want to write our code into Kotlin and want it compiled into the Node.js code Socket.IO to perform chat Execute npm install on the Terminal/Command Prompt and it should trigger the download of all these dependencies. Specifying the output files Now, it's very important where your output will be generated once you trigger the build. For that, build.gradle will help us. Specify the following lines in your module-level build.gradle file. The backend module's build.gradle will have the following lines of code: compileKotlin2Js { kotlinOptions.outputFile = "${projectDir}/server/app.js" kotlinOptions.moduleKind = "commonjs" kotlinOptions.sourceMap = true } The webapp module's build.gradle will have the following lines of code: compileKotlin2Js { kotlinOptions.metaInfo = true kotlinOptions.outputFile = "${projectDir}/js/main.js" kotlinOptions.sourceMap = true kotlinOptions.main = "call" } In both the compileKotlin2Js nodes, kotlinOptions.outputFile plays a key role. This basically tells us that once Kotlin's code gets compiled, it will generate app.js and main.js for Node.js and JavaScript respectively. In the index.ejs file, you should define a script tag to load main.js. It will look something like the following line of code: <script type="text/javascript" src="js/main.js"></script> Along with this, also specify the following two tags: <script type="text/javascript" src="lib/kotlin/kotlin.js"></script> <script type="text/javascript" src="lib/kotlin/kotlinx-html-js.js"> </script> Examining the compilation output The kotlin.js and kotlinx-html-js.js files are nothing but the Kotlin output files. It's not compilation output, but actually transpiled output. The following are output compilations: kotlin.js: This is the runtime and standard library. It doesn't change between applications, and it's tied to the version of Kotlin being used. {module}.js: This is the actual code from the application. All files are compiled into a single JavaScript file that has the same name as the module. {file}.meta.js: This metafile will be used for reflection and other functionalities. 
Let's assume our final Main.kt file will look like this:

    fun main(args: Array<String>) {
        val socket: dynamic = js("window.socket")
        val chatWindow = ChatWindow {
            println("here")
            socket.emit("new_message", it)
        }
        val loginWindow = LoginWindow {
            chatWindow.showChatWindow(it)
            socket.emit("add_user", it)
        }
        loginWindow.showLogin()
        socket.on("login", { data ->
            chatWindow.showNewUserJoined(data)
            chatWindow.showOnlineUsers(data)
        })
        socket.on("user_joined", { data ->
            chatWindow.showNewUserJoined(data)
            chatWindow.addNewUsers(data)
        })
        socket.on("user_left", { data ->
            chatWindow.showUserLeft(data)
        })
        socket.on("new_message", { data ->
            chatWindow.showNewMessage(data)
        })
    }

For this, inside main.js, our main function will look like this:

    function main(args) {
        var socket = window.socket;
        var chatWindow = new ChatWindow(main$lambda(socket));
        var loginWindow = new LoginWindow(main$lambda_0(chatWindow, socket));
        loginWindow.showLogin();
        socket.on('login', main$lambda_1(chatWindow));
        socket.on('user_joined', main$lambda_2(chatWindow));
        socket.on('user_left', main$lambda_3(chatWindow));
        socket.on('new_message', main$lambda_4(chatWindow));
    }

The actual main.js file will be bulkier because it will have all the code transpiled, including other functions and the LoginWindow and ChatWindow classes. Keep a watchful eye on how the Lambda functions are converted into simple JavaScript functions. The Lambda functions for all socket events are transpiled into the following piece of code:

    function main$lambda_1(closure$chatWindow) {
        return function (data) {
            closure$chatWindow.showNewUserJoined_qk3xy8$(data);
            closure$chatWindow.showOnlineUsers_qk3xy8$(data);
        };
    }

    function main$lambda_2(closure$chatWindow) {
        return function (data) {
            closure$chatWindow.showNewUserJoined_qk3xy8$(data);
            closure$chatWindow.addNewUsers_qk3xy8$(data);
        };
    }

    function main$lambda_3(closure$chatWindow) {
        return function (data) {
            closure$chatWindow.showUserLeft_qk3xy8$(data);
        };
    }

    function main$lambda_4(closure$chatWindow) {
        return function (data) {
            closure$chatWindow.showNewMessage_qk3xy8$(data);
        };
    }

As can be seen, Kotlin aims to create very concise and readable JavaScript, allowing us to interact with it as needed.

Specifying the router

We need to write the routing behavior in the router.kt file. This will let the server know which page to load when any request hits the server. The router.kt file will look like this:

    fun router(): dynamic {
        val express = require("express")
        val router = express.Router()
        router.get("/", { req, res ->
            res.render("index")
        })
        return router
    }

This simply means that whenever a GET request with no path name reaches the server, it should display the index page to the user. We instruct the framework to use this router by writing the following line of code:

    app.use("/", router())
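Additional endpoints follow exactly the same pattern. As a hypothetical illustration (not part of the book's project), a variant of the router that also exposes a simple /health endpoint could look like this:

    external fun require(module: String): dynamic

    // Hypothetical variant of router() with an extra liveness endpoint.
    fun routerWithHealthCheck(): dynamic {
        val express = require("express")
        val router = express.Router()
        router.get("/", { req, res ->
            res.render("index")   // render the index page, as before
        })
        router.get("/health", { req, res ->
            res.send("OK")        // Express sends a plain-text response
        })
        return router
    }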
Starting the node server

Now let's create the server. We should create an app.kt file under the backend module at the backend/src/kotlin path. Refer to the source code to verify. Write the following piece of code in app.kt:

    external fun require(module: String): dynamic
    external val process: dynamic
    external val __dirname: dynamic

    fun main(args: Array<String>) {
        println("Server Starting!")
        val express = require("express")
        val app = express()
        val path = require("path")
        val http = require("http")

        /**
         * Get port from environment and store in Express.
         */
        val port = normalizePort(process.env.PORT)
        app.set("port", port)

        // view engine setup
        app.set("views", path.join(__dirname, "../../webapp"))
        app.set("view engine", "ejs")
        app.use(express.static("webapp"))

        val server = http.createServer(app)
        app.use("/", router())
        app.listen(port, {
            println("Chat app listening on port http://localhost:$port")
        })
    }

    fun normalizePort(port: Int) = if (port >= 0) port else 7000

There are multiple things to highlight here:

- external: This is basically an indicator for Kotlin that the declaration written along with it is pure JavaScript code. When this code gets compiled into the respective language, the compiler understands that the class, function, or property written along with it will be provided by the developer, and so no JavaScript code should be generated for that invocation. The external modifier is automatically applied to nested declarations. For example, consider the following code block: we declare the class as external, and automatically all its functions and properties are treated as external:

    external class Node {
        val firstChild: Node
        fun append(child: Node): Node
        fun removeChild(child: Node): Node
        // etc
    }

- dynamic: You will often see the usage of dynamic while working with JavaScript. Kotlin is a statically typed language, but it still has to interoperate with languages such as JavaScript. To support such use cases with a loosely typed or untyped programming language, dynamic is useful. It turns off Kotlin's type checker. A value of this type can be assigned to any variable or passed anywhere as a parameter, any value can be assigned to a variable of dynamic type or passed to a function that takes dynamic as a parameter, and null checks are disabled for such values.
- require("express"): We typically use Express.js with Node.js. It's a framework that goes hand in hand with Node.js and is designed with the sole purpose of developing web applications. A Node.js developer must be very familiar with it.
- process.env.PORT: This will find an available port on the server, as simple as that. This line is required if you want to deploy your application on a utility like Heroku. Also, notice the normalizePort function and see how concise it is. The if…else condition is written as an expression, so no explicit return keyword is required. The Kotlin compiler also identifies that if (port >= 0) port else 7000 will always return an Int, hence no explicit return type is required. Smart, isn't it!
- __dirname: This is always the location where your currently executing script is present. We will use it to create a path to indicate where we have kept our web pages.
- app.listen(): This is a crucial one. It starts the socket and listens for incoming requests. It takes multiple parameters; mainly, we will use the two-parameter form that takes the port number and a connection callback as arguments. The app.listen() method is identical to http.Server.listen(). In Kotlin, it takes a Lambda function.

Now, it's time to kick-start the server. Run Gradle using ./gradlew build. All Kotlin code will get compiled into Node.js code. In the Terminal, go to the root directory and execute npm start. You should be able to see the following message on your Terminal/Command Prompt.

Creating a login page

Now, let's begin with the login page. Along with that, we will have to enable some other settings in the project as well.
If you refer to the screenshot that we mentioned at the beginning of the previous section, you can make out that we will have a title, an input field, and a button as part of the login page. We will create the page using Kotlin: the entire HTML tree structure, and the CSS applied to it, will be part of our Kotlin code. For that, you should refer to the Main.kt and LoginWindow files.

Creating an index.ejs file

We will use EJS (effective JavaScript templating) to render HTML content on the page. EJS and Node.js go hand in hand. It's simple, flexible, easy to debug, and increases development speed. Initially, index.ejs will look like the following code snippet:

    <!DOCTYPE html>
    <html>
    <head>
        <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
    </head>
    <body>
    <div id="container" class="mainContainer">
    </div>
    </body>
    </html>

The <div> tag will contain all the different views, for example, the Login View, the Chat Window View, and so on.

Using DSL

DSL stands for domain-specific language. As the name indicates, it gives you the feeling as if you are writing code in a language using terminology particular to a given domain without being geeky, but then this terminology is cleverly embedded as a syntax in a powerful language. If you are from the Groovy community, you must be aware of builders. Groovy builders allow you to define data in a semi-declarative way. It's a kind of mini-language of its own. Builders are considered good for generating XML and laying out UI components. The Kotlin DSL uses Lambdas a lot. The DSL in Kotlin is a type-safe builder, which means we can detect compilation errors in IntelliJ's beautiful IDE. Type-safe builders are much better than the dynamically typed builders of Groovy.

Using kotlinx.html

The DSL to build HTML trees is a pluggable dependency. We, therefore, need to set it up and configure it for our project. We are using Gradle as a build tool, and Gradle has the best way to manage dependencies. We will define the following line of code in our build.gradle file to use kotlinx.html:

    compile("org.jetbrains.kotlinx:kotlinx-html-js:$html_version")

Gradle will automatically download this dependency from jcenter(). Build your project from the menu via Build | Build Project. You can also trigger a build from the terminal/command prompt: to build the project from the Terminal, go to the root directory of your project and then execute ./gradlew build.

Now create the index.ejs file under the webapp directory. At this moment, your index.ejs file may look like the snippet shown earlier. Inside your LoginWindow class file, you should write the following piece of code:

    class LoginWindow(val callback: (String) -> Unit) {

        fun showLogin() {
            val formContainer = document.getElementById("container") as HTMLDivElement
            val loginDiv = document.create.div {
                id = "loginDiv"
                h3(classes = "title") {
                    +"Welcome to Kotlin Blueprints chat app"
                }
                input(classes = "nickNameInput") {
                    id = "nickName"
                    onInputFunction = onInput()
                    maxLength = 16.toString()
                    placeholder = "Enter your nick name"
                }
                button(classes = "loginButton") {
                    +"Login"
                    onClickFunction = onLoginButtonClicked()
                }
            }
            formContainer.appendChild(loginDiv)
        }
    }

Observe how we have provided the ID, the input's attributes, and a placeholder value; the placeholder value is optional. Let's spend some time understanding the previous code. The div, input, button, and h3 are all nothing but functions; they are basically functions that take a Lambda as the last parameter.
You can call such functions in different ways:

    someFunction({})
    someFunction("KotlinBluePrints",1,{})
    someFunction("KotlinBluePrints",1){}
    someFunction{}

Lambda functions

Lambda functions are nothing but functions without a name. We used to call them anonymous functions. A function is basically passed into a parameter of a function call, but as an expression. They are very useful and save us a lot of time by not having to write specific functions in an abstract class or interface. Lambda usage can be as simple as the following code snippet, where it seems like we are simply binding a block to an invocation of the helloKotlin function:

    fun main(args: Array<String>) {
        val helloKotlin = { println("Hello from KotlinBlueprints team!") }
        helloKotlin()
    }

At the same time, a Lambda can be a bit complex as well, just like the following code block:

    fun <T> lock(lock: Lock, body: () -> T): T {
        lock.lock()
        try {
            return body()
        } finally {
            lock.unlock()
        }
    }

In the previous function, we are acquiring a lock before executing a function and releasing it when the function gets executed. This way, you can synchronously call a function in a multithreaded environment. So, if we have a use case where we want to execute sharedObject.someCrucialFunction() in a thread-safe environment, we will call the preceding lock function like this:

    lock(lock, { sharedObject.someCrucialFunction() })

Now, since the Lambda function is the last parameter of the function call, it can also be written like this:

    lock(lock) {
        sharedObject.someCrucialFunction()
    }

Look how expressive and easy to understand the code is. We will dig more into Lambdas in the upcoming section.

Reading the nickname

In the index.ejs page, we will have an input field with the ID nickName when it is rendered. We can simply read the value by writing the following line of code:

    val nickName = (document.getElementById("nickName") as? HTMLInputElement)?.value

However, to cover more possibilities, we have written it in a slightly different way: we have written it as if we are taking the input as an event. The following code block will continuously read the value that is entered into the nickName input field:

    private fun onInput(): (Event) -> Unit {
        return {
            val input = it.currentTarget as HTMLInputElement
            when (input.id) {
                "nickName" -> nickName = input.value
                "emailId" -> email = input.value
            }
        }
    }

Check out how we have used the when function, which is a replacement for the switch case. The preceding code will check whether the ID of the element is nickName or emailId and, based on that, it will assign the value to the respective object by reading it from the input field. In the app, we will only have the nickname as the input field, but using the preceding approach, you can read the value from multiple input fields. In its simplest form, when looks like this:

    when (x) {
        1 -> print("x == 1")
        2 -> print("x == 2")
        else -> { // Note the block
            print("x is neither 1 nor 2")
        }
    }

The when function compares its argument against all branches, top to bottom, until some branch condition is met. The when function can be used either as an expression or as a statement. The else branch is evaluated if none of the other branch conditions are satisfied. If when is used as an expression, the else branch is mandatory, unless the compiler can prove that all possible cases are covered by the branch conditions.
If many cases should be handled in the same way, the branch conditions may be combined with a comma, as shown in the following code:

    when (x) {
        0, 1 -> print("x == 0 or x == 1")
        else -> print("otherwise")
    }

The following uses arbitrary expressions (not only constants) as branch conditions:

    when (x) {
        parseInt(s) -> print("s encodes x")
        else -> print("s does not encode x")
    }

The following is used to check a value for being in or !in a range or a collection:

    when (x) {
        in 1..10 -> print("x is in the range")
        in validNumbers -> print("x is valid")
        !in 10..20 -> print("x is outside the range")
        else -> print("none of the above")
    }

Passing the nickname to the server

Once our setup is done, we are able to start the server and see the login page. It's time to pass the nickname to the server and enter the chat room. We have written a function named onLoginButtonClicked(). The body of this function should look like this:

    private fun onLoginButtonClicked(): (Event) -> Unit {
        return {
            if (!nickName.isBlank()) {
                val formContainer = document.getElementById("loginDiv") as HTMLDivElement
                formContainer.remove()
                callback(nickName)
            }
        }
    }

The preceding function does two special things:

- Smart casting
- Registering a simple callback

Smart cast

Like any other programming language, Kotlin provides class cast support. The document.getElementById() method returns an instance of the Element type. We basically want to cast it into HTMLDivElement to perform some <div>-related operations, so, using as, we cast the Element into HTMLDivElement. With the as keyword, it's unsafe casting; it's always better to use as?. On a successful cast, it will give an instance of that class; otherwise, it returns null. So, while using as?, you have to use Kotlin's null safety feature. This gives your app a great safety net.

onLoginButtonClicked can be refactored by modifying the code a bit. The following code block is the modified version of the function, with the cast changed to as? and a safe call on the result:

    private fun onLoginButtonClicked(): (Event) -> Unit {
        return {
            if (!nickName.isBlank()) {
                val formContainer = document.getElementById("loginDiv") as? HTMLDivElement
                formContainer?.remove()
                callback(nickName)
            }
        }
    }

Registering a callback

Oftentimes, we need a function to notify us when something gets done. We prefer callbacks in JavaScript. To write a click event for a button, a typical JavaScript code could look like the following:

    $("#btn_submit").click(function() {
        alert("Submit Button Clicked");
    });

With Kotlin, it's simple. Kotlin uses the Lambda function to achieve this. For the LoginWindow class, we have passed a callback as a constructor parameter. In the LoginWindow class (val callback: (String) -> Unit), the class header specifies that the constructor will take a callback as a parameter, which will be invoked with a string. To pass a callback, we will write the following line of code:

    callback(nickName)

To consume a callback, we will write code that will look like this:

    val loginWindow = LoginWindow {
        chatWindow.showChatWindow(it)
        socket.emit("add_user", it)
    }

So, when callback(nickName) is called, chatWindow.showChatWindow will get called and the nickname will be passed. In the Lambda, it is nothing but the nickname.

Establishing a socket connection

We shall be using the Socket.IO library to set up sockets between the server and the clients.
Socket.IO takes care of the following complexities:

- Setting up connections
- Sending and receiving messages to multiple clients
- Notifying clients when the connection is disconnected

Read more about Socket.IO at https://socket.io/.

Setting up Socket.IO

We have already specified the dependency for Socket.IO in our package.json file. Look at this file; it has a dependency block, which is mentioned in the following code block:

    "dependencies": {
        "ejs": "^2.5.7",
        "express": "^4.16.2",
        "kotlin": "^1.1.60",
        "socket.io": "^2.0.4"
    }

When we perform npm install, it basically downloads the socket.io.js file and keeps it inside node_modules/socket.io. We will add this JavaScript file to our index.ejs file, where we can find the following <script> tag inside the <body> tag:

    <script type="text/javascript" src="/socket.io/socket.io.js"></script>

Also, initialize the socket in the same index.ejs file like this:

    <script>
        window.socket = io();
    </script>

Listening to events

With the Socket.IO library, you should open a port and listen to requests using the following lines of code. Initially, we were directly using app.listen(), but now we will pass that function as a listener for sockets:

    val io = require("socket.io").listen(app.listen(port, {
        println("Chat app listening on port http://localhost:$port")
    }))

The server will listen to the following events and, based on those events, it will perform certain tasks:

- Listen for the successful socket connection with the client
- Listen for the new user login events
- Whenever a new user joins the chat, add them to the online users list and broadcast it to every client so that they know that a new member has joined the chat
- Listen to the request when someone sends a message
- Receive the message and broadcast it to all the clients so that the clients can receive it and show it in the chat window

Emitting the event

The Socket.IO library works on a simple principle: emit and listen. Clients emit messages and a listener listens to those messages and performs an action associated with them. So now, whenever a user successfully logs in, we will emit an event named add_user and the server will add it to the online users list. The following code line emits the message:

    socket.emit("add_user", it)

The following code snippet listens to the message and adds a user to the list:

    socket.on("add_user", { nickName ->
        socket.nickname = nickName
        numOfUsers = numOfUsers.inc()
        usersList.add(nickName as String)
    })

The socket.on function will listen to the add_user event and store the nickname in the socket.

Incrementing and decrementing operator overloading

There are a lot of things operator overloading can do, and we have used quite a few features here. Check out how we increment the count of online users:

    numOfUsers = numOfUsers.inc()

It is much more readable code compared to numOfUsers = numOfUsers + 1, numOfUsers += 1, or numOfUsers++. Similarly, we can decrement any number by using the dec() function. Operator overloading applies to the whole set of unary operators, increment-decrement operators, binary operators, and the index access operator. Read more about all of them in the Kotlin documentation.
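Operator overloading is not limited to inc() and dec(); you can also define your own operators. As a small stand-alone illustration (not part of the chat app), overloading plus for a hypothetical Message type could look like this:

    data class Message(val text: String)

    // Overload the + operator so two messages can be concatenated naturally.
    operator fun Message.plus(other: Message) = Message(this.text + " " + other.text)

    fun main() {
        val greeting = Message("Hello") + Message("Kotlin Blueprints!")
        println(greeting.text)   // prints: Hello Kotlin Blueprints!
    }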
Showing a list of online users

Now we need to show the list of online users. For this, we need to pass the list of all online users and the count of users along with it.

Using the data class

The data class is one of the most popular features among Kotlin developers. It is similar to the concept of a Model class. The compiler automatically derives the following members from all properties declared in the primary constructor:

- The equals()/hashCode() pair
- toString() of the User(name=John, age=42) form
- The componentN() functions corresponding to the properties in their order of declaration
- The copy() function

A simple version of the data class can look like the following line of code, where name and age will become properties of the class:

    data class User(val name: String, val age: Int)

With this single line and, mainly, with the data keyword, you get equals()/hashCode(), toString(), and the benefits of getters and setters by using val/var in the form of properties. What a powerful keyword!
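To see those generated members in action, here is a tiny stand-alone example (not part of the chat app) that exercises toString(), copy(), and the componentN() functions via destructuring:

    data class User(val name: String, val age: Int)

    fun main() {
        val john = User("John", 42)
        println(john)                        // toString(): User(name=John, age=42)

        val olderJohn = john.copy(age = 43)  // copy() with a single property changed
        println(olderJohn)                   // User(name=John, age=43)

        val (name, age) = john               // destructuring uses component1()/component2()
        println("$name is $age")             // John is 42
    }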
Using the Pair class

In our app, we have chosen the Pair class to demonstrate its usage. Pair is also a data class. Consider the following line of code:

    data class Pair<out A, out B> : Serializable

It represents a generic pair of two values. You can look at it as a key-value utility in the form of a class. We need to create a JSON object of the number of online users along with the list of their nicknames, and you can create such a JSON object with the help of the Pair class. Take a look at the following lines of code:

    val userJoinedData = json(Pair("numOfUsers", numOfUsers),
        Pair("nickname", nickname),
        Pair("usersList", usersList))

The preceding JSON object will look like the following piece of code in the JSON format:

    {
        "numOfUsers": 3,
        "nickname": "Hardik Trivedi",
        "usersList": [
            "Hardik Trivedi",
            "Akshay Chordiya",
            "Ashish Belagali"
        ]
    }

Iterating the list

The users list that we have passed inside the JSON object will be iterated and rendered on the page. Kotlin has a variety of ways to iterate over a list; actually, anything that implements Iterable can be represented as a sequence of elements that can be iterated. Iterators have a lot of utility functions, some of which are mentioned in the following list:

- hasNext(): This returns true if the iteration has more elements
- hasPrevious(): This returns true if there are elements in the iteration before the current element
- next(): This returns the next element in the iteration
- nextIndex(): This returns the index of the element that would be returned by a subsequent call to next()
- previous(): This returns the previous element in the iteration and moves the cursor position backward
- previousIndex(): This returns the index of the element that would be returned by a subsequent call to previous()

There are also some really useful extension functions, such as the following:

- asSequence(): This creates a sequence that returns all elements from this iterator. The sequence is constrained to be iterated only once.
- forEach(operation: (T) -> Unit): This performs the given operation on each element of this iterator.
- iterator(): This returns the given iterator itself and allows you to use an instance of the iterator in a for loop.
- withIndex(): Iterator<IndexedValue<T>>: This returns an iterator that wraps each value produced by this iterator with an IndexedValue, containing the value and its index.

We have used forEachIndexed; this gives the extracted value at the index and the index itself. Check out the way we have iterated the user list:

    fun showOnlineUsers(data: Json) {
        val onlineUsersList = document.getElementById("onlineUsersList")
        onlineUsersList?.let {
            val usersList = data["usersList"] as? Array<String>
            usersList?.forEachIndexed { index, nickName ->
                it.appendChild(getUserListItem(nickName))
            }
        }
    }

Sending and receiving messages

Now, here comes the interesting part: sending and receiving a chat message. The flow is very simple: the client will emit the new_message event, which will be consumed by the server, and the server will emit it in the form of a broadcast for the other clients. When the user clicks on Send Message, the onSendMessageClicked method will be called. It sends the value back to the view using the callback and logs the message in the chat window. After successfully sending a message, it clears the input field as well. Take a look at the following piece of code:

    private fun onSendMessageClicked(): (Event) -> Unit {
        return {
            if (chatMessage?.isNotBlank() as Boolean) {
                val formContainer = document.getElementById("chatInputBox") as HTMLInputElement
                callback(chatMessage!!)
                logMessageFromMe(nickName = nickName, message = chatMessage!!)
                formContainer.value = ""
            }
        }
    }

Null safety

We have defined chatMessage as nullable. Check out the declaration here:

    private var chatMessage: String? = null

Kotlin is, by default, null safe. This means that, in Kotlin, objects cannot be null. So, if you want an object to be able to hold null, you need to explicitly state that it is nullable. With the safe call operator ?., we can express if (obj != null) in the easiest way ever. The check chatMessage?.isNotBlank() == true can only be true if chatMessage is not null and is not blank. We also know how to use the Elvis operator while dealing with null: with its help, we can provide an alternative value if the object is null. We have used these features in our code in a number of places. The following are some of the code snippets that highlight the usage of the safe call operator (an Elvis example follows the list):

Removing the view if it is not null:

    formContainer?.remove()

Iterating over the list if it is not null:

    usersList?.forEachIndexed { _, nickName ->
        it.appendChild(getUserListItem(nickName))
    }

Appending a child if the div tag is not null:

    onlineUsersList?.appendChild(getUserListItem(data["nickName"].toString()))

Getting a list of all child nodes if the <ul> tag is not null:

    onlineUsersList?.childNodes

Checking whether the string is not null and not blank:

    chatMessage?.isNotBlank()
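The Elvis operator itself does not appear in the snippets above, so here is a small, self-contained illustration (not taken from the chat app) of providing a fallback value when something is null:

    fun greet(nickName: String?) {
        // Elvis operator: use the left-hand value if it is non-null, otherwise the fallback on the right.
        val displayName = nickName ?: "Anonymous"
        println("Welcome, $displayName!")
    }

    fun main() {
        greet("Hardik")   // prints: Welcome, Hardik!
        greet(null)       // prints: Welcome, Anonymous!
    }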
Force unwraps

Sometimes, you will face a situation where you are sure that the object will not be null at the time of accessing it. However, since you declared it nullable at the beginning, you end up using force unwraps. A force unwrap has the syntax !!. It means you fetch the value of the calling object irrespective of it being nullable. We are explicitly reading the chatMessage value to pass its value in the callback. The following is the code:

    callback(chatMessage!!)

Force unwraps are something we should avoid. We should only use them while dealing with interoperability issues; otherwise, using them is basically nothing but throwing away Kotlin's beautiful features.

Using the let function

With the help of Lambdas and extension functions, Kotlin provides yet another powerful feature in the form of the let function. The let() function helps you execute a series of steps on the calling object. This is highly useful when you want to run some code in which the calling object is used multiple times and you want to avoid a null check every time. In the following code block, the forEach loop will only get executed if onlineUsersList is not null. We can refer to the calling object inside the let function using it:

    fun showOnlineUsers(data: Json) {
        val onlineUsersList = document.getElementById("onlineUsersList")
        onlineUsersList?.let {
            val usersList = data["usersList"] as? Array<String>
            usersList?.forEachIndexed { _, nickName ->
                it.appendChild(getUserListItem(nickName))
            }
        }
    }

Named parameters

What if we told you that, while calling a function, it's not mandatory to pass the parameters in the same sequence as defined in the function signature? Believe us. With Kotlin's named parameter feature, it's no longer a constraint. Take a look at the following function, which has nickName as the first parameter and message as the second parameter:

    private fun logMessageFromMe(nickName: String, message: String) {
        val onlineUsersList = document.getElementById("chatMessages")
        val li = document.create.li {
            div(classes = "sentMessages") {
                span(classes = "chatMessage") {
                    +message
                }
                span(classes = "filledInitialsMe") {
                    +getInitials(nickName)
                }
            }
        }
        onlineUsersList?.appendChild(li)
    }

If you were to call the function as logMessageFromMe(message, nickName), it would be a blunder. However, with Kotlin, you can call a function without worrying about the sequence of the parameters. The following is the code for this:

    fun showNewMessage(data: Json) {
        logMessage(message = data["message"] as String, nickName = data["nickName"] as String)
    }

Note how the showNewMessage() function calls it, passing message as the first parameter and nickName as the second parameter.

Disconnecting a socket

Whenever any user leaves the chat room, we will show the other online users a message saying that the user left. Socket.IO will send a notification to the server when any client disconnects. Upon receiving the disconnect event, the server will remove the user from the list, decrement the count of online users, and broadcast the event to all clients. The code can look something like this:

    socket.on("disconnect", {
        usersList.remove(socket.nickname as String)
        numOfUsers = numOfUsers.dec()
        val userJoinedData = json(Pair("numOfUsers", numOfUsers),
            Pair("nickName", socket.nickname))
        socket.broadcast.emit("user_left", userJoinedData)
    })

Now, it's the client's responsibility to show the message for that event on the UI. The client will listen to the event and the showUserLeft function will be called from the ChatWindow class. The following code is used for receiving the user_left broadcast:

    socket.on("user_left", { data ->
        chatWindow.showUserLeft(data)
    })

The following displays the message with the nickname of the user who left the chat and the count of the remaining online users:

    fun showUserLeft(data: Json) {
        logListItem("${data["nickName"]} left")
        logListItem(getParticipantsMessage(data["numOfUsers"] as Int))
    }

Styling the page using CSS

We saw how to build a chat application using Kotlin, but without showing the data on a beautiful UI, the user will not like the web app. We have used some simple CSS to give a rich look to the index.ejs page. The styling code is kept inside webapp/css/styles.css. However, we have done everything so far entirely and exclusively in Kotlin, so it's better we apply the CSS using Kotlin as well. You may have already observed that there are a few mentions of classes; that is nothing but applying the CSS in a Kotlin way.
Take a look at how we have applied the classes while making HTML tree elements using the DSL:

    fun showLogin() {
        val formContainer = document.getElementById("container") as HTMLDivElement
        val loginDiv = document.create.div {
            id = "loginDiv"
            h3(classes = "title") {
                +"Welcome to Kotlin Blueprints chat app"
            }
            input(classes = "nickNameInput") {
                id = "nickName"
                onInputFunction = onInput()
                maxLength = 16.toString()
                placeholder = "Enter your nick name"
            }
            button(classes = "loginButton") {
                +"Login"
                onClickFunction = onLoginButtonClicked()
            }
        }
        formContainer.appendChild(loginDiv)
    }

We developed an entire chat application using Kotlin. If you liked this extract, read our book Kotlin Blueprints to build a REST API using Kotlin.

Read More

Top 4 chatbot development frameworks for developers
How to build a basic server-side chatbot using Go
5 reasons to choose Kotlin over Java

Building a Microsoft Power BI Data Model

Amarabha Banerjee
14 May 2018
11 min read
"The data model is what feeds and what powers Power BI." - Kasper de Jonge, Senior Program Manager, Microsoft Data models developed in Power BI Desktop are at the center of Power BI projects, as they expose the interface in support of data exploration and drive the analytical queries visualized in reports and dashboards. Well-designed data models leverage the data connectivity and transformation capabilities to provide an integrated view of distinct business processes and entities. Additionally, data models contain predefined calculations, hierarchies groupings, and metadata to greatly enhance both the analytical power of the dataset and its ease of use. The combination of, Building a Power BI data model, querying and modeling, serves as the foundation for the BI and analytical capabilities of Power BI. In this article, we explore how to design and develop robust data models. Common challenges in dimensional modeling are mapped to corresponding features and approaches in Power BI Desktop, including multiple grains and many-to-many relationships. Examples are also provided to embed business logic and definitions, develop analytical calculations with the DAX language, and configure metadata settings to increase the value and sustainability of models. [box type="note" align="" class="" width=""]Our article is an excerpt from the book Microsoft Power BI Cookbook, written by Brett Powell. This book contains powerful tutorials and techniques to help you with Data Analytics and visualization with Microsoft Power BI.[/box] Designing a multi fact data model Power BI Desktop lends itself to rapid, agile development in which significant value can be obtained quickly despite both imperfect data sources and an incomplete understanding of business requirements and use cases. However, rushing through the design phase can undermine the sustainability of the solution as future needs cannot be met without structural revisions to the model or complex workarounds. A balanced design phase in which fundamental decisions such as DirectQuery versus in-memory are analyzed while a limited prototype model is used to generate visualizations and business feedback can address both short- and long-term needs. This recipe describes a process for designing a multiple fact table data model and identifies some of the primary questions and factors to consider. Setting business expectations Everyone has seen impressive Power BI demonstrations and many business analysts have effectively used Power BI Desktop independently. These experiences may create an impression that integration, rich analytics, and collaboration can be delivered across many distinct systems and stakeholders very quickly or easily. It's important to reign in any unrealistic expectations and confirm feasibility. For example, Power BI Desktop is not an enterprise BI tool like SSIS or SSAS in terms of scalability, version control, features, and configurations. Power BI datasets cannot be incrementally refreshed like partitions in SSAS, and the current 1 GB file limit (after compression) places a hard limit on the amount of data a single model can store. Additionally, if multiple data sources are needed within the model, then DirectQuery models are not an option. Finally, it's critical to distinguish the data model as a platform supporting robust analysis of business processes, not an individual report or dashboard itself. Identify the top pain points and unanswered business questions in the current state. 
Contrast this input with an assessment of feasibility and complexity (for example, data quality and analytical needs) and target realistic and sustainable deliverables.

How to do it

Dimensional modeling best practices and star schema designs are directly applicable to Power BI data models. Short, collaborative modeling sessions can be scheduled with subject matter experts and the main stakeholders. With the design of the model in place, an informed decision about the model's data mode (Import or DirectQuery) can be made prior to development.

Four-step dimensional design process

1. Choose the business process

- The number and nature of processes to include depends on the scale of the sources and the scope of the project
- In this example, the chosen processes are Internet Sales, Reseller Sales, and General Ledger

2. Declare the granularity

For each business process (or fact) to be modeled from step 1, define the meaning of each row:

- These should be clear, concise business definitions; each fact table should only contain one grain
- Consider scalability limitations with Power BI Desktop and balance the needs between detail and history (for example, greater history but lower granularity)
- Example: one row per sales order line, one row per GL account balance per fiscal period

Separate business processes, such as plan and sales, should never be integrated into the same table. Likewise, a single fact table should not contain distinct processes such as shipping and receiving. Fact tables can be related to common dimensions but should never be related to each other in the data model (for example, PO header and line level).

3. Identify the dimensions

- These entities should have a natural relationship with the business process or event at the given granularity
- Compare each dimension with any existing dimensions and hierarchies in the organization (for example, Store); if an existing dimension applies, determine whether there's a conflict or whether additional columns are required

Be aware of the query performance implications with large, high-cardinality dimensions such as customer tables with over 2 million rows. It may be necessary to optimize this relationship in the model or in the measures and queries that use this relationship. See Chapter 11, Enhancing and Optimizing Existing Power BI Solutions, for more details.

4. Identify the facts

- These should align with the business processes being modeled: for example, the sum of a quantity or a unique count of a dimension
- Document the business and technical definition of the primary facts and compare this with any existing reports or metadata repository (for example, Net Sales = Extended Amount - Discounts)

Given steps 1-3, you should be able to walk through the top business questions and check whether the planned data model will support them. Example: "What was the variance between Sales and Plan for last month in Bikes?" Any clear gaps require modifying the earlier steps, removing the question from the scope of the data model, or a plan to address the issue with additional logic in the model (M or DAX). Focus only on the primary facts at this stage, such as the individual source columns that comprise the cost facts. If the business definition or logic for a core fact has multiple steps and conditions, check whether the data model will naturally simplify it or whether the logic can be developed in the data retrieval to avoid complex measures.

Data warehouse and implementation bus matrix

The Power BI model should preferably align with a corporate data architecture framework of standard facts and dimensions that can be shared across models.
Though consumed into Power BI Desktop, existing data definitions and governance should be observed. Any new facts, dimensions, and measures developed with Power BI should supplement this  architecture. Create a data warehouse bus matrix: A matrix of business processes (facts) and standard dimensions is a primary tool for designing and managing data models and communicating the overall BI architecture. In this example, the business processes selected for the model are Internet Sales, Reseller Sales, and General Ledger. Create an implementation bus matrix: An outcome of the model design process should include a more detailed implementation bus matrix. Clarity and approval of the grain of the fact tables, the definitions of the primary measures, and all dimensions gives confidence when entering the development phase. Power BI queries (M) and analysis logic (DAX) should not be considered a long-term substitute for issues with data quality, master data management, and the data warehouse. If it is necessary to move forward, document the "technical debts" incurred and consider long-term solutions such as Master Data Services (MDS). Choose the dataset storage mode - Import or DirectQuery With the logical design of a model in place, one of the top design questions is whether to implement this model with DirectQuery mode or with the default imported In-Memory mode. In-Memory mode The default in-memory mode is highly optimized for query performance and supports additional modeling and development flexibility with DAX functions. With compression, columnar storage, parallel query plans, and other techniques an import mode model is able to support a large amount of data (for example, 50M rows) and still perform well with complex analysis expressions. Multiple data sources can be accessed and integrated in a single data model and all DAX functions are supported for measures, columns, and role security. However, the import or refresh process must be scheduled and this is currently limited to eight refreshes per day for datasets in shared capacity (48X per day in premium capacity). As an alternative to scheduled refreshes in the Power BI service, REST APIs can be used to trigger a data refresh of a published dataset. For example, an HTTP request to a Power BI REST API calling for the refresh of a dataset can be added to the end of a nightly update or ETL process script such that published Power BI content remains aligned with the source systems. More importantly, it's not currently possible to perform an incremental refresh such as the Current Year rows of a table (for example, a table partition) or only the source rows that have changed. In-Memory mode models must maintain a file size smaller than the current limits (1 GB compressed currently, 10GB expected for Premium capacities by October 2017) and must also manage refresh schedules in the Power BI Service. Both incremental data refresh and larger dataset sizes are identified as planned capabilities of the Microsoft Power BI Premium Whitepaper (May 2017). DirectQuery mode A DirectQuery mode model provides the same semantic layer interface for users and contains the same metadata that drives model behaviors as In-Memory models. The performance of DirectQuery models, however, is dependent on the source system and how this data is presented to the model. By eliminating the import or refresh process, DirectQuery provides a means to expose reports and dashboards to source data as it changes. This also avoids the file size limit of import mode models. 
However, there are several limitations and restrictions to be aware of with DirectQuery:

Only a single database from a single, supported data source can be used in a DirectQuery model.
When deployed for widespread use, a high level of network traffic can be generated, thereby impacting performance. Power BI visualizations will need to query the source system, potentially via an on-premises data gateway.
Some DAX functions cannot be used in calculated columns or with role security. Additionally, several common DAX functions are not optimized for DirectQuery performance.
Many M query transformation functions cannot be used with DirectQuery.
MDX client applications such as Excel are supported, but less metadata (for example, hierarchies) is exposed.

Given these limitations and the importance of a "speed of thought" user experience with Power BI, DirectQuery should generally only be used on centralized and smaller projects in which visibility into updates of the source data is essential. If a supported DirectQuery system (for example, Teradata or Oracle) is available, the performance of core measures and queries should be tested. Confirm referential integrity in the source database and use the Assume Referential Integrity relationship setting in DirectQuery mode models. This will generate more efficient inner join SQL queries against the source database.

How it works

DAX formula and storage engine

Power BI datasets and SQL Server Analysis Services (SSAS) share the same database engine and architecture. Both tools support both Import and DirectQuery data models, and both DAX and MDX client applications such as Power BI (DAX) and Excel (MDX). The DAX query engine is comprised of a formula engine and a storage engine for both Import and DirectQuery models. The formula engine produces query plans, requests data from the storage engine, and performs any remaining complex logic not supported by the storage engine against this data, such as IF and SWITCH functions. In DirectQuery models, the data source database is the storage engine--it receives SQL queries from the formula engine and returns the results to the formula engine. For In-Memory models, the imported and compressed columnar memory cache is the storage engine.

We discussed building data models using Microsoft Power BI. If you liked our post, be sure to check out Microsoft Power BI Cookbook to gain more information on using Microsoft Power BI for data analysis and visualization.

Unlocking the secrets of Microsoft Power BI
Microsoft spring updates for PowerBI and PowerApps
How to build a live interactive visual dashboard in Power BI with Azure Stream
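As a practical aside to the refresh discussion in the In-Memory mode section above, the following is a minimal sketch of triggering a published dataset refresh through the Power BI REST API from Python. The dataset ID and the Azure AD access token are placeholders you would obtain from your own tenant; treat this as an illustration of the pattern rather than a drop-in script.

import requests

# Placeholders: supply your own dataset ID and an Azure AD access token
# that has permission to refresh datasets in the Power BI service.
DATASET_ID = "<your-dataset-id>"
ACCESS_TOKEN = "<azure-ad-access-token>"

# Power BI REST API endpoint for queuing a dataset refresh
url = f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/refreshes"

response = requests.post(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})

# A 202 Accepted status indicates the refresh request was queued
print(response.status_code)

A call like this can be appended to the end of a nightly ETL job so that the published dataset stays aligned with the source systems, as described above.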

article-image-this-week-on-packt-hub-11-may-2018
Aarthi Kumaraswamy
11 May 2018
3 min read
Save for later

This week on Packt Hub – 11 May 2018

May continues on a high note. Plenty of big announcements and major new releases came out of some of the biggest events in the tech world: Google I/O, Microsoft Build, and PyCon. Read about them and more in our tech news section. Here's what you may have missed in the last 7 days – tech news, insights, and tutorials…

Tech news

Conferences in focus this week
Top 5 Google I/O 2018 conference Day 1 Highlights
What we learned from Qlik Qonnections 2018
Microsoft Build 2018 Day 1: Azure meets Artificial Intelligence

Data news in depth
Microsoft Open Sources ML.NET, a cross-platform machine learning framework
Linux Foundation launches the Acumos AI Project to make AI accessible
Nvidia's Volta Tensor Core GPU hits performance milestones. But is it the best?

Development & programming news in depth
Google's Android Things, developer preview 8: First look
Put your game face on! Unity 2018.1 is now available
What's new in Vapor 3, the popular Swift based web framework
Xamarin Forms 3, the popular cross-platform UI Toolkit, is here!
Windows 10 IoT Core: What you need to know
Google Daydream powered Lenovo Mirage solo hits the market
GCC 8.1 Standards released!
Google open sources Seurat to bring high precision graphics to Mobile VR

Cloud & networking news in depth
What to expect from vSphere 6.7
What's new in Wireshark 2.6?
Get DevOps eBooks and videos while supporting charity
Microsoft's Azure Container Service (ACS) is now Azure Kubernetes Services (AKS)
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
Kali Linux 2018.2 released

Tutorials

Data tutorials
Getting Started with Automated Machine Learning (AutoML)
Analyzing CloudTrail Logs using Amazon Elasticsearch
Tensor Processing Unit (TPU) 3.0: Google's answer to cloud-ready Artificial Intelligence
Distributed TensorFlow: Working with multiple GPUs and servers
Implementing 3 Naive Bayes classifiers in scikit-learn

Development & programming tutorials

Web development tutorials
Getting started with Angular CLI and build your first Angular Component
How to implement Internationalization and localization in your Node.js app

Programming tutorials
How to install and configure TypeScript
NativeScript: What is it, and how to set it up
Building functional programs with F#
Applying Single Responsibility principle from SOLID in .NET Core
Unit Testing in .NET Core with Visual Studio 2017 for better code quality

Cloud & Networking tutorials
How to secure a private cloud using IAM
How to create your own AWS CloudTrail
How to secure ElasticCache in AWS

This week's opinions, analysis, and insights

Data Insights
Google News' AI revolution strikes balance between personalization and the bigger picture
Why Drive.ai is going to struggle to disrupt public transport
6 reasons to choose MySQL 8 for designing database solutions
Are Recurrent Neural Networks capable of warping time?

Development & Programming Insights
8 recipes to master Promises in ECMAScript 2018
Forget C and Java. Learn Kotlin: the next universal programming language

article-image-how-to-secure-elasticcache-in-aws
Savia Lobo
11 May 2018
5 min read
Save for later

How to secure ElasticCache in AWS

AWS offers services to handle the cache management process. Earlier, we were using Memcached or Redis installed on VM, which was a very complex and tough task to manage in terms of ensuring availability, patching, scalability, and security. [box type="shadow" align="" class="" width=""]This article is an excerpt taken from the book,'Cloud Security Automation'. In this book, you'll learn the basics of why cloud security is important and how automation can be the most effective way of controlling cloud security.[/box] On AWS, we have this service available as ElastiCache. This gives you the option to use any engine (Redis or Memcached) to manage your cache. It's a scalable platform that will be managed by AWS in the backend. ElastiCache provides a scalable and high-performance caching solution. It removes the complexity associated with creating and managing distributed cache clusters using Memcached or Redis. Now, let's look at how to secure ElastiCache. Secure ElastiCache in AWS For enhanced security, we deploy ElastiCache clusters inside VPC. When they are deployed inside VPC, we can use a security group and NACL to add a level of security on the communication ports at network level. Apart from this, there are multiple ways to enable security for ElastiCache. VPC-level security Using a security group at VPC—when we deploy AWS ElastiCache in VPC, it gets associated with a subnet, a security group, and the routing policy of that VPC. Here, we define a rule to communicate with the ElastiCache cluster on a specific port. ElastiCache clusters can also be accessed from on-premise applications using VPN and Direct Connect. Authentication and access control We use IAM in order to implement the authentication and access control on ElastiCache. For authentication, you can have the following identity type: Root user: It's a superuser that is created while setting up an AWS account. It has super administrator privileges for all the AWS services. However, it's not recommended to use the root user to access any of the services. IAM user: It's a user identity in your AWS account that will have a specific set of permissions for accessing the ElastiCache service. IAM role: We also can define an IAM role with a specific set of permissions and associate it with the services that want to access ElastiCache. It basically generates temporary access keys to use ElastiCache. Apart from this, we can also specify federated access to services where we have an IAM role with temporary credentials for accessing the service. To access ElastiCache, service users or services must have a specific set of permissions such as create, modify, and reboot the cluster. For this, we define an IAM policy and associate it with users or roles. 
Let's see an example of an IAM policy where users will have permission to perform system administration activity for ElastiCache cluster: { "Version": "2012-10-17", "Statement":[{ "Sid": "ECAllowSpecific", "Effect":"Allow", "Action":[ "elasticache:ModifyCacheCluster", "elasticache:RebootCacheCluster", "elasticache:DescribeCacheClusters", "elasticache:DescribeEvents", "elasticache:ModifyCacheParameterGroup", "elasticache:DescribeCacheParameterGroups", "elasticache:DescribeCacheParameters", "elasticache:ResetCacheParameterGroup", "elasticache:DescribeEngineDefaultParameters"], "Resource":"*" } ] } Authenticating with Redis authentication AWS ElastiCache also adds an additional layer of security with the Redis authentication command, which asks users to enter a password before they are granted permission to execute Redis commands on a password-protected Redis server. When we use Redis authentication, there are the following few constraints for the authentication token while using ElastiCache: Passwords must have at least 16 and a maximum of 128 characters Characters such as @, ", and / cannot be used in passwords Authentication can only be enabled when you are creating clusters with the in-transit encryption option enabled The password defined during cluster creation cannot be changed To make the policy harder or more complex, there are the following rules related to defining the strength of a password: A password must include at least three characters of the following character types: Uppercase characters Lowercase characters Digits Non-alphanumeric characters (!, &, #, $, ^, <, >, -) A password must not contain any word that is commonly used A password must be unique; it should not be similar to previous passwords Data encryption AWS ElastiCache and EC2 instances have mechanisms to protect against unauthorized access of your data on the server. ElastiCache for Redis also has methods of encryption for data run-in on Redis clusters. Here, too, you have data-in-transit and data-at-rest encryption methods. Data-in-transit encryption ElastiCache ensures the encryption of data when in transit from one location to another. ElastiCache in-transit encryption implements the following features: Encrypted connections: In this mode, SSL-based encryption is enabled for server and client communication Encrypted replication: Any data moving between the primary node and the replication node are encrypted Server authentication: Using data-in-transit encryption, the client checks the authenticity of a connection—whether it is connected to the right server Client authentication: After using data-in-transit encryption, the server can check the authenticity of the client using the Redis authentication feature Data-at-rest encryption ElastiCache for Redis at-rest encryption is an optional feature that increases data security by encrypting data stored on disk during sync and backup or snapshot operations. However, there are the following few constraints for data-at-rest encryption: It is supported only on replication groups running Redis version 3.2.6. It is not supported on clusters running Memcached. It is supported only for replication groups running inside VPC. Data-at-rest encryption is supported for replication groups running on any node type. During the creation of the replication group, you can define data-at-rest encryption. Data-at-rest encryption once enabled, cannot be disabled. 
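To make the preceding controls concrete, here is a minimal sketch, using boto3 (the AWS SDK for Python), of creating a Redis replication group with in-transit encryption, at-rest encryption, and a Redis AUTH token enabled. The identifiers, node type, subnet group, security group, and token value are placeholder assumptions, not values from the text.

import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Placeholder values -- replace with your own identifiers and a strong token
# that satisfies the AUTH token rules above (16-128 characters, no '@', '"', or '/').
response = elasticache.create_replication_group(
    ReplicationGroupId="secure-redis-demo",
    ReplicationGroupDescription="Redis with AUTH and encryption enabled",
    Engine="redis",
    EngineVersion="3.2.6",              # at-rest encryption requires Redis 3.2.6
    CacheNodeType="cache.t2.micro",
    NumCacheClusters=2,
    TransitEncryptionEnabled=True,      # data-in-transit encryption
    AtRestEncryptionEnabled=True,       # data-at-rest encryption
    AuthToken="REPLACE-WITH-A-STRONG-16-TO-128-CHAR-TOKEN",
    CacheSubnetGroupName="my-cache-subnet-group",   # subnet group inside your VPC
    SecurityGroupIds=["sg-0123456789abcdef0"],      # VPC security group
)

print(response["ReplicationGroup"]["Status"])

A client then has to connect over TLS and supply the same token, for example with redis-py: redis.Redis(host="<primary-endpoint>", port=6379, password="<auth-token>", ssl=True).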
To summarize, we learned how to secure ElastiCache and ensured security for PaaS services, such as database and analytics services. If you've enjoyed reading this article, do check out 'Cloud Security Automation' for hands-on experience of automating your cloud security and governance. How to start using AWS AWS Sydney Summit 2018 is all about IoT AWS Fargate makes Container infrastructure management a piece of cake    

article-image-install-configure-typescript
Amey Varangaonkar
11 May 2018
9 min read
Save for later

How to install and configure TypeScript

In this tutorial, we will look at the installation process of TypeScript and the editor setup for TypeScript development. Microsoft does well in providing easy-to-perform steps to install TypeScript on all platforms, namely Windows, macOS, and Linux. [box type="shadow" align="" class="" width=""]The following excerpt is taken from the book TypeScript 2.x By Example written by Sachin Ohri. This book presents hands-on examples and projects to learn the fundamental concepts of the popular TypeScript programming language.[/box] Installation of TypeScript TypeScript's official website is the best source to install the latest version. On the website, go to the Download section. There, you will find details on how to install TypeScript. Node.js and Visual Studio are the two most common ways to get it. It supports a host of other editors and has plugins available for them in the same link. We will be installing TypeScript using Node.js and using Visual Studio Code as our primary editor. You can use any editor of your choice and be able to run the applications seamlessly. If you use full-blown Visual Studio as your primary development IDE, then you can use either of the links, Visual Studio 2017 or Visual Studio 2013, to download the TypeScript SDK. Visual Studio does come with a TypeScript compiler but it's better to install it from this link so as to get the latest version. To install TypeScript using Node.js, we will use npm (node package manager), which comes with Node.js. Node.js is a popular JavaScript runtime for building and running server-side JavaScript applications. As TypeScript compiles into JavaScript, Node is an ideal fit for developing server-side applications with the TypeScript language. As mentioned on the website, just running the following command in the Terminal (on macOS) / Command Prompt (on Windows) window will install the latest version: npm install -g typescript To load any package from Node.js, the npm command starts with npm install; the -g flag identifies that we are installing the package globally. The last parameter is the name of the package that we are installing. Once it is installed, you can check the version of TypeScript by running the following command in the Terminal window: tsc -v You can use the following command to get the help for all the other options that are available with tsc: tsc -h TypeScript editors One of the outstanding features of TypeScript is its support for editors. All the editors provide support for language services, thereby providing features such as IntelliSense, statement completion, and error highlighting. If you are coming from a .NET background, then Visual Studio 2013/2015/2017 is a good option for you. Visual Studio does not require any configuration and it's easy to start using TypeScript. As we discussed earlier, just install the SDK and you are good to go. If you are from a Java background, TypeScript supports Eclipse as well. It also supports plugins for Sublime, WebStorm, and Atom, and each of these provides a rich set of features. Visual Studio Code (VS Code) is another good option for an IDE. It's a smaller, lighter version of Visual Studio and primarily used for web application development. VS Code is lightweight and cross-platform, capable of running on Windows, Linux, and macOS. It has an ever-increasing set of plugins to help you write better code, such as TSLint, a static analysis tool to help TypeScript code for readability, maintainability, and error checking. 
VS Code has a compelling case to be the default IDE for all sorts of web application development. In this post, we will briefly look at the Visual Studio and VS Code setup for TypeScript. Visual Studio Visual Studio is a full-blown IDE provided by Microsoft for all .NET based development, but now Visual Studio also has excellent support for TypeScript with built-in project templates. A TypeScript compiler is integrated into Visual Studio to allow automatic transpiling of code to JavaScript. Visual Studio also has the TypeScript language service integrated to provide IntelliSense and design-time error checking, among other things. With Visual Studio, creating a project with a TypeScript file is as simple as adding a new file with a .ts extension. Visual Studio will provide all the features out of the box. VS Code VS Code is a lightweight IDE from Microsoft used for web application development. VS Code can be installed on Windows, macOS, and Linux-based systems. VS Code can recognize the different type of code files and comes with a huge set of extensions to help in development. You can install VS Code from https://code.visualstudio.com/download. VS Code comes with an integrated TypeScript compiler, so we can start creating projects directly. The following screenshot shows a TypeScript file opened in VS Code: To run the project in VS Code, we need a task runner. VS Code includes multiple task runners which can be configured for the project, such as Gulp, Grunt, and TypeScript. We will be using the TypeScript task runner for our build. VS Code has a Command Palette which allows you to access various different features, such as Build Task, Themes, Debug options, and so on. To open the Command Palette, use Ctrl + Shift + P on a Windows machine or Cmd + Shift + P on a macOS. In the Command Palette, type Build, as shown in the following screenshot, which will show the command to build the project: When the command is selected, VS Code shows an alert, No built task defined..., as follows: We select Configure Build Task and, from all the available options as shown in the following screenshot, choose TypeScript build: This creates a new folder in your project, .vscode and a new file, task.json. This JSON file is used to create the task that will be responsible for compiling TypeScript code in VS Code. TypeScript needs another JSON file (tsconfig.json) to be able to configure compiler options. Every time we run the code, tsc will look for a file with this name and use this file to configure itself. TypeScript is extremely flexible in transpiling the code to JavaScript as per developer requirements, and this is achieved by configuring the compiler options of TypeScript. TypeScript compiler The TypeScript compiler is called tsc and is responsible for transpiling the TypeScript code to JavaScript. The TypeScript compiler is also cross-platform and supported on Windows, macOS, and Linux. To run the TypeScript compiler, there are a couple of options. One is to integrate the compiler in your editor of choice, which we explained in the previous section. In the previous section, we also integrated the TypeScript compiler with VS Code, which allowed us to build our code from the editor itself. All the compiler configurations that we would want to use are added to the tsconfig.json file. Another option is to use tsc directly from the command line / Terminal window. TypeScript's tsc command takes compiler configuration options as parameters and compiles code into JavaScript. 
For example, create a simple TypeScript file in Notepad and add the following lines of code to it. To create a file as a TypeScript file, we just need to make sure we have the file extension as *.ts:

class Editor {
    constructor(public name: string, public isTypeScriptCompatible: Boolean) {}
    details() {
        console.log('Editor: ' + this.name + ', TypeScript installed: ' + this.isTypeScriptCompatible);
    }
}

class VisualStudioCode extends Editor {
    public OSType: string;
    constructor(name: string, isTypeScriptCompatible: Boolean, OSType: string) {
        super(name, isTypeScriptCompatible);
        this.OSType = OSType;
    }
}

let VS = new VisualStudioCode('VSCode', true, 'all');
VS.details();

This is the same code example we used in the TypeScript features section of this chapter. Save this file as app.ts (you can give it any name you want, as long as the extension of the file is *.ts). In the command line / Terminal window, navigate to the path where you have saved this file and run the following command:

tsc app.ts

This command will build the code and transpile it into JavaScript. The JavaScript file is saved in the same location as the TypeScript file. If there is any build issue, tsc will show these messages on the command line only. As you can imagine, running the tsc command manually for medium- to large-scale projects is not a productive approach. Hence, we prefer to use an editor that has TypeScript integrated.

The following are the most commonly used TypeScript compiler options, along with their types. We will be discussing these in detail in upcoming chapters:

allowUnusedLabels (boolean): By default, this flag is false. This option tells the compiler to flag unused labels.
alwaysStrict (boolean): By default, this flag is false. When turned on, this will cause the compiler to compile in strict mode and emit use strict in the source file.
module (string): Specify module code generation: None, CommonJS, AMD, System, UMD, ES6, or ES2015.
moduleResolution (string): Determines how the module is resolved.
noImplicitAny (boolean): This property allows an error to be raised if there is any code which implies the data type any. This flag is recommended to be turned off if you are migrating a JavaScript project to TypeScript in an incremental manner.
noImplicitReturns (boolean): Default value is false; raises an error if not all code paths return a value.
noUnusedLocals (boolean): Reports an error if there are any unused locals in the code.
noUnusedParameters (boolean): Reports an error if there are any unused parameters in the code.
outDir (string): Redirects output structure to the directory.
outFile (string): Concatenates and emits output to a single file. The order of concatenation is determined by the list of files passed to the compiler on the command line along with triple-slash references and imports. See the output file order documentation for more details.
removeComments (boolean): Remove all comments except copyright header comments beginning with /*!.
sourceMap (boolean): Generates the corresponding .map file.
target (string): Specifies the ECMAScript target version: ES3 (default), ES5, ES6/ES2015, ES2016, ES2017, or ESNext.
watch: Runs the compiler in watch mode. Watches input files and triggers recompilation on changes.

We saw it is quite easy to set up and configure TypeScript, and we are now ready to get started with our first application! To learn more about writing and compiling your first TypeScript application, make sure you check out the book TypeScript 2.x By Example.
Introduction to TypeScript Introducing Object Oriented Programmng with TypeScript Elm and TypeScript – Static typing on the Frontend
article-image-getting-started-with-automated-machine-learning-automl
Kunal Chaudhari
10 May 2018
7 min read
Save for later

Anatomy of an automated machine learning algorithm (AutoML)

Machine learning has always been dependent on the selection of the right features within a given model; even the selection of the right algorithm. But deep learning changed this. The selection process is now built into the models themselves. Researchers and engineers are now shifting their focus from feature engineering to network engineering. Out of this, AutoML, or meta learning, has become an increasingly important part of deep learning. AutoML is an emerging research topic which aims at auto-selecting the most efficient neural network for a given learning task. In other words, AutoML represents a set of methodologies for learning how to learn efficiently. Consider, for instance, the tasks of machine translation, image recognition, or game playing. Typically, the models are manually designed by a team of engineers, data scientists, and domain experts. If you consider that a typical 10-layer network can have ~10^10 candidate networks, you understand how expensive, error prone, and ultimately sub-optimal the process can be.

This article is an excerpt from a book written by Antonio Gulli and Amita Kapoor titled TensorFlow 1.x Deep Learning Cookbook. This book is an easy-to-follow guide that lets you explore reinforcement learning, GANs, autoencoders, multilayer perceptrons and more.

AutoML with recurrent networks and with reinforcement learning

The key idea to tackle this problem is to have a controller network which proposes a child model architecture with probability p, given a particular network as input. The child is trained and evaluated for the particular task to be solved (say, for instance, that the child gets accuracy R). This evaluation R is passed back to the controller which, in turn, uses R to improve the next candidate architecture. Given this framework, it is possible to model the feedback from the candidate child to the controller as the task of computing the gradient of p and then scaling this gradient by R. The controller can be implemented as a Recurrent Neural Network (see the following figure). In doing so, the controller will tend to privilege, iteration after iteration, candidate areas of the architecture space that achieve better R and will tend to assign a lower probability to candidate areas that do not score so well.

For instance, a controller recurrent neural network can sample a convolutional network. The controller can predict many hyper-parameters such as filter height, filter width, stride height, stride width, and the number of filters for one layer, and then can repeat. Every prediction can be carried out by a softmax classifier and then fed into the next RNN time step as input. This is well expressed by the following images taken from Neural Architecture Search with Reinforcement Learning, Barret Zoph, Quoc V. Le:

Predicting hyperparameters is not enough, as it would be optimal to define a set of actions to create new layers in the network. This is particularly difficult because the reward function that describes the new layers is most likely not differentiable. This makes it impossible to optimize using standard techniques such as SGD. The solution comes from reinforcement learning: it consists of adopting a policy gradient network. Besides that, parallelism can be used for optimizing the parameters of the controller RNN. Quoc Le & Barret Zoph proposed to adopt a parameter-server scheme where we have a parameter server of S shards that stores the shared parameters for K controller replicas.
Each controller replica samples m different child architectures that are trained in parallel, as illustrated in the following images taken from Neural Architecture Search with Reinforcement Learning, Barret Zoph, Quoc V. Le:

Quoc and Barret applied AutoML techniques for Neural Architecture Search to the Penn Treebank dataset, a well-known benchmark for language modeling. Their results improve on the manually designed networks currently considered the state of the art. In particular, they achieve a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. Similarly, on the CIFAR-10 dataset, starting from scratch, the method can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. The proposed CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme.

Meta-learning blocks

In Learning Transferable Architectures for Scalable Image Recognition, the authors (Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le, 2017) propose to learn an architectural building block on a small dataset that can be transferred to a large dataset. The authors propose to search for the best convolutional layer (or cell) on the CIFAR-10 dataset and then apply this learned cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters. Precisely, all convolutional networks are made of convolutional layers (or cells) with identical structures but different weights. Searching for the best convolutional architectures is therefore reduced to searching for the best cell structures, which is faster and more likely to generalize to other problems. Although the cell is not learned directly on ImageNet, an architecture constructed from the best learned cell achieves, among the published work, state-of-the-art accuracy of 82.7 percent top-1 and 96.2 percent top-5 on ImageNet. The model is 1.2 percent better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS—a reduction of 28% from the previous state-of-the-art model. What is also important to notice is that the model learned with RNN+RL (Recurrent Neural Networks + Reinforcement Learning) beats the baseline represented by Random Search (RS), as shown in the figure taken from the paper. In the mean performance of the top-5 and top-25 models identified by RL versus RS, RL always wins:

AutoML and learning new tasks

Meta-learning systems can be trained on a large number of tasks and are then tested for their ability to learn new tasks. A famous example of this kind of meta-learning is transfer learning, where networks can successfully learn new image-based tasks from relatively small datasets. However, there is no analogous pre-training scheme for non-vision domains such as speech, language, and text. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, Chelsea Finn, Pieter Abbeel, Sergey Levine, 2017, proposes a model-agnostic approach named MAML, compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples.
The meta-learner aims at finding an initialization that adapts to various problems quickly (in a small number of steps) and efficiently (using only a few examples). Consider a model represented by a parametrized function fθ with parameters θ. When adapting to a new task Ti, the model's parameters θ become θ'i. In MAML, the updated parameter vector θ'i is computed using one or more gradient descent updates on task Ti. For example, when using one gradient update, θ'i = θ − α∇θ LTi(fθ), where LTi is the loss function for task Ti and α is the inner-loop step size (a hyperparameter of the meta-learning procedure). The MAML algorithm is reported in this figure:

MAML was able to substantially outperform a number of existing approaches on popular few-shot image classification benchmarks. Few-shot image classification is a challenging problem that aims at learning new concepts from one or a few instances of that concept. As an example, Human-level concept learning through probabilistic program induction, Brenden M. Lake, Ruslan Salakhutdinov, Joshua B. Tenenbaum, 2015, suggested that humans can learn to identify novel two-wheeled vehicles from a single picture, such as the one contained in the box as follows:

If you enjoyed this excerpt, check out the book TensorFlow 1.x Deep Learning Cookbook, to skill up and implement tricky neural networks using Google's TensorFlow 1.x.

AmoebaNets: Google's new evolutionary AutoML
AutoML: Developments and where is it heading to
What is Automated Machine Learning (AutoML)?
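To make the update rule above concrete, here is a minimal, illustrative sketch of a MAML-style training loop for a toy linear-regression task, written in plain NumPy. To keep it short it uses the first-order approximation (the second-order terms of full MAML are ignored), and everything in it (the toy task family, the single inner step, the step sizes) is an assumption made for illustration rather than the authors' reference implementation.

import numpy as np

rng = np.random.default_rng(0)

def loss_grad(theta, X, y):
    # Gradient of the mean squared error of a linear model f_theta(x) = x @ theta
    return 2.0 * X.T @ (X @ theta - y) / len(y)

def sample_task():
    # Toy task family: linear regression with a random true parameter vector
    true_theta = rng.normal(size=3)
    X = rng.normal(size=(20, 3))
    y = X @ true_theta + 0.01 * rng.normal(size=20)
    return X, y

alpha, beta = 0.01, 0.001       # inner and outer step sizes (assumed values)
theta = np.zeros(3)             # meta-initialization being learned

for meta_step in range(1000):
    meta_grad = np.zeros_like(theta)
    for _ in range(5):                          # batch of tasks Ti
        X, y = sample_task()
        X_train, y_train = X[:10], y[:10]       # support set
        X_val, y_val = X[10:], y[10:]           # query set
        # Inner update: theta_i' = theta - alpha * grad L_Ti(f_theta)
        theta_i = theta - alpha * loss_grad(theta, X_train, y_train)
        # First-order MAML: accumulate the query-set gradient at theta_i'
        meta_grad += loss_grad(theta_i, X_val, y_val)
    theta -= beta * meta_grad / 5               # outer (meta) update

After enough meta-updates, theta becomes an initialization from which a single gradient step on a handful of examples already fits a new task from this family reasonably well.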

article-image-cloudtrail-logs-amazon-elasticsearch
Vijin Boricha
10 May 2018
5 min read
Save for later

Analyzing CloudTrail Logs using Amazon Elasticsearch

Log management and analysis for many organizations start and end with just three letters: E, L, and K, which stand for Elasticsearch, Logstash, and Kibana. In today's tutorial, we will learn how to analyze CloudTrail logs using the ELK stack.

[box type="shadow" align="" class="" width=""]This tutorial is an excerpt from the book AWS Administration - The Definitive Guide - Second Edition written by Yohan Wadia. This book will help you enhance your application delivery skills with the latest AWS services while also securing and monitoring the environment workflow.[/box]

The three open-sourced products are essentially used together to aggregate, parse, search, and visualize logs at an enterprise scale:

Logstash: Logstash is primarily used as a log collection tool. It is designed to collect, parse, and store logs originating from multiple sources, such as applications, infrastructure, operating systems, tools, services, and so on.
Elasticsearch: With all the logs collected in one place, you now need a query engine to filter and search through these logs for particular events. That's exactly where Elasticsearch comes into play. Elasticsearch is basically a search server based on the popular information retrieval software library, Lucene. It provides a distributed, full-text search engine along with a RESTful web interface for querying your logs.
Kibana: Kibana is an open source data visualization plugin, used in conjunction with Elasticsearch. It provides you with the ability to create and export your logs into various visual graphs, such as bar charts, scatter graphs, pie charts, and so on.

You can easily download and install each of these components in your AWS environment, and get up and running with your very own ELK stack in a matter of hours! Alternatively, you can also leverage AWS's own Elasticsearch service! Amazon Elasticsearch is a managed ELK service that enables you to quickly deploy, operate, and scale an ELK stack as per your requirements. Using Amazon Elasticsearch, you eliminate the need for installing and managing the ELK stack's components on your own, which in the long run can be a painful experience.

For this particular use case, we will leverage a simple CloudFormation template that will essentially set up an Amazon Elasticsearch domain to filter and visualize the captured CloudTrail Log files, as depicted in the following diagram:

To get started, log in to the CloudFormation dashboard, at https://console.aws.amazon.com/cloudformation. Next, select the option Create Stack to bring up the CloudFormation template selector page. Paste http://s3.amazonaws.com/concurrencylabs-cfn-templates/cloudtrail-es-cluster/cloudtrail-es-cluster.json into the Specify an Amazon S3 template URL field, and click on Next to continue. In the Specify Details page, provide a suitable Stack name and fill out the following required parameters:

AllowedIPForEsCluster: Provide the IP address that will have access to the nginx proxy and, in turn, have access to your Elasticsearch cluster. In my case, I've provided my laptop's IP. Note that you can change this IP at a later stage, by visiting the security group of the nginx proxy once it has been created by the CloudFormation template.
CloudTrailName: Name of the CloudTrail that we set up at the beginning of this chapter.
KeyName: You can select a key-pair for obtaining SSH access to your nginx proxy instance.
LogGroupName: The name of the CloudWatch Log Group that will act as the input to our Elasticsearch cluster.
ProxyInstanceTypeParameter: The EC2 instance type for your proxy instance. Since this is a demonstration, I've opted for the t2.micro instance type. Alternatively, you can select a different instance type as well. Once done, click on Next to continue. Review the settings of your stack and hit Create to complete the process. The stack takes a good few minutes to deploy as a new Elasticsearch domain is created. You can monitor the progress of the deployment by either viewing the CloudFormation's Output tab or, alternatively, by viewing the Elasticsearch dashboard. Note that, for this deployment, a default t2.micro.elasticsearch instance type is selected for deploying Elasticsearch. You should change this value to a larger instance type before deploying the stack for production use. You can view information on Elasticsearch Supported Instance Types at http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-supported-instance-types.html. With the stack deployed successfully, copy the Kibana URL from the CloudFormation Output tab: "KibanaProxyEndpoint": "http://<NGINX_PROXY>/_plugin/kibana/" The Kibana UI may take a few minutes to load. Once it is up and running, you will need to configure a few essential parameters before you can actually proceed. Select Settings and hit the Indices option. Here, fill in the following details: Index contains time-based events: Enable this checkbox to index time-based events Use event times to create index names: Enable this checkbox as well Index pattern interval: Set the Index pattern interval to Daily from the drop-down list Index name of pattern: Type [cwl-]YYYY.MM.DD in to this field Time-field name: Select the @timestamp value from the drop-down list Once completed, hit Create to complete the process. With this, you should now start seeing logs populate on to Kibana's dashboard. Feel free to have a look around and try out the various options and filters provided by Kibana:   Phew! That was definitely a lot to cover! But wait, there's more! AWS provides yet another extremely useful governance and configuration management service AWS Config, know more from this book AWS Administration - The Definitive Guide - Second Edition. The Cloud and the DevOps Revolution Serverless computing wars: AWS Lambdas vs Azure Functions
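If you prefer to query the indexed CloudTrail events programmatically rather than through Kibana, a minimal sketch along the following lines can work against the Elasticsearch endpoint created by the stack. The endpoint URL, the daily index pattern, and the field names are placeholder assumptions based on the typical CloudTrail record structure; in practice the request also has to originate from the IP allowed by the nginx proxy.

import json
import requests

# Placeholder: the nginx proxy endpoint created by the CloudFormation stack
ES_ENDPOINT = "http://<NGINX_PROXY>"

# Search the daily cwl-YYYY.MM.DD indices for recent console login events
query = {
    "size": 10,
    "query": {"match": {"eventName": "ConsoleLogin"}},
    "sort": [{"@timestamp": {"order": "desc"}}],
}

response = requests.get(
    f"{ES_ENDPOINT}/cwl-*/_search",
    headers={"Content-Type": "application/json"},
    data=json.dumps(query),
)

for hit in response.json()["hits"]["hits"]:
    print(hit["_source"].get("eventName"), hit["_source"].get("sourceIPAddress"))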

article-image-tensor-processing-unit-tpu-3-0-cloud-ready-ai
Amey Varangaonkar
10 May 2018
5 min read
Save for later

Tensor Processing Unit (TPU) 3.0: Google’s answer to cloud-ready Artificial Intelligence

It won’t be wrong to say that the first day of the ongoing Google I/O 2018 conference was largely dominated by Artificial Intelligence. CEO Sundar Pichai didn’t hide his excitement in giving us a sneak-peek of a whole host of features across various products driven by AI, which Google plan to roll out to the masses in the coming days. One of the biggest announcements was the unveiling of the next-gen silicon chip - the Tensor Processing Unit 3.0. The TPU has been central to Google’s AI market dominance strategy since its inception in 2016, and the latest iteration of this custom-made silicon chip promises to deliver faster and more powerful machine learning capabilities. What’s new in TPU 3.0? TPU is Google’s premium AI hardware offering for its cloud platform, with the objective of making it easier to run machine learning systems in a fast, cheap and easy manner. In his I/O 2018 keynote, Sundar Pichai declared that TPU 3.0 will be 8x more powerful than its predecessor. [embed]https://www.youtube.com/watch?v=ogfYd705cRs&t=5750[/embed] A TPU 3.0 pod is expected to crunch numbers at approximately 100 petaflops, as compared to 11.5 petaflops delivered by TPU 2.0. Pichai did not comment about the precision of the processing in these benchmarks - something which can make a lot of difference in real-world applications. Not a lot of other TPU 3.0 features were disclosed. The chip is still only for Google’s use for powering their applications. It also powers the Google Cloud Platform to handle customers’ workloads. High performance - but at a cost? An important takeaway from Pichai’s announcement is that TPU 3.0 is expected to be power-hungry - so much so that Google’s data centers deploying the chips now require liquid cooling to take care of the heat dissipation problem. This is not necessarily a good thing, as the need for dedicated cooling systems will only increase as Google scale up their infrastructure. A few analysts and experts, including Patrick Moorhead, tech analyst and founder of Moor Insights & Strategy, have raised concerns about this on twitter. [embed]https://twitter.com/PatrickMoorhead/status/993905641390391296[/embed] [embed]https://twitter.com/ryanshrout/status/993906310197506048[/embed] [embed]https://twitter.com/LostInBrittany/status/993904650888724480[/embed] The TPU is keeping up with Google’s growing AI needs The evolution of the Tensor Processing Unit has been rather interesting. When TPU was initially released way back in 2016, it was just a simple math accelerator used for training models, supporting barely close to 8 software instructions. However, Google needed more computing power to keep up with their neural networks to power their applications on the cloud. TPU 2.0 supported single-precision floating calculations and added 8GB of HBM (High Bandwidth Memory) for faster, more improved performance. With 3.0, TPU have stepped up another notch, delivering the power and performance needed to process data and run their AI models effectively. The significant increase in the processing capability from 11 petaflops to more than 100 petaflops is a clear step in this direction. Optimized for Tensorflow - the most popular machine learning framework out there - it is clear that TPU 3.0 will have an important role to play as Google look to infuse AI into all their major offerings. A proof of this is some of the smart features that were announced in the conference - the smart compose option in Gmail, improved Google Assistant, Gboard, Google Duplex, and more. 
TPU 3.0 was needed, with the competition getting serious It comes as no surprise to anyone that almost all the major tech giants are investing in cloud-ready AI technology. These companies are specifically investing in hardware to make machine learning faster and more efficient, to make sense of the data at scale, and give intelligent predictions which are used to improve their operations. There are quite a few examples to demonstrate this. Facebook’s infrastructure is being optimized for the Caffe2 and Pytorch frameworks, designed to process the massive information it handles on a day to day basis. Intel have come up with their neural network processors in a bid to redefine AI. It is also common knowledge that even the cloud giants like Amazon want to build an efficient cloud infrastructure powered by Artificial Intelligence. Just a few days back, Microsoft previewed their Project Brainwave in the Build 2018 conference, claiming super-fast Artificial Intelligence capabilities which rivaled Google’s very own TPU. We can safely infer that Google needed a TPU 3.0 like hardware to join the elite list of prime enablers of Artificial Intelligence in the cloud, empowering efficient data management and processing. Check out our coverage of Google I/O 2018, for some exciting announcements on other Google products in store for the developers and Android fans. Read also AI chip wars: Is Brainwave Microsoft’s Answer to Google’s TPU? How machine learning as a service is transforming cloud The Deep Learning Framework Showdown: TensorFlow vs CNTK

article-image-secure-private-cloud-iam
Savia Lobo
10 May 2018
11 min read
Save for later

How to secure a private cloud using IAM

In this article, we look at securing the private cloud using IAM. For IAM, OpenStack uses the Keystone project. Keystone provides the identity, token, catalog, and policy services, which are used specifically by OpenStack services. It is organized as a group of internal services exposed on one or many endpoints. For example, an authentication call validates the user and project credentials with the identity service. [box type="shadow" align="" class="" width=""]This article is an excerpt from the book,'Cloud Security Automation'. In this book, you'll learn how to work with OpenStack security modules and learn how private cloud security functions can be automated for better time and cost-effectiveness.[/box] Authentication Authentication is an integral part of an OpenStack deployment and so we must be careful about the system design. Authentication is the process of confirming a user's identity, which means that a user is actually who they claim to be. For example, providing a username and a password when logging into a system. Keystone supports authentication using the username and password, LDAP, and external authentication methods. After successful authentication, the identity service provides the user with an authorization token, which is further used for subsequent service requests. Transport Layer Security (TLS) provides authentication between services and users using X.509 certificates. The default mode for TLS is server-side only authentication, but we can also use certificates for client authentication. However, in authentication, there can also be the case where a hacker is trying to access the console by guessing your username and password. If we have not enabled the policy to handle this, it can be disastrous. For this, we can use the Failed Login Policy, which states that a maximum number of attempts are allowed for a failed login; after that, the account is blocked for a certain number of hours and the user will also get a notification about it. However, the identity service provided in Keystone does not provide a method to limit access to accounts after repeated unsuccessful login attempts. For this, we need to rely on an external authentication system that blocks out an account after a configured number of failed login attempts. Then, the account might only be unlocked with further side-channel intervention, or on request, or after a certain duration. We can use detection techniques to the fullest only when we have a prevention method available to save them from damage. In the detection process, we frequently review the access control logs to identify unauthorized attempts to access accounts. During the review of access control logs, if we find any hints of a brute force attack (where the user tries to guess the username and password to log in to the system), we can define a strong username and password or block the source of the attack (IP) through firewall rules. When we define firewall rules on Keystone node, it restricts the connection, which helps to reduce the attack surface. Apart from this, reviewing access control logs also helps to examine the account activity for unusual logins and suspicious actions, so that we can take corrective actions such as disabling the account. To increase the level of security, we can also utilize MFA for network access to the privileged user accounts. Keystone supports external authentication services through the Apache web server that can provide this functionality. 
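Before moving on to the external methods, here is a minimal, hedged sketch of how a client typically authenticates against Keystone and obtains a token, using the keystoneauth1 Python library. The endpoint, credentials, project, and domain values are placeholders for your own deployment.

from keystoneauth1.identity import v3
from keystoneauth1 import session

# Placeholder values -- point these at your own Keystone endpoint and account
auth = v3.Password(
    auth_url="https://keystone.example.com:5000/v3",
    username="demo-user",
    password="demo-password",
    project_name="demo-project",
    user_domain_id="default",
    project_domain_id="default",
)

sess = session.Session(auth=auth, verify=True)  # verify=True enforces TLS certificate checks

# A scoped token is issued on first use and reused for subsequent service requests
token = sess.get_token()
print(token)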
Servers can also enforce client-side authentication using certificates. This will help to get rid of brute force and phishing attacks that may compromise administrator passwords. Authentication methods – internal and external Keystone stores user credentials in a database or may use an LDAP-compliant directory server. The Keystone identity database can be kept separate from databases used by other OpenStack services to reduce the risk of a compromise of the stored credentials. When we use the username and password to authenticate, identity does not apply policies for password strength, expiration, or failed authentication attempts. For this, we need to implement external authentication services. To integrate an external authentication system or organize an existing directory service to manage users account management, we can use LDAP. LDAP simplifies the integration process. In OpenStack authentication and authorization, the policy may be delegated to another service. For example, an organization that is going to deploy a private cloud and already has a database of employees and users in an LDAP system. Using this LDAP as an authentication authority, requests to the Identity service (Keystone) are transferred to the LDAP system, which allows or denies requests based on its policies. After successful authentication, the identity service generates a token for access to the authorized services. Now, if the LDAP has already defined attributes for the user such as the admin, finance, and HR departments, these must be mapped into roles and groups within identity for use by the various OpenStack services. We need to define this mapping into Keystone node configuration files stored at /etc/keystone/keystone.conf. Keystone must not be allowed to write to the LDAP used for authentication outside of the OpenStack Scope, as there is a chance to allow a sufficiently privileged Keystone user to make changes to the LDAP directory, which is not desirable from a security point of view. This can also lead to unauthorized access of other information and resources. So, if we have other authentication providers such as LDAP or Active Directory, then user provisioning always happens at other authentication provider systems. For external authentication, we have the following methods: MFA: The MFA service requires the user to provide additional layers of information for authentication such as a one-time password token or X.509 certificate (called MFA token). Once MFA is implemented, the user will have to enter the MFA token after putting the user ID and password in for a successful login. Password policy enforcement: Once the external authentication service is in place, we can define the strength of the user passwords to conform to the minimum standards for length, diversity of characters, expiration, or failed login attempts. Keystone also supports TLS-based client authentication. TLS client authentication provides an additional authentication factor, apart from the username and password, which provides greater reliability on user identification. It reduces the risk of unauthorized access when usernames and passwords are compromised. However, TLS-based authentication is not cost effective as we need to have a certificate for each of the clients. Authorization Keystone also provides the option of groups and roles. Users belong to groups where a group has a list of roles. All of the OpenStack services, such as Cinder, Glance, nova, and Horizon, reference the roles of the user attempting to access the service. 
OpenStack policy enforcers always consider the policy rule associated with each resource and use the user’s group or role, and their association, to determine and allow or deny the service access. Before configuring roles, groups, and users, we should document your required access control policies for the OpenStack installation. The policies must be as per the regulatory or legal requirements of the organization. Additional changes to the access control configuration should be done as per the formal policies. These policies must include the conditions and processes for creating, deleting, disabling, and enabling accounts, and for assigning privileges to the accounts. One needs to review these policies from time to time and ensure that the configuration is in compliance with the approved policies. For user creation and administration, there must be a user created with the admin role in Keystone for each OpenStack service. This account will provide the service with the authorization to authenticate users. Nova (compute) and Swift (object storage) can be configured to use the Identity service to store authentication information. For the test environment, we can have tempAuth, which records user credentials in a text file, but it is not recommended for the production environment. The OpenStack administrator must protect sensitive configuration files from unauthorized modification with mandatory access control frameworks such as SELinux or DAC. Also, we need to protect the Keystone configuration files, which are stored at /etc/keystone/keystone.conf, and also the X.509 certificates. It is recommended that cloud admin users must authenticate using the identity service (Keystone) and an external authentication service that supports two-factor authentication. Getting authenticated with two-factor authentication reduces the risk of compromised passwords. It is also recommended in the NIST guideline called NIST 800-53 IA-2(1). Which defines MFA for network access to privileged accounts, when one factor is provided by a separate device from the system being accessed. Policy, tokens, and domains In OpenStack, every service defines the access policies for its resources in a policy file, where a resource can be like an API access, it can create and attach Cinder volume, or it can create an instance. The policy rules are defined in JSON format in a file called policy.json. Only administrators can modify the service-based policy.json file, to control the access to the various resources. However, one has to also ensure that any changes to the access control policies do not unintentionally breach or create an option to breach the security of any resource. Any changes made to policy.json are applied immediately and it does not need any service restart. After a user is authenticated, a token is generated for authorization and access to an OpenStack environment. A token can have a variable lifespan, but the default value is 1 hour. It is also recommended to lower the lifespan of the token to a certain level so that within the specified timeframe the internal service can complete the task. If the token expires before task completion, the system can be unresponsive. Keystone also supports token revocation. For this, it uses an API to revoke a token and to list the revoked tokens. In OpenStack Newton release, there are four supported token types: UUID, PKI, PKIZ, and fernet. After the OpenStack Ocata release, there are two supported token types: UUID and fernet. 
We'll see all of these token types in detail here:

UUID: These are persistent tokens, 32 bytes in length, which must be persisted in the Keystone backend along with the metadata for authentication. All clients must pass their UUID token to Keystone (the Identity service) in order to validate it.

PKI and PKIZ: These are signed documents that contain the authentication content, as well as the service catalog. The difference between PKI and PKIZ is that PKIZ tokens are compressed to help mitigate the size issues of PKI (PKI tokens sometimes become very long). Both of these token types became obsolete after the Ocata release. The length of PKI and PKIZ tokens typically exceeds 1,600 bytes. The Identity service uses public and private key pairs and certificates in order to create and validate these tokens.

Fernet: These tokens are the default token provider for the OpenStack Pike release. Fernet is a secure messaging format explicitly designed for use in API tokens. Fernet tokens are nonpersistent, lightweight (in the range of 180 to 240 bytes), and reduce operational overhead. Authentication and authorization metadata is neatly bundled into a message-packed payload, which is then encrypted and signed as a fernet token.

In OpenStack, the Keystone domain is a high-level container for projects, users, and groups. Domains are used to centrally manage all Keystone-based identity components. Compute, storage, and other resources can be logically grouped into multiple projects, which can further be grouped under a master account. Users of different domains can be represented in different authentication backends and have different attributes that must be mapped to a single set of roles and privileges in the policy definitions to access the various service resources. Domain-specific authentication drivers allow the Identity service to be configured for multiple domains, using domain-specific configuration files alongside the main keystone.conf.

Federated identity

Federated identity enables you to establish trust between identity providers and the cloud environment (OpenStack Cloud). It gives you secure access to cloud resources using your existing identity, so you do not need to remember multiple credentials to access your applications. Now, the question is, what is the reason for using federated identity? This is answered as follows:

It enables your security team to manage all of the users (cloud or non-cloud) from a single identity application.

It removes the need to set up a different identity provider for every application, which would otherwise create additional workload for the security team and increase security risk.

It makes life easier for users by providing them with a single credential for all of their apps, so they can save the time they would otherwise spend on the forgot password page.

Federated identity enables you to have a single sign-on mechanism. We can implement it using SAML 2.0. To do this, you need to run the identity service provider under Apache. We learned about securing your private cloud and the authentication process therein. If you've enjoyed this article, do check out 'Cloud Security Automation' for a hands-on experience of automating your cloud security and governance. Top 5 cloud security threats to look out for in 2018 Cloud Security Tips: Locking Your Account Down with AWS Identity Access Manager (IAM)
article-image-angular-cli-build-angular-components
Amarabha Banerjee
09 May 2018
13 min read
Save for later

Getting started with Angular CLI and build your first Angular Component

When it comes to Angular development, there are some things that are good to know and some things that we need to know to embark on our great journey. One of the things that is good to know is semantic versioning. This is good to know because it is the way the Angular team has chosen to deal with changes. This will hopefully make it easier to find the right solutions to future app development challenges when you go to https://angular.io/ or Stack Overflow and other sites to search for solutions. In this tutorial, we will discuss Angular components and few practical examples to help you get real world understanding of Angular components. This article is an excerpt from the book Learning Angular Second edition, written by Christoffer Noring & Pablo Deeleman. Web components Web components is a concept that encompasses four technologies designed to be used together to build feature elements with a higher level of visual expressivity and reusability, thereby leading to a more modular, consistent, and maintainable web. These four technologies are as follows: Templates: These are pieces of HTML that structure the content we aim to render. Custom elements: These templates not only contain traditional HTML elements, but also the custom wrapper items that provide further presentation elements or API functionalities. Shadow DOM: This provides a sandbox to encapsulate the CSS layout rules and JavaScript behaviors of each custom element. HTML imports: HTML is no longer constrained to host HTML elements, but to other HTML documents as well. In theory, an Angular component is indeed a custom element that contains a template to host the HTML structure of its layout, the latter being governed by a scoped CSS style sheet encapsulated within a shadow DOM container. Let's try to rephrase this in plain English. Think of the range input control type in HTML5. It is a handy way to give our users a convenient input control for entering a value ranging between two predefined boundaries. If you have not used it before, insert the following piece of markup in a blank HTML template and load it in your browser: <input id="mySlider" type="range" min="0" max="100" step="10"> You will see a nice input control featuring a horizontal slider in your browser. Inspecting such control with the browser developer tools will unveil a concealed set of HTML tags that were not present at the time you edited your HTML template. There you have an example of shadow DOM in action, with an actual HTML template governed by its own encapsulated CSS with advanced dragging functionality. You will probably agree that it would be cool to do that yourself. Well, the good news is that Angular gives you the tool set required for delivering this very same functionality, to build our own custom elements (input controls, personalized tags, and self-contained widgets). We can feature the inner HTML markup of our choice and our very own style sheet that is not affected (nor is impacted) by the CSS of the page hosting our component. Why TypeScript over other syntaxes to code Angular apps? Angular applications can be coded in a wide variety of languages and syntaxes: ECMAScript 5, Dart, ECMAScript 6, TypeScript, or ECMAScript 7. TypeScript is a typed superset of ECMAScript 6 (also known as ECMAScript 2015) that compiles to plain JavaScript and is widely supported by modern OSes. It features a sound object-oriented design and supports annotations, decorators, and type checking. 
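As a quick, hedged illustration of what those type annotations and compile-time checks look like in practice (the interface and function names below are invented for this example):

// Type annotations let the compiler verify our intent before the code ever runs
interface Greeter {
  name: string;
}

function greet(who: Greeter): string {
  return `Hello, ${who.name}`;
}

console.log(greet({ name: 'Angular' }));
// greet({ name: 42 }); // rejected at compile time: number is not assignable to string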
The reason why we picked (and obviously recommend) TypeScript as the syntax of choice for teaching how to develop Angular applications is the fact that Angular itself is written in this language. Being proficient in TypeScript will give the developer an enormous advantage when it comes to understanding the guts of the framework. On the other hand, it is worth remarking that TypeScript's support for annotations and type introspection turns out to be paramount when it comes to managing dependency injection and type binding between components with a minimum code footprint. Check out the book, Learning Angular 2nd edition, to learn how to do this. Ultimately, you can carry out your Angular projects in plain ECMAScript 6 syntax if that is your preference. Even the examples provided in this book can be easily ported to ES6 by removing type annotations and interfaces, or replacing the way dependency injection is handled in TypeScript with the more verbose ES6 way. For the sake of brevity, we will only cover examples written in TypeScript. We recommend its usage because of its higher expressivity thanks to type annotations, and its neat way of approaching dependency injection based on type introspection out of such type annotations.

Setting up our workspace with Angular CLI

There are different ways to get started: using the Angular quickstart repository on the https://angular.io/ site, installing the scaffolding tool Angular CLI, or using Webpack to set up your project. It is worth pointing out that the standard way of creating a new Angular project is to use the Angular CLI to scaffold your project. SystemJS, used by the quickstart repository, used to be the default way of building Angular projects. Its use is now rapidly diminishing, but it is still a valid way of setting up an Angular project.

Setting up a frontend project today is more cumbersome than ever. We used to just include the necessary script tags for our JavaScript code, a link tag for our CSS, img tags for our image assets, and so on. Life used to be simple. Then frontend development became more ambitious: we started splitting up our code into modules, and we started using preprocessors for both our code and CSS. All in all, our projects became more complicated and we started to rely on build systems such as Grunt, Gulp, Webpack, and so on. Most developers out there are not huge fans of configuration; they just want to focus on building apps. Modern browsers, however, do more to support the latest ECMAScript standard, and some browsers have even started to support modules, which are resolved at runtime. This is far from being widely supported, though. In the meantime, we still have to rely on tools for bundling and module support. Setting up a project with leading frameworks such as React or Angular can be quite difficult. You need to know what libraries to import and ensure that files are processed in the correct order, which leads us to the topic of scaffolding tools. For AngularJS, it was quite popular to use Yeoman to scaffold up a new application quickly and get a lot of nice things preconfigured. React has a scaffolding tool called create-react-app, which you have probably used, and it saves countless hours for React developers. Scaffolding tools become almost a necessity as complexity grows, but also where every hour counts towards producing business value rather than fighting configuration problems.
The main motivation behind creating the Angular CLI tool was to help developers focus on app building and not so much on configuration. Essentially, with a simple command, you should be able to scaffold an application, add a new construct to it, run tests, or create a production grade bundle. Angular CLI supports all that. Prerequisites for installing Angular CLI What you need to get started is to have Git and Node.js installed. Node.js will also install something called NPM, a node package manager that you will use later to install files you need for your project. After this is done, you are ready to set up your Angular application. You can find installation files to Node.js. The easiest way to have it installed is to go to the site: Installing Node.js will also install something called NPM, Node Package Manager, which you will need to install dependencies and more. The Angular CLI requires Node 6.9.0 and NPM 3 or higher. Currently on the site, you can choose between an LTS version and the current version. The LTS version should be enough. Angular CLI Installation Installing the Angular CLI is as easy as running the following command in your Terminal: npm install -g @angular/cli On some systems, you may need to have elevated permissions to do so; in that case, run your Terminal window as an administrator and on Linux/macOS instead run the command like this: sudo npm install -g @angular/cli Building your first app Once the Angular CLI is in place the time has come to create your first project. To do so place yourself in a directory of your choice and type the following: ng new <give it a name here> Type the following: ng new TodoApp This will create a directory called TodoApp. After you have run the preceding command, there are two things you need to do to see your app in a browser: Navigate to the just created directory Serve up the application This will be accomplished by the following commands: cd TodoApp npm start At this point, open up your browser on http://localhost:4200 and you should see the following: Testing your app The Angular CLI doesn't just come with code that makes your app work. It also comes with code that sets up testing and includes a test. Running the said test is as easy as typing the following in the Terminal: You should see the following: How come this works? Let's have a look at the package.json file that was just created and the scripts tag. Everything specified here can be run using the following syntax: npm run <key> In some cases, it is not necessary to type run and it will be enough to just type: npm <key> This is the case with the start and test commands. The following listing makes it clear that it is possible to run more commands than start and test that we just learned about: "scripts": { "ng": "ng", "start": "ng serve", "build": "ng build", "test": "ng test", "lint": "ng lint", "e2e": "ng e2e" } So far we have learned how to install the Angular CLI. Using the Angular CLI we have learned to:    Scaffold a new project.    Serve up the project and see it displayed in a browser.    Run tests. That is quite an accomplishment. We will revisit the Angular CLI in a later chapter as it is a very competent tool, capable of a lot more. Hello Angular We are about to take the first trembling steps into building our first component. The Angular CLI has already scaffolded our project and thereby carried out a lot of heavy lifting. All we need to do is to create new file and starting filling it with content. The million dollar question is what to type? 
So let's venture into building our first component. There are three steps you need to take in creating a component. Those are:

   Import the Component decorator construct.
   Decorate a class with the Component decorator.
   Add the component to its module (this might be in two different places).

Creating the component

First off, let's import the Component decorator:

import { Component } from '@angular/core';

Then create the class for your component:

class AppComponent {
  title: string = 'hello app';
}

Then decorate your class using the Component decorator:

@Component({
  selector: 'app',
  template: `<h1>{{ title }}</h1>`
})
export class AppComponent {
  title: string = 'hello app';
}

We give the Component decorator, which is a function, an object literal as an input parameter. The object literal consists at this point of the selector and template keys, so let's explain what those are.

Selector

A selector is the name the component is referred to by when it is used in a template somewhere else. As we call it app, we would refer to it as:

<app></app>

Template/templateUrl

The template or templateUrl is your view. Here you can write HTML markup. Using the template keyword in our object literal means we get to define the HTML markup in the same file as the component class. Were we to use templateUrl, we would place our HTML markup in a separate file. The preceding example also uses double curly braces in the markup:

<h1>{{ title }}</h1>

This will be treated as an interpolation and the expression will be replaced with the value of AppComponent's title field. The component, when rendered, will therefore look like this:

hello app

Telling the module

Now we need to introduce a completely new concept, an Angular module. All types of constructs that you create in Angular should be registered with a module. An Angular module serves as a facade to the outside world, and it is nothing more than a class that is decorated by the decorator @NgModule. Just like the @Component decorator, the @NgModule decorator takes an object literal as an input parameter. To register our component with our Angular module, we need to give the object literal the property declarations. The declarations property is of array type, and by adding our component to that array we are registering it with the Angular module. The following code shows the creation of an Angular module and the component being registered with it by being added to the declarations keyword array (note that NgModule itself must also be imported from @angular/core):

import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent]
})
export class AppModule {}

At this point, our Angular module knows about the component. We need to add one more property to our module, bootstrap. The bootstrap keyword states that whatever is placed in here serves as the entry component for the entire application. Because we only have one component so far, it makes sense to register our component with this bootstrap keyword:

@NgModule({
  declarations: [AppComponent],
  bootstrap: [AppComponent]
})
export class AppModule {}

It's definitely possible to have more than one entry component, but the usual scenario is that there is only one. For any future components, however, we will only need to add them to the declarations property to ensure the module knows about them. So far we have created a component and an Angular module and registered the component with said module. We don't really have a working application yet, as there is one more step we need to take. We need to set up the bootstrapping.
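As a hedged sketch of that final bootstrapping step, the Angular CLI conventionally generates a main.ts file that looks roughly like the following; note that a browser application's module would also normally import BrowserModule, which the excerpt above does not show:

// main.ts - minimal sketch of bootstrapping AppModule in the browser
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app.module';

platformBrowserDynamic()
  .bootstrapModule(AppModule)
  .catch(err => console.error(err));

With this in place, the host index.html only needs to contain the <app></app> element declared by our selector, and the CLI's build pipeline wires everything together.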
To summarize, we have shown how to get started with the Angular CLI and create your first Angular component efficiently. If you are interested to know more, check out Learning Angular Second edition, to get your way through Angular and create dynamic applications with it. Building Components Using Angular Why switch to Angular for web development – Interview Insights 8 built-in Angular Pipes in Angular 4 that you should know    

article-image-microsofts-azure-container-service-acs-is-now-azure-kubernetes-services-aks
Savia Lobo
09 May 2018
2 min read
Save for later

Microsoft’s Azure Container Service (ACS) is now Azure Kubernetes Services (AKS)

At the Build 2018, Microsoft announced that its Azure Container Service (ACS), its managed Kubernetes service is now Azure Kubernetes Service (AKS), which is currently in preview and will soon be generally available. AKS is also a part of the Kubernetes Conformance Program, which is a certification program run by the Cloud Native Computing Foundation. The Azure Kubernetes Service (AKS) adds automated support for upgrades and scaling capabilities. It also includes self-healing aspects that aims to make spinning up containers on Kubernetes easier for developers. Developers now have added advantages with AKS, which include: A DevOps Project support for AKS : Now, with a few clicks developers can create a new AKS cluster, containerize their applications, deploy with a VSTS CI/CD pipeline, and view integrated App Insights telemetry with the DevOps project. New Azure Portal experience for AKS : This includes AKS create and browse experiences inside the Azure Portal, which makes it easier for cluster operators to configure and manage Kubernetes. Some features of AKS Custom VNET with Azure CNI : AKS now supports deploying Kubernetes nodes into custom VNETs using Azure CNI, with configurable IP ranges for Kubernetes networking components. Integration with Azure Monitor : AKS is now integrated directly into Azure Monitor for control plane telemetry, log aggregation, and container health monitoring. This provides operational visibility into one’s Kubernetes environment directly from the Azure portal. HTTP application routing : AKS also supports exposing public applications natively, using an Azure-integrated Kubernetes ingress controller. With this, customers can access their applications without having to configure DNS records and nameservers. Microsoft has also introduces a new Dev Spaces capability. With the AKS and the Dev Spaces, all a new developer needs is their IDE and the Azure CLI. The developers can simply create a new Dev Space inside AKS and can begin working on any component of their microservice environment safely, without impeding production traffic flows. Dev Spaces for AKS makes developing against a complex microservices environment simple. It is now available in private preview. To know more about the AKS in detail, visit Microsoft Azure Blog. Here is a quick recap of what happened at Day 1 of the Microsoft Build Conference 2018, if you are interested.   Everything you need to know about Jenkins X, the new cloud native CI/CD solution on Kubernetes Kubernetes 1.10 released The key differences between Kubernetes and Docker Swarm

article-image-create-aws-cloudtrail
Vijin Boricha
09 May 2018
8 min read
Save for later

How to create your own AWS CloudTrail

AWS provides a wide variety of tools and managed services which allow you to safeguard your applications running on the cloud, such as AWS WAF and AWS Shield. This, however, forms just one important piece of a much larger jigsaw puzzle! What about compliance monitoring, risk auditing, and overall governance of your environments? How do you effectively analyze events occurring in your environment and mitigate them? Well, luckily for us, AWS has the answer to our problems in the form of AWS CloudTrail. In today's post, we will explore AWS CloudTrail and learn how to create our own CloudTrail trail. [box type="shadow" align="" class="" width=""]This tutorial is an excerpt from the book AWS Administration - The Definitive Guide - Second Edition, written by Yohan Wadia.  This book will help you create a highly secure, fault-tolerant, and scalable Cloud environment for your applications to run on.[/box] AWS CloudTrail provides you with the ability to log every single action taken by a user, service, role, or even API, from within your AWS account. Each action recorded is treated as an event which can then be analyzed for enhancing the security of your AWS environment. The following are some of the key benefits that you can obtain by enabling CloudTrail for your AWS accounts:

In-depth visibility: Using CloudTrail, you can easily gain better insights into your account's usage by recording each user's activities, such as which user initiated a new resource creation, from which IP address was this request initiated, which resources were created and at what time, and much more!

Easier compliance monitoring: With CloudTrail, you can easily record and log events occurring within your AWS account, whether they originate from the Management Console, the AWS CLI, or other AWS tools and services. The best thing about this is that you can integrate CloudTrail with another AWS service, such as Amazon CloudWatch, to alert and respond to out-of-compliance events.

Security automations: Automating responses to security threats not only enables you to mitigate the potential threats faster, but also provides you with a mechanism to stop all further attacks. The same can be applied to AWS CloudTrail as well! With its easy integration with Amazon CloudWatch Events, you can create corresponding Lambda functions that trigger automatically each time a compliance rule is not met, all in a matter of seconds!

CloudTrail's essential concepts and terminologies

With these key points in mind, let's have a quick look at some of CloudTrail's essential concepts and terminologies:

Events

Events are the basic unit of measurement in CloudTrail. Essentially, an event is nothing more than a record of a particular activity initiated by an AWS service, role, or user. These activities are all logged as API calls that can originate from the Management Console, the AWS SDK, or even the AWS CLI. By default, events are retained by CloudTrail for a period of 7 days. You can view, search, and even download these events by leveraging the event history feature provided by CloudTrail.

Trails

Trails are essentially the delivery mechanism by which events are dumped into S3 buckets. You can use these trails to log specific events within specific buckets, as well as to filter events and encrypt the transmitted log files. By default, you can have a maximum of five trails created per AWS region, and this limit cannot be increased.
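Before walking through the console, here is a hedged AWS CLI sketch of the same idea; the trail and bucket names are placeholders, and the S3 bucket must already exist with a bucket policy that allows CloudTrail to write to it (not shown here):

# Create a trail that delivers log files to an existing S3 bucket
aws cloudtrail create-trail --name my-demo-trail --s3-bucket-name my-cloudtrail-logs-bucket

# Start recording events for the trail
aws cloudtrail start-logging --name my-demo-trail

# Verify the trail configuration and its logging status
aws cloudtrail describe-trails
aws cloudtrail get-trail-status --name my-demo-trail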
CloudTrail Logs Once your CloudTrail starts capturing events, it sends these events to an S3 bucket in the form of a CloudTrail Log file. The log files are JSON text files that are compressed using the .gzip format. Each file can contain one or more events within itself. Here is a simple representation of what a CloudTrail Log looks like. In this case, the event was created when I tried to add an existing user by the name of Mike to an administrator group using the AWS Management Console: {"Records": [{ "eventVersion": "1.0", "userIdentity": { "type": "IAMUser", "principalId": "12345678", "arn": "arn:aws:iam::012345678910:user/yohan", "accountId": "012345678910", "accessKeyId": "AA34FG67GH89", "userName": "Alice", "sessionContext": {"attributes": { "mfaAuthenticated": "false", "creationDate": "2017-11-08T13:01:44Z" }} }, "eventTime": "2017-11-08T13:09:44Z", "eventSource": "iam.amazonaws.com", "eventName": "AddUserToGroup", "awsRegion": "us-east-1", "sourceIPAddress": "127.0.0.1", "userAgent": "AWSConsole", "requestParameters": { "userName": "Mike", "groupName": "administrator" }, "responseElements": null }]} You can view your own CloudTrail Log files by visiting the S3 bucket that you specify during the trail's creation. Each log file is named uniquely using the following format: AccountID_CloudTrail_RegionName_YYYYMMDDTHHmmZ_UniqueString.json.gz Where: AccountID: Your AWS account ID. RegionName: AWS region where the event was captured: us-east-1, and so on. YYYYMMDDTTHHmmz: Specifies the year, month, day, hour (24 hours), minutes, and seconds. The z indicates time in UTC. UniqueString: A randomly generated 16-character-long string that is simply used so that there is no overwriting of the log files. With the basics in mind, let's quickly have a look at how you can get started with CloudTrail for your own AWS environments! Creating your first CloudTrail Trail To get started, log in to your AWS Management Console and filter the CloudTrail service from the AWS services filter. On the CloudTrail dashboard, select the Create Trail option to get started: This will bring up the Create Trail wizard. Using this wizard, you can create a maximum of five-trails per region. Type a suitable name for the Trail into the Trail name field, to begin with. Next, you can either opt to Apply trail to all regions or only to the region out of which you are currently operating. Selecting all regions enables CloudTrail to record events from each region and dump the corresponding log files into an S3 bucket that you specify. Alternatively, selecting to record out of one region will only capture the events that occur from the region out of which you are currently operating. In my case, I have opted to enable the Trail only for the region I'm currently working out of. In the subsequent sections, we will learn how to change this value using the AWS CLI: Next, in the Management events section, select the type of events you wish to capture from your AWS environment. By default, CloudTrail records all management events that occur within your AWS account. These events can be API operations, such as events caused due to the invocation of an EC2 RunInstances or TerminateInstances operation, or even non-API based events, such as a user logging into the AWS Management Console, and so on. For this particular use case, I've opted to record All management events. 
Selecting the Read-only option will capture all the GET API operations, whereas the Write-only option will capture only the PUT API operations that occur within your AWS environment. Moving on, in the Storage location section, provide a suitable name for the S3 bucket that will store your CloudTrail Log files. This bucket will store all your CloudTrail Log files, irrespective of the regions the logs originated from. You can alternatively select an existing bucket from the S3 bucket selection field: Next, from the Advanced section, you can optionally configure a Log file prefix. By default, the logs will automatically get stored under a folder-like hierarchy that is usually of the form AWSLogs/ACCOUNT_ID/CloudTrail/REGION. You can also opt to Encrypt log files with the help of an AWS KMS key. Enabling this feature is highly recommended for production use. Selecting Yes in the Enable log file validation field enables you to verify the integrity of the delivered log files once they are delivered to the S3 bucket. Finally, you can even enable CloudTrail to send you notifications each time a new log file is delivered to your S3 bucket by selecting Yes against the Send SNS notification for every log file delivery option. This will provide you with an additional option to either select a predefined SNS topic or alternatively create a new one specifically for this particular CloudTrail. Once all the required fields are filled in, click on Create to continue. With this, you should be able to see the newly created Trail by selecting the Trails option from the CloudTrail dashboard's navigation pane, as shown in the following screenshot: We learned to create a new trail and enable notifications each time a new log file is delivered. If you are interested to learn more about CloudTrail Logs and AWS Config you may refer to this book  AWS Administration - The Definitive Guide - Second Edition. AWS SAM (AWS Serverless Application Model) is now open source! How to run Lambda functions on AWS Greengrass AWS Greengrass brings machine learning to the edge
article-image-nativescript-set-up
Amey Varangaonkar
09 May 2018
9 min read
Save for later

NativeScript: What is it, and how to set it up

In this tutorial, we introduce you to the NativeScript library, which allows you to create and deploy a web application on a mobile device and use it like a mobile app, rather than as a web or a hybrid application. [box type="shadow" align="" class="" width=""]The following excerpt is taken from the book TypeScript 2.x By Example written by Sachin Ohri. This book presents essential techniques to leverage the power of TypeScript 2.x to build efficient web applications.[/box] What is NativeScript? NativeScript is the open source framework for building native Android and iOS applications with web technologies. This means we can develop native mobile applications with JavaScript, TypeScript, and/or Angular. It is based on the thinking of write once and run everywhere. Applications developed with NativeScript are pure mobile apps when compared to applications developed with technologies such as PhoneGap. As they are native mobile applications, we can use all the richness of the mobile platform and provide the performance associated with that. We use native APIs and use native controls to render, which allows us to create more sophisticated applications compared to a hybrid approach. Hybrid applications do not provide the same level of flexibility or performance because they are hosted on a separate framework and do not get to interact with low-level mobile APIs directly. The best part is that it does not require us to learn a new programming language, unlike developing an iOS-based application, for which you need to know Objective C or Swift. So, we can use our existing skills to develop mobile applications. NativeScript design NativeScript is a runtime that sits on top of the native mobile operating system and uses the JavaScript Virtual Machine (JVM) V8 on Android and JavaScriptCore on iOS. Having access to these platforms allows NativeScript to expose a unified API system for developers, which is then converted into the native API at runtime. This translation between the JavaScript APIs and the native platform APIs is possible through reflection, which NativeScript uses to create its own set of interfaces. Another advantage of using JavaScript by NativeScript is its independence from specific editors. You can use any of your favorite editors to develop a NativeScript application, and you will have access to all the native APIs rather than using Xcode for iOS-based apps and Android Studio for Android-based apps. Architecture The following is a high-level diagram of NativeScript and its interaction with the mobile platform: As we can see, the runtime is responsible for converting JavaScript application code to the native platform code. It has various components that work together to convert and call the native APIs. Because NativeScript uses JVM and JavaScriptCore, it has access to all the latest ECMAScript language specifications for development, which allows us to use the latest ES6 feature set. One of the main components that we need to understand in NativeScript design is modules. Modules The NativeScript team made sure that the platform was developed in a modular fashion, much like plugins, which allow us to include only the modules that we need in our development. These modules provide us with the abstraction of native APIs and allow us to write code that work on both platforms. It has separate APIs for each logical functionality. For example, if you want to use SQLite for your storage needs, there is a package for that; if you want to use a filesystem, there is a package for that. 
Let's take one example to see how these modules help us write consistent code for a multiplatform environment. If you want to access a filesystem on the native platform using NativeScript, you will write code similar to what you see in the following code snippet: var filesystem = require("file-system"); new filesystem.file(path) This code is written in pure JavaScript, which first gets a reference to a file-system module, and then, using the API of the file-system module calls a file method. This code, when executed by the NativeScript runtime, first checks the platform it wants to run on and then converts the code accordingly, as shown in the following code snippets. The Android version of the code will be as follows: new java.io.file(path) The iOS version of the code will be as follows: nsFileManager.defaultManager(); fileManager.createFileAtPathContentsAttributes(path); If you have worked on any of the mobile platforms before, you will recognize this code as using the native filesystem API to access the file path. NativeScript versus web applications Until now, we have been mentioning that we can use our web technologies to write mobile applications with the help of NativeScript. So, can we write a pure web application and use the it in runtime to create a mobile application? Yes and no. Yes, we can, and we will see with our application that we can use the same code base to write with NativeScript. No, because not all components of web applications can be directly used. NativeScript allows us to use our existing JavaScript/TypeScript and CSS skills for developing the business logic and the design for our application. But because the native platforms are not web-based and do not have a DOM, we cannot use HTML as the template for our applications. Although you will see that the extension of our template files will be HTML, the element tags will be somewhat different. To give you a brief example, it does not have UI elements such as <div> or <span>, but has elements such as <StackLayout> and <DockLayout>, which allow us to arrange our UI components. Another thing to note here is that these UI elements are then converted into native elements based on the platform. So, if we use the <Button> control in NativeScript, it will get converted into android.widget.Button on the Android platform and UIButton on iOS. Setting up your NativeScript environment NativeScript provides very good documentation about installing and setting up your development environment. You can find the documentation at https://docs.nativescript.org/angular/start/quick-setup. We will briefly go through the setup process here, but recommend that you go through the documentation to understand the process. NativeScript CLI The best way to use is through the NativeScript CLI. You can install it from npm using the following command: npm install -g nativescript This command will install the NativeScript library in your global scope. To confirm that the installation has been successful, you can try running the following command from the command-line window: tns The tns command is a short form for Telerik NativeScript, and will show the array of commands associated with it. The NativeScript CLI comes with a host of commands to assist in our development, commands such as create, which helps us create a basic startup project, and deploy, which informs the NativeScript CLI to deploy the application to the device (the device can be a connected device or an emulator). 
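For instance, a typical session with those commands might look like the following hedged sketch; the project name is arbitrary, and the --ng flag (which scaffolds an Angular-flavored template) may vary between CLI versions:

# Scaffold a new NativeScript + Angular project
tns create MyNativeApp --ng

# Deploy and run it on a connected Android device or emulator
cd MyNativeApp
tns run android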
You can check all the commands available with the NativeScript CLI by using the help command as follows:

tns --help

Installing mobile platform dependencies

To build native applications, we need to install the dependencies for those mobile platforms. It is important to remember that if we want to build a NativeScript application for iOS and run it on an iOS-compatible device, we need to use macOS; for building Android applications, we can use both Windows and macOS. NativeScript provides a single, easy-to-use script for Windows and macOS that takes care of installing all the required tools and frameworks. The script for Windows is as shown in the following code:

@powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://www.nativescript.org/setup/win'))"

The script for iOS is as shown in the following code:

ruby -e "$(curl -fsSL https://www.nativescript.org/setup/mac)"

It's important to note that these scripts require administrator-level privileges, so you may need to run them using the sudo command. NativeScript also provides a step-by-step guide to installing all these dependencies manually; details can be found at https://docs.nativescript.org/start/ns-setup-win. Once you have installed all the packages, you can check if the installation was successful by running the following command:

tns doctor

This command checks all the required prerequisites for building a NativeScript application, and if there are no issues identified, this command will return a success message, No issues were detected.

Installing an Android Virtual Device

Once you have installed all the dependencies, the next step is to install an Android emulator, which can be used for testing instead of connecting real devices. To be able to create an emulator, you need to have Android Studio on your machine. You can install Android Studio from https://developer.android.com/studio/index.html. Once you have installed Android Studio, you can check whether you have the correct Android SDK version. The NativeScript CLI needs Android SDK version 25 or higher; if you see that you do not have the required Android SDK version, then you can install it either using the following command or using the Android Studio IDE:

"%ANDROID_HOME%\tools\bin\sdkmanager" "tools" "platform-tools" "platforms;android-25" "build-tools;25.0.2" "extras;android;m2repository" "extras;google;m2repository"

To install the Android emulator, we use Android Studio, the details of which can be found at https://docs.nativescript.org/tooling/android-virtual-devices. On macOS, we need to make sure we have Xcode installed, otherwise we will not be able to run iOS-based applications. Again, you can use the tns doctor command to check if your installation was successful. And that's it! You have successfully installed and set up the NativeScript environment. Want to learn how to develop native web apps? We've got it covered. All you have to do is check out this book TypeScript 2.x By Example to create and deploy a web app as a native app in a step-by-step manner. Tools in TypeScript Introducing Object Oriented Programming with TypeScript Writing SOLID JavaScript code with TypeScript

article-image-functional-programs-with-f
Kunal Chaudhari
08 May 2018
23 min read
Save for later

Building functional programs with F#

Functional programming treats programs as mathematical expressions and evaluates those expressions. It focuses on functions and constants, which don't change, unlike variables and states. Functional programming solves complex problems with simple code; it is a very efficient programming technique for writing bug-free applications; for example, null reference exceptions can be avoided using this technique. In today's tutorial, we will learn how to build functional programs with F# that leverage .NET Core. Here are some rules to understand functional programming better:

In functional programming, a function's output never gets affected by outside code changes and the function always gives the same result for the same parameters. This gives us confidence in the function's behavior, that it will give the expected result in all scenarios, and this is helpful for multithreaded or parallel programming.

In functional programming, variables are immutable, which means we cannot modify a variable once it is initialized, so it is easy to determine the value of a variable at any given point at program runtime.

Functional programming works on referential transparency, which means it doesn't use assignment statements in a function. For example, a function assigning a new value to a variable looks like this:

public int Sum(int x)
{
  x = x + 20;
  return x;
}

This is changing the value of x, but if we write it as shown here:

public int Sum(int x)
{
  return x + 20;
}

This is not changing the variable value and the function returns the same result.

Functional programming uses recursion for looping. A recursive function calls itself and runs until the terminating condition is satisfied.

Functional programming features

Let's discuss some functional programming features:

Higher-order functions
Purity
Recursion
Currying
Closure
Function composition

Higher-order functions (HOF)

A function can take another function as an input argument, and it can also return a function. This originated from calculus and is widely used in functional programming. An order can be determined by domain and range: order 0 has no function data, order 1 has a domain and range of order 0, and if the order is higher than 1, it is called a higher-order function. For example, the ComplexCalc function takes another function as input and applies it to produce a result:

open System

let sum x = x + x
let divide x = x / x
let ComplexCalc func = func 2

printfn "%d" (ComplexCalc sum)    // 4
printfn "%d" (ComplexCalc divide) // 1

In the previous example, we created two functions, sum and divide. We pass these two functions as parameters to the ComplexCalc function, and it returns values of 4 and 1, respectively.

Purity

In functional programming, a function is referred to as a pure function if all its input arguments are known and all its output results are also well known and declared; or we can say the input and output have no side-effects. Now, you must be curious to know what a side-effect could be, so let's discuss it. Let's look at the following example:

public int Sum(int x)
{
  return x + x;
}

In the previous example, the function Sum takes an integer input and returns a predictable integer result. This kind of function is referred to as a pure function.
Let's investigate the following example: Public void verifyData() { Employee emp = OrgQueue.getEmp(); If(emp != null) { ProcessForm(emp); } } In the preceding example, the verifyData() function does not take any input parameter and does not return anything, but this function is internally calling the getEmp() function so verifyData() depends on the getEmp() function. If the output of getEmp() is not null, it calls another function, called ProcessForm() and we pass the getEmp() function output as input for ProcessForm(emp). In this example, both the functions, getEmp() and ProcessForm(), are unknown at the verifyData() function level call, also emp is a hidden value. This kind of program, which has hidden input and output, is treated as a side-effect of the program. We cannot understand what it does in such functions. This is different from encapsulation; encapsulation hides the complexity but in such function, the functionality is not clear and input and output are unreliable. These kinds of function are referred to as impure functions. Let's look at the main concepts of pure functions: Immutable data: Functional programming works on immutable data, it removes the side-effect of variable state change and gives a guarantee of an expected result. Referential transparency: Large modules can be replaced by small code blocks and reuse any existing modules. For example, if a = b*c and d = b*c*e then the value of d can be written as d = a*e. Lazy evaluation: Referential transparency and immutable data give us the flexibility to calculate the function at any given point of time and we will get the same result because a variable will not change its state at any time. Recursion In functional programming, looping is performed by recursive functions. In F#, to make a function recursive, we need to use the rec keyword. By default, functions are not recursive in F#, we have to rectify this explicitly using the rec keyword. Let's take an example: let rec summation x = if x = 0 then 0 else x + summation(x-1) printfn "The summation of first 10 integers is- %A" (summation 10) In this code, we used the keyword rec for the recursion function and if the value passed is 0, the sum would be 0; otherwise it will add x + summation(x-1), like 1+0 then 2+1 and so on. We should take care with recursion because it can consume memory heavily. Currying This converts a function with multiple input parameter to a function which takes one parameter at a time, or we can say it breaks the function into multiple functions, each taking one parameter at a time. Here is an example: int sum = (a,b) => a+b int sumcurry = (a) =>(b) => a+b sumcurry(5)(6) // 11 int sum8 = sumcurry(8) // b=> 8+b sum8(5) // 13 Closure Closure is a feature which allows us to access a variable which is not within the scope of the current module. It is a way of implementing lexically scoped named binding, for example: int add = x=> y=> x+y int addTen = add(10) addTen(5) // this will return 15 In this example, the add() function is internally called by the addTen() function. In an ideal world, the variables x and y should not be accessible when the add() function finishes its execution, but when we are calling the function addTen(), it returns 15. So, the state of the function add() is saved even though code execution is finished, otherwise, there is no way of knowing the add(10) value, where x = 10. We are able to find the value of x because of lexical scoping and this is called closure. 
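The currying and closure snippets above are written in a C#-like pseudocode; a hedged rendering of the same ideas in actual F# could look like this:

// Currying: every multi-argument F# function is curried by default
let sum a b = a + b          // int -> int -> int
let sum8 = sum 8             // partial application: int -> int
printfn "%d" (sum8 5)        // 13

// Closure: the returned function captures x from its enclosing scope
let makeAdder x = fun y -> x + y
let addTen = makeAdder 10
printfn "%d" (addTen 5)      // 15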
Function composition As we discussed earlier in HOF, function composition means getting two functions together to create a third new function where the output of a function is the input of another function. There are n number of functional programming features. Functional programming is a technique to solve problems and write code in an efficient way. It is not language-specific, but many languages support functional programming. We can also use non-functional languages (such as C#) to write programs in a functional way. F# is a Microsoft programming language for concise and declarative syntax. Getting started with F# In this section, we will discuss F# in more detail. Classes Classes are types of object which can contain functions, properties, and events. An F# class must have a parameter and a function attached to a member. Both properties and functions can use the member keyword. The following is the class definition syntax: type [access-modifier] type-name [type-params] [access-modifier] (parameter-list) [ as identifier ] = [ class ] [ inherit base-type-name(base-constructor-args) ] [ let-bindings ] [ do-bindings ] member-list [ end ] // Mutually recursive class definitions: type [access-modifier] type-name1 ... and [access-modifier] type-name2 ... Let’s discuss the preceding syntax for class declaration: type: In the F# language, class definition starts with a type keyword. access-modifier: The F# language supports three access modifiers—public, private, and internal. By default, it considers the public modifier if no other access modifier is provided. The Protected keyword is not used in the F# language, and the reason is that it will become object-oriented rather than functional programming. For example, F# usually calls a member using a lambda expression and if we make a member type protected and call an object of a different instance, it will not work. type-name: It is any of the previously mentioned valid identifiers; the default access modifier is public. type-params: It defines optional generic type parameters. parameter-list: It defines constructor parameters; the default access modifier for the primary constructor is public. identifier: It is used with the optional as keyword, the as keyword gives a name to an instance variable which can be used in the type definition to refer to the instance of the type. Inherit: This keyword allows us to specify the base class for a class. let-bindings: This is used to declare fields or function values in the context of a class. do-bindings: This is useful for the execution of code to create an object member-list: The member-list comprises extra constructors, instance and static method declarations, abstract bindings, interface declarations, and event and property declarations. Here is an example of a class: type StudentName(firstName,lastName) = member this.FirstName = firstName member this.LastName = lastName In the previous example, we have not defined the parameter type. By default, the program considers it as a string value but we can explicitly define a data type, as follows: type StudentName(firstName:string,lastName:string) = member this.FirstName = firstName member this.LastName = lastName Constructor of a class In F#, the constructor works in a different way to any other .NET language. The constructor creates an instance of a class. A parameter list defines the arguments of the primary constructor and class. The constructor contains let and do bindings, which we will discuss next. 
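To make the class definition above concrete, a small hedged usage example (the values are arbitrary) would be:

let student = StudentName("Ada", "Lovelace")
printfn "%s %s" student.FirstName student.LastName   // prints: Ada Lovelace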
We can add multiple constructors, apart from the primary constructor, using the new keyword and it must invoke the primary constructor, which is defined with the class declaration. The syntax of defining a new constructor is as shown: new (argument-list) = constructor-body Here is an example to explain the concept. In the following code, the StudentDetail class has two constructors: a primary constructor that takes two arguments and another constructor that takes no arguments: type StudentDetail(x: int, y: int) = do printfn "%d %d" x y new() = StudentDetail(0, 0) A let and do binding A let and do binding creates the primary constructor of a class and runs when an instance of a class is created. A function is compiled into a member if it has a let binding. If the let binding is a value which is not used in any function or member, then it is compiled into a local variable of a constructor; otherwise, it is compiled into a field of the class. The do expression executes the initialized code. As any extra constructors always call the primary constructor, let and do bindings always execute, irrespective of which constructor is called. Fields that are created by let bindings can be accessed through the methods and properties of the class, though they cannot be accessed from static methods, even if the static methods take an instance variable as a parameter: type Student(name) as self = let data = name do self.PrintMessage() member this.PrintMessage() = printf " Student name is %s" data Generic type parameters F# also supports a generic parameter type. We can specify multiple generic type parameters separated by a comma. The syntax of a generic parameter declaration is as follows: type MyGenericClassExample<'a> (x: 'a) = do printfn "%A" x The type of the parameter infers where it is used. In the following code, we call the MyGenericClassExample method and pass a sequence of tuples, so here the parameter type became a sequence of tuples: let g1 = MyGenericClassExample( seq { for i in 1 .. 10 -> (i, i*i) } ) Properties Values related to an object are represented by properties. In object-oriented programming, properties represent data associated with an instance of an object. The following snippet shows two types of property syntax: // Property that has both get and set defined. [ attributes ] [ static ] member [accessibility-modifier] [self- identifier.]PropertyName with [accessibility-modifier] get() = get-function-body and [accessibility-modifier] set parameter = set-function-body // Alternative syntax for a property that has get and set. [ attributes-for-get ] [ static ] member [accessibility-modifier-for-get] [self-identifier.]PropertyName = get-function-body [ attributes-for-set ] [ static ] member [accessibility-modifier-for-set] [self- identifier.]PropertyName with set parameter = set-function-body There are two kinds of property declaration: Explicitly specify the value: We should use the explicit way to implement the property if it has non-trivial implementation. We should use a member keyword for the explicit property declaration. Automatically generate the value: We should use this when the property is just a simple wrapper for a value. There are many ways of implementing an explicit property syntax based on need: Read-only: Only the get() method Write-only: Only the set() method Read/write: Both get() and set() methods An example is shown as follows: // A read-only property. member this.MyReadOnlyProperty = myInternalValue // A write-only property. 
member this.MyWriteOnlyProperty with set (value) = myInternalValue <- value // A read-write property. member this.MyReadWriteProperty with get () = myInternalValue and set (value) = myInternalValue <- value Backing stores are private values that contain data for properties. The keyword, member val instructs the compiler to create backing stores automatically and then gives an expression to initialize the property. The F# language supports immutable types, but if we want to make a property mutable, we should use get and set. As shown in the following example, the MyClassExample class has two properties: propExample1 is read-only and is initialized to the argument provided to the primary constructor, and propExample2 is a settable property initialized with a string value ".Net Core 2.0": type MyClassExample(propExample1 : int) = member val propExample1 = property1 member val propExample2 = ".Net Core 2.0" with get, set Automatically implemented properties don't work efficiently with some libraries, for example, Entity Framework. In these cases, we should use explicit properties. Static and instance properties We can further categorize properties as static or instance properties. Static, as the name suggests, can be invoked without any instance. The self-identifier is neglected by the static property while it is necessary for the instance property. The following is an example of the static property: static member MyStaticProperty with get() = myStaticValue and set(value) = myStaticValue <- value Abstract properties Abstract properties have no implementation and are fully abstract. They can be virtual. It should not be private and if one accessor is abstract all others must be abstract. The following is an example of the abstract property and how to use it: // Abstract property in abstract class. // The property is an int type that has a get and // set method [<AbstractClass>] type AbstractBase() = abstract Property1 : int with get, set // Implementation of the abstract property type Derived1() = inherit AbstractBase() let mutable value = 10 override this.Property1 with get() = value and set(v : int) = value <- v // A type with a "virtual" property. type Base1() = let mutable value = 10 abstract Property1 : int with get, set default this.Property1 with get() = value and set(v : int) = value <- v // A derived type that overrides the virtual property type Derived2() = inherit Base1() let mutable value2 = 11 override this.Property1 with get() = value2 and set(v) = value2 <- v Inheritance and casts In F#, the inherit keyword is used while declaring a class. The following is the syntax: type MyDerived(...) = inherit MyBase(...) In a derived class, we can access all methods and members of the base class, but it should not be a private member. To refer to base class instances in the F# language, the base keyword is used. Virtual methods and overrides  In F#, the abstract keyword is used to declare a virtual member. So, here we can write a complete definition of the member as we use abstract for virtual. F# is not similar to other .NET languages. Let's have a look at the following example: type MyClassExampleBase() = let mutable x = 0 abstract member virtualMethodExample : int -> int default u. virtualMethodExample (a : int) = x <- x + a; x type MyClassExampleDerived() = inherit MyClassExampleBase () override u. 
Constructors and inheritance
The base class constructor must be called from the derived class. If the base class constructor takes arguments, those arguments are supplied from the derived class's parameters. In the following example, we will see how derived class arguments are passed to the base class constructor through inheritance:

type MyClassBase2(x: int) =
    let mutable z = x * x
    do for i in 1..z do printf "%d " i

type MyClassDerived2(y: int) =
    inherit MyClassBase2(y * 2)
    do for i in 1..y do printf "%d " i

If a class has multiple constructors, such as new(str) and new(), and this class is inherited by a derived class, we can choose which base class constructor to call. For example, DerivedClass, which inherits BaseClass, has new(str1, str2); it passes the first string with inherit BaseClass(str1), while its single-argument constructor calls the parameterless one with inherit BaseClass(). Let's explore the following example in more detail:

type BaseClass =
    val string1 : string
    new (str) = { string1 = str }
    new () = { string1 = "" }

type DerivedClass =
    inherit BaseClass
    val string2 : string
    new (str1, str2) = { inherit BaseClass(str1); string2 = str2 }
    new (str2) = { inherit BaseClass(); string2 = str2 }

let obj1 = DerivedClass("A", "B")
let obj2 = DerivedClass("A")

Functions and lambda expressions
A lambda expression is a kind of anonymous function, which means it doesn't have a name attached to it. A lambda expression is created with the fun keyword, takes parameters just like a named function, and can be applied in the same way as a normal F# function. Let's compare a normal F# function and a lambda function:

// Normal F# function
let addNumbers a b = a + b
// Evaluating values
let sumResult = addNumbers 5 6

// Lambda function and evaluating values
let sumResult = (fun (a:int) (b:int) -> a + b) 5 6

// Both functions return sumResult = 11

Handling data – tuples, lists, record types, and data manipulation
F# supports many kinds of data types, for example:

Primitive types: bool, int, float, and string values
Aggregate types: class, struct, union, record, and enum
Arrays: int[], int[,], and float[,,]
Tuples: type1 * type2 * ...; for example, ('a', 1, 2, true) has the type char * int * int * bool
Generics: list<'x>, dictionary<'key, 'value>

In an F# function, we can pass one tuple instead of multiple parameters of different types. Declaring a tuple is very simple, and we can assign the values of a tuple to different variables, for example:

let tuple1 = 1, 2, 3

// assigning values to variables: v1 = 1, v2 = 2, v3 = 3
let v1, v2, v3 = tuple1

// if we want to assign only two values out of three, use "_" to skip a value.
// Assigned values: v1 = 1, v3 = 3
let v1, _, v3 = tuple1

As the preceding examples show, tuples support pattern matching. F# also provides option types; an option type expresses the idea that a value may or may not be present at runtime.
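Option types are only mentioned above without an example; as a brief, hedged illustration (the tryDivide and describe names are illustrative, not from the original text), Some and None can be pattern matched in the same way as tuples:

// An option type holds Some value or None.
let tryDivide (a: int) (b: int) =
    if b = 0 then None else Some (a / b)

// Pattern matching decomposes the option and extracts the wrapped value.
let describe result =
    match result with
    | Some value -> sprintf "Result is %d" value
    | None -> "Division by zero"

printfn "%s" (describe (tryDivide 10 2))   // Result is 5
printfn "%s" (describe (tryDivide 10 0))   // Division by zero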
List
List is a generic type. An F# list is similar to a linked list implementation in other functional languages. It has a special opening and closing bracket construct, and [ ] is the standard syntax for an empty list:

// An empty list; its type is the generic 'a list
let empty = []

// An integer list
let intList = [10; 20; 30; 40]

The cons operator (::) is used to prepend an item to a list. To append one list to another, we use the append operator (@):

// prepend item x onto the list xs
let addItem xs x = x :: xs

// add item 50 to the front of intList; the result is [50; 10; 20; 30; 40]
let newIntList = addItem intList 50

// using @ to append two lists
// result: ["hi"; "team"; "how"; "are"; "you"]
printfn "%A" (["hi"; "team"] @ ["how"; "are"; "you"])

Lists can be decomposed using pattern matching into a head and a tail, where the head is the first item in the list and the tail is the remaining list, for example:

printfn "%A" newIntList.Head
printfn "%A" newIntList.Tail
printfn "%A" newIntList.Tail.Tail.Head

let rec listLength (l: 'a list) =
    if l.IsEmpty then 0
    else 1 + (listLength l.Tail)

printfn "%d" (listLength newIntList)

Record type
The class, struct, union, record, and enum types come under aggregate types. The record type is one of them; it can have any number of members of any individual type. Record type members are immutable by default, but we can make them mutable. In general, a record type uses its members as immutable data. There is no way to execute logic during instantiation, as record types don't have constructors. A record type also supports match expressions, which can depend on the values inside the record and decompose those values for individual handling, for example:

type Box = { width: float; height: int }
let giftbox = { width = 6.2; height = 3 }

In the previous example, we declared a Box with a float width and an integer height. When we declare giftbox, the compiler automatically infers its type as Box by matching the value types. We can also specify the type explicitly, like this:

let giftbox = { Box.width = 6.2; Box.height = 3 }

or

let giftbox : Box = { width = 6.2; height = 3 }

This kind of qualified declaration is used when the same field names are declared in more than one record type. This construct is called a record expression.

Object-oriented programming in F#
F# also supports implementation inheritance and the creation of object and interface instances. In F#, constructed types are fully compatible .NET classes that support one or more constructors. We can implement a do block with code logic, which runs at the time of class instance creation. Constructed types support inheritance for building class hierarchies. We use the inherit keyword to inherit a class. If a member doesn't have an implementation, we can declare it with the abstract keyword. We need to add the AbstractClass attribute to the class to inform the compiler that it is abstract; if the AbstractClass attribute is not used and the type has only abstract members, the F# compiler automatically infers an interface type instead.

The override keyword is used to override a base class implementation; to use the base class implementation of the same member, we use the base keyword. In F#, an interface can be inherited from another interface. If a class implements an interface, it has to implement all the members of that interface as well. In general, interface members cannot be used from outside the class instance unless the instance is upcast to the required interface type.
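As a brief, hedged sketch of the keywords mentioned above (the Shape and Circle names are illustrative, not from the original text), here is an abstract class with an overridden member that reuses the base implementation through the base keyword:

[<AbstractClass>]
type Shape(name: string) =
    // Abstract member: no implementation in the abstract class.
    abstract member Area : unit -> float
    // Virtual member: abstract declaration with a default implementation.
    abstract member Describe : unit -> string
    default this.Describe() = sprintf "Shape: %s" name

type Circle(radius: float) =
    inherit Shape("circle")
    // Override of the abstract member.
    override this.Area() = System.Math.PI * radius * radius
    // Override that calls the base class implementation via the base keyword.
    override this.Describe() = base.Describe() + sprintf ", area %.2f" (this.Area())

let shape = Circle(2.0)
printfn "%s" (shape.Describe())   // Shape: circle, area 12.57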
To create an instance of a class or an interface, the object expression syntax is used. When creating a class instance this way, we need to override its virtual members, and when instantiating an interface, we need to supply implementations for its members:

type IExampleInterface =
    abstract member IntValue: int with get
    abstract member HelloString: unit -> string

type PrintValues() =
    interface IExampleInterface with
        member x.IntValue = 15
        member x.HelloString() =
            sprintf "Hello friends %d" (x :> IExampleInterface).IntValue

let example =
    let varValue = PrintValues() :> IExampleInterface
    { new IExampleInterface with
        member x.IntValue = varValue.IntValue
        member x.HelloString() =
            sprintf "<b>%s</b>" (varValue.HelloString()) }

printfn "%A" (example.HelloString())

Exception handling
The exception keyword is used to create a custom exception in F#; these exceptions adhere to Microsoft best practices, such as supplied constructors, serialization support, and so on. The raise keyword is used to throw an exception. Apart from this, F# has some helper functions: failwith throws a failure exception at F# runtime, and invalidOp and invalidArg throw the .NET Framework standard invalid operation and invalid argument exceptions, respectively.

try/with is used to catch an exception. If an exception can occur while evaluating an expression, a try/with expression can be used on the right-hand side of a value binding so that a fallback value is produced. try/with also supports pattern matching, to check an individual exception type and extract items from it. A try/finally expression, in contrast, runs its finally block regardless of how the code block completes. Let's take an example of declaring and using a custom exception:

exception MyCustomExceptionExample of int * string

raise (MyCustomExceptionExample(10, "Error!"))

In the previous example, we created a custom exception called MyCustomExceptionExample using the exception keyword, declaring the value fields we want to carry. We then used the raise keyword to raise the exception, passing the values we want to display when the exception is thrown. However, when this code runs, the custom values don't appear in the error output and only the standard exception message is displayed.

In order to display our custom error message, we need to override the standard Message property on the exception type. We use a pattern matching assignment to extract the two values, upcasting the exception because of the internal representation of the exception object. If we run the program again, we get the custom message in the exception, with the integer and string values included in the output:

exception MyCustomExceptionExample of int * string with
    override x.Message =
        let (MyCustomExceptionExample(i, s)) = upcast x
        sprintf "Int: %d Str: %s" i s

raise (MyCustomExceptionExample(20, "MyCustomErrorMessage!"))

We can also use the helper function failwith to raise a failure exception; the string we pass is included as the error message:

failwith "An error has occurred"
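The text above notes that try/with supports pattern matching on exception types, but no example is given. As a hedged sketch (the runSafely name and the calls are illustrative, not from the original text), catching the custom exception and a failwith failure could look like this:

// Bind the result of try/with to a value, matching on specific exception types.
let runSafely f =
    try
        f ()
    with
    | MyCustomExceptionExample(i, s) -> sprintf "Custom exception caught: %d %s" i s
    | Failure msg -> sprintf "Failure caught: %s" msg

printfn "%s" (runSafely (fun () -> raise (MyCustomExceptionExample(30, "Oops"))))
printfn "%s" (runSafely (fun () -> failwith "An error has occurred"))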
An example of the invalidArg helper function follows. In this factorial function, we check that the value of x is not negative. For cases where x is less than 0, we call invalidArg, passing "x" as the name of the invalid parameter together with an error message saying the value should be greater than zero. The invalidArg helper function throws an invalid argument exception from the standard System namespace in .NET:

let rec factorial x =
    if x < 0 then invalidArg "x" "Value should be greater than zero"
    match x with
    | 0 -> 1
    | _ -> x * (factorial (x - 1))

To summarize, we discussed functional programming and its features, such as higher-order functions, purity, and lazy evaluation, as well as how to write functions and lambda expressions in F#, exception handling, and so on.

You enjoyed an excerpt from a book written by Rishabh Verma and Neha Shrivastava, titled .NET Core 2.0 By Example. This book gives a detailed walkthrough of functional programming with F# and .NET Core from scratch.

What is functional reactive programming?
Functional Programming in C#