Sugandha Lahoti
19 Apr 2018
11 min read

How to get started with Azure Stream Analytics and 7 reasons to choose it

In this article, we will introduce Azure Stream Analytics and show how to configure it. We will then look at some of the key advantages of the Stream Analytics platform, including how it enhances developer productivity and reduces the Total Cost of Ownership (TCO) of building and maintaining a scalable streaming solution, among other factors.

What is Azure Stream Analytics and how does it work?

Microsoft Azure Stream Analytics falls into the category of PaaS services, where customers don't need to manage the underlying infrastructure. However, they are still responsible for the applications they build on top of the PaaS service and, more importantly, for the customer data. Azure Stream Analytics is a fully managed, serverless PaaS service built for real-time analytics computations on streaming data. The service can consume from a multitude of sources, and Azure takes care of the hosting, scaling, and management of the underlying hardware and software ecosystem.

When we design a solution that involves streaming data, in almost every case Azure Stream Analytics will be part of a larger solution that the customer is trying to deploy. This can be real-time dashboarding for monitoring purposes, real-time monitoring of IT infrastructure equipment, preventive maintenance (auto manufacturing, vending machines, and so on), or fraud detection. This means that the streaming solution needs to provide out-of-the-box integration with a whole plethora of services that can help build a solution relatively quickly. Let's review a usage pattern for Azure Stream Analytics using a canonical model: devices and applications that generate data (on the left in the illustration) connect directly, or through cloud gateways, to your stream ingest sources.
Azure Stream Analytics can pick up the data from these ingest sources, augment it with reference data, run the necessary analytics, gather insights, and push them downstream for action. You can trigger business processes, write the data to a database, or view anomalies directly on a dashboard. In this canonical pattern, a number of streaming ingest technologies are used; let's review them in the following section:

Event Hub: A global-scale event ingestion system to which you can publish events from millions of sensors and applications. It guarantees that as soon as an event comes in, a subscriber can pick it up within a few milliseconds, and you can have one or more subscribers depending on your business requirements. Typical use cases for an Event Hub are real-time financial fraud detection and social media sentiment analytics.

IoT Hub: IoT Hub is very similar to Event Hub but takes the concept further, in that it supports bidirectional communication. It not only ingests data from sensors in real time but can also send commands back to them, and it enables capabilities such as device management. Fundamental aspects such as security, a primary need for IoT, are built in.

Azure Blob: Azure Blob is massively scalable object storage for unstructured data, accessible through HTTP or HTTPS. Blob storage can expose data publicly to the world or store application data privately.

Reference Data: This is auxiliary data that is either static or changes slowly. Reference data can be used to enrich incoming data to perform correlations and lookups.

On the ingress side, with a few clicks, you can connect to Event Hub, IoT Hub, or Blob storage, and the streaming data can be enriched with reference data in the Blob store. Data from the ingress process is consumed by the Azure Stream Analytics service, which can call machine learning (ML) models for event scoring in real time.
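The reference-data enrichment step described above can be sketched in a few lines of Python. This is only an illustrative stand-in for the real pipeline: the field names (device_id, value, threshold) and the in-memory dict playing the role of Blob-hosted reference data are hypothetical.

```python
# Streaming events are joined against slowly-changing reference data
# (here, a plain dict standing in for a Blob-hosted lookup table).

REFERENCE = {  # device_id -> static metadata, loaded once from reference storage
    "sensor-1": {"site": "Plant A", "threshold": 75.0},
    "sensor-2": {"site": "Plant B", "threshold": 60.0},
}

def enrich(event, reference):
    """Attach reference attributes to an incoming event, if known."""
    meta = reference.get(event["device_id"], {})
    enriched = {**event, **meta}
    # Flag events whose value exceeds the per-device reference threshold.
    enriched["over_threshold"] = event["value"] > meta.get("threshold", float("inf"))
    return enriched

events = [
    {"device_id": "sensor-1", "value": 80.2},
    {"device_id": "sensor-2", "value": 55.1},
]
enriched = [enrich(e, REFERENCE) for e in events]
```

In the real service the lookup would be declared as a Reference Data input and the comparison expressed in the SQL-like query language, but the correlation-and-lookup semantics are the same.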
The data can be egressed to live dashboards in Power BI, or pushed back to Event Hub, from where dashboards and reports can pick it up. The following is a summary of the ingress, egress, and archiving options:

Ingress choices: Event Hub, IoT Hub, Blob storage

Egress choices:
Live dashboards: Power BI, Event Hub
Driving workflows: Event Hubs, Service Bus
Archiving and post analysis: Blob storage, Document DB, Data Lake, SQL Server, Table storage, Azure Functions

One key point to note is the number of customers who push data from Stream Analytics processing (the egress point) to Event Hub and then add an Azure-hosted website as their own custom dashboard. You can drive workflows by pushing events to Azure Service Bus and Power BI. For example, a customer can build an IoT support solution that detects anomalies in connected appliances and pushes the results into Azure Service Bus. A worker role can run as a daemon to pull the messages and create support tickets using the Dynamics CRM API, and Power BI can then be used on the archived tickets for post analysis. This solution eliminates the need for the customer to log a ticket manually; the system does it automatically based on predefined anomaly thresholds. This is just one sample of a real-time connected solution. There are a number of use cases that don't even involve real-time alerts: you can also use the service to aggregate and filter data, store it in Blob storage, Azure Data Lake (ADL), Document DB, or SQL, and then run Azure Data Lake Analytics (ADLA) U-SQL, HDInsight, or even call ML models for things like predictive maintenance.

Configuring Azure Stream Analytics

Azure Stream Analytics (ASA) is a fully managed, cost-effective, real-time event processing engine. Stream Analytics makes it easy to set up real-time analytic computations on data streaming from devices, sensors, websites, social media, applications, infrastructure systems, and more.
The service can be hosted with a few clicks in the Azure portal; users author a Stream Analytics job by specifying the input source of the streaming data, the output sink for the results, and a data transformation expressed in a SQL-like language. Jobs can be monitored, and you can adjust their scale/speed in the Azure portal, scaling from a few kilobytes to a gigabyte or more of events processed per second. Let's review how to configure Azure Stream Analytics step by step:

1. Log in to the Azure portal using your Azure credentials, click on New, and search for Stream Analytics job.
2. Click on Create to create an Azure Stream Analytics instance.
3. Provide a Job Name and Resource group name for the Azure Stream Analytics job deployment.
4. After a few minutes, the deployment will be complete.
5. Review the deployment's audit trail of the creation.
6. Scale the job up and down using a simple UI.
7. Use the built-in Query interface to run queries.
8. Run queries using a SQL-like interface, with the ability to accept late-arriving events via simple GUI-based configuration.

Key advantages of Azure Stream Analytics

Let's quickly review how traditional streaming solutions are built. The core deployment starts with procuring and setting up the basic infrastructure necessary to host the streaming solution. Once this is done, the ingress and egress solution is built on top of the deployed infrastructure. Once the core infrastructure is built, customer tools are used to build business intelligence (BI) or machine-learning integration. After the system goes into production, scaling during runtime needs to be handled by capturing telemetry and building and configuring hardware/software resources as necessary. As business needs ramp up, so do monitoring and troubleshooting.
Security

Azure Stream Analytics provides a number of built-in security mechanisms in areas such as authentication, authorization, auditing, segmentation, and data protection. Let's quickly review them.

Authentication support: Authentication in Azure Stream Analytics is done at the portal level. Users need a valid subscription ID and password to access the Azure Stream Analytics job. Authentication is the process during login where users provide their credentials (for example, user account name and password; smart card and PIN; SecurID and PIN; and so on) to prove their identity so that they can retrieve an access token from the authentication server.

Authorization: Authorization determines what an authenticated identity is allowed to do, and it is supported by Azure Stream Analytics; only authenticated and authorized users can access the Azure Stream Analytics job.

Support for encryption: Data at rest can be protected using client-side encryption and TDE.

Support for key management: Key management is supported through the ingress and egress points.

Programmer productivity

One of the key features of Azure Stream Analytics is developer productivity, driven largely by a query language based on SQL constructs. It provides a wide array of functions for analytics on streaming data, from simple data manipulation functions to date and time functions, temporal functions, mathematical and string functions, scaling extensions, and much more. It provides two features natively out of the box; let's review them in detail in the next section:

Declarative SQL constructs
Built-in temporal semantics

Declarative SQL constructs

A simple-to-use UI is provided, and queries can be constructed using the provided user interface.
The following is the feature set of the declarative SQL constructs:

Filters (Where)
Projections (Select)
Time-window and property-based aggregates (Group By)
Time-shifted joins (specifying time bounds within which the joining events must occur)
All combinations thereof

The following is a summary of the different constructs for manipulating streaming data:

Data manipulation: SELECT, FROM, WHERE, GROUP BY, HAVING, CASE WHEN THEN ELSE, INNER/LEFT OUTER JOIN, UNION, CROSS/OUTER APPLY, CAST, INTO, ORDER BY ASC, DESC
Date and time functions: DateName, DatePart, Day, Month, Year, DateDiff, DateTimeFromParts, DateAdd
Temporal functions: Lag, IsFirst, Last, CollectTop
Aggregate functions: SUM, COUNT, AVG, MIN, MAX, STDEV, STDEVP, VAR, VARP, TopOne
Mathematical functions: ABS, CEILING, EXP, FLOOR, POWER, SIGN, SQUARE, SQRT
String functions: Len, Concat, CharIndex, Substring, Lower, Upper, PatIndex
Scaling extensions: WITH, PARTITION BY, OVER
Geospatial: CreatePoint, CreatePolygon, CreateLineString, ST_DISTANCE, ST_WITHIN, ST_OVERLAPS, ST_INTERSECTS

Built-in temporal semantics

Azure Stream Analytics provides prebuilt temporal semantics to query time-based information and merge streams with multiple timelines. Here is a list of the temporal semantics:

Application or ingest timestamp
Windowing functions
Policies for event ordering
Policies to manage latencies between ingress sources
Manage streams with multiple timelines
Join multiple streams of temporal windows
Join streaming data with data-at-rest

Lowest total cost of ownership

Azure Stream Analytics is a fully managed PaaS service on Azure. There are no upfront costs or costs involved in setting up compute clusters and complex hardware wiring as there would be with an on-premises solution. It's a simple job service with no cluster provisioning, and customers pay for what they use. A key consideration is variable workloads.
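To make the windowed aggregates concrete, here is a minimal Python sketch of what a tumbling-window GROUP BY does: events fall into fixed-length, non-overlapping time slices and are aggregated per slice. The event shape and the two-second window are illustrative assumptions, not ASA's API.

```python
from collections import defaultdict

def tumbling_window_avg(events, window_seconds):
    """Group events into fixed, non-overlapping windows and average `value`.

    Each event is a dict with a `timestamp` (seconds) and a `value`; the
    window an event falls into is simply timestamp // window_seconds.
    """
    buckets = defaultdict(list)
    for e in events:
        buckets[int(e["timestamp"] // window_seconds)].append(e["value"])
    return {w: sum(vs) / len(vs) for w, vs in sorted(buckets.items())}

events = [
    {"timestamp": 0.5, "value": 10.0},
    {"timestamp": 1.9, "value": 20.0},   # same 2-second window as the first
    {"timestamp": 2.1, "value": 30.0},   # falls into the next window
]
averages = tumbling_window_avg(events, window_seconds=2)
```

In ASA the same grouping would be expressed declaratively with GROUP BY and TumblingWindow(second, 2), with the engine handling event ordering and late arrivals.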
With Azure Stream Analytics, you do not need to design your system for peak throughput; you can add more compute footprint as you go. If you have scenarios where data comes in spurts, you do not want to design a system for peak usage and leave it underutilized the rest of the time. Say you are building a traffic monitoring solution: naturally, you expect peaks during the morning and evening rush hours, but you would not want to size your system or investments to cater to these extremes. The cloud elasticity that Azure offers is a perfect fit here. Azure Stream Analytics also offers fast recovery through checkpointing and at-least-once event delivery.

Mission-critical, enterprise-grade scalability and availability

Azure Stream Analytics is available across multiple worldwide data centers and sovereign clouds. It promises three-nines availability, financially guaranteed, with built-in auto recovery so that you never lose data, and customers do not need to write a single line of code to achieve this. The bottom line is that enterprise readiness is built into the platform. Here is a summary of the enterprise-ready features:

Distributed scale-out architecture
Ingests millions of events per second
Accommodates variable loads
Easily adds incremental resources to scale
Available across multiple data centers and sovereign clouds

Global compliance

In addition, Azure Stream Analytics is compliant with many industry and government certifications. It is HIPAA-compliant out of the box and suitable for hosting healthcare applications, so customers can scale up their businesses confidently. Here is a summary of global compliance:

ISO 27001
ISO 27018
SOC 1 Type 2
SOC 2 Type 2
SOC 3 Type 2
HIPAA/HITECH
PCI DSS Level 1
European Union Model Clauses
China GB 18030

Thus, we have reviewed Azure Stream Analytics and understood its key advantages.
These advantages include developer productivity, ease of development, reduced total cost of ownership, global compliance certifications, and the value of a PaaS-based streaming solution for hosting mission-critical applications securely.

This post is taken from the book Stream Analytics with Microsoft Azure, written by Anindita Basak, Krishna Venkataraman, Ryan Murphy, and Manpreet Singh. The book will help you understand Azure Stream Analytics so that you can develop efficient analytics solutions that work with any type of data.

Say hello to Streaming Analytics
How to build a live interactive visual dashboard in Power BI with Azure Stream
Performing Vehicle Telemetry job analysis with Azure Stream Analytics tools

Sugandha Lahoti
18 Apr 2018
8 min read

Performing Vehicle Telemetry job analysis with Azure Stream Analytics tools

This tutorial is a step-by-step blueprint for Vehicle Telemetry job analysis on Azure using Stream Analytics tools for Visual Studio.

For connected-car and real-time predictive Vehicle Telemetry Analysis, there is a need to identify opportunities for new solutions. These opportunities include:

How cars could be shipped globally with the required smart hardware to connect to the internet within the next few years
How the embedded connections could define predictive vehicle health status, so automotive companies can collect data on the performance of cars
How to send interactive updates and patches to a car's instrumentation remotely
How to avoid damage to car equipment through precautionary measures and prior notification

All of these require intelligent vehicle health telemetry analysis, which you can implement using Azure Streaming.

Stream Analytics tools for Visual Studio

The Stream Analytics tools for Visual Studio help prepare, build, and deploy real-time events on Azure. Optionally, the tools let you monitor the streaming job using local sample data as job input for testing, as well as real-time monitoring, job metrics, a diagram view, and so on. The tool provides a complete development setup for implementing and deploying real-world Azure Stream Analytics jobs from Visual Studio.

Developing a Stream Analytics job using Visual Studio

After installing the Stream Analytics tools, a new Stream Analytics job can be created in Visual Studio:

1. Get started in the Visual Studio IDE from File | New Project. Under Templates, select Stream Analytics and choose Azure Stream Analytics Application.
2. Next, provide the job name, project, and solution location. Under the Solution menu, you may also select options such as Add to solution or Create new instance, apart from Create new solution, from the available drop-down menu during Visual Studio Stream Analytics job creation.
3. Once the ASA job is created, the job topology folder structure can be viewed in the Solution Explorer: Inputs (job input), Outputs (job output), JobConfig.json, Script.asaql (the Stream Analytics query file), Azure Functions (optional), and so on.
4. Next, provide the job topology data input and output event source settings by selecting Input.json and Output.json from the Inputs and Outputs directories, respectively.
5. For a Vehicle Telemetry predictive analysis demo using an Azure Stream Analytics job, we need two different job data streams. One should be of the Stream type, for an unbounded sequence of real-time events processed through Azure Event Hub, along with the Hub policy name, policy key, event serialization format, and so on.

Defining a Stream Analytics query for Vehicle Telemetry job analysis using Stream Analytics tools

To assign the streaming analytics query definition, select the Script.asaql file from the ASA project, specifying the joining operation between the data and reference stream inputs and supplying the analyzed output to Blob storage as configured in the job properties.

Query to define Vehicle Telemetry (Connected Car) engine health status and pollution index over cities

For connected-car and real-time predictive Vehicle Telemetry Analysis, the opportunities listed at the start of this tutorial (global connectivity, predictive health status, remote updates and patches, and precautionary damage avoidance) all come together in an intelligent vehicle health telemetry analysis built with Azure Streaming.
The solution architecture of the Connected Car Vehicle Telemetry Analysis case study used in this demo, with Azure Stream Analytics for real-time predictive analysis, is as follows:

Testing Stream Analytics queries locally or in the cloud

Azure Stream Analytics tools in Visual Studio offer the flexibility to execute queries either locally or directly in the cloud:

1. In the Script.asaql file, provide the respective query for your streaming job and test it against local input stream/reference data before processing in Azure.
2. To run the Stream Analytics job query locally, right-click on the ASA project in the VS Solution Explorer and choose Add Local Input.
3. Define the local input for each Event Hub data stream and Blob storage data, and execute the job query locally before publishing it in Azure.
4. After adding each local input test data set, you can test the Stream Analytics job query locally in the VS editor by clicking on the Run Locally button in the top left corner of the VS IDE.

Connected-car telemetry supports scenarios such as:

Vehicle diagnostics
Usage-based insurance
Engine emission control
Engine performance remapping
Eco-driving
Roadside assistance calls
Fleet management

So, specify the following schema when designing a connected-car streaming job query with Stream Analytics, using parameters such as vehicle index number (VIN), model, outside temperature, engine speed, fuel meter, tire pressure, and brake status, and defining an INNER JOIN between the Event Hub data stream and the Blob storage reference stream containing vehicle model information:

    SELECT input.vin, BlobSource.Model, input.timestamp,
        input.outsideTemperature, input.engineTemperature, input.speed,
        input.fuel, input.engineoil, input.tirepressure, input.odometer,
        input.city, input.accelerator_pedal_position,
        input.parking_brake_status, input.headlamp_status,
        input.brake_pedal_status, input.transmission_gear_position,
        input.ignition_status, input.windshield_wiper_status, input.abs
    INTO output
    FROM input
    JOIN BlobSource ON input.vin = BlobSource.VIN

The query can be further customized for complex event processing, for example by using windowing concepts such as the tumbling window function, which assigns events in the stream to equal-length, non-overlapping series with a fixed time slice. The following Vehicle Telemetry analytics query computes smart-car health index parameters over complex streams in a specified two-second timestamp interval, in the form of a fixed-length series of events:

    SELECT BlobSource.Model, input.city, COUNT(vin) AS cars,
        AVG(input.engineTemperature) AS engineTemperature,
        AVG(input.speed) AS Speed, AVG(input.fuel) AS Fuel,
        AVG(input.engineoil) AS EngineOil,
        AVG(input.tirepressure) AS TirePressure,
        AVG(input.odometer) AS Odometer
    INTO EventHubOut
    FROM input
    JOIN BlobSource ON input.vin = BlobSource.VIN
    GROUP BY BlobSource.Model, input.city, TumblingWindow(second, 2)

5. The query can be executed locally or submitted to Azure. While running the job locally, a Command Prompt will appear showing the local Stream Analytics job's running status and the output data folder location.
6. If run locally, the job output folder in the project disk location, within the ASALocalRun directory, will be named with the current date timestamp and will contain two output files, in .csv and .json formats respectively.

If the job is instead submitted to Azure from the Stream Analytics project in Visual Studio, it offers a job dashboard providing an interactive job diagram view, a job metrics graph, and errors (if any).
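The join-plus-tumbling-window query above can be mimicked offline in a few lines of Python to sanity-check the logic: stream events are inner-joined to reference rows by VIN and then grouped by (model, city, window). The sample VINs, cities, and values here are made up; this is a sketch of the semantics, not the ASA runtime.

```python
from collections import defaultdict

reference = {"VIN001": "ModelX", "VIN002": "ModelY"}  # vin -> model (reference data)

events = [
    {"vin": "VIN001", "city": "Seattle", "speed": 40.0, "timestamp": 0.2},
    {"vin": "VIN001", "city": "Seattle", "speed": 60.0, "timestamp": 1.8},
    {"vin": "VIN002", "city": "Austin",  "speed": 50.0, "timestamp": 2.5},
]

def join_and_window(events, reference, window_seconds=2):
    """JOIN on vin, then GROUP BY (model, city, tumbling window), averaging speed."""
    groups = defaultdict(list)
    for e in events:
        model = reference.get(e["vin"])
        if model is None:
            continue                       # inner join: drop unmatched events
        window = int(e["timestamp"] // window_seconds)
        groups[(model, e["city"], window)].append(e["speed"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

result = join_and_window(events, reference)
```

Only the AVG(speed) aggregate is shown; the other AVG columns in the query follow the same pattern per group.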
The Vehicle Telemetry Predictive Health Analytics job dashboard in Visual Studio provides a job diagram with real-time insights into events, with the display refreshed at a minimum rate of every 30 minutes. The Stream Analytics job metrics graph provides interactive insights into input and output events, out-of-order events, late events, runtime errors, and data conversion errors related to the job, as appropriate.

For Connected Car Predictive Vehicle Telemetry Analytics, you may configure the data input streams to be processed as complex events using a definite timestamp interval in non-overlapping mode, such as a tumbling window over a two-second time slice. The output sink can be configured as a Service Bus Event Hub with 32 data partitions and a maximum message retention period of 7 days. The processed events in the Event Hub output can also be archived in Azure Blob storage for long-term, infrequent access. The Azure Service Bus Event Hub job output metrics dashboard view configured for vehicle telemetry analysis is as follows:

On the left side of the job dashboard, the Job Summary provides a comprehensive view of job parameters such as job status, creation time, job output start time, start mode, last output timestamp, the output error handling mechanism (provided for quick reference logs), late event arrival tolerance windows, and so on. The job can be stopped, started, deleted, or refreshed by selecting icons from the top left menu of the job view dashboard in VS. Optionally, a complete clone of a Stream Analytics project can be generated by clicking on the Generate Project icon in the top menu of the job dashboard.

This article is an excerpt from the book Stream Analytics with Microsoft Azure, written by Anindita Basak, Krishna Venkataraman, Ryan Murphy, and Manpreet Singh. The book provides lessons on real-time data processing for quick insights using Azure Stream Analytics.
Say hello to Streaming Analytics How to get started with Azure Stream Analytics and 7 reasons to choose it  

Pravin Dhandre
18 Apr 2018
16 min read

How to boost R codes using C++ and Fortran

Sometimes, R code just isn't fast enough. Maybe you've used profiling to figure out where your bottlenecks are, and you've done everything you can think of within R, but your code still isn't fast enough. In those cases, a useful alternative is to delegate some parts of the implementation to Fortran or C++. This is an advanced technique that can often prove quite useful if you know how to program in these languages. In today's tutorial, we will explore techniques to boost R code and calculations using efficient languages like Fortran and C++.

Delegating code to other languages can address bottlenecks such as the following:

Loops that can't be easily vectorized due to iteration dependencies
Processes that involve calling functions millions of times
Inefficient but necessary data structures that are slow in R

Delegating code to other languages can provide great performance benefits, but it also incurs the cost of being more explicit and careful with the types of objects being moved around. In R, you can get away with simple things such as being imprecise about whether a number is an integer or a real. In these other languages, you can't; every object must have a precise type, and it remains fixed for the entire execution.

Boost R code using an old-school approach with Fortran

We will start with an old-school approach using Fortran. If you are not familiar with it, Fortran is the oldest programming language still in use today. It was designed to perform lots of calculations very efficiently and with very few resources. A lot of numerical libraries have been developed with it, and many high-performance systems still use it today, either directly or indirectly. Here's our implementation, named sma_fortran(). The syntax may throw you off if you're not used to working with Fortran code, but it's simple enough to understand.
First, note that to define a function, technically known as a subroutine in Fortran, we use the subroutine keyword before the name of the function. As in our previous implementations, it receives the period and data (we use the name dataa, with an extra a at the end, because data is a reserved keyword in Fortran), and we assume that the data is already filtered for the correct symbol at this point.

Next, note that we are sending new arguments that we did not send before, namely smas and n. Fortran is a peculiar language in the sense that it does not return values; it uses side effects instead. This means that instead of expecting something back from a call to a Fortran subroutine, we should expect the subroutine to change one of the objects passed to it, and we should treat that as our return value. In this case, smas fulfills that role: initially, it is sent as an array of undefined real values, and the objective is to fill its contents with the appropriate SMA values. Finally, n represents the number of elements in the data we send. Classic Fortran doesn't have a way to determine the size of an array being passed to it, so we need to specify the size manually; that's why we need to send n. In reality, there are ways to work around this, but since this is not a tutorial about Fortran, we will keep the code as simple as possible.

Next, note that we need to declare the type of the objects we're dealing with, as well as their size if they are arrays. We declare pos (which takes the place of position in our previous implementation, because Fortran imposes a limit on the length of each line, which we don't want to exceed), n, endd (again, end is a keyword in Fortran, so we use endd instead), and period as integers. We also declare dataa(n), smas(n), and sma as reals, because they will contain decimal parts.
Note that we specify the size of the arrays with the (n) part in the first two objects. Once we have declared everything we will use, we proceed with our logic. We first create a for loop, which is done with the do keyword in Fortran, followed by a unique identifier (normally named with multiples of tens or hundreds), the variable name that will be used to iterate, and the values it will take (endd and 1 to n in this case, respectively). Within the for loop, we assign pos to be equal to endd and sma to be equal to 0, just as we did in some of our previous implementations. Next, we create a while loop with the do…while keyword combination, and we provide the condition that should be checked to decide when to break out of it. Note that Fortran uses a very different syntax for the comparison operators: specifically, the .lt. operator stands for less-than, while the .ge. operator stands for greater-than-or-equal-to. If either of the two conditions specified is not met, we exit the while loop. Having said that, the rest of the code should be self-explanatory. The only other uncommon syntax property is that the code is indented to the sixth position. This indentation has meaning in Fortran and should be kept as it is. Also, the number IDs provided in the first columns of the code should match the corresponding looping mechanisms, and they should be kept toward the left of the logic code.

For a good introduction to Fortran, you may take a look at Stanford's Fortran 77 Tutorial (https://web.stanford.edu/class/me200c/tutorial_77/). You should know that there are various Fortran versions; the 77 version is one of the oldest, but it's also one of the best supported:

          subroutine sma_fortran(period, dataa, smas, n)
          integer pos, n, endd, period
          real dataa(n), smas(n), sma
          do 10 endd = 1, n
              pos = endd
              sma = 0.0
              do 20 while ((endd - pos .lt. period) .and. (pos .ge. 1))
                  sma = sma + dataa(pos)
                  pos = pos - 1
    20        end do
              if (endd - pos .eq. period) then
                  sma = sma / period
              else
                  sma = 0
              end if
              smas(endd) = sma
    10    continue
          end

Once your code is finished, you need to compile it before it can be executed within R. Compilation is the process of translating code into machine-level instructions. You have two options when compiling Fortran code: you can either do it manually outside of R, or you can do it within R. The second is recommended, since you can take advantage of R's tools for doing so; however, we show both. The first can be achieved with the following command:

    $ gfortran -c sma-delegated-fortran.f -o sma-delegated-fortran.so

This command should be executed in a Bash terminal (found in Linux or macOS operating systems). We must ensure that we have the gfortran compiler installed, which was probably installed when R was. We call it, telling it to compile (using the -c option) the sma-delegated-fortran.f file (which contains the Fortran code shown before) and to provide an output file (with the -o option) named sma-delegated-fortran.so. Our objective is to get this .so file, which is what we need within R to execute the Fortran code. The way to compile within R, which is the recommended way, is to use the following line:

    system("R CMD SHLIB sma-delegated-fortran.f")

It basically tells R to execute the command that produces a shared library derived from the sma-delegated-fortran.f file. Note that the system() function simply sends the string it receives to a terminal in the operating system, which means that you could have used that same command in the Bash terminal to compile the code manually. To load the shared library into R's memory, we use the dyn.load() function, providing the location of the .so file we want to use, and to actually call the shared library that contains the Fortran implementation, we use the .Fortran() function.
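As a cross-check of the subroutine's logic, the same simple moving average can be transcribed into Python, with 0-based indexing replacing Fortran's 1-based indexing. Positions with fewer than period preceding values yield 0, just as in the Fortran version.

```python
def sma_python(period, data):
    """Simple moving average mirroring the Fortran subroutine's logic:
    each output position averages the `period` values ending there, and is
    0.0 when fewer than `period` values are available."""
    n = len(data)
    smas = [0.0] * n
    for end in range(n):
        pos = end
        total = 0.0
        # Walk backwards, accumulating at most `period` values (pos >= 0
        # replaces Fortran's pos .ge. 1 under 0-based indexing).
        while (end - pos) < period and pos >= 0:
            total += data[pos]
            pos -= 1
        smas[end] = total / period if (end - pos) == period else 0.0
    return smas

values = sma_python(3, [1.0, 2.0, 3.0, 4.0, 5.0])
```

For the sample input with period 3, the first two positions have too little history and stay at 0, and the remaining positions hold the trailing three-value averages.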
This function requires type checking and coercion to be explicitly performed by the user before calling it. To provide a signature similar to the ones provided by the previous functions, we will create a function named sma_delegated_fortran(), which receives the period, symbol, and data parameters as before, filters the data as we did earlier, calculates the length of the data and puts it in n, and uses the .Fortran() function to call the sma_fortran() subroutine, providing the appropriate parameters. Note that we're wrapping the parameters in functions that coerce their types as required by our Fortran code. The results list created by the .Fortran() function contains the period, dataa, smas, and n objects, corresponding to the parameters sent to the subroutine, with the contents left in them after the subroutine was executed. As we mentioned earlier, we are interested in the contents of the smas object since it contains the values we're looking for. That's why we send only that part back, after converting it to a numeric type within R. The transformations you see before sending objects to Fortran and after getting them back are something you need to be very careful with. For example, if instead of using single(n) and as.single(data), we used double(n) and as.double(data), our Fortran implementation would not work.
This is something that can be ignored within R, but it can't be ignored in the case of Fortran:

```r
system("R CMD SHLIB sma-delegated-fortran.f")
dyn.load("sma-delegated-fortran.so")

sma_delegated_fortran <- function(period, symbol, data) {
    data <- data[which(data$symbol == symbol), "price_usd"]
    n <- length(data)
    results <- .Fortran(
        "sma_fortran",
        period = as.integer(period),
        dataa = as.single(data),
        smas = single(n),
        n = as.integer(n)
    )
    return(as.numeric(results$smas))
}
```

Just as we did earlier, we benchmark and test for correctness:

```r
performance <- microbenchmark(
    sma_12 <- sma_delegated_fortran(period, symboo, data),
    unit = "us"
)

all(sma_1$sma - sma_12 <= 0.001, na.rm = TRUE)
#> TRUE

summary(performance)$median
```

In this case, our median time is 148.0335 microseconds, making this the fastest implementation up to this point. Note that it's barely over half the time of the most efficient implementation we were able to come up with using only R.

Boosting R code using a modern approach with C++

Now, we will show you how to use a more modern approach using C++. The aim of this section is to provide just enough information for you to start experimenting with C++ within R on your own. We will only look at a tiny piece of what can be done by interfacing R with C++ through the Rcpp package (which can be installed from CRAN with install.packages("Rcpp")), but it should be enough to get you started. If you have never heard of C++, it's a language used mostly when resource restrictions play an important role and performance optimization is of paramount importance. Some good resources to learn more about C++ are Meyers' books on the topic, a popular one being Effective C++ (Addison-Wesley, 2005); specifically for the Rcpp package, Eddelbuettel's Seamless R and C++ Integration with Rcpp (Springer, 2013) is great.

Before we continue, you need to ensure that you have a C++ compiler in your system. On Linux, you should be able to use gcc.
On Mac, you should install Xcode from the application store. On Windows, you should install Rtools. Once you test your compiler and know that it's working, you should be able to follow this section. We'll cover more on how to do this in Appendix, Required Packages.

C++ is more readable than Fortran code because it follows more of the syntax conventions we're used to nowadays. However, just because the example we will use is readable, don't think that C++ in general is an easy language to use; it's not. It's a very low-level language, and using it correctly requires a good amount of knowledge. Having said that, let's begin.

The #include <Rcpp.h> line is used to bring variable and function definitions from Rcpp into this file when it's compiled. Literally, the contents of the Rcpp.h file are pasted right where the include statement is. Files ending with the .h extension are called header files, and they are used to provide some common definitions between a code's users and its developers.

The using namespace Rcpp line allows you to use shorter names for your functions. Instead of having to specify Rcpp::NumericVector, we can simply use NumericVector to define the type of the data object. Doing so in this example may not be too beneficial, but when you start developing more complex C++ code, it will really come in handy.

Next, you will notice the // [[Rcpp::export(sma_cpp)]] tag. It marks the function right below it so that R knows it should import it and make it available within R code. The argument sent to export() is the name of the function that will be accessible within R, and it does not necessarily have to match the name of the function in C++. In this case, sma_cpp() will be the function made available within R (the sma_delegated_cpp() wrapper we define later will call it), and it maps to the smaDelegated() function within C++:

```cpp
#include <Rcpp.h>

using namespace Rcpp;

// [[Rcpp::export(sma_cpp)]]
NumericVector smaDelegated(int period, NumericVector data) {
    int position, n = data.size();
    NumericVector result(n);
    double sma;
    for (int end = 0; end < n; end++) {
        position = end;
        sma = 0;
        while (end - position < period && position >= 0) {
            sma = sma + data[position];
            position = position - 1;
        }
        if (end - position == period) {
            sma = sma / period;
        } else {
            sma = NA_REAL;
        }
        result[end] = sma;
    }
    return result;
}
```

Next, we will explain the actual smaDelegated() function. Since you have a good idea of what it's doing at this point, we won't explain its logic, only the syntax that is not so obvious. The first thing to note is that the function name has a keyword before it, which is the type of the return value for the function. In this case, it's NumericVector, which is provided in the Rcpp.h file. This is an object designed to interface vectors between R and C++. Other vector types provided by Rcpp are IntegerVector, LogicalVector, and CharacterVector. You also have IntegerMatrix, NumericMatrix, LogicalMatrix, and CharacterMatrix available.

Next, you should note that the parameters received by the function also have types associated with them. Specifically, period is an integer (int), and data is a NumericVector, just like the output of the function. In this case, we did not have to pass the output or length objects as we did with Fortran, since functions in C++ do have return values, and C++ has an easy enough way of computing the length of objects. The first line in the function declares the variables position and n, and assigns the length of the data to the latter. You may use commas, as we do, to declare various objects of the same type one after another instead of splitting the declarations and assignments into their own lines.
We also declare the vector result with length n; note that this notation is similar to Fortran's. Finally, instead of using the real keyword as we do in Fortran, we use the float or double keyword here to denote such numbers. Technically, there's a difference regarding the precision allowed by such keywords, and they are not interchangeable, but we won't worry about that here.

The rest of the function should be clear, except for maybe the sma = NA_REAL assignment. This NA_REAL object is also provided by Rcpp as a way to denote what should be sent to R as an NA. Everything else should seem familiar.

Now that our function is ready, we save it in a file called sma-delegated-cpp.cpp and use the sourceCpp() function provided by Rcpp to compile it for us and bring it into R. The .cpp extension denotes contents written in the C++ language.

Keep in mind that functions brought into R from C++ files cannot be saved in an .Rdata file for a later session. The nature of C++ is to be very dependent on the hardware under which it's compiled, and doing so will probably produce various errors for you. Every time you want to use a C++ function, you should compile it and load it with the sourceCpp() function at the moment of usage.

```r
library(Rcpp)

sourceCpp("./sma-delegated-cpp.cpp")

sma_delegated_cpp <- function(period, symbol, data) {
    data <- as.numeric(data[which(data$symbol == symbol), "price_usd"])
    return(sma_cpp(period, data))
}
```

If everything worked fine, our function should be usable within R, so we benchmark and test for correctness. I promise this is the last one:

```r
performance <- microbenchmark(
    sma_13 <- sma_delegated_cpp(period, symboo, data),
    unit = "us"
)

all(sma_1$sma - sma_13 <= 0.001, na.rm = TRUE)
#> TRUE

summary(performance)$median
#> [1] 80.6415
```

This time, our median time was 80.6415 microseconds, which is three orders of magnitude faster than our first implementation.
Think about it this way: if you provided an input for sma_delegated_cpp() such that it took around one hour to execute, sma_slow_1() would take around 1,000 hours, which is roughly 41 days. Isn't that a surprising difference? When you are in situations that take that much execution time, it's definitely worth trying to make your implementations as optimized as possible.

You may use the cppFunction() function to write your C++ code directly inside an .R file, but you should not do so. Keep it just for testing small pieces of code. Separating your C++ implementation into its own files allows you to use the power of your editor of choice (or IDE) to guide you through the development, as well as perform deeper syntax checks for you.

You read an excerpt from R Programming By Example, authored by Omar Trejo Navarro. The book provides a step-by-step guide to building simple-to-advanced applications through examples in R using modern tools.

Getting Inside a C++ Multithreaded Application
Understanding the Dependencies of a C++ Application
Vijin Boricha
17 Apr 2018
4 min read

Packaging and publishing an Oracle JET Hybrid mobile application

Today, we will learn how to package and publish an Oracle JET mobile application on the Apple App Store and the Google Play Store. We can package and publish Oracle JET-based hybrid mobile applications to Google Play or the Apple App Store using framework support and third-party tools.

Packaging a mobile application

An Oracle JET hybrid mobile application can be packaged with the help of the Grunt build release commands, as described in the following steps:

1. The Grunt release command can be issued with the desired platform as follows:

```
grunt build:release --platform={ios|android}
```

2. Code sign the application, based on the platform, in the buildConfig.json component.

Note: Further details regarding code signing per platform are available at:
Android: https://cordova.apache.org/docs/en/latest/guide/platforms/android/tools.html
iOS: https://cordova.apache.org/docs/en/latest/guide/platforms/ios/tools.html

3. We can pass the code sign details and rebuild the application using the following command:

```
grunt build:release --platform={ios|android} --buildConfig=path/buildConfig.json
```

4. The application can be tested after the preceding changes using the following serve command:

```
grunt serve:release --platform={ios|android} [--web=true --serverPort=server-port-number --livereloadPort=live-reload-port-number --destination=emulator-name|device] --buildConfig=path/buildConfig.json
```

Publishing a mobile application

Publishing an Oracle JET hybrid mobile application is done as per the platform-specific guidelines. Each platform has defined certain standards and procedures to distribute an app on the respective platform.

Publishing on the iOS platform

The steps involved in publishing iOS applications include:

1. Enrolling in the Apple Developer Program to distribute the app.
2. Adding advanced, integrated services based on the application type.
3. Preparing our application, with approval and configuration according to iOS standards.
4. Testing our app on numerous devices and application releases.
5. Submitting and releasing the application as a mobile app in the store. Alternatively, we can also distribute the app outside the store.

The iOS distribution process is represented in the following diagram:

Please note that the process for distributing applications on iOS presented in the preceding diagram is the latest as of writing this chapter. It may be altered by the iOS team at a later point.

Note: For the latest iOS app distribution procedure, please refer to the official iOS documentation at: https://developer.apple.com/library/ios/documentation/IDEs/Conceptual/AppDistributionGuide/Introduction/Introduction.html

Publishing on the Android platform

There are multiple approaches to publishing our application on the Android platform. The following are the steps involved in publishing the app on Android:

1. Preparing the app for release. We need to perform the following activities to prepare an app for release:
   - Configuring the app for release, including logs and manifests.
   - Testing the release version of the app on multiple devices.
   - Updating the application resources for the release.
   - Preparing any remote applications or services the app interacts with.
2. Releasing the app to the market through Google Play. We need to perform the following activities to release an app through Google Play:
   - Preparing any promotional documentation required for the app.
   - Configuring all the default options and preparing the components.
   - Publishing the prepared release version of the app to Google Play.
3. Alternatively, we can publish the application through email, or through our own website, for users to download and install.

The steps involved in publishing Android applications are shown in the following diagram:

Please note that the process for distributing applications on Android presented in the preceding diagram is the latest as of writing this chapter. It may be altered by the Android team at a later point.
Note: For the latest Android app distribution procedure, please refer to the official Android documentation at http://developer.android.com/tools/publishing/publishing_overview.html#publishing-release.

You enjoyed an excerpt from Oracle JET for Developers, written by Raja Malleswara Rao Pattamsetti. With this book, you will learn to leverage the Oracle JavaScript Extension Toolkit (JET) to develop efficient client-side applications.

Auditing Mobile Applications
Creating and configuring a basic mobile application
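For reference, the buildConfig.json file passed to the --buildConfig flag in the packaging steps above typically follows the structure documented by Apache Cordova, which Oracle JET builds on. The sketch below is an assumed example, not taken from this article: every path, password, alias, and identity in it is a placeholder you would replace with your own signing details.

```json
{
  "android": {
    "release": {
      "keystore": "path/to/release.keystore",
      "storePassword": "your-store-password",
      "alias": "your-key-alias",
      "password": "your-key-password"
    }
  },
  "ios": {
    "release": {
      "codeSignIdentity": "iPhone Distribution",
      "provisioningProfile": "your-provisioning-profile-uuid",
      "packageType": "app-store"
    }
  }
}
```

Keeping this file outside version control is advisable, since it holds signing secrets.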
Sugandha Lahoti
17 Apr 2018
4 min read

How to build a live interactive visual dashboard in Power BI with Azure Stream

Azure Stream Analytics is a managed, real-time, complex event processing data engine. Through its built-in output connectors, it can feed live, interactive, intelligent BI charts and graphics in Microsoft's cloud-based business intelligence tool, Power BI. In this tutorial, we implement a data architecture pipeline by designing a visual dashboard using Microsoft Power BI and Stream Analytics.

Prerequisites for building an interactive visual live dashboard in Power BI with Stream Analytics:

- An Azure subscription
- A Power BI Office 365 account (the account email ID should be the same for both Azure and Power BI). It can be a work or school account.

Integrating Power BI as an output job connector for Stream Analytics

To start with connecting the Power BI portal as an output of an existing Stream Analytics job, follow the given steps:

1. First, select Outputs in the Azure portal under JOB TOPOLOGY.
2. After clicking on Outputs, click on +Add in the top left corner of the job window, as shown in the following screenshot.
3. After selecting +Add, you will be prompted to enter the new output connectors of the job. Provide details such as the job output name/alias; under Sink, choose Power BI from the drop-down menu.
4. On choosing Power BI as the streaming job output Sink, you will automatically be prompted to authorize the Power BI work/personal account with Azure. Additionally, you may create a new Power BI account by clicking on Signup. By authorizing, you are granting the Stream Analytics output permanent access to the Power BI dashboard. You can also revoke the access by changing the password of the Power BI account or by deleting the output/job.
5. After the successful authorization of the Power BI account with Azure, there will be options to select the Group Workspace, which is the Power BI tenant workspace where you may create the particular dataset to configure processed Stream Analytics events. Furthermore, you also need to define the Table Name for the data output.
Lastly, click on the Create button to integrate the Power BI data connector for real-time data visuals.

Note: If you don't have any custom workspace defined in the Power BI tenant, the default workspace is My Workspace. If you define a dataset and table name that already exist in another Stream Analytics job/output, they will be overwritten. It is also recommended that you just define the dataset and table name under the specific tenant workspace in the job portal and not explicitly create them in the Power BI tenant, as Stream Analytics automatically creates them once the job starts and output events begin to be pushed into the Power BI dashboard.

On starting the streaming job with output events, the Power BI dataset will appear under the dataset tab of the corresponding workspace. The dataset can contain a maximum of 200,000 rows, and it supports real-time streaming events as well as historical BI report visuals.

Further Power BI dashboards and reports can be implemented using the streaming dataset. Alternatively, you may also create tiles in custom dashboards by selecting CUSTOM STREAMING DATA under REAL-TIME DATA, as shown in the following screenshot.

By selecting Next, the streaming dataset should be selected, and then the visual type and the respective fields, axes, or legends can be defined.

Thus, a complete interactive, near real-time Power BI visual dashboard can be implemented with analyzed streamed data from Stream Analytics, as shown in the following screenshot from the real-world Connected Car-Vehicle Telemetry analytics dashboard.

In this article, we saw a step-by-step implementation of a real-time visual dashboard using Microsoft Power BI, with processed data from Azure Stream Analytics as the output data connector. This article is an excerpt from the book Stream Analytics with Microsoft Azure, written by Anindita Basak, Krishna Venkataraman, Ryan Murphy, and Manpreet Singh.
To learn more about designing and managing Stream Analytics jobs using reference data and utilizing a petabyte-scale enterprise data store with Azure Data Lake Store, you may refer to this book.

Unlocking the secrets of Microsoft Power BI
Ride the third wave of BI with Microsoft Power BI
Pravin Dhandre
17 Apr 2018
8 min read

Cross-validation in R for predictive models

In today's tutorial, we will efficiently train our first predictive model, using cross-validation in R as the basis of our modeling process, and then build the corresponding confusion matrix. Most of the functionality comes from the excellent caret package, which has many more features than we will explore in this tutorial.

Before moving on to the training tutorial, let's understand what a confusion matrix is. A confusion matrix is a summary of prediction results on a classification problem. The numbers of correct and incorrect predictions are summarized with count values, broken down by each class; this breakdown is the key to the confusion matrix. The confusion matrix shows the ways in which your classification model is confused when it makes predictions. It gives you insight not only into the errors being made by your classifier but, more importantly, into the types of errors that are being made.

Training our first predictive model

Following best practices, we will use cross-validation (CV) as the basis of our modeling process. Using CV, we can create estimates of how well our model will do with unseen data. CV is powerful, but the downside is that it requires more processing and therefore more time. If you can afford the computational cost, you should definitely take advantage of it in your projects.

Going into the mathematics behind CV is outside the scope of this tutorial. If interested, you can find more information on cross-validation on Wikipedia. The basic idea is that the training data will be split into various parts, and each of these parts will be taken out of the rest of the training data one at a time, keeping all remaining parts together. The parts that are kept together will be used to train the model, while the part that was taken out will be used for testing, and this will be repeated by rotating the parts such that every part is taken out once.
This allows you to test the training procedure more thoroughly before doing the final testing with the testing data. We use the trainControl() function to set up our repeated CV mechanism with five splits and two repeats. This object will be passed to our predictive models, created with the caret package, to automatically apply this control mechanism within them:

```r
cv.control <- trainControl(method = "repeatedcv", number = 5, repeats = 2)
```

Our predictive model pick for this example is Random Forests (RF). We will very briefly explain what RF are, but the interested reader is encouraged to look into James, Witten, Hastie, and Tibshirani's excellent An Introduction to Statistical Learning (Springer, 2013). RF are a non-linear model used to generate predictions. A tree is a structure that provides a clear path from inputs to specific outputs through a branching model. In predictive modeling, trees are used to find limited input-space areas that perform well when providing predictions. RF create many such trees and use a mechanism to aggregate the predictions provided by these trees into a single prediction. They are a very powerful and popular machine learning model. Let's have a look at the random forests example:

Random forests aggregate trees

To train our model, we use the train() function, passing a formula that signals R to use MULT_PURCHASES as the dependent variable and everything else (~ .) as the independent variables, which are the token frequencies. It also specifies the data, the method ("rf" stands for random forests), the control mechanism we just created, and the number of tuning scenarios to use:

```r
model.1 <- train(
    MULT_PURCHASES ~ .,
    data = train.dfm.df,
    method = "rf",
    trControl = cv.control,
    tuneLength = 5
)
```

Improving speed with parallelization

If you actually executed the previous code on your computer before reading this, you may have found that it took a long time to finish (8.41 minutes in our case).
As we mentioned earlier, text analysis suffers from very high-dimensional structures, which take a long time to process, and using CV runs compounds the cost. To cut down on the total execution time, use the doParallel package to allow multi-core computers to do the training in parallel and substantially reduce the time taken.

We proceed to create the train_model() function, which takes the data and the control mechanism as parameters. It then makes a cluster object with the makeCluster() function, with a number of available cores (processors) equal to the number of cores in the computer, detected with the detectCores() function. Note that if you're planning on using your computer for other tasks while you train your models, you should leave one or two cores free to avoid choking your system (you can use makeCluster(detectCores() - 2) to accomplish this). After that, we start our time-measuring mechanism, train our model, print the total time, stop the cluster, and return the resulting model:

```r
train_model <- function(data, cv.control) {
    cluster <- makeCluster(detectCores())
    registerDoParallel(cluster)
    start.time <- Sys.time()
    model <- train(
        MULT_PURCHASES ~ .,
        data = data,
        method = "rf",
        trControl = cv.control,
        tuneLength = 5
    )
    print(Sys.time() - start.time)
    stopCluster(cluster)
    return(model)
}
```

Now we can retrain the same model much faster. The time reduction will depend on your computer's available resources. In the case of an 8-core system with 32 GB of memory available, the total time was 3.34 minutes instead of the previous 8.41 minutes, which implies that, with parallelization, it only took 39% of the original time. Not bad, right? Let's have a look at how the model is trained:

```r
model.1 <- train_model(train.dfm.df, cv.control)
```

Computing predictive accuracy and confusion matrices

Now that we have our trained model, we can see its results and ask it to compute some predictive accuracy metrics.
We start by simply printing the object we get back from the train() function. As can be seen, we have some useful metadata, but what we are concerned with right now is the predictive accuracy, shown in the Accuracy column. From the five values we told the function to use as tuning scenarios, the best model was reached when we used 356 of the 2,007 available features (tokens). In that case, our predictive accuracy was 65.36%.

If we take into account the fact that the proportions in our data were around 63% of cases with multiple purchases, we have made an improvement. This can be seen from the fact that if we just guessed the class with the most observations (MULT_PURCHASES being true) for all the observations, we would only have 63% accuracy, but using our model we were able to improve toward 65%, a gain of around two percentage points. Keep in mind that this is a randomized process, and the results will be different every time you train these models. That's why we want repeated CV as well as various testing scenarios, to make sure that our results are robust:

```r
model.1
#> Random Forest
#>
#> 212 samples
#> 2007 predictors
#> 2 classes: 'FALSE', 'TRUE'
#>
#> No pre-processing
#> Resampling: Cross-Validated (5 fold, repeated 2 times)
#> Summary of sample sizes: 170, 169, 170, 169, 170, 169, ...
#> Resampling results across tuning parameters:
#>
#>   mtry  Accuracy   Kappa
#>      2  0.6368771  0.00000000
#>     11  0.6439092  0.03436849
#>     63  0.6462901  0.07827322
#>    356  0.6536545  0.16160573
#>   2006  0.6512735  0.16892126
#>
#> Accuracy was used to select the optimal model using the largest value.
#> The final value used for the model was mtry = 356.
```

To create a confusion matrix, we can use the confusionMatrix() function and send it the model's predictions first and the real values second. This will not only create the confusion matrix for us, but also compute some useful metrics such as sensitivity and specificity.
We won't go deep into what these metrics mean or how to interpret them, since that's outside the scope of this tutorial, but we highly encourage the reader to study them using the resources cited in this tutorial:

```r
confusionMatrix(model.1$finalModel$predicted, train$MULT_PURCHASES)
#> Confusion Matrix and Statistics
#>
#>           Reference
#> Prediction FALSE TRUE
#>      FALSE    18   19
#>      TRUE     59  116
#>
#>                Accuracy : 0.6321
#>                  95% CI : (0.5633, 0.6971)
#>     No Information Rate : 0.6368
#>     P-Value [Acc > NIR] : 0.5872
#>
#>                   Kappa : 0.1047
#>  Mcnemar's Test P-Value : 1.006e-05
#>
#>             Sensitivity : 0.23377
#>             Specificity : 0.85926
#>          Pos Pred Value : 0.48649
#>          Neg Pred Value : 0.66286
#>              Prevalence : 0.36321
#>          Detection Rate : 0.08491
#>    Detection Prevalence : 0.17453
#>       Balanced Accuracy : 0.54651
#>
#>        'Positive' Class : FALSE
```

You read an excerpt from R Programming By Example, authored by Omar Trejo Navarro. The book gets you familiar with R's fundamentals and its advanced features, giving you hands-on experience with R's cutting-edge tools for software development.

Getting Started with Predictive Analytics
Here's how you can handle the bias-variance trade-off in your ML models
Kunal Chaudhari
17 Apr 2018
11 min read

Building your first Vue.js 2 Web application

Vue is a relative newcomer in the JavaScript frontend landscape, but a very serious challenger to the current leading libraries. It is simple, flexible, and very fast, while still providing a lot of features and optional tools that can help you build a modern web app efficiently. In today's tutorial, we will explore the Vue.js library and then start creating our first web app.

Why another frontend framework?

Its creator, Evan You, calls it the progressive framework:

- Vue is incrementally adoptable, with a core library focused on user interfaces that you can use in existing projects
- You can make small prototypes all the way up to large and sophisticated web applications
- Vue is approachable--beginners can pick up the library easily, and confirmed developers can be productive very quickly

Vue roughly follows a Model-View-ViewModel architecture, which means the View (the user interface) and the Model (the data) are separated, with the ViewModel (Vue) being a mediator between the two. It handles the updates automatically and has already been optimized for you. Therefore, you don't have to specify when a part of the View should update, because Vue will choose the right way and time to do so.

The library also takes inspiration from other similar libraries such as React, Angular, and Polymer.
The following is an overview of its core features:

- A reactive data system that can update your user interface automatically, with a lightweight virtual-DOM engine and minimal optimization effort required
- Flexible View declaration--artist-friendly HTML templates, JSX (HTML inside JavaScript), or hyperscript render functions (pure JavaScript)
- Composable user interfaces with maintainable and reusable components
- Official companion libraries that come with routing, state management, scaffolding, and more advanced features, making Vue a non-opinionated but fully fleshed out frontend framework

Vue.js - A trending project

Evan You started working on the first prototype of Vue in 2013 while working at Google, using Angular. The initial goal was to have all the cool features of Angular, such as data binding and data-driven DOM, but without the extra concepts that make a framework opinionated and heavy to learn and use.

The first public release was published in February 2014 and was an immediate success from the very first day, with the HackerNews frontpage, the top spot on /r/javascript, and 10k unique visits to the official website.

The first major version, 1.0, was reached in October 2015, and by the end of that year, npm downloads had rocketed to 382k for the year, the GitHub repository had received 11k stars, the official website had 363k unique visitors, and the popular PHP framework Laravel had picked Vue as its official frontend library instead of React.

The second major version, 2.0, was released in September 2016, with a new virtual DOM-based renderer and many new features such as server-side rendering and performance improvements. This is the version we will use in this article. It is now one of the fastest frontend libraries, outperforming even React, according to a comparison refined with the React team. At the time of writing this article, Vue was the second most popular frontend library on GitHub with 72k stars, just behind React and ahead of Angular 1.
The next evolution of the library on the roadmap includes more integration with Vue-native libraries such as Weex and NativeScript to create native mobile apps with Vue, plus new features and improvements. Today, Vue is used by many companies such as Microsoft, Adobe, Alibaba, Baidu, Xiaomi, Expedia, Nintendo, and GitLab.

Compatibility requirements

Vue doesn't have any dependencies and can be used in any browser that is at least ECMAScript 5 compliant. This means that it is not compatible with Internet Explorer 8 or older, because it needs relatively new JavaScript features such as Object.defineProperty, which can't be polyfilled on older browsers. In this article, we are writing code in JavaScript version ES2015 (formerly ES6), so you will need a modern browser to run the examples (such as Edge, Firefox, or Chrome). At some point, we will introduce a compiler called Babel that will help us make our code compatible with older browsers.

One-minute setup

Without further ado, let's start creating our first Vue app with a very quick setup. Vue is flexible enough to be included in any web page with a simple script tag. Let's create a very simple web page that includes the library, with a simple div element and another script tag:

<html>
<head>
<meta charset="utf-8">
<title>Vue Project Guide setup</title>
</head>
<body>
<!-- Include the library in the page -->
<script src="https://unpkg.com/vue/dist/vue.js"></script>
<!-- Some HTML -->
<div id="root">
<p>Is this a Hello world?</p>
</div>
<!-- Some JavaScript -->
<script>
console.log('Yes! We are using Vue version', Vue.version)
</script>
</body>
</html>

In the browser console, we should have something like this:

Yes! We are using Vue version 2.0.3

As you can see in the preceding code, the library exposes a Vue object that contains all the features we need to use it. We are now ready to go.

Creating an app

For now, we don't have any Vue app running on our web page.
The whole library is based on Vue instances, which are the mediators between your View and your data. So, we need to create a new Vue instance to start our app:

// New Vue instance
var app = new Vue({
  // CSS selector of the root DOM element
  el: '#root',
  // Some data
  data () {
    return {
      message: 'Hello Vue.js!',
    }
  },
})

The Vue constructor is called with the new keyword to create a new instance. It has one argument: the option object. It can have multiple attributes (called options). For now, we are using only two of them. With the el option, we tell Vue where to add (or "mount") the instance on our web page using a CSS selector. In the example, our instance will use the <div id="root"> DOM element as its root element. We could also use the $mount method of the Vue instance instead of the el option:

var app = new Vue({
  data () {
    return {
      message: 'Hello Vue.js!',
    }
  },
})
// We add the instance to the page
app.$mount('#root')

Most of the special methods and attributes of a Vue instance start with a dollar character. We also initialize some data in the data option, with a message property that contains a string. Now the Vue app is running, but it doesn't do much yet. You can add as many Vue apps as you like on a single web page. Just create a new Vue instance for each of them and mount them on different DOM elements. This comes in handy when you want to integrate Vue in an existing project.

Vue devtools

An official debugger tool for Vue is available on Chrome as an extension called Vue.js devtools. It can help you see how your app is running to help you debug your code. You can download it from the Chrome Web Store (https://chrome.google.com/webstore/search/vue) or from the Firefox addons registry (https://addons.mozilla.org/en-US/firefox/addon/vue-js-devtools/?src=ss). For the Chrome version, you need to set an additional setting.
In the extension settings, enable Allow access to file URLs so that it can detect Vue on a web page opened from your local drive. On your web page, open the Chrome Dev Tools with the F12 shortcut (or Shift + command + c on OS X) and search for the Vue tab (it may be hidden in the More tools... dropdown). Once it is opened, you can see a tree with our Vue instance, named Root by convention. If you click on it, the sidebar displays the properties of the instance. You can drag and drop the devtools tab to your liking. Don't hesitate to place it among the first tabs, as it will be hidden on pages where Vue is not running in development mode or is not running at all. You can change the name of your instance with the name option:

var app = new Vue({
  name: 'MyApp',
  // ...
})

This will help you see where your instance is in the devtools when you have many more:

Templates make your DOM dynamic

With Vue, we have several systems at our disposal to write our View. For now, we will start with templates. A template is the easiest way to describe a View because it looks a lot like HTML, but with some extra syntax to make the DOM update dynamically very easily.

Displaying text

The first template feature we will see is text interpolation, which is used to display dynamic text inside our web page. The text interpolation syntax is a pair of double curly braces containing a JavaScript expression of any kind. Its result will replace the interpolation when Vue processes the template. Replace the <div id="root"> element with the following:

<div id="root">
<p>{{ message }}</p>
</div>

The template in this example has a <p> element whose content is the result of the message JavaScript expression. It will return the value of the message attribute of our instance. You should now have new text displayed on your web page: Hello Vue.js!. It doesn't seem like much, but Vue has done a lot of work for us here: we now have the DOM wired with our data.
To demonstrate this, open your browser console, change the app.message value, and press Enter on the keyboard:

app.message = 'Awesome!'

The message has changed. This is called data-binding. It means that Vue is able to automatically update the DOM whenever your data changes, without requiring anything on your part. The library includes a very powerful and efficient reactivity system that keeps track of all your data and is able to update what's needed when something changes. All of this is very fast indeed.

Adding basic interactivity with directives

Let's add some interactivity to our otherwise quite static app, for example, a text input that will allow the user to change the message displayed. We can do that in templates with special HTML attributes called directives. All the directives in Vue start with v- and follow the kebab-case syntax. That means you should separate the words with a dash. Remember that HTML attributes are case insensitive (whether they are uppercase or lowercase doesn't matter). The directive we need here is v-model, which will bind the value of our <input> element to our message data property. Add a new <input> element with the v-model="message" attribute inside the template:

<div id="root">
<p>{{ message }}</p>
<!-- New text input -->
<input v-model="message" />
</div>

Vue will now update the message property automatically when the input value changes. You can play with the content of the input to verify that the text updates as you type and that the value in the devtools changes. There are many more directives available in Vue, and you can even create your own. To summarize, we quickly set up a web page to get started using Vue and wrote a simple app. We created a Vue instance to mount the Vue app on the page and wrote a template to make the DOM dynamic. Inside this template, we used a JavaScript expression to display text, thanks to text interpolations.
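The data-binding described above relies on the Object.defineProperty feature mentioned in the compatibility requirements: Vue replaces each data property with a getter/setter pair so it can react when a value changes. The following standalone sketch (plain Node.js, not Vue's actual internals; the observe and onChange names are our own) illustrates the idea:

```javascript
// Minimal illustration of getter/setter-based reactivity (not Vue's real code)
function observe(data, onChange) {
  Object.keys(data).forEach(function (key) {
    let value = data[key]
    Object.defineProperty(data, key, {
      get: function () { return value },
      set: function (newValue) {
        value = newValue
        onChange(key, newValue) // notify the "renderer" on every change
      },
    })
  })
  return data
}

let rendered = ''
const state = observe({ message: 'Hello Vue.js!' }, function () {
  rendered = '<p>' + state.message + '</p>' // pretend this updates the DOM
})

state.message = 'Awesome!'
console.log(rendered) // <p>Awesome!</p>
```

Assigning to state.message runs the setter, which re-renders automatically; this is, in a very simplified form, why changing app.message in the console updates the page.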
Finally, we added some interactivity with an input element that we bound to our data with the v-model directive. You read an excerpt from a book written by Guillaume Chau, titled Vue.js 2 Web Development Projects. It's a project-based, practical guide to getting hands-on with Vue.js 2.5 development by building beautiful, functional, and performant web apps.
Packt Editorial Staff
16 Apr 2018
12 min read

Meet the Coolest Raspberry Pi Family Member: Raspberry Pi Zero W Wireless

In early 2017, the Raspberry Pi community announced a new board with a wireless extension. It is a highly promising board that allows everyone to connect their devices to the Internet, offering wireless functionality so that anyone can develop their own projects without cables and extra components, using their skills in both software and hardware. This board is the new toy of any engineer interested in the Internet of Things, security, automation, and more! Comparing the new board with the Raspberry Pi 3 Model B, we can easily see that it is quite small yet offers many possibilities for the Internet of Things. But what is a Raspberry Pi Zero W and why do you need it? In today's post, we will cover the following topics:

Overview of the Raspberry Pi family
Introduction to the new Raspberry Pi Zero W
Distributions
Common issues

Raspberry Pi family

As said earlier, the Raspberry Pi Zero W is the newest member of the Raspberry Pi family of boards. Over the years, the Raspberry Pi has evolved and become more user friendly, with endless possibilities. Let's have a short look at the rest of the family so we can understand how the Pi Zero board differs. Right now, the heavyweight board is the Raspberry Pi 3 Model B. It is the best solution for demanding projects such as face recognition, video tracking, or gaming. It is the third generation of Raspberry Pi boards, after the Raspberry Pi 2, and has the following specs:

A 1.2GHz 64-bit quad-core ARMv8 CPU
802.11n Wireless LAN
Bluetooth 4.1
Bluetooth Low Energy (BLE)
Like the Pi 2, 1GB RAM
4 USB ports
40 GPIO pins
Full HDMI port
Ethernet port
Combined 3.5mm audio jack and composite video
Camera interface (CSI)
Display interface (DSI)
Micro SD card slot (now push-pull rather than push-push)
VideoCore IV 3D graphics core

The next board is the Raspberry Pi Zero, on which the Zero W was based.
A small, low-cost, low-power board able to do many things. The specs of this board are as follows:

1GHz, single-core CPU
512MB RAM
Mini-HDMI port
Micro-USB OTG port
Micro-USB power
HAT-compatible 40-pin header
Composite video and reset headers
CSI camera connector (v1.3 only)

At this point, we should not forget to mention that apart from the boards mentioned earlier, there are several other modules and components, such as the Sense HAT or the Raspberry Pi Touch Display, which work great for advanced projects. The 7″ Touchscreen Monitor for Raspberry Pi gives users the ability to create all-in-one, integrated projects such as tablets, infotainment systems, and embedded projects. The Sense HAT is an add-on board for Raspberry Pi, made especially for the Astro Pi mission. The Sense HAT has an 8×8 RGB LED matrix and a five-button joystick, and includes the following sensors:

Gyroscope
Accelerometer
Magnetometer
Temperature
Barometric pressure
Humidity

Keep an eye out for new boards and modules at the official website: https://www.raspberrypi.org/

Raspberry Pi Zero W

The Raspberry Pi Zero W is a small device that can be connected to an external monitor or TV and, of course, to the Internet. The operating system varies, as there are many distros on the official page, and almost all of them are based on Linux. With the Raspberry Pi Zero W you have the ability to do almost everything, from automation to gaming! It is a small computer that allows you to program easily with the help of the GPIO pins and some other components, such as a camera. Its possibilities are endless!

Specifications

If you have bought a Raspberry Pi 3 Model B, you will be familiar with the Cypress CYW43438 wireless chip. It provides 802.11n wireless LAN and Bluetooth 4.0 connectivity. The new Raspberry Pi Zero W is equipped with that wireless chip as well.
The following are the specifications of the new board:

Dimensions: 65mm × 30mm × 5mm
SoC: Broadcom BCM2835 chip, ARM11 at 1GHz, single-core CPU
512MB RAM
Storage: MicroSD card
Video and audio: 1080p HD video and stereo audio via mini-HDMI connector
Power: 5V, supplied via micro-USB connector
Wireless: 2.4GHz 802.11n wireless LAN
Bluetooth: Bluetooth Classic 4.1 and Bluetooth Low Energy (BLE)
Output: Micro-USB
GPIO: 40-pin GPIO, unpopulated

Notice that all the components are on the top side of the board, so you can easily choose your case without any problems and keep it safe. As far as the antenna is concerned, it is formed by etching away copper on each layer of the PCB. It may not be as visible as it is on other similar boards, but it works great and offers plenty of functionality. Also, the product is limited to only one piece per buyer and costs $10. You can buy a full kit with a microSD card, a case, and some extra components for about $45, or choose the camera full kit, which contains a small camera component, for $55.

Camera support

Image processing projects such as video tracking or face recognition require a camera. Below you can see the official camera support of the Raspberry Pi Zero W. The camera can easily be mounted at the side of the board using a cable, as on the Raspberry Pi 3 Model B board. Depending on your distribution, you may need to enable the camera through the command line. More information about the usage of this module will be given in the project.

Accessories

When building projects with the new board, there are some other gadgets that you might find useful. The following is a list of some crucial components. Notice that if you buy the Raspberry Pi Zero W kit, it includes some of them.
So, be careful and don't buy them twice:

OTG cable
Power hub
GPIO header
microSD card and card adapter
HDMI to mini-HDMI cable
HDMI to VGA cable

Distributions

The official site https://www.raspberrypi.org/downloads/ contains several distributions for downloading. The two basic operating systems that we will analyze below are RASPBIAN and NOOBS. Both RASPBIAN and NOOBS allow you to choose from two versions: there is the full version of the operating system and the lite one. Obviously, the lite version does not contain everything that you might use, so if you intend to use your Raspberry Pi with a desktop environment, choose and download the full version. On the other hand, if you intend to just SSH in and do some basic stuff, pick the lite one. It's really up to you, and of course you can easily download anything you like again and re-write your microSD card.

NOOBS distribution

Download NOOBS: https://www.raspberrypi.org/downloads/noobs/. The NOOBS distribution is for new users without much knowledge of Linux systems and Raspberry Pi boards. As the official page says, it is really "New Out Of the Box Software". There are also pre-installed NOOBS SD cards that you can purchase from many retailers, such as Pimoroni, Adafruit, and The Pi Hut, and of course you can download NOOBS and write your own microSD card. If you are having trouble with this distribution, take a look at the following links:

Full guide at https://www.raspberrypi.org/learning/software-guide/.
View the video at https://www.raspberrypi.org/help/videos/#noobs-setup.

The NOOBS operating system contains Raspbian, and it provides a variety of other operating systems available to download.

RASPBIAN distribution

Download RASPBIAN: https://www.raspberrypi.org/downloads/raspbian/. Raspbian is the officially supported operating system.
It can be installed through NOOBS or by downloading the image file at the following link and going through the guide on the official website. Image file: https://www.raspberrypi.org/documentation/installation/installing-images/README.md. It comes with plenty of pre-installed software, such as Python, Scratch, Sonic Pi, Java, Mathematica, and more! Furthermore, other distributions like Ubuntu MATE, Windows 10 IoT Core, or Weather Station are meant to be installed for more specific projects, like Internet of Things (IoT) or weather stations. To conclude, the right distribution to install actually depends on your project and your expertise in Linux systems administration. The Raspberry Pi Zero W needs a microSD card for hosting any operating system. You are able to write Raspbian, NOOBS, Ubuntu MATE, or any other operating system you like. So, all that you need to do is simply write your operating system to that microSD card. First of all, you have to download the image file from https://www.raspberrypi.org/downloads/, which usually comes as a .zip file. Once downloaded, unzip the zip file; the full image is about 4.5 gigabytes. Depending on your operating system, you have to use different programs:

7-Zip for Windows
The Unarchiver for Mac
Unzip for Linux

Now we are ready to write the image to the microSD card. You can easily write the .img file to the microSD card by following one of the next guides according to your system. For Linux users, the dd tool is recommended. Before connecting your microSD card to your computer with your adapter, run the following command:

df -h

Now connect your card and run the same command again. You should see some new entries. For example, if the new device is called /dev/sdd1, keep in mind that the card is at /dev/sdd (without the 1). The next step is to use the dd command and copy the image to the microSD card.
We can do this with the following command:

dd if=<path to your image> of=</dev/***>

Here, if is the input file (the image file or distribution) and of is the output file (the microSD card). Again, be careful here and use only /dev/sdd, or whatever yours is, without any numbers. If you run into problems with this, please use the full manual at the following link: https://www.raspberrypi.org/documentation/installation/installing-images/linux.md. A good tool that can help you with this job is GParted. If it is not installed on your system, you can easily install it with the following command:

sudo apt-get install gparted

Then run sudo gparted to start the tool. It handles partitions very easily, and you can format, delete, or find information about all your mounted partitions. More information about dd can be found here: https://www.raspberrypi.org/documentation/installation/installing-images/linux.md

For Mac OS users, the dd tool is also recommended: https://www.raspberrypi.org/documentation/installation/installing-images/mac.md
For Windows users, the Win32DiskImager utility is recommended: https://www.raspberrypi.org/documentation/installation/installing-images/windows.md

There are several other ways to write an image file to a microSD card, so if you run into any kind of problem when following the guides above, feel free to use any other guide available on the Internet. Now, assuming that everything is OK and the image is ready, you can gently plug the microSD card into your Raspberry Pi Zero W board. Remember that you can always confirm that your download was successful with the SHA-1 code. On Linux systems, you can use sha1sum followed by the file name (the image) to print the SHA-1 code, which must be the same as the one at the end of the official page where you downloaded the image.
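That verification step can be sketched as follows (the file names here are placeholders; substitute your actual image and the checksum published on the download page, and note the demo creates a stand-in "image" so it is self-contained):

```shell
# Hypothetical demonstration: create a stand-in "image" file, record its
# SHA-1 (as the download page would publish it), then verify the copy.
printf 'pretend this is a disk image\n' > raspbian.img
sha1sum raspbian.img > raspbian.img.sha1

expected="$(cut -d' ' -f1 raspbian.img.sha1)"
actual="$(sha1sum raspbian.img | cut -d' ' -f1)"
if [ "$expected" = "$actual" ]; then
    echo "Checksum OK - safe to write the image"
else
    echo "Checksum mismatch - download the image again"
fi
```

With a real download, you would skip the first two lines and paste the published hash into the comparison instead.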
The Pi Zero is so minimal that it can be tough to tell whether it is working or not. Since there is no status LED on the board, a quick check of whether it is working properly or something went wrong comes in handy.

Debugging steps

With the following steps you will probably find its status:

Take your board, with nothing in any slot or socket. Remove even the microSD card!
Take a normal micro-USB to USB-A data sync cable and connect one side to your computer and the other side to the Pi's USB port (not PWR_IN).

If the Zero is alive:

On Windows, the PC will go ding for the presence of new hardware and you should see BCM2708 Boot in Device Manager.
On Linux, you will see an ID 0a5c:2763 Broadcom Corp message from dmesg.

Try running dmesg in a Terminal before you plug in the USB and again afterwards; you will find new entries there. Output example:

[226314.048026] usb 4-2: new full-speed USB device number 82 using uhci_hcd
[226314.213273] usb 4-2: New USB device found, idVendor=0a5c, idProduct=2763
[226314.213280] usb 4-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[226314.213284] usb 4-2: Product: BCM2708 Boot
[226314.213] usb 4-2: Manufacturer: Broadcom

If you see any of the preceding, so far so good: you know the Zero is not dead.

microSD card issue

Remember that if you boot your Raspberry Pi and nothing works, you may have burned your microSD card incorrectly. This means that your card may not contain a boot partition as it should, so it is not able to load the first boot files. That problem occurs when the distribution is burned to /dev/sdd1 and not to /dev/sdd, as it should be. This is a quite common mistake, and there will be no errors on your monitor. It will just not work!

Case protection

Raspberry Pi boards are electronics, and we never place electronics on metallic surfaces or near magnetic objects. That will affect the booting operation of the Raspberry Pi, and it will probably not work.
So, a piece of advice: spend some extra money on the Raspberry Pi case and protect your board from anything like that. Many problems and issues arise, for example, from hanging your Raspberry Pi using tacks. To summarize, we introduced the new Raspberry Pi Zero W board alongside the rest of its family, with a brief look at some extra components that are worth buying as well. This article is an excerpt from the book Raspberry Pi Zero W Wireless Projects written by Vasilis Tzivaras. The Raspberry Pi has always been the go-to, lightweight ARM-based computer. This book will help you design and build interesting DIY projects using the Raspberry Pi Zero W board.
Packt Editorial Staff
16 Apr 2018
7 min read

What is a support vector machine?

Support vector machines are machine learning algorithms whereby a model 'learns' to categorize data around a linear classifier. The linear classifier is, quite simply, a line that classifies: a line that distinguishes between two 'types' of data, such as positive and negative sentiment in language. This gives you control over your data, allowing you to easily categorize and manage different data points in a way that's useful too. This tutorial is an extract from Statistics for Machine Learning. But support vector machines do more than linear classification: they are multidimensional algorithms, which is why they're so powerful. Using something called a kernel trick, which we'll look at in more detail later, support vector machines are able to create non-linear boundaries. Essentially, they work by constructing a more complex linear classifier, called a hyperplane. Support vector machines work on a range of different types of data, but they are most effective on data sets with very high dimensions relative to the number of observations, for example:

Text classification, in which the dimensions are the word vectors of the language
Quality control of DNA sequencing, by labeling chromatograms correctly

Different types of support vector machines

Support vector machines are generally classified into three different groups:

Maximum margin classifiers
Support vector classifiers
Support vector machines

Let's take a look at them now.

Maximum margin classifiers

People often use the term maximum margin classifier interchangeably with support vector machines. They're the most common type of support vector machine, but as you'll see, there are some important differences. The maximum margin classifier tackles the problem of what happens when your data isn't quite clear or clean enough to draw a simple line between two sets: it helps you find the best line, or hyperplane, out of a range of options.
The objective of the algorithm is to maximize the distance between the hyperplane and the nearest points of the two categories of data: this gap is the 'maximum margin', and the hyperplane sits comfortably in the middle of it. The hyperplane is defined by this equation:

w · x + b = 0

So, this means that any data points that sit directly on the hyperplane have to satisfy this equation. There are also data points that will, of course, fall on either side of this hyperplane. These satisfy the following equations:

w · x + b ≥ 1 for observations of the positive class
w · x + b ≤ -1 for observations of the negative class

You can represent the maximum margin classifier as the problem of maximizing the margin M subject to the constraint that the coefficients are normalized and that, for every observation i:

y_i (w · x_i + b) ≥ M

Constraint 2 ensures that observations will be on the correct side of the hyperplane by taking the product of the coefficients with the x variables and, finally, with the class variable indicator. In the diagram below, you can see that we could draw a number of separate hyperplanes to separate the two classes (blue and red). However, the maximum margin classifier attempts to fit the widest slab (maximizing the margin between the positive and negative hyperplanes) between the two classes; the observations touching both the positive and negative hyperplanes are the support vectors. It's important to note that in non-separable cases, the maximum margin classifier will not have a separating hyperplane: there's no feasible solution. This issue is solved with support vector classifiers.

Support vector classifiers

Support vector classifiers are an extended version of maximum margin classifiers in which some violations are 'tolerated' for non-separable cases. This means a best fit can be created. In fact, in real-life scenarios, we hardly ever find data with purely separable classes; most classes have a few or more observations in overlapping regions. The mathematical representation of the support vector classifier is as follows, a slight correction to the constraints to accommodate error terms ε_i:

y_i (w · x_i + b) ≥ M (1 - ε_i), with ε_i ≥ 0 and Σ ε_i ≤ C

In constraint 4, the C value is a non-negative tuning parameter used to accommodate more or fewer overall errors in the model.
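To make the error terms concrete, here is a small illustrative sketch (plain Python, our own example rather than the book's) that computes the slack ε_i of each observation for a fixed hyperplane w · x + b: points that satisfy the margin cost nothing, while points inside or beyond the margin accumulate violations that the budget C must cover:

```python
# Slack (violation) of each observation for a given hyperplane w.x + b,
# using the margin constraint y * (w.x + b) >= 1 - slack.
def slack(w, b, points, labels):
    violations = []
    for x, y in zip(points, labels):
        score = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
        violations.append(max(0.0, 1.0 - score))  # 0 if outside the margin
    return violations

w, b = [1.0, 0.0], 0.0               # hyperplane x1 = 0, margin at x1 = +/-1
points = [[2.0, 0.5], [0.5, 1.0], [-1.5, 0.0]]
labels = [1, 1, -1]                  # third point is the negative class

eps = slack(w, b, points, labels)
print(eps)                           # [0.0, 0.5, 0.0]
print(sum(eps))                      # total violation the budget C must allow
```

The second point sits inside the margin and contributes ε = 0.5; a model with C below that total would have to move the hyperplane instead of tolerating the violation.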
Having a high value of C leads to a more tolerant model with room for violations, whereas a lower value permits fewer violations of the error terms and creates a tighter fit. In practice, the C value is a tuning parameter, as is usual with all machine learning models. The impact of changing the C value on the margins is shown in the two diagrams below. With the high value of C, the model is more tolerant and also has space for violations (errors) in the left diagram, whereas with the lower value of C, there is no scope for accepting violations, which leads to a reduction in margin width. C is a tuning parameter in support vector classifiers.

Support vector machines

Support vector machines are used when the decision boundary is non-linear, that is, when it becomes impossible to separate the classes with support vector classifiers. The diagram below explains the non-linearly separable cases for both 1 dimension and 2 dimensions. Clearly, you can't classify these using support vector classifiers, whatever the cost value is. This is why you would then introduce something called the kernel trick. In the diagram below, a polynomial kernel with degree 2 has been applied to transform the data from 1-dimensional to 2-dimensional data. By doing so, the data becomes linearly separable in the higher dimensions. In the left diagram, the different classes (red and blue) are plotted on X1 only, whereas after applying degree 2, we now have 2 dimensions, X1 and X1² (the original and a new dimension). The degree of the polynomial kernel is a tuning parameter; you need to try various values to check where higher accuracy might be possible with the model. In the 2-dimensional case, the kernel trick is applied as below with the polynomial kernel with degree 2.
Observations have been classified successfully using a linear plane after projecting the data into higher dimensions.

Different types of kernel functions

Kernel functions are functions that, given the original feature vectors, return the same value as the dot product of their corresponding mapped feature vectors. Kernel functions do not explicitly map the feature vectors to a higher-dimensional space, or calculate the dot product of the mapped vectors. Kernels produce the same value through a different series of operations that can often be computed more efficiently. The main reason for using kernel functions is to eliminate the computational requirement of deriving the higher-dimensional vector space from the given basic vector space, so that observations can be separated linearly in higher dimensions. The reason this matters is that the derived vector space grows rapidly with the number of dimensions, and computation becomes almost impossible to continue even when you have only 30 or so variables. The following example shows how the number of derived variables grows: when we have two variables, such as x and y, a polynomial kernel needs to compute the x², y², and xy dimensions in addition. Whereas, if we have three variables x, y, and z, then we need to calculate the x², y², z², xy, yz, xz, and xyz vector spaces. You will have realized by this time that adding just one more dimension creates many more combinations. Hence, care needs to be taken to reduce the computational complexity; this is where kernels do wonders. Kernels are defined more formally in the following equation:

K(x, y) = φ(x) · φ(y), where φ is the mapping into the higher-dimensional feature space

Polynomial kernels are often used, especially with degree 2. In fact, the inventor of support vector machines, Vladimir N Vapnik, used a degree 2 kernel for classifying handwritten digits.
Polynomial kernels are given by the following equation:

K(x, y) = (1 + x · y)^d, where d is the degree of the polynomial

Radial Basis Function kernels (sometimes called Gaussian kernels) are a good first choice for problems requiring nonlinear models. A decision boundary that is a hyperplane in the mapped feature space is similar to a decision boundary that is a hypersphere in the original space. The feature space produced by the Gaussian kernel can have an infinite number of dimensions, a feat that would be impossible otherwise. RBF kernels are represented by the following equation:

K(x, y) = exp(-||x - y||² / (2σ²))

This is sometimes simplified as the following equation:

K(x, y) = exp(-γ ||x - y||²), where γ = 1/(2σ²)

It is advisable to scale the features when using support vector machines, and it is especially important when using the RBF kernel. When the value of gamma is large, it gives you a pointed bump in the higher dimensions; a smaller value gives you a softer, broader bump. A large gamma will give you low-bias and high-variance solutions; on the other hand, a small gamma will give you high-bias and low-variance solutions, and that is how you control the fit of the model using RBF kernels.
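The claim that kernels "produce the same value through a different series of operations" is easy to check numerically. The sketch below (our own illustration in plain Python, not from the book) shows that the degree-2 polynomial kernel (x · y)² on 2-dimensional inputs equals an ordinary dot product in the explicitly mapped 3-dimensional space (x1², x2², √2·x1·x2), without the kernel ever building that space:

```python
import math

def poly_kernel(x, y):
    # Degree-2 polynomial kernel: K(x, y) = (x . y)^2
    return (x[0] * y[0] + x[1] * y[1]) ** 2

def phi(x):
    # Explicit feature map for the same kernel: 2D -> 3D
    return [x[0] ** 2, x[1] ** 2, math.sqrt(2) * x[0] * x[1]]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

x, y = [1.0, 2.0], [3.0, 0.5]
k = poly_kernel(x, y)          # computed in the original 2D space
d = dot(phi(x), phi(y))        # computed in the mapped 3D space
print(k, d)                    # both are 16.0
```

The kernel evaluation needs one dot product and a square, while the explicit route needs the full mapped vectors; the gap between the two widens dramatically as the dimensions and the polynomial degree grow.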
Vijin Boricha
16 Apr 2018
7 min read

4 Encryption options for your SQL Server

In today's tutorial, we will learn about cryptographic elements like T-SQL functions, the Service Master Key, and more.

SQL Server cryptographic elements

Encryption is the process of obfuscating data by the use of a key or password. This can make the data useless without the corresponding decryption key or password. Encryption does not solve access control problems. However, it enhances security by limiting data loss even if access controls are bypassed. For example, if the database host computer is misconfigured and a hacker obtains sensitive data, that stolen information might be useless if it is encrypted. SQL Server provides the following building blocks for encryption; based on them, you can implement all supported features, such as backup encryption, Transparent Data Encryption, column encryption, and so on. We already know what symmetric and asymmetric keys are; the basic concept is the same in SQL Server's implementation. Later in the chapter, you will practice how to create and implement all the elements from Figure 9-3. Let me explain the rest of the items.

T-SQL functions

SQL Server has built-in support for handling encryption elements and features in the form of T-SQL functions. You don't need any third-party software to do that, as you do with other database platforms.

Certificates

A public key certificate is a digitally-signed statement that binds the data of a public key to the identity of the person, device, or service that holds the private key. Certificates are issued and signed by a certification authority (CA). You can work with self-signed certificates, but you should be careful here: they can be misused in a large set of network attacks. SQL Server encrypts data with hierarchical encryption. Each layer encrypts the layer beneath it using certificates, asymmetric keys, and symmetric keys. In a nutshell, the previous image means that any key in the hierarchy is guarded (encrypted) by the key above it.
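As a purely conceptual toy (a XOR "cipher" stands in for real AES here, and every key name below is made up), the following sketch shows why the hierarchy behaves this way: each key is stored only in wrapped (encrypted) form under the key above it, so losing or changing any link breaks everything below it:

```python
# Toy key-wrapping chain: SMK wraps DMK, DMK wraps a data key.
# XOR is NOT real encryption; it just mimics "wrap/unwrap with a key".
def xor_wrap(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

smk = b'service-master-key'
dmk = b'database-masterkey'
data_key = b'column-encr-key-01'

# What actually gets stored: only wrapped (encrypted) copies
stored_dmk = xor_wrap(dmk, smk)            # DMK protected by the SMK
stored_data_key = xor_wrap(data_key, dmk)  # data key protected by the DMK

# Decryption walks the chain top-down; unwrap is the same XOR operation
recovered_dmk = xor_wrap(stored_dmk, smk)
recovered_data_key = xor_wrap(stored_data_key, recovered_dmk)
print(recovered_data_key == data_key)  # True

# Using the wrong SMK (a lost or changed key) breaks every layer below it
bad_dmk = xor_wrap(stored_dmk, b'wrong-service-key!')
print(xor_wrap(stored_data_key, bad_dmk) == data_key)  # False
```

This is exactly why the SMK backup discussed next is so important: without the top of the chain, nothing below it can be unwrapped.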
In practice, if you miss just one element from the chain, decryption becomes impossible. This is an important security feature, because it is very hard for an attacker to compromise all levels of security. Let me explain the most important elements in the hierarchy.

Service Master Key

SQL Server has two primary kinds of keys: a Service Master Key (SMK) generated on and for a SQL Server instance, and a database master key (DMK) used for a database. The SMK is automatically generated during installation, the first time the SQL Server instance is started. It is used to encrypt the next key in the chain. The SMK should be backed up and stored in a secure, off-site location. This is an important step, because this is the first key in the hierarchy; any damage at this level can prevent access to all encrypted data in the layers below. When the SMK is restored, SQL Server decrypts all the keys and data that have been encrypted with the current SMK, and then encrypts them with the SMK from the backup.

The Service Master Key can be viewed with the following system catalog view:

1> SELECT name, create_date
2> FROM sys.symmetric_keys
3> GO

name                      create_date
------------------------- -----------------------
##MS_ServiceMasterKey##   2017-04-17 17:56:20.793

(1 row(s) affected)

Here is an example of how you can back up your SMK to the /var/opt/mssql/backup folder.

Note: If you don't have a /var/opt/mssql/backup folder, execute all five bash lines. If the folder exists but you lack permissions on it, execute all lines except the first one.

# sudo mkdir /var/opt/mssql/backup
# sudo chown mssql /var/opt/mssql/backup/
# sudo chgrp mssql /var/opt/mssql/backup/
# sudo /opt/mssql/bin/mssql-conf set filelocation.defaultbackupdir /var/opt/mssql/backup/
# sudo systemctl restart mssql-server

1> USE master
2> GO
Changed database context to 'master'.
1> BACKUP SERVICE MASTER KEY TO FILE = '/var/opt/mssql/backup/smk'
2> ENCRYPTION BY PASSWORD = 'S0m3C00lp4sw00rd'
3> --In real scenarios your password should be more complicated
4> GO
exit

The next example shows how to restore the SMK from the backup location:

1> USE master
2> GO
Changed database context to 'master'.
1> RESTORE SERVICE MASTER KEY
2> FROM FILE = '/var/opt/mssql/backup/smk'
3> DECRYPTION BY PASSWORD = 'S0m3C00lp4sw00rd'
4> GO

You can examine the contents of your SMK backup with the ls command or an internal Linux file viewer, such as the one in Midnight Commander (MC). There is basically not much to see, but that is the power of encryption. The SMK is the foundation of the SQL Server encryption hierarchy, so you should keep a copy at an off-site location.

Database master key

The DMK is a symmetric key used to protect the private keys of certificates and asymmetric keys that are present in the database. When it is created, the master key is encrypted using the AES 256 algorithm and a user-supplied password. To enable automatic decryption of the master key, a copy of the key is encrypted using the SMK and stored both in the user database and in the master database. The copy stored in master is updated whenever the master key is changed. The next T-SQL code shows how to create a DMK in the Sandbox database:

1> CREATE DATABASE Sandbox
2> GO
1> USE Sandbox
2> GO
3> CREATE MASTER KEY
4> ENCRYPTION BY PASSWORD = 'S0m3C00lp4sw00rd'
5> GO

Let's check the DMK with the sys.symmetric_keys system catalog view:

1> SELECT name, algorithm_desc
2> FROM sys.symmetric_keys
3> GO

name                       algorithm_desc
-------------------------- ---------------
##MS_DatabaseMasterKey##   AES_256

(1 row(s) affected)

This default can be changed by using the DROP ENCRYPTION BY SERVICE MASTER KEY option of ALTER MASTER KEY. A master key that is not encrypted by the SMK must be opened by using the OPEN MASTER KEY statement and a password.
Now that we know why the DMK is important and how to create one, we will continue with the following DMK operations:

ALTER
OPEN
CLOSE
BACKUP
RESTORE
DROP

These operations are important because all other database-level encryption keys depend on the DMK. Assuming we have the DMK created in the previous steps, we can easily create a new DMK for Sandbox and re-encrypt the keys below it in the encryption hierarchy:

1> ALTER MASTER KEY REGENERATE
2> WITH ENCRYPTION BY PASSWORD = 'S0m3C00lp4sw00rdforN3wK3y'
3> GO

Opening the DMK for use:

1> OPEN MASTER KEY
2> DECRYPTION BY PASSWORD = 'S0m3C00lp4sw00rdforN3wK3y'
3> GO

Note: If the DMK was encrypted with the SMK, it will be opened automatically when it is needed for decryption or encryption. In this case, it is not necessary to use the OPEN MASTER KEY statement.

Closing the DMK after use:

1> CLOSE MASTER KEY
2> GO

Backing up the DMK:

1> USE Sandbox
2> GO
1> OPEN MASTER KEY
2> DECRYPTION BY PASSWORD = 'S0m3C00lp4sw00rdforN3wK3y';
3> BACKUP MASTER KEY TO FILE = '/var/opt/mssql/backup/Sandbox-dmk'
4> ENCRYPTION BY PASSWORD = 'fk58smk@sw0h%as2'
5> GO

Restoring the DMK:

1> USE Sandbox
2> GO
1> RESTORE MASTER KEY
2> FROM FILE = '/var/opt/mssql/backup/Sandbox-dmk'
3> DECRYPTION BY PASSWORD = 'fk58smk@sw0h%as2'
4> ENCRYPTION BY PASSWORD = 'S0m3C00lp4sw00rdforN3wK3y';
5> GO

When the master key is restored, SQL Server decrypts all the keys that are encrypted with the currently active master key, and then encrypts them with the restored master key.

Dropping the DMK:

1> USE Sandbox
2> GO
1> DROP MASTER KEY
2> GO

You read an excerpt from the book SQL Server on Linux, written by Jasmin Azemović. From this book, you will learn to configure and administer database solutions on Linux.

How SQL Server handles data under the hood
SQL Server basics
Creating reports using SQL Server 2016 Reporting Services
Behavior Scripting in C# and Javascript for game developers

Packt Editorial Staff
16 Apr 2018
16 min read
Game behaviors, such as enemy AI, sequences of events, or the rules of a puzzle, are commonly expressed in a scripting language, often in a simple top-to-bottom recipe form, without using objects or much branching. A behavior script is usually associated with an object instance in the game code, expressed in an object-oriented language such as C++ or C#, which does the work. In today's post, we will introduce you to new classes and behavior scripts. The details of a new C# behavior and a new JavaScript behavior are also covered. We will further explore:

Wall attack
Declaring public variables
Assigning scripts to objects
Moving the camera

To take your first steps into programming, we will look at a simple example of the same functionality in both C# and JavaScript, the two main programming languages used by Unity developers. It is also possible to write Boo-based scripts, but these are rarely used except by those with existing experience in the language. To follow the next steps, you may choose either JavaScript or C#, and then continue with your preferred language.

To begin, click on the Create button on the Project panel, then choose either JavaScript or C# script, or simply click on the Add Component button on the Main Camera Inspector panel. Your new script will be placed into the Project panel named NewBehaviourScript, and will show an icon of a page with either JavaScript or C# written on it. When selecting your new script, Unity offers a preview of what is already in the script in the Inspector view, with an accompanying Edit button that, when clicked, will launch the script in the default script editor, MonoDevelop. You can also launch a script in your script editor at any time by double-clicking on its icon in the Project panel.

New behaviour script or class

New scripts can be thought of as new classes in Unity terms.
If you are new to programming, think of a class as a set of actions, properties, and other stored information that can be accessed under the heading of its name. For example, a class called Dog may contain properties such as color, breed, size, or gender, and have actions such as rollOver or fetchStick. These properties can be described as variables, while the actions can be written as functions, also known as methods.

In this example, to refer to the breed variable, a property of the Dog class, we might refer to the class it is in, Dog, and use a period (full stop) to refer to this variable, in the following way:

Dog.breed;

If we want to call a function within the Dog class, we might say, for example, the following:

Dog.fetchStick();

We can also add arguments to functions (these aren't the everyday arguments we have with one another!). Think of them as modifying the behavior of a function. For example, with our fetchStick function, we might build in an argument that defines how quickly our dog will fetch the stick. This might be called as follows:

Dog.fetchStick(25);

While these are abstract examples, it can often help to transpose coding into commonplace examples in order to make sense of it. As we continue, think back to this example, or come up with some examples of your own, to help train yourself to understand classes of information and their properties. When you write a script in C# or JavaScript, you are writing a new class or classes with their own properties (variables) and instructions (functions) that you can call into play at the desired moment in your games.
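The Dog analogy above can be written out as a tiny runnable sketch in plain JavaScript (the property and method names mirror the examples in the text; the return strings are illustrative):

```javascript
// A class groups properties (variables) and actions (methods) under one name.
class Dog {
  constructor(breed, color) {
    this.breed = breed;   // properties, accessed as dog.breed
    this.color = color;
  }
  rollOver() {            // an action with no arguments
    return `${this.breed} rolls over`;
  }
  fetchStick(speed) {     // an argument modifies the action's behaviour
    return `fetching the stick at speed ${speed}`;
  }
}

const dog = new Dog('Beagle', 'brown');
console.log(dog.breed);          // Beagle
console.log(dog.fetchStick(25)); // fetching the stick at speed 25
```

The dot syntax here is the same idea as Dog.breed and Dog.fetchStick(25) in the text: the name on the left identifies the class or instance, and the name on the right picks out one of its properties or actions.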
What's inside a new C# behaviour

When you begin with a new C# script, Unity gives you the following code to get started:

using UnityEngine;
using System.Collections;

public class NewBehaviourScript : MonoBehaviour {

    // Use this for initialization
    void Start () {

    }

    // Update is called once per frame
    void Update () {

    }
}

This begins with the necessary two calls to the Unity engine itself:

using UnityEngine;
using System.Collections;

It goes on to establish the class named after the script. With C#, you are required to give your scripts names matching the class declared inside the script itself. This is why you will see public class NewBehaviourScript : MonoBehaviour { at the beginning of a new C# document, as NewBehaviourScript is the default name that Unity gives to newly generated scripts. If you rename your script in the Project panel when it is created, Unity will rewrite the class name in your C# script.

Code in classes

When writing code, most of your functions, variables, and other scripting elements will be placed within the class of a script in C#. Within, in this context, means that the code must occur after the class declaration and before the corresponding closing } at the bottom of the script. So, unless told otherwise, while following the instructions, assume that your code should be placed within the class established in the script. In JavaScript, this is less relevant, as the entire script is the class; it is not explicitly established.

Basic functions

Unity as an engine has many of its own functions that can be used to call different features of the game engine, and it includes two important ones when you create a new script in C#. Functions (also known as methods) most often start with the void keyword in C#. This is the function's return type, which is the kind of data a function may result in.
As most functions are simply there to carry out instructions rather than return information, you will often see void at the beginning of their declaration, which simply means that no data will be returned. Some basic functions are explained as follows:

Start(): This is called when the scene first launches, so it is often used, as the comment in the code suggests, for initialization. For example, you may have a core variable that must be set to 0 when the game scene begins, or perhaps a function that spawns your player character in the correct place at the start of a level.

Update(): This is called in every frame that the game runs, and is crucial for checking the state of various parts of your game during this time, as many different conditions of game objects may change while the game is running.

Variables in C#

To store information in a variable in C#, you will use the following syntax:

typeOfData nameOfVariable = value;

Consider the following example:

int currentScore = 5;

Another example would be:

float currentVelocity = 5.86f;

Note that the examples here show numerical data, with int meaning integer, that is, a whole number, and float meaning floating point, that is, a number with a decimal place, which in C# requires the letter f to be placed at the end of the value. This syntax is somewhat different from JavaScript; refer to the Variables in JavaScript section.

What's inside a new JavaScript behaviour?

While fulfilling the same functions as a C# file, a new empty JavaScript file shows you less, as the entire script itself is considered to be the class; the empty space in the script is considered to be within the opening and closing of the class, because the class declaration itself is hidden. You will also note that the lines using UnityEngine; and using System.Collections; are hidden in JavaScript, so in a new JavaScript file you will simply be shown the Update() function:

function Update () {

}

You will note that in JavaScript you declare functions differently, using the keyword function before the name. You will also need to declare variables and various other scripted elements with a slightly different syntax. We will look at examples of this as we progress.

Variables in JavaScript

The syntax for variables in JavaScript works as follows, and is always preceded by the prefix var, as shown:

var variableName : TypeOfData = value;

For example:

var currentScore : int = 0;

Another example is:

var currentVelocity : float = 5.86;

As you must have noticed, the float value does not require a letter f after its value, as it does in C#. As you compare scripts written in the two languages further on, you will notice that C# often has stricter rules about how scripts are written, especially regarding explicitly stating the types of data being used.

Comments

In both C# and JavaScript in Unity, you can write comments using:

// two forward slashes for a single-line comment

Another way of doing this would be:

/* a forward slash and a star to open a multiline comment,
and a star and a forward slash at the end of it to close */

You may write comments in the code to help you remember what each part does as you progress. Remember that because comments are not executed as code, you can write whatever you like, including pieces of code. As long as they are contained within a comment, they will never be treated as working code.

Wall attack

Now let's put some of your new scripting knowledge into action and turn our existing scene into an interactive gameplay prototype. In the Project panel in Unity, rename your newly created script Shooter by selecting it, pressing return (Mac) or F2 (Windows), and typing in the new name.
If you are using C#, remember to ensure that the class declaration inside the script matches the name of the script:

public class Shooter : MonoBehaviour {

As mentioned previously, JavaScript users will not need to do this. To kick-start your knowledge of scripting in Unity, we will write a script to control the camera and allow shooting of a projectile at the wall that we have built. To begin with, we will establish three variables:

bullet: A variable of type Rigidbody, as it will hold a reference to a physics-controlled object we will make
power: A floating point number we will use to set the power of shooting
moveSpeed: Another floating point number we will use to define the speed of movement of the camera using the arrow keys

These variables must be public member variables in order for them to display as adjustable settings in the Inspector. You'll see this in action very shortly!

Declaring public variables

Public variables are important to understand, as they allow you to create variables that will be accessible from other scripts, an important part of game development because it allows for simpler inter-object communication. Public variables are also really useful as they appear as settings you can adjust visually in the Inspector once your script is attached to an object. Private variables are the opposite: designed to be accessible only within the scope of the script, class, or function they are defined in, and they do not appear as settings in the Inspector.

C#

Before we begin, as we will not be using it, remove the Start() function from this script by deleting void Start () {}.
To establish the required variables, put the following code snippet into your script after the opening of the class, shown as follows:

using UnityEngine;
using System.Collections;

public class Shooter : MonoBehaviour {

    public Rigidbody bullet;
    public float power = 1500f;
    public float moveSpeed = 2f;

    void Update () {

    }
}

Note that in this example, the default explanatory comments and the Start() function have been removed in order to save space.

JavaScript

In order to establish public member variables in JavaScript, you simply need to ensure that your variables are declared outside of any existing function. This is usually done at the top of the script, so to declare the three variables we need, add the following to the top of your new Shooter script so that it looks like this:

var bullet : Rigidbody;
var power : float = 1500;
var moveSpeed : float = 5;

function Update () {

}

Note that JavaScript (UnityScript) is much less declarative and needs less typing to get started.

Assigning scripts to objects

In order for this script to be used within our game, it must be attached as a component of one of the game objects within the existing scene. Save your script by choosing File | Save from the top menu of your script editor and return to Unity. There are several ways to assign a script to an object in Unity:

Drag it from the Project panel and drop it onto the name of an object in the Hierarchy panel.
Drag it from the Project panel and drop it onto the visual representation of the object in the Scene panel.
Select the object you wish to apply the script to, and then drag and drop the script into empty space at the bottom of the Inspector view for that object.
Select the object you wish to apply the script to, and then choose Component | Scripts | and the name of your script from the top menu.

The most common method is the first approach, and it is the most appropriate here, since trying to drag to the camera in the Scene View, for example, would be difficult, as the camera itself doesn't have a tangible surface to drag to.
For this reason, drag your new Shooter script from the Project panel and drop it onto the name of Main Camera in the Hierarchy to assign it, and you should see your script appear as a new component, following the existing audio listener component. You will also see its three public variables, bullet, power, and moveSpeed, in the Inspector, as follows:

You can alternatively act directly in the Inspector: press the Add Component button and look for Shooter by typing in the search box. Note that this applies only if you didn't add the component this way initially; in that case, the Shooter component will already be attached to the camera GameObject.

As you will see, Unity has taken the variable names and given them capital letters, and in the case of our moveSpeed variable, the capital letter in the middle of the phrase signifies the start of a new word, so the Inspector places a space between the two words when showing it as a public variable. You can also see here that the bullet variable is not yet set, but it is expecting an object to be assigned to it that has a Rigidbody attached; this is often referred to as being a Rigidbody object. Despite the fact that, in Unity, all objects in the scene can be referred to as game objects, when describing an object as a Rigidbody object in scripting, we will only be able to refer to properties and functions of the Rigidbody class. This is not a problem, however; it simply makes our script more efficient than referring to the entire GameObject class. For more on this, take a look at the script reference documentation for both classes:

GameObject
Rigidbody

Beware that when adjusting the values of public variables in the Inspector, any values changed will simply override those written in the script, rather than replacing them. Let's continue working on our script and add some interactivity; so, return to your script editor now.
Moving the camera

Next, we will make use of the moveSpeed variable combined with keyboard input in order to move the camera, effectively creating a primitive aiming mechanism for our shot, as we will use the camera as the point to shoot from. As we want to use the arrow keys on the keyboard, we first need to be aware of how to address them in code. Unity has many inputs that can be viewed and adjusted using the Input Manager; choose Edit | Project Settings | Input:

As seen in this screenshot, two of the default settings for input are Horizontal and Vertical. These rely on an axis-based input that, when holding the Positive Button, builds to a value of 1, and when holding the Negative Button, builds to a value of -1. Releasing either button means that the input's value springs back to 0, as it would when using a sprung analog joystick on a gamepad. As Input is also the name of a class, and all named elements in the Input Manager are axes or buttons, in scripting terms we can simply use:

Input.GetAxis("Horizontal");

This receives the current value of the horizontal keys, that is, a value between -1 and 1, depending upon what the user is pressing. Let's put that into practice in our script now, using local variables to represent our axes. By doing this, we can modify the value of this variable later using multiplication, taking it from a maximum value of 1 to a higher number, allowing us to move the camera faster than 1 unit at a time. This variable is not something that we will ever need to set in the Inspector, as Unity assigns its value based on our key input. As such, these values can be established as local variables.

Local, private, and public variables

Before we continue, let's take an overview of local, private, and public variables in order to cement your understanding:

Local variables: These are variables established inside a function; they will not be shown in the Inspector, and are only accessible to the function they are in.
Private variables: These are established outside a function, and are therefore accessible to any function within the class. However, they are also not visible in the Inspector.

Public variables: These are established outside a function, are accessible to any function in their class and also to other scripts, and are visible for editing in the Inspector.

Local variables and receiving input

The local variables in C# and JavaScript are shown as follows:

C#

Here is the code for C#:

void Update () {
    float h = Input.GetAxis("Horizontal") * Time.deltaTime * moveSpeed;
    float v = Input.GetAxis("Vertical") * Time.deltaTime * moveSpeed;

JavaScript

Here is the code for JavaScript:

function Update () {
    var h : float = Input.GetAxis("Horizontal") * Time.deltaTime * moveSpeed;
    var v : float = Input.GetAxis("Vertical") * Time.deltaTime * moveSpeed;

The variables declared here, h for Horizontal and v for Vertical, could be named anything we like; it is simply quicker to write single letters. Generally speaking, we would normally give these longer names, though note that some letters cannot be used as variable names, for example, x, y, and z, because they are used for coordinate values and are therefore reserved for use as such. As these axes' values can be anything from -1 to 1, they are likely to be a number with a decimal place, and as such we must declare them as floating point variables. They are then multiplied, using the * symbol, by Time.deltaTime, which effectively divides the value by the number of frames per second (deltaTime is the time it takes from one frame to the next, or the time since the Update() function last ran). This means that the value adds up to a consistent amount per second, regardless of the framerate. The resultant value is then increased by multiplying it by the public variable we made earlier, moveSpeed.
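The frame-rate independence that Time.deltaTime provides can be simulated outside Unity with a short plain-JavaScript sketch (the function and parameter names here are my own, purely for illustration): a fast and a slow frame loop both accumulate exactly the same distance per second.

```javascript
// Simulate a per-frame update: position += axisValue * deltaTime * moveSpeed,
// mirroring h = Input.GetAxis("Horizontal") * Time.deltaTime * moveSpeed.
function simulate(framesPerSecond, seconds, axisValue, moveSpeed) {
  const deltaTime = 1 / framesPerSecond; // time from one frame to the next
  let position = 0;
  for (let i = 0; i < framesPerSecond * seconds; i++) {
    position += axisValue * deltaTime * moveSpeed;
  }
  return position;
}

// One second of full positive input at moveSpeed 5, at two framerates:
const at30fps = simulate(30, 1, 1, 5);
const at60fps = simulate(60, 1, 1, 5);
console.log(at30fps.toFixed(3), at60fps.toFixed(3)); // 5.000 5.000
```

Without the deltaTime factor, the 60 fps loop would move twice as far per second as the 30 fps loop; with it, both cover moveSpeed units per second.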
This means that although the values of h and v are local variables, we can still affect them by adjusting the public moveSpeed in the Inspector, as it is part of the equation that those variables represent. This is a common practice in scripting, as it takes advantage of publicly accessible settings combined with specific values generated by a function.

[box type="note" align="" class="" width=""]You read an excerpt from the book Unity 5.x Game Development Essentials, Third Edition written by Tommaso Lintrami. Unity is the most popular game engine among indie developers, start-ups, and medium to large independent game development companies. This book is a complete exercise in game development covering environments, physics, sound, particles, and much more, to get you up and running with Unity rapidly.[/box]

Scripting Strategies
Unity 3.x Scripting - Character Controller versus Rigidbody

Build your first Android app with Kotlin

Aarthi Kumaraswamy
13 Apr 2018
10 min read
Android application development is an area in which Kotlin shines. Before getting started on this journey, we must set up our systems for the task at hand. A major necessity for developing Android applications is a suitable IDE; it is not a strict requirement, but it makes the development process easier. Many IDE choices exist for Android developers. The most popular are:

Android Studio
Eclipse
IntelliJ IDEA

Android Studio is by far the most powerful of the IDEs available with respect to Android development. As a consequence, we will be utilizing this IDE in all Android-related chapters of this book.

Setting up Android Studio

At the time of writing, the version of Android Studio that comes bundled with full Kotlin support is Android Studio 3.0. The canary version of this software can be downloaded from its website. Once downloaded, open the downloaded package or executable and follow the installation instructions. A setup wizard exists to guide you through the IDE setup procedure:

Continuing to the next setup screen will prompt you to choose which type of Android Studio setup you'd like:

Select the Standard setup and continue to the next screen. Click Finish on the Verify Settings screen. Android Studio will now download the components required for your setup. You will need to wait a few minutes for the required components to download:

Click Finish once the component download has completed. You will be taken to the Android Studio landing screen. You are now ready to use Android Studio:

[box type="note" align="" class="" width=""]You may also want to read Benefits of using Kotlin Java for Android programming.[/box]

Building your first Android application with Kotlin

Without further ado, let's explore how to create a simple Android application with Android Studio. We will be building the HelloApp, an app that displays Hello world! on the screen upon the click of a button. On the Android Studio landing screen, click Start a new Android Studio project.
You will be taken to a screen where you will specify some details concerning the app you are about to build, such as the name of the application, your company domain, and the location of the project. Type in HelloApp as the application name and enter a company domain. If you do not have a company domain name, fill in any valid domain name in the company domain input box; as this is a trivial project, a legitimate domain name is not required. Specify the location in which you want to save this project and tick the checkbox for the inclusion of Kotlin support. After filling in the required parameters, continue to the next screen:

Here, we are required to specify our target devices. We are building this application to run on smartphones specifically, hence tick the Phone and Tablet checkbox if it's not already ticked. You will notice an options menu next to each device option. This dropdown is used to specify the target API level for the project being created. An API level is an integer that uniquely identifies the framework API revision offered by a version of the Android platform. Select API level 15 if it is not already selected and continue to the next screen:

On the next screen, we are required to select an activity to add to our application. An activity is a single screen with a unique user interface, similar to a window. We will discuss activities in more depth in Chapter 2, Building an Android Application – Tetris. For now, select the empty activity and continue to the next screen.

Now, we need to configure the activity that we just specified should be created. Name the activity HelloActivity and ensure the Generate Layout File and Backwards Compatibility checkboxes are ticked:

Now, click the Finish button. Android Studio may take a few minutes to set up your project. Once the setup is complete, you will be greeted by the IDE window containing your project files.
[box type="note" align="" class="" width=""]Errors pertaining to the absence of required project components may be encountered at any point during project development. Missing components can be downloaded from the SDK manager.[/box]

Make sure that the project window of the IDE is open (on the navigation bar, select View | Tool Windows | Project) and that the Android view is currently selected from the drop-down list at the top of the Project window. You will see the following files at the left-hand side of the window:

app | java | com.mydomain.helloapp | HelloActivity.kt: This is the main activity of your application. An instance of this activity is launched by the system when you build and run your application.

app | res | layout | activity_hello.xml: The user interface for HelloActivity is defined within this XML file. It contains a TextView element placed within the ViewGroup of a ConstraintLayout. The text of the TextView has been set to Hello World!

app | manifests | AndroidManifest.xml: The AndroidManifest file is used to describe the fundamental characteristics of your application. In addition, this is the file in which your application's components are defined.

Gradle Scripts | build.gradle: Two build.gradle files will be present in your project. The first build.gradle file is for the project and the second is for the app module. You will most frequently work with the module's build.gradle file for the configuration of the compilation procedure of the Gradle tools and the building of your app.

[box type="note" align="" class="" width=""]Gradle is an open source build automation system used for the declaration of project configurations. In Android, Gradle is utilized as a build tool with the goal of building packages and managing application dependencies.[/box]

Creating a user interface

A user interface (UI) is the primary means by which a user interacts with an application.
The user interfaces of Android applications are made by the creation and manipulation of layout files. Layout files are XML files that exist in app | res | layout. To create the layout for the HelloApp, we are going to do three things:

Add a LinearLayout to our layout file
Place the TextView within the LinearLayout and remove the android:text attribute it possesses
Add a button to the LinearLayout

Open the activity_hello.xml file if it's not already open. You will be presented with the layout editor. If the editor is in the Design view, change it to the Text view by toggling the option at the bottom of the layout editor. Now, your layout editor should look similar to the following screenshot:

A LinearLayout is a ViewGroup that arranges child views in either a horizontal or vertical manner within a single column. Copy the code snippet of our required LinearLayout from the following block and paste it within the ConstraintLayout, preceding the TextView:

<LinearLayout
    android:id="@+id/ll_component_container"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:gravity="center">
</LinearLayout>

Now, copy and paste the TextView present in the activity_hello.xml file into the body of the LinearLayout element and remove the android:text attribute:

<LinearLayout
    android:id="@+id/ll_component_container"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:gravity="center">

    <TextView
        android:id="@+id/tv_greeting"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:textSize="50sp" />
</LinearLayout>

Lastly, we need to add a button element to our layout file. This element will be a child of our LinearLayout.
To create a button, we use the Button element:

<LinearLayout
    android:id="@+id/ll_component_container"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:gravity="center">

    <TextView
        android:id="@+id/tv_greeting"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:textSize="50sp" />

    <Button
        android:id="@+id/btn_click_me"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginTop="16dp"
        android:text="Click me!" />
</LinearLayout>

Toggle to the layout editor's Design view to see how the changes we have made thus far translate when rendered on the user interface:

Now we have our layout, but there's a problem. Our CLICK ME! button does not actually do anything when clicked. We are going to fix that by adding a listener for click events to the button. Locate and open the HelloActivity.java file and edit the function to add the logic for the CLICK ME! button's click event, as well as the required package imports, as shown in the following code:

package com.mydomain.helloapp

import android.support.v7.app.AppCompatActivity
import android.os.Bundle
import android.text.TextUtils
import android.widget.Button
import android.widget.TextView
import android.widget.Toast

class HelloActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_hello)

        val tvGreeting = findViewById<TextView>(R.id.tv_greeting)
        val btnClickMe = findViewById<Button>(R.id.btn_click_me)

        btnClickMe.setOnClickListener {
            if (TextUtils.isEmpty(tvGreeting.text)) {
                tvGreeting.text = "Hello World!"
            } else {
                Toast.makeText(this, "I have been clicked!",
                    Toast.LENGTH_LONG).show()
            }
        }
    }
}

In the preceding code snippet, we have added references to the TextView and Button elements present in our activity_hello layout file by utilizing the findViewById function.
The findViewById function can be used to get references to layout elements that are within the currently-set content view. The second line of the onCreate function has set the content view of HelloActivity to the activity_hello.xml layout. Next to the findViewById function identifier, we have the TextView type written between two angular brackets. This is called a function generic. It is being used to enforce that the resource ID being passed to the findViewById belongs to a TextView element. After adding our reference objects, we set an onClickListener to btnClickMe. Listeners are used to listen for the occurrence of events within an application. In order to perform an action upon the click of an element, we pass a lambda containing the action to be performed to the element's setOnClickListener method. When btnClickMe is clicked, tvGreeting is checked to see whether it has been set to contain any text. If no text has been set to the TextView, then its text is set to Hello World!, otherwise a toast is displayed with the I have been clicked! text. Running the Android application In order to run the application, click the Run 'app' (^R) button at the top-right side of the IDE window and select a deployment target. The HelloApp will be built, installed, and launched on the deployment target: You may use one of the available prepackaged virtual devices or create a custom virtual device to use as the deployment target.  You may also decide to connect a physical Android device to your computer via USB and select it as your target. The choice is up to you. After selecting a deployment device, click OK to build and run the application. Upon launching the application, our created layout is rendered: When CLICK ME! is clicked, Hello World! is shown to the user: Subsequent clicks of the CLICK ME! button display a toast message with the text I have been clicked!: You enjoyed an excerpt from the book, Kotlin Programming By Example by Iyanu Adelekan. 
Start building and deploying Android apps with Kotlin using this book. Check out other related posts: Creating a custom layout implementation for your Android app Top 5 Must-have Android Applications OpenCV and Android: Making Your Apps See      
How to build an options trading web app using Q-learning

Sunith Shetty
13 Apr 2018
19 min read
Today we will learn to develop an options trading web app using the Q-learning algorithm, and we will also evaluate the model.

Developing an options trading web app using Q-learning

Algorithmic trading is the process of using computers programmed to follow a defined set of instructions for placing a trade, in order to generate profits at a speed and frequency that is impossible for a human trader. The defined sets of rules are based on timing, price, quantity, or any mathematical model.

Problem description

Through this project, we will predict the price of an option on a security for N days in the future, according to the current set of observed features derived from the time to expiration, the price of the security, and volatility. The question would be: what model should we use for such option pricing? The answer is that there are actually many; the Black-Scholes stochastic partial differential equation (PDE) is one of the most recognized. In mathematical finance, the Black-Scholes equation is a PDE governing the price evolution of a European call or a European put under the Black-Scholes model. For a European call or put on an underlying stock paying no dividends, the equation is:

∂V/∂t + (1/2)σ²S² ∂²V/∂S² + rS ∂V/∂S − rV = 0

where V is the price of the option as a function of the stock price S and time t, r is the risk-free interest rate, and σ is the volatility of the stock. One of the key financial insights behind the equation is that anyone can perfectly hedge the option by buying and selling the underlying asset in just the right way, without any risk. This hedge implies that there is only one right price for the option, as returned by the Black-Scholes formula. Consider a January-maturity call option on IBM with an exercise price of $95, and suppose you write a January IBM put option with an exercise price of $85. Let us focus on the call options of a given security, IBM.
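The Black-Scholes PDE above admits a closed-form solution for a European call on a non-dividend-paying stock. The following Python sketch is purely illustrative (the book's project is in Scala and does not use this code); it prices a call from the same inputs S, K, r, σ, and T, using the error function to build the standard normal CDF:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(s: float, k: float, r: float, sigma: float, t: float) -> float:
    # Closed-form Black-Scholes price of a European call (no dividends)
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

# Hypothetical example: an at-the-money one-year call
price = black_scholes_call(s=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0)
print(round(price, 2))  # → 10.45
```

The inputs here are made-up illustration values, not the IBM figures used later in the article.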
The following chart plots the daily price of the IBM stock and its derivative call option for May 2014, with a strike price of $190:

Figure 1: IBM stock and call $190 May 2014 pricing in May-Oct 2013

Now, what will the profit and loss be for this position if IBM is selling at $87 on the option maturity date? Alternatively, what if IBM is selling at $100? Well, it is not easy to compute or predict the answer. However, in options trading, the price of an option depends on a few parameters, such as time decay, price, and volatility:

Time to expiration of the option (time decay)
The price of the underlying security
The volatility of returns of the underlying asset

A pricing model usually does not consider the variation in trading volume of the underlying security. Therefore, some researchers have included it in the option trading model. As we have described, any RL-based algorithm should have an explicit state (or states), so let us define the state of an option using the following four normalized features:

Time decay (timeToExp): This is the time to expiration once normalized in the range of (0, 1).
Relative volatility (volatility): Within a trading session, this is the relative variation of the price of the underlying security. It is different from the more complex volatility of returns defined in the Black-Scholes model, for example.
Volatility relative to volume (vltyByVol): This is the relative volatility of the price of the security adjusted for its trading volume.
Relative difference between the current price and the strike price (priceToStrike): This measures the ratio of the difference between the price and the strike price to the strike price.

The following graph shows the four normalized features that can be used for the IBM option strategy:

Figure 2: Normalized relative stock price volatility, volatility relative to trading volume, and price relative to strike price for the IBM stock

Now let us look at the stock and the option price dataset.
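As an aside, the scaling of such features into (0, 1) can be sketched in a few lines. The Python below is illustrative only: the book's project does this in Scala, the price and volume figures are invented, and min-max scaling is an assumption about the normalization used:

```python
def normalize(values):
    # Min-max scaling into [0, 1]; assumes at least two distinct values
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical daily session data (close, high, low, volume) for illustration
closes  = [186.0, 188.5, 187.2, 190.1, 189.4]
highs   = [187.0, 189.9, 188.0, 191.5, 190.2]
lows    = [185.1, 187.6, 186.3, 189.0, 188.8]
volumes = [4.1e6, 3.8e6, 5.2e6, 4.6e6, 4.0e6]
strike  = 190.0

# Relative session volatility, volatility adjusted for volume,
# and price relative to strike, each normalized into [0, 1]
volatility      = normalize([(h - l) / c for h, l, c in zip(highs, lows, closes)])
vlty_by_vol     = normalize([(h - l) / c / v for h, l, c, v in zip(highs, lows, closes, volumes)])
price_to_strike = normalize([1.0 - strike / c for c in closes])

print(volatility, price_to_strike)
```

Each resulting series spans exactly [0, 1], which is what lets the features share a single discretization scheme later on.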
There are two files, IBM.csv and IBM_O.csv, containing the IBM stock prices and option prices, respectively. The stock price dataset has the date, the opening price, the high and low price, the closing price, the trade volume, and the adjusted closing price. A snapshot of the dataset is given in the following diagram:

Figure 3: IBM stock data

On the other hand, IBM_O.csv has 127 option prices for IBM Call 190 Oct 18, 2014. A few values are 1.41, 2.24, 2.42, 2.78, 3.46, 4.11, 4.51, 4.92, 5.41, 6.01, and so on. Up to this point, can we develop a predictive model using a Q-learning algorithm that can help us answer the previously mentioned question: can it tell us how IBM can make maximum profit by utilizing all the available features? Well, we know how to implement Q-learning, and we know what option trading is.

Implementing an options trading web application

The goal of this project is to create an options trading web application that creates a Q-learning model from the IBM stock data. Then the app will extract the output from the model as a JSON object and show the result to the user. Figure 4 shows the overall workflow:

Figure 4: Workflow of the options trading Scala web application

The compute API prepares the input for the Q-learning algorithm, and the algorithm starts by extracting the data from the files to build the option model. Then it performs operations on the data, such as normalization and discretization. It passes all of this to the Q-learning algorithm to train the model. After that, the compute API gets the model from the algorithm, extracts the best policy data, and puts it onto JSON to be returned to the web browser.
Well, the implementation of the options trading strategy using Q-learning consists of the following steps:

Describing the property of an option
Defining the function approximation
Specifying the constraints on the state transition

Creating an option property

Considering the market volatility, we need to be a bit more realistic, because any longer-term prediction is quite unreliable. The reason is that it would fall outside the constraint of the discrete Markov model. So, suppose we want to predict the price for the next two days, that is, N = 2. That means the price of the option two days in the future is the value of the reward, profit or loss. So, let us encapsulate the following four parameters:

timeToExp: Time left until expiration as a percentage of the overall duration of the option
volatility: Normalized relative volatility of the underlying security for a given trading session
vltyByVol: Volatility of the underlying security for a given trading session relative to the trading volume for the session
priceToStrike: Price of the underlying security relative to the strike price for a given trading session

The OptionProperty class defines the property of a traded option on a security. The constructor creates the property for an option:

class OptionProperty(timeToExp: Double, volatility: Double, vltyByVol: Double, priceToStrike: Double) {
  val toArray = Array[Double](timeToExp, volatility, vltyByVol, priceToStrike)
  require(timeToExp > 0.01, s"OptionProperty time to expiration found $timeToExp required 0.01")
}

Creating an option model

Now we need to create an OptionModel to act as the container and the factory for the properties of the option. It takes the following parameters and creates a list of option properties, propsList, by accessing the data source of the four features described earlier:

The symbol of the security.
The strike price for the option, strikePrice.
The source of the data, src.
The minimum time decay or time to expiration, minTDecay.
Out-of-the-money options expire worthlessly, and in-the-money options have very different price behavior as they get closer to the expiration. Therefore, the last minTDecay trading sessions prior to the expiration date are not used in the training process. The number of steps (or buckets), nSteps, is used in approximating the values of each feature. For instance, an approximation with four steps creates four buckets: (0, 25), (25, 50), (50, 75), and (75, 100). Then it assembles OptionProperties and computes the normalized minimum time to the expiration of the option. Then it computes an approximation of the value of options by discretization of the actual value in multiple levels from an array of option prices; finally, it returns a map of an array of levels for the option price and accuracy. Here is the constructor of the class:

class OptionModel(
  symbol: String,
  strikePrice: Double,
  src: DataSource,
  minExpT: Int,
  nSteps: Int
)

Inside this class implementation, at first, a validation is done using the check() method, by checking the following:

strikePrice: A positive price is required
minExpT: This has to be between 2 and 16
nSteps: Requires a minimum of two steps

Here's the invocation of this method:

check(strikePrice, minExpT, nSteps)

The signature of the preceding method is shown in the following code:

def check(strikePrice: Double, minExpT: Int, nSteps: Int): Unit = {
  require(strikePrice > 0.0, s"OptionModel.check price found $strikePrice required > 0")
  require(minExpT > 2 && minExpT < 16, s"OptionModel.check Minimum expiration time found $minExpT required ]2, 16[")
  require(nSteps > 1, s"OptionModel.check, number of steps found $nSteps required > 1")
}

Once the preceding constraints are satisfied, the list of option properties, named propsList, is created as follows:

val propsList = (for {
  price <- src.get(adjClose)
  volatility <- src.get(volatility)
  nVolatility <- normalize[Double](volatility)
  vltyByVol <- src.get(volatilityByVol)
  nVltyByVol <- normalize[Double](vltyByVol)
  priceToStrike <- normalize[Double](price.map(p => 1.0 - strikePrice / p))
} yield {
  nVolatility.zipWithIndex./:(List[OptionProperty]()) {
    case (xs, (v, n)) =>
      val normDecay = (n + minExpT).toDouble / (price.size + minExpT)
      new OptionProperty(normDecay, v, nVltyByVol(n), priceToStrike(n)) :: xs
  }.drop(2).reverse
}).get

In the preceding code block, the factory uses the zipWithIndex Scala method to represent the index of the trading sessions. All feature values are normalized over the interval (0, 1), including the time decay (or time to expiration), normDecay, of the option. The quantize() method of the OptionModel class converts the normalized value of each option property into an array of bucket indices. It returns a map of the profit and loss for each bucket, keyed on the array of bucket indices:

def quantize(o: Array[Double]): Map[Array[Int], Double] = {
  val mapper = new mutable.HashMap[Int, Array[Int]]
  val acc: NumericAccumulator[Int] = propsList.view
    .map(_.toArray)
    .map(toArrayInt(_))
    .map(ar => {
      val enc = encode(ar)
      mapper.put(enc, ar)
      enc
    })
    .zip(o)./:(new NumericAccumulator[Int]) {
      case (_acc, (t, y)) => _acc += (t, y); _acc
    }
  acc.map { case (k, (v, w)) => (k, v / w) }
    .map { case (k, v) => (mapper(k), v) }.toMap
}

The method also creates a mapper instance to index the array of buckets. An accumulator, acc, of type NumericAccumulator extends Map[Int, (Int, Double)] and computes the tuple (number of occurrences of features in each bucket, sum of the increase or decrease of the option price). The toArrayInt method converts the value of each option property (timeToExp, volatility, and so on) into the index of the appropriate bucket. The array of indices is then encoded to generate the id or index of a state. The method updates the accumulator with the number of occurrences and the total profit and loss for a trading session for the option.
It finally computes the reward on each action by averaging the profit and loss on each bucket. The signatures of encode() and toArrayInt() are given in the following code:

private def encode(arr: Array[Int]): Int =
  arr./:((1, 0)) { case ((s, t), n) => (s * nSteps, t + s * n) }._2

private def toArrayInt(feature: Array[Double]): Array[Int] =
  feature.map(x => (nSteps * x).floor.toInt)

final class NumericAccumulator[T] extends mutable.HashMap[T, (Int, Double)] {
  def +=(key: T, x: Double): Option[(Int, Double)] = {
    val newValue =
      if (contains(key)) (get(key).get._1 + 1, get(key).get._2 + x)
      else (1, x)
    super.put(key, newValue)
  }
}

Finally, and most importantly, if the preceding constraints are satisfied (you can modify these constraints, though), the instantiation of the OptionModel class generates a list of OptionProperty elements if the constructor succeeds; otherwise, it generates an empty list.

Putting it altogether

Because we have implemented the Q-learning algorithm, we can now develop the options trading application using Q-learning. However, at first, we need to load the data using the DataSource class (we will see its implementation later on). Then we can create an option model from the data for a given stock, with default strike and minimum expiration time parameters, using OptionModel, which defines the model for a traded option on a security. Then we have to create the model for the profit and loss on an option given the underlying security. The profit and loss are adjusted to produce positive values. It instantiates an instance of the Q-learning class, that is, a generic parameterized class that implements the Q-learning algorithm. The Q-learning model is initialized and trained during the instantiation of the class, so it can be in the correct state for the runtime prediction. Therefore, the class instances have only two states: successfully trained and failed training of the Q-learning value action.
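The encode() fold shown above is effectively a base-nSteps positional encoding of the bucket indices: each feature's bucket index becomes one "digit" of the state id. A Python sketch of the same scheme (illustrative only, not part of the Scala project; the bucket count is an assumed example value):

```python
N_STEPS = 4  # assumed number of buckets per feature, for illustration

def to_bucket_indices(features):
    # Map each normalized feature in [0, 1) to a bucket index in [0, N_STEPS)
    return [int(N_STEPS * x) for x in features]

def encode(indices):
    # Base-N_STEPS positional encoding of bucket indices into one state id,
    # mirroring the Scala fold: (s, t) -> (s * nSteps, t + s * n)
    state, scale = 0, 1
    for n in indices:
        state += scale * n
        scale *= N_STEPS
    return state

print(encode(to_bucket_indices([0.30, 0.60, 0.90])))  # → 57
```

With four buckets per feature, every distinct index array maps to a distinct state id, which is what lets the quantize() method key its profit-and-loss map on states.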
Then the model is returned to get processed and visualized. So, let us create a Scala object and name it QLearningMain. Then, inside the QLearningMain object, define and initialize the following parameters:

name: Used to indicate the reinforcement algorithm's name (in our case, Q-learning)
STOCK_PRICES: File that contains the stock data
OPTION_PRICES: File that contains the available option data
STRIKE_PRICE: Option strike price
MIN_TIME_EXPIRATION: Minimum expiration time for the option recorded
QUANTIZATION_STEP: Step used in discretization or approximation of the value of the security
ALPHA: Learning rate for the Q-learning algorithm
DISCOUNT (gamma): Discount rate for the Q-learning algorithm
MAX_EPISODE_LEN: Maximum number of states visited per episode
NUM_EPISODES: Number of episodes used during training
MIN_COVERAGE: Minimum coverage allowed during the training of the Q-learning model
NUM_NEIGHBOR_STATES: Number of states accessible from any other state
REWARD_TYPE: Maximum reward or Random

Tentative initializations for each parameter are given in the following code:

val name: String = "Q-learning"
// Files containing the historical prices for the stock and option
val STOCK_PRICES = "/static/IBM.csv"
val OPTION_PRICES = "/static/IBM_O.csv"
// Run configuration parameters
val STRIKE_PRICE = 190.0 // Option strike price
val MIN_TIME_EXPIRATION = 6 // Min expiration time for option recorded
val QUANTIZATION_STEP = 32 // Quantization step (Double => Int)
val ALPHA = 0.2 // Learning rate
val DISCOUNT = 0.6 // Discount rate used in Q-Value update equation
val MAX_EPISODE_LEN = 128 // Max number of iterations for an episode
val NUM_EPISODES = 20 // Number of episodes used for training
val NUM_NEIGHBHBOR_STATES = 3 // No. of states accessible from any other state

Now the run() method accepts as input the reward type (Maximum reward in our case), the quantization step (QUANTIZATION_STEP), alpha (the learning rate, ALPHA), and gamma (DISCOUNT, the discount rate for the Q-learning algorithm). It displays the distribution of values in the model. Additionally, it displays the estimated Q-value for the best policy on a scatter plot (we will see this later). Here is the workflow of the preceding method:

First, it extracts the stock price from the IBM.csv file
Then it creates an option model, using createOptionModel, from the stock prices and quantization, quantizeR (see the quantize method and the main method invocation later)
The option prices are extracted from the IBM_O.csv file
After that, another model, model, is created using the option model to evaluate it on the option prices, oPrices
Finally, the estimated Q-value (that is, Q-value = value * probability) is displayed on a scatter plot using the display method

By amalgamating the preceding steps, here's the signature of the run() method:

private def run(rewardType: String, quantizeR: Int, alpha: Double, gamma: Double): Int = {
  val sPath = getClass.getResource(STOCK_PRICES).getPath
  val src = DataSource(sPath, false, false, 1).get
  val option = createOptionModel(src, quantizeR)
  val oPricesSrc = DataSource(OPTION_PRICES, false, false, 1).get
  val oPrices = oPricesSrc.extract.get
  val model = createModel(option, oPrices, alpha, gamma)
  model.map(m => {
    if (rewardType != "Random")
      display(m.bestPolicy.EQ, m.toString, s"$rewardType with quantization order $quantizeR")
    1
  }).getOrElse(-1)
}

Now here is the signature of the createOptionModel() method that creates an option model (see the OptionModel class):

private def createOptionModel(src: DataSource, quantizeR: Int): OptionModel =
  new OptionModel("IBM", STRIKE_PRICE, src, MIN_TIME_EXPIRATION, quantizeR)

Then the createModel() method creates a model
for the profit and loss on an option given the underlying security. Note that the option prices are quantized using the quantize() method defined earlier. Then the constraining method is used to limit the number of actions available to any given state. This simple implementation computes the list of all the states within a radius of this state. Then it identifies the neighboring states within that predefined radius. Finally, it uses the input data to train the Q-learning model, computing the minimum value for the profit and loss so that the maximum loss is converted to a null profit. Note that the profit and loss are adjusted to produce positive values. Now let us see the signature of this method:

def createModel(ibmOption: OptionModel, oPrice: Seq[Double], alpha: Double, gamma: Double): Try[QLModel] = {
  val qPriceMap = ibmOption.quantize(oPrice.toArray)
  val numStates = qPriceMap.size
  val neighbors = (n: Int) => {
    def getProximity(idx: Int, radius: Int): List[Int] = {
      val idx_max = if (idx + radius >= numStates) numStates - 1 else idx + radius
      val idx_min = if (idx < radius) 0 else idx - radius
      Range(idx_min, idx_max + 1).filter(_ != idx)./:(List[Int]())((xs, n) => n :: xs)
    }
    getProximity(n, NUM_NEIGHBHBOR_STATES)
  }
  val qPrice: DblVec = qPriceMap.values.toVector
  val profit: DblVec = normalize(zipWithShift(qPrice, 1).map { case (x, y) => y - x }).get
  val maxProfitIndex = profit.zipWithIndex.maxBy(_._1)._2
  val reward = (x: Double, y: Double) => Math.exp(30.0 * (y - x))
  val probabilities = (x: Double, y: Double) => if (y < 0.3 * x) 0.0 else 1.0
  println(s"$name Goal state index: $maxProfitIndex")
  if (!QLearning.validateConstraints(profit.size, neighbors))
    throw new IllegalStateException("QLearningEval Incorrect states transition constraint")
  val instances = qPriceMap.keySet.toSeq.drop(1)
  val config = QLConfig(alpha, gamma, MAX_EPISODE_LEN, NUM_EPISODES, 0.1)
  val qLearning = QLearning[Array[Int]](config, Array[Int](maxProfitIndex), profit, reward, probabilities, instances, Some(neighbors))
  val modelO = qLearning.getModel
  if (modelO.isDefined) {
    val numTransitions = numStates * (numStates - 1)
    println(s"$name Coverage ${modelO.get.coverage} for $numStates states and $numTransitions transitions")
    val profile = qLearning.dump
    println(s"$name Execution profile\n$profile")
    display(qLearning)
    Success(modelO.get)
  } else Failure(new IllegalStateException(s"$name model undefined"))
}

Note that if the preceding invocation cannot create an option model, the code fails with a message that the model creation failed. Nonetheless, remember that the minCoverage used in the following line is important, considering the small dataset we used (because the algorithm will converge very quickly):

val config = QLConfig(alpha, gamma, MAX_EPISODE_LEN, NUM_EPISODES, 0.0)

Although we've already stated that it is not assured that the model creation and training will be successful, a naive clue would be using a very small minCoverage value between 0.0 and 0.22. Now, if the preceding invocation is successful, then the model is trained and ready for making predictions. If so, then the display method is used to show the estimated Q-value = value * probability on a scatter plot. Here is the signature of the method:

private def display(eq: Vector[DblPair], results: String, params: String): Unit = {
  import org.scalaml.plots.{ScatterPlot, BlackPlotTheme, Legend}
  val labels = Legend(name, s"Q-learning config: $params", "States", "States")
  ScatterPlot.display(eq, labels, new BlackPlotTheme)
}

Hang on and do not lose patience! We are finally ready to see a simple run and inspect the result.
So let us do it:

def main(args: Array[String]): Unit = {
  run("Maximum reward", QUANTIZATION_STEP, ALPHA, DISCOUNT)
}

Action: state 71 => state 74
Action: state 71 => state 73
Action: state 71 => state 72
Action: state 71 => state 70
Action: state 71 => state 69
Action: state 71 => state 68
...
Instance: [I@1f021e6c - state: 124
Action: state 124 => state 125
Action: state 124 => state 123
Action: state 124 => state 122
Action: state 124 => state 121
Q-learning Coverage 0.1 for 126 states and 15750 transitions
Q-learning Execution profile
Q-Value -> 5.572310105096295, 0.013869013819834967, 4.5746487300071825, 0.4037703812585325, 0.17606260549479869, 0.09205272504875522, 0.023205692430068765, 0.06363082458984902, 50.405283888218435... 6.5530411130514015
Model: Success(Optimal policy: Reward - 1.00,204.28,115.57,6.05,637.58,71.99,12.34,0.10,4939.71,521.30,402.73, with coverage: 0.1)

Evaluating the model

The preceding output shows the transitions from one state to another; for the 0.1 coverage, the Q-learning model had 15,750 transitions across 126 states, reaching goal state 37 with optimal rewards. The training set is quite small, so only a few buckets have actual values. We can see that the size of the training set has an impact on the number of states: Q-learning will converge too fast for a small training set (like the one in this example), whereas for a larger training set, Q-learning will take time to converge but will provide at least one value for each bucket created by the approximation. Also, just by looking at those values, it is difficult to understand the relation between Q-values and states. So what if we could see the Q-values per state? Why not! We can see them on a scatter plot:

Figure 5: Q-value per state

Now let us display the profile of the log of the Q-value (QLData.value) as the recursive search (or training) progresses for different episodes or epochs.
The test uses a learning rate α = 0.1 and a discount rate γ = 0.9 (see more in the deployment section):

Figure 6: Profile of the logarithmic Q-value for different epochs during Q-learning training

The preceding chart illustrates the fact that the Q-value profile is independent of the order of the epochs during training. However, the number of iterations needed to reach the goal state depends on the initial state, which is selected randomly in this example. To get more insights, inspect the output in your editor or access the API endpoint at http://localhost:9000/api/compute (see the following). Now, what if we display the distribution of values in the model, and the estimated Q-value for the best policy, on a scatter plot for the given configuration parameters?

Figure 7: Maximum reward with quantization 32 with Q-learning

The final evaluation consists of evaluating the impact of the learning rate and discount rate on the coverage of the training:

Figure 8: Impact of the learning rate and discount rate on the coverage of the training

The coverage decreases as the learning rate increases. This result confirms the general rule of using a learning rate < 0.2. A similar test to evaluate the impact of the discount rate on the coverage is inconclusive. We learned to develop a real-life application for options trading using a reinforcement learning algorithm called Q-learning. You read an excerpt from a book written by Md. Rezaul Karim, titled Scala Machine Learning Projects. In this book, you will learn to develop, build, and deploy research or commercial machine learning projects in a production-ready environment. Check out other related posts: Getting started with Q-learning using TensorFlow How to implement Reinforcement Learning with TensorFlow How Reinforcement Learning works
How to develop RESTful web services in Spring

Vijin Boricha
13 Apr 2018
6 min read
Today, we will explore the basics of creating a project in Spring and how to leverage Spring Tool Suite for managing the project. To create a new project, we can use a Maven command prompt or an online tool, such as Spring Initializr (http://start.spring.io), to generate the project base. This website comes in handy for creating a simple Spring Boot-based web project to start the ball rolling.

Creating a project base

Let's go to http://start.spring.io in our browser and configure our project by filling in the following parameters to create a project base:

Group: com.packtpub.restapp
Artifact: ticket-management
Search for dependencies: Web (full-stack web development with Tomcat and Spring MVC)

After configuring our project, it will look as shown in the following screenshot: Now you can generate the project by clicking Generate Project. The project (ZIP file) should be downloaded to your system. Unzip the .zip file and you should see the files as shown in the following screenshot: Copy the entire folder (ticket-management) and keep it in your desired location.

Working with your favorite IDE

Now is the time to pick the IDE. Though there are many IDEs used for Spring Boot projects, I would recommend using Spring Tool Suite (STS), as it is open source and makes it easy to manage projects. In my case, I use sts-3.8.2.RELEASE. You can download the latest STS from this link: https://spring.io/tools/sts/all. In most cases, you may not need to install it; just unzip the file and start using it: After extracting the STS, you can start using the tool by running STS.exe (shown in the preceding screenshot). In STS, you can import the project by selecting Existing Maven Projects, shown as follows: After importing the project, you can see the project in Package Explorer, as shown in the following screenshot: You can see the main Java file (TicketManagementApplication) by default: To simplify the project, we will clean up the existing POM file and update the required dependencies.
Add this file configuration to pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.packtpub.restapp</groupId>
  <artifactId>ticket-management</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>ticket-management</name>
  <description>Demo project for Spring Boot</description>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  </properties>
  <dependencies>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-web</artifactId>
      <version>5.0.1.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter</artifactId>
      <version>1.5.7.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-tomcat</artifactId>
      <version>1.5.7.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.9.2</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-webmvc</artifactId>
      <version>5.0.1.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
      <version>1.5.7.RELEASE</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>
</project>

In the preceding configuration, you can check that we have
used the following libraries:

spring-web
spring-boot-starter
spring-boot-starter-tomcat
spring-webmvc
jackson-databind

As the preceding dependencies are needed for the project to run, we have added them to our pom.xml file. So far, we have the base project ready for a Spring web service. Let's add some basic REST code to the application.

First, remove the @SpringBootApplication annotation from the TicketManagementApplication class and add the following annotations (the first three are the annotations that @SpringBootApplication combines):

```java
@Configuration
@EnableAutoConfiguration
@ComponentScan
@Controller
```

These annotations will help the class act as a web service class. I am not going to talk much about what these configurations do in this chapter. After adding the annotations, add a simple method to return a string as our basic web service method:

```java
@ResponseBody
@RequestMapping("/")
public String sayAloha(){
  return "Aloha";
}
```

Finally, your code will look as follows:

```java
package com.packtpub.restapp.ticketmanagement;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Configuration
@EnableAutoConfiguration
@ComponentScan
@Controller
public class TicketManagementApplication {

  @ResponseBody
  @RequestMapping("/")
  public String sayAloha(){
    return "Aloha";
  }

  public static void main(String[] args) {
    SpringApplication.run(TicketManagementApplication.class, args);
  }
}
```

Once all the coding changes are done, run the project as a Spring Boot App (Run As | Spring Boot App). You can verify that the application has loaded by checking this message in the console:

```
Tomcat started on port(s): 8080 (http)
```

Once verified, you can check the API in the browser by simply typing localhost:8080.
Check out the following screenshot:

If you want to change the port number, you can configure a different port number in application.properties, which is in src/main/resources/application.properties. Check out the following screenshot:

You read an excerpt from Building RESTful Web Services with Spring 5 - Second Edition, written by Raja CSP Raman. From this book, you will learn to implement the REST architecture to build resilient software in Java. Check out other related posts:

Starting with Spring Security
Testing RESTful Web Services with Postman
Applying Spring Security using JSON Web Token (JWT)
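Returning to the port configuration mentioned above: application.properties is a plain key-value file, and Spring Boot's standard server.port property controls the HTTP port. A minimal sketch of the file might look like this (8081 is just an illustrative value, not from the excerpt):

```properties
# src/main/resources/application.properties
# Run the embedded Tomcat on a non-default port (illustrative value)
server.port=8081
```

After restarting the application, the console message should report the new port, and the API would then be reachable at localhost:8081.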

Aarthi Kumaraswamy
12 Apr 2018
8 min read

Building Docker images using Dockerfiles

Docker images are read-only templates. They give us containers during runtime. Central to this is the concept of a base image. Layers then sit on top of this base image. For example, you might have a base image of Fedora or Ubuntu, but you can then install packages or make modifications over the base image to create a new layer. The base image and new layer can then be treated as a completely new image. In the image below, Debian is the base image and Emacs and Apache are the two layers added on top of it. They are highly portable and can be shared easily:

Source: Docker Image layers

Layers are transparently laid on top of the base image to create a single coherent filesystem. There are a couple of ways to create images: one is by manually committing layers, and the other is through Dockerfiles. In this recipe, we'll create images with Dockerfiles. Dockerfiles help us automate image creation and get precisely the same image every time we want it. The Docker builder reads instructions from a text file (a Dockerfile) and executes them one after the other, in order. It can be compared to Vagrantfiles, which allow you to configure VMs in a predictable manner.

Getting ready

A Dockerfile with build instructions. Create an empty directory:

```
$ mkdir sample_image
$ cd sample_image
```

Create a file named Dockerfile with the following content:

```
$ cat Dockerfile
# Pick up the base image
FROM fedora

# Add author name
MAINTAINER Neependra Khare

# Add the command to run at the start of container
CMD date
```

How to do it…

Run the following command inside the directory where we created the Dockerfile to build the image:

```
$ docker build .
```

We did not specify any repository or tag name while building the image. We can give those with the -t option as follows:

```
$ docker build -t fedora/test .
```

The preceding output is different from what we did earlier; here, a cache is used after each instruction.
Docker tries to save the intermediate images as we saw earlier and tries to use them in subsequent builds to accelerate the build process. If you don't want to cache the intermediate images, then add the --no-cache option to the build. Let's take a look at the available images now.

How it works…

A context defines the files used to build the Docker image. In the preceding command, we define the context of the build. The build is done by the Docker daemon and the entire context is transferred to the daemon. This is why we see the Sending build context to Docker daemon 2.048 kB message. If there is a file named .dockerignore in the current working directory with a list of files and directories (newline separated), then those files and directories will be ignored by the build context. More details about .dockerignore can be found at https://docs.docker.com/reference/builder/#the-dockerignore-file.

After executing each instruction, Docker commits the intermediate image and runs a container with it for the next instruction. After the next instruction has run, Docker will again commit the container to create the intermediate image and remove the intermediate container created in the previous step. For example, in the preceding screenshot, eb9f10384509 is an intermediate image and c5d4dd2b3db9 and ffb9303ab124 are the intermediate containers. After the last instruction is executed, the final image will be created. In this case, the final image is 4778dd1f1a7a.

The -a option can be specified with the docker images command to look for intermediate layers:

```
$ docker images -a
```

There's more…

The format of the Dockerfile is:

```
INSTRUCTION arguments
```

Generally, instructions are given in uppercase, but they are not case sensitive. They are evaluated in order. A # at the beginning of a line is treated as a comment. Let's take a look at the different types of instructions:

FROM: This must be the first instruction of any Dockerfile; it sets the base image for subsequent instructions.
By default, the latest tag is assumed:

```
FROM <image>
```

Alternatively, specify a tag:

```
FROM <image>:<tag>
```

There can be more than one FROM instruction in one Dockerfile to create multiple images. If only image names, such as fedora and ubuntu, are given, then the images will be downloaded from the default Docker registry (Docker Hub). If you want to use private or third-party images, then you have to mention them as follows:

```
[registry_hostname[:port]/][user_name/](repository_name:version_tag)
```

Here is an example using the preceding syntax:

```
FROM registry-host:5000/nkhare/f20:httpd
```

MAINTAINER: This sets the author for the generated image: MAINTAINER <name>.

RUN: We can execute the RUN instruction in two ways. First, run in the shell (sh -c):

```
RUN <command> <param1> ... <paramN>
```

Second, directly run an executable:

```
RUN ["executable", "param1", ..., "paramN"]
```

As we know with Docker, we create an overlay, a layer on top of another layer, to make the resulting image. Through each RUN instruction, we create and commit a layer on top of the earlier committed layer. A container can be started from any of the committed layers. By default, Docker tries to cache the layers committed by different RUN instructions, so that they can be used in subsequent builds. However, this behavior can be turned off using the --no-cache flag while building the image.

LABEL: Docker 1.6 added a new feature to attach arbitrary key-value pairs to Docker images and containers. We covered part of this in the Labeling and filtering containers recipe in Chapter 2, Working with Docker Containers. To give a label to an image, we use the LABEL instruction in the Dockerfile, for example, LABEL distro=fedora21.

CMD: The CMD instruction provides a default executable while starting a container. If the CMD instruction does not name an executable (the second form below), then it provides the arguments to ENTRYPOINT:

```
CMD ["executable", "param1", ..., "paramN"]
CMD ["param1", ..., "paramN"]
CMD <command> <param1> ... <paramN>
```

Only one CMD instruction is allowed in a Dockerfile. If more than one is specified, then only the last one will be honored.

ENTRYPOINT: This helps us configure the container as an executable. Similar to CMD, there can be at most one ENTRYPOINT instruction; if more than one is specified, then only the last one will be honored:

```
ENTRYPOINT ["executable", "param1", ..., "paramN"]
ENTRYPOINT <command> <param1> ... <paramN>
```

Once the parameters are defined with the ENTRYPOINT instruction, they cannot be overwritten at runtime. However, CMD can be used if we want to pass different default parameters to ENTRYPOINT.

EXPOSE: This exposes the network ports on the container on which it will listen at runtime:

```
EXPOSE <port> [<port> ...]
```

We can also expose a port while starting the container. We covered this in the Exposing a port while starting a container recipe in Chapter 2, Working with Docker Containers.

ENV: This will set the environment variable <key> to <value>. It will be passed to all the future instructions and will persist when a container is run from the resulting image:

```
ENV <key> <value>
```

ADD: This copies files from the source to the destination:

```
ADD <src> <dest>
```

The following form is for paths containing whitespace:

```
ADD ["<src>"... "<dest>"]
```

<src>: This must be a file or directory inside the build directory from which we are building an image, which is also called the context of the build. A source can be a remote URL as well.
<dest>: This must be the absolute path inside the container to which the files/directories from the source will be copied.

COPY: This is similar to ADD:

```
COPY <src> <dest>
COPY ["<src>"... "<dest>"]
```

VOLUME: This instruction will create a mount point with the given name and flag it as mounting an external volume, using the following syntax:

```
VOLUME ["/data"]
```

Alternatively, you can use the following code:

```
VOLUME /data
```

USER: This sets the username for any of the run instructions that follow it, using the following syntax:

```
USER <username>/<UID>
```

WORKDIR: This sets the working directory for the RUN, CMD, and ENTRYPOINT instructions that follow it. It can have multiple entries in the same Dockerfile. A relative path can be given, which will be relative to the earlier WORKDIR instruction:

```
WORKDIR <PATH>
```

ONBUILD: This adds a trigger instruction to the image that will be executed later, when this image is used as the base image of another image. The trigger will run as part of the FROM instruction in the downstream Dockerfile:

```
ONBUILD [INSTRUCTION]
```

See also

Look at the help option of docker build:

```
$ docker build --help
```

The documentation on the Docker website: https://docs.docker.com/reference/builder/

You just enjoyed an excerpt from the book, DevOps: Puppet, Docker, and Kubernetes by Thomas Uphill, John Arundel, Neependra Khare, Hideto Saito, Hui-Chuan Chloe Lee, and Ke-Jou Carol Hsu. To master working with Docker containers, images and much more, check out this book today!

Read other posts:

How to publish Docker and integrate with Maven
Building Scalable Microservices
How to deploy RethinkDB using Docker