
How-To Tutorials - Data


6 Popular Regression Techniques You Need To Know

Amey Varangaonkar
02 Jan 2018
8 min read
The following excerpt is taken from the book Statistics for Data Science, authored by IBM-certified expert James D. Miller. The book gives you a statistical view of building smart data models that help you get unique insights from your data.

In this article, the author introduces you to regression analysis, one of the most popular machine learning techniques: what it is, the different types of regression, and how to choose the right regression technique to build your data model.

What is Regression Analysis?

For starters, regression analysis, or statistical regression, is a process for estimating the relationships among variables. It encompasses numerous techniques for modeling and analyzing variables, focusing on the relationship between a dependent variable and one (or more) independent variables (or predictors). Regression analysis is the work done to identify and understand how the (best representative) value of a dependent variable (a variable that depends on other factors) changes when any one of the independent variables (a variable that stands alone and isn't changed by the other variables) is changed while the other independent variables stay the same. A simple example might be how the total dollars spent on marketing (an independent variable) affect the total sales dollars (a dependent variable) over a period of time (is it really as simple as more marketing equating to higher sales?), or perhaps there is a correlation between the total marketing dollars spent (an independent variable), discounting a product's price (another independent variable), and the amount of sales (a dependent variable).

Note: Keep in mind that regression analysis is used to understand which of the independent variables are related to the dependent variable(s), not just the relationship between these variables. Also, the inference of causal relationships (between the independent and dependent variables) is an important objective. However, this can lead to illusory or false relationships, so caution is recommended!

Overall, regression analysis can be thought of as estimating the conditional expectation of the dependent variable given the observed independent variables, that is, endeavoring to predict the average value of the dependent variable when the independent variables are set to certain values. I call this the lever effect: when you increase or decrease the value of one component, it directly affects the value of at least one other variable. An alternate objective of regression analysis is the establishment of location parameters, or the quantiles of a distribution; in other words, the idea is to determine values that may serve as cutoffs, dividing a range of probability distribution values.

You'll find that regression analysis can be a great tool for prediction and forecasting (not just for complex machine learning applications). We'll explore some real-world examples later, but for now, let's look at some techniques for the process.

Popular techniques and approaches for regression

Various techniques for carrying out regression analysis have been developed and accepted. These are: linear, logistic, polynomial, stepwise, ridge, and lasso regression.

Linear regression

Linear regression is the most basic type of regression and is commonly used for predictive analysis projects.
In fact, when you are working with a single predictor (variable), we call it simple linear regression, and when there are multiple predictor variables, we call it multiple linear regression. Simply put, linear regression uses linear predictor functions whose values are estimated from the data in the model.

Logistic regression

Logistic regression is a regression model in which the dependent variable is categorical. In its basic form, the variable has only two possible values, for example, pass/fail, win/lose, alive/dead, or healthy/sick. If the dependent variable has more than two possible values, one can use various modified logistic regression techniques, such as multinomial logistic regression, ordinal logistic regression, and so on.

Polynomial regression

When we speak of polynomial regression, the focus of this technique is on modeling the relationship between the independent variable and the dependent variable as an nth-degree polynomial. Polynomial regression is considered a special case of multiple linear regression. The predictors resulting from the polynomial expansion of the baseline predictors are known as interactive features.

Stepwise regression

Stepwise regression is a technique that uses an automated procedure to continually execute a step of logic, that is, during each step, a variable is considered for addition to or subtraction from the set of independent variables based on some prespecified criterion.

Ridge regression

Often, predictor variables are identified as being interrelated. When this occurs, the regression coefficient of any one variable depends on which other predictor variables are included in the model and which ones are left out. Ridge regression is a technique in which a small bias factor is added to the selected variables in order to improve this situation. Ridge regression is therefore considered a remedial measure to alleviate multicollinearity amongst predictor variables.

Lasso regression

Lasso (least absolute shrinkage and selection operator) regression is a technique in which both predictor variable selection and regularization are performed in order to improve the prediction accuracy and interpretability of the results it produces.
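To make the differences between these techniques concrete, here is a minimal sketch using scikit-learn. This code is not part of the excerpt; the dataset is synthetic and the alpha values are purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(42)

# Synthetic data: "sales" as a noisy function of marketing spend and discount.
X = rng.uniform(0, 1, size=(200, 2))            # columns: marketing spend, discount
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 2.0 * X[:, 0] ** 2 + rng.normal(0, 0.1, size=200)

# Multiple linear regression: two predictors, one dependent variable.
linear = LinearRegression().fit(X, y)

# Ridge and lasso add a penalty (a small bias factor) that copes with
# interrelated predictors and shrinks or removes coefficients.
ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.01).fit(X, y)

# Polynomial regression: expand the predictors to degree 2, then fit linearly.
X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
poly = LinearRegression().fit(X_poly, y)

for name, model, features in [("linear", linear, X), ("ridge", ridge, X),
                              ("lasso", lasso, X), ("polynomial", poly, X_poly)]:
    print(name, "R^2:", round(model.score(features, y), 3))
```

Ridge keeps every predictor but shrinks its coefficient, whereas lasso can drive some coefficients to exactly zero, which is what makes it useful for variable selection as well as regularization.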
Which technique should I choose?

In addition to the aforementioned regression techniques, there are numerous others to consider and, most likely, more to come. With so many options, it's important to choose the technique that is right for your data and your project. Rather than selecting the right regression approach, it is more about selecting the most effective regression approach. Typically, you use the data to identify the regression approach you'll use. You start by establishing statistics or a profile for your data. With this effort, you need to identify and understand the importance of the different variables, their relationships, coefficient signs, and their effects. Overall, here is some generally good advice for choosing the right regression approach for your project:

Copy what others have done and had success with. Do the research. Incorporate the results of other projects into yours. Don't reinvent the wheel. Also, even if an observed approach doesn't quite fit as it was used, perhaps some simple adjustments would make it a good choice.

Keep your approach as simple as possible. Many studies show that simpler models generally produce better predictions. Start simple, and only make the model more complex as needed. The more complex you make your model, the more likely it is that you are tailoring the model to your dataset specifically, and generalizability suffers.

Check your work. As you evaluate methods, check the residual plots (more on this in the next section of this chapter) because they can help you avoid inadequate models and adjust your model for better results.

Use your subject matter expertise. No statistical method can understand the underlying process or subject area the way you do. Your knowledge is a crucial part and, most likely, the most reliable way of determining the best regression approach for your project.

Does it fit?

After selecting a model that you feel is appropriate for your data (also known as determining that the approach is the best fit), you need to validate your selection, that is, determine its fit. A well-fitting regression model results in predicted values close to the observed data values. The mean model (which uses the mean for every predicted value) would generally be used if there were no informative predictor variables. The fit of a proposed regression model should, therefore, be better than the fit of the mean model. As a data scientist, you will need to scrutinize the coefficient of determination, measure the standard error of estimate, and analyze the significance of the regression parameters and confidence intervals.

Note: Remember that the better the fit of a regression model, the better (and more precise) the results are likely to be.

Finally, simpler models have repeatedly been shown to produce more accurate results. Keep this in mind when selecting an approach or technique; even when the problem is complex, it is not always obligatory to adopt a complex regression approach. Choosing the right technique, though, goes a long way toward developing an accurate model.

If you found this excerpt useful, make sure you check out the book Statistics for Data Science for more such tips on building effective data models by leveraging the power of statistical tools and techniques.


How to create a Box and Whisker Plot in Tableau

Sugandha Lahoti
30 Dec 2017
5 min read
This article is an excerpt from a book written by Shweta Sankhe-Savale, titled Tableau Cookbook – Recipes for Data Visualization. With the recipes in this book, you can learn to create beautiful data visualizations in no time with Tableau.

In today's tutorial, we will learn how to create a Box and Whisker plot in Tableau. The Box plot, or Box and Whisker plot as it is popularly known, is a convenient statistical representation of the variation in a statistical population. It is a great way of showing a number of data points as well as the outliers and the central tendencies of the data. This visual representation of the distribution within a dataset was first introduced by the American mathematician John W. Tukey in 1969. A box plot is significantly easier to plot than, say, a histogram, and it does not require the user to make assumptions regarding bin sizes and the number of bins; yet it gives significant insight into the distribution of the dataset.

The box plot primarily consists of four parts:

The median provides the central tendency of our dataset. It is the value that divides our dataset into two parts: values that are either higher or lower than the median. The position of the median within the box indicates the skewness in the data as it shifts towards either the upper or lower quartile.

The upper and lower quartiles, which form the box, represent the degree of dispersion or spread of the data between them. The difference between the upper and lower quartiles is called the Interquartile Range (IQR), and it indicates the mid-spread within which 50 percent of the points in our dataset lie.

The upper and lower whiskers in a box plot can be plotted either at the maximum and minimum values in the dataset, or at 1.5 times the IQR on the upper and lower side. Plotting the whiskers at the maximum and minimum values includes 100 percent of the values in the dataset, including all the outliers, whereas plotting the whiskers at 1.5 times the IQR shows the points beyond the whiskers as outliers. The points lying between the lower whisker and the lower quartile are the lower 25 percent of values in the dataset, whereas the points lying between the upper whisker and the upper quartile are the upper 25 percent of values in the dataset.

In a typical normal distribution, each part of the box plot will be equally spaced. However, in most cases, the box plot will quickly show the underlying variations and trends in the data and allows for easy comparison between datasets.
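Before building the view in Tableau, it helps to see how these summary numbers are computed. Here is a minimal sketch in Python (this is not part of the recipe; the weights array is invented for illustration):

```python
import numpy as np

# Hypothetical recorded weights for one gender category.
weights = np.array([44, 48, 52, 55, 58, 60, 62, 65, 68, 72, 75, 95])

q1, median, q3 = np.percentile(weights, [25, 50, 75])
iqr = q3 - q1    # interquartile range: the middle 50% of the data

# Fences at 1.5 * IQR beyond the quartiles; whiskers extend to the most
# extreme data points inside the fences, and anything outside is an outlier.
lower_fence = q1 - 1.5 * iqr
upper_fence = q3 + 1.5 * iqr
lower_whisker = weights[weights >= lower_fence].min()
upper_whisker = weights[weights <= upper_fence].max()
outliers = weights[(weights < lower_fence) | (weights > upper_fence)]

print(median, q1, q3, iqr, lower_whisker, upper_whisker, outliers)
```

With this convention, the value 95 lies above the upper fence, so it would be drawn as a separate outlier point instead of stretching the whisker.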
Getting Ready

We will create the Box and Whisker plot in a new sheet in a workbook. For this purpose, we will connect to an Excel file named Data for Box plot & Gantt chart, which has been uploaded at https://1drv.ms/f/s!Av5QCoyLTBpnhkGyrRrZQWPHWpcY. Let us save this Excel file in the Documents | My Tableau Repository | Datasources | Tableau Cookbook data folder. The data contains information about customers in terms of their gender and recorded weight. The data contains 100 records, one record per customer. Using this data, let us look at how we can create a Box and Whisker plot.

How to do it

1. Once we have downloaded and saved the data from the link provided in the Getting Ready section, we will create a new worksheet in our existing workbook and rename it to Box and Whisker plot.
2. Since we haven't connected to the new dataset yet, let us establish a new data connection by pressing Ctrl + D on our keyboard.
3. Select the Excel option and connect to the Data for Box plot & Gantt chart file, which is saved in our Documents | My Tableau Repository | Datasources | Tableau Cookbook data folder.
4. Next, let us select the table named Box and Whisker plot data by double-clicking on it. Let us go ahead with the Live option to connect to this data.
5. Next, let us multi-select the Customer and Gender fields from the Dimensions pane and Weight from the Measures pane by doing a Ctrl + Select. Refer to the following image:
6. Next, let us click on the Show Me! button and select the box-and-whisker plot. Refer to the highlighted section in the following image:
7. Once we click on the box-and-whisker plot option, we will see the following view:

How it works

In the preceding chart, we get two box and whisker plots, one for each gender. The whiskers are the maximum and minimum extent of the data. Furthermore, in each category we can see some circles, each of which essentially represents a customer. Thus, within each gender category, the graph shows the distribution of customers by their respective weights. When we hover over any of these circles, we can see details of the customer in terms of name, gender, and recorded weight in the tooltip. Refer to the following image:

However, when we hover over the box (the gray section), we will see the details in terms of the median, lower quartile, upper quartile, and so on. Refer to the following image:

Thus, a summary of the box plot that we created is as follows: in simpler terms, for the female category, the majority of the population lies within the weight range of 44 to 75, whereas for the male category, the majority of the population lies within the weight range of 44 to 82.

Please note that in our visualization, even though the Row shelf displays SUM(Weight), since we have Customer on the Detail shelf, there is only one entry per customer, so SUM(Weight) is actually the same as MIN(Weight), MAX(Weight), or AVG(Weight).

We learned the basics of the Box and Whisker plot and how to create one using Tableau. If you had fun with this recipe, do check out the book Tableau Cookbook – Recipes for Data Visualization to create interactive dashboards and beautiful data visualizations with Tableau.


2017 Generative Adversarial Networks (GANs) Research Milestones

Savia Lobo
30 Dec 2017
9 min read
Generative adversarial models, introduced by Ian Goodfellow, are the next big revolution in the field of deep learning. Why? Because of their ability to perform semi-supervised learning, where the vast majority of the data is unlabelled. GANs can efficiently carry out image generation tasks such as converting sketches to images or satellite images to maps, among many others. GANs are capable of generating realistic images in a wide range of circumstances, for instance, taking some text written in a particular handwriting as input to the generative model in order to generate more text in similar handwriting. The speciality of GANs is that, compared to discriminative models, these generative models make use of a joint probability distribution to generate more likely samples. In short, generative models such as GANs are an improvement over discriminative models. Let's explore some of the research papers that are contributing to further advancements in GANs.

CycleGAN: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

This paper talks about CycleGANs, a class of generative adversarial networks that carry out image-to-image translation. This means capturing special characteristics of one image collection and figuring out how these characteristics could be translated into another image collection, all in the absence of any paired training examples. The CycleGAN method can be applied in a variety of applications such as collection style transfer, object transfiguration, season transfer, and photo enhancement.

CycleGAN architecture (Source: GitHub)

CycleGANs are built upon the advantages of the pix2pix architecture. The key advantage of the CycleGAN model is that it allows you to point the model at two discrete, unpaired collections of images. For example, one image collection, say group A, would consist of photos of landscapes in summer, whereas group B would include photos of landscapes in winter. The CycleGAN model can learn to translate the images between these two aesthetics without the need to merge tightly correlated matches together into a single X/Y training image.

(Source: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks)

The way CycleGANs are able to learn such good translations without explicit X/Y training images involves introducing the idea of a full translation cycle to determine how good the entire translation system is, thus improving both generators at the same time.

(Source: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks)

Currently, the applications of CycleGANs can be seen in image-to-image and video translation, for example in animal transfiguration, turning portrait faces into doll faces, and so on. Further ahead, we could potentially see implementations in audio, text, and other domains, which would help us generate new data for training. Although this method has compelling results, it also has some limitations. The geometric changes within an image are not fully successful (for instance, the cat-to-dog transformation showed only minute success). This could be caused by the generator architecture choices, which are tailored for good performance on appearance changes; thus, handling more varied and extreme transformations, especially geometric changes, remains an important problem. Another limitation is failure caused by the distribution characteristics of the training datasets. For instance, in the horse-to-zebra transfiguration, the model got confused as it was trained on the wild horse and zebra synsets of ImageNet, which do not contain images of a person riding a horse or zebra. These and some other limitations are described in the research paper. To read more about CycleGANs in detail, refer to the paper.
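The "full translation cycle" mentioned above is enforced with a cycle-consistency loss: translating an image from one domain to the other and back again should reproduce the original. Here is a minimal sketch of that idea in Python; the two "generators" are toy functions standing in for the real convolutional networks, purely for illustration:

```python
import numpy as np

# Toy stand-ins for the two generators: G maps domain A -> B, F maps B -> A.
def G(a):
    return a * 2.0 + 1.0

def F(b):
    return (b - 0.9) / 2.0   # deliberately an imperfect inverse of G

rng = np.random.default_rng(0)
a = rng.random((4, 8))       # a batch of "images" from domain A
b = rng.random((4, 8))       # a batch of "images" from domain B

# Cycle-consistency loss: the L1 distance between each input and its
# round-trip reconstruction. It is added to the usual adversarial losses
# and penalizes both generators whenever the cycle does not close.
cycle_loss = np.abs(F(G(a)) - a).mean() + np.abs(G(F(b)) - b).mean()
print(cycle_loss)   # non-zero here because F is not an exact inverse of G
```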
Wasserstein GAN

This paper introduces Wasserstein GANs (WGANs) and shows how they overcome the drawbacks of the original GANs. Although GANs have shown dramatic success in realistic image generation, training them is not easy; the process is slow and unstable. The WGAN paper shows empirically that WGANs cure this training problem. The Wasserstein distance, also known as the Earth Mover's (EM) distance, is a measure of the distance between two probability distributions. The basic idea in a WGAN is to replace the loss function so that there always exists a non-zero gradient. This can be done using the Wasserstein distance between the generator distribution and the data distribution. Training WGANs does not require carefully balancing the training of the discriminator and the generator, nor does it require a careful design of the network architecture. One of the most fascinating practical benefits of WGANs is the ability to continuously estimate the EM distance by training the discriminator to an optimal level. The resulting learning curves are useful for debugging and hyperparameter searches, and they correlate well with the observed sample quality and the improved stability of the optimization process. Thus, Wasserstein GANs are an alternative to traditional GAN training with features such as improved stability of learning, elimination of problems like mode collapse, and meaningful learning curves useful for debugging and hyperparameter searches. Furthermore, the paper also shows that the corresponding optimization problem is sound, and it provides extensive theoretical work highlighting the deep connections to other distances between distributions. The Wasserstein GAN has been utilized to train a language translation machine where there is no parallel data between the word embeddings of the two languages; WGANs have been used to perform English-Russian and English-Chinese language mappings. Limitations of WGANs: they suffer from unstable training at times, when one uses a momentum-based optimizer or high learning rates; slow convergence after weight clipping, especially when the clipping window is too large; and the vanishing gradient problem when the clipping window is too small. To gain a detailed understanding of WGANs, have a look at the research paper.
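As a rough illustration of the objective described above, here is a minimal sketch of the Wasserstein critic and generator losses in Python. The linear "critic" is a toy stand-in for a neural network, and the clipping constant is illustrative; this is not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(x, w):
    # Toy linear scoring function standing in for the critic network.
    return x @ w

def clip_weights(w, c=0.01):
    # Weight clipping keeps the critic roughly Lipschitz, as proposed in the paper.
    return np.clip(w, -c, c)

real = rng.normal(loc=1.0, size=(64, 2))    # samples from the data distribution
fake = rng.normal(loc=-1.0, size=(64, 2))   # samples produced by the generator

w = clip_weights(rng.normal(size=2))

# The critic maximizes E[D(real)] - E[D(fake)], an estimate of the EM distance,
# so its loss is the negative of that difference.
critic_loss = -(critic(real, w).mean() - critic(fake, w).mean())

# The generator minimizes -E[D(fake)], i.e. it tries to raise the critic's
# score on its samples; the gradient stays non-zero even when the two
# distributions barely overlap.
generator_loss = -critic(fake, w).mean()

print(critic_loss, generator_loss)
```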
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets

This paper describes InfoGAN, an information-theoretic extension of the generative adversarial network that can learn disentangled representations in a completely unsupervised manner. In traditional GANs, the learned representation is entangled, that is, encoded in a complex manner within the data space. If the representation is disentangled, however, it becomes much easier to work with and to apply to downstream tasks. InfoGAN solves this entanglement problem in GANs. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, extracts poses of objects correctly irrespective of the lighting conditions in 3D rendered images, and separates background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hairstyles, the presence or absence of eyeglasses, and emotions on the CelebA face dataset. InfoGAN does not require any kind of supervision. In comparison, the only other unsupervised method that learns disentangled representations is hossRBM, a higher-order extension of the spike-and-slab restricted Boltzmann machine that disentangles emotion from identity on the Toronto Face Dataset. However, hossRBM can only disentangle discrete latent factors, and its computation cost grows exponentially in the number of factors, whereas InfoGAN can disentangle both discrete and continuous latent factors, scale to complicated datasets, and typically requires no more training time than a regular GAN. In the experiments given in the paper, InfoGAN is first compared with prior approaches on relatively clean datasets; a second set of experiments shows that InfoGAN can learn interpretable representations on complex datasets, where no previous unsupervised approach is known to learn representations of comparable quality. Thus, InfoGAN is completely unsupervised and learns interpretable and disentangled representations on challenging datasets. Additionally, InfoGAN adds only negligible computation cost on top of a GAN and is easy to train. The core idea of using mutual information to induce representations can be applied to other methods, such as VAEs (variational autoencoders), in the future. Other future possibilities with InfoGAN include learning hierarchical latent representations, improving semi-supervised learning with better codes, and using InfoGAN as a high-dimensional data discovery tool. To know more about this research paper in detail, refer to the paper.

Progressive Growing of GANs for Improved Quality, Stability, and Variation

This paper describes a brand new method for training your generative adversarial networks. The basic idea is to train both the generator and the discriminator progressively: starting from a low resolution and adding new layers so that the model produces images with increasingly fine details as training progresses. Such a method speeds up the training and also stabilizes it to a great extent, which in turn produces images of unprecedented quality, for instance, a higher-quality version of the CelebA image dataset with output resolutions of up to 1024 x 1024 pixels.

(Source: https://arxiv.org/pdf/1710.10196.pdf)

When new layers are added to the networks, they fade in smoothly. This helps to avoid sudden shocks to the already well-trained, smaller-resolution layers. Progressive training has various other benefits: the generation of smaller images is substantially more stable because there is less class information and fewer modes; by increasing the resolution little by little, we are continuously asking a much simpler question compared to the end goal of discovering a mapping from latent vectors to, for example, 1024 x 1024 images; and progressive growing also reduces the training time, since most of the iterations are done at lower resolutions and comparable result quality is obtained up to 2-6 times faster, depending on the resolution of the final output.
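The smooth fade-in of a new, higher-resolution layer can be pictured as a simple linear blend between the upsampled output of the previous stage and the output of the new layer. A minimal sketch of that blending step (toy arrays, not the paper's network code):

```python
import numpy as np

def upsample_nearest(x):
    # Nearest-neighbour upsampling of an H x W image to 2H x 2W.
    return x.repeat(2, axis=0).repeat(2, axis=1)

old_output = np.random.rand(4, 4)   # output of the already-trained 4x4 stage
new_output = np.random.rand(8, 8)   # output of the freshly added 8x8 layer

# alpha ramps from 0 to 1 over training, so the new layer is blended in
# gradually and does not shock the well-trained lower-resolution layers.
alpha = 0.3
blended = alpha * new_output + (1.0 - alpha) * upsample_nearest(old_output)
print(blended.shape)   # (8, 8)
```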
Thus, progressively training GANs results in better quality, stability, and variation in the generated images, and it may lead to true photorealism in the near future. The paper concludes that there are still certain limitations to this training method, including semantic sensibility and understanding dataset-dependent constraints (such as certain objects being straight rather than curved). This leaves a lot to be desired from GANs, and there is also room for improvement in the micro-structure of the images. To gain a thorough understanding of this work, read the research paper.


How to format and publish code using R Markdown

Savia Lobo
29 Dec 2017
6 min read
This article is an excerpt from a book written by Ahmed Sherif titled Practical Business Intelligence. This book is a complete guide to implementing business intelligence with the help of powerful tools like D3.js, R, Tableau, QlikView, and Python that are available on the market. It starts off by preparing you for data analytics and then moves on to teach you a range of techniques to fetch important information from various databases.

Today you will explore how to use R Markdown, a format that allows reproducible reports with embedded R code to be published as slideshows, Word documents, PDF files, and HTML web pages.

Getting started with R Markdown

R Markdown documents have the .RMD extension and are created by selecting R Markdown from the menu bar of RStudio, as seen here:

If this is the first time you are creating an R Markdown report, you may be prompted to install some additional packages for R Markdown to work, as seen in the following screenshot:

Once the packages are installed, we can define a title, author, and default output format, as follows:

For our purposes we will use HTML output. The default output of the R Markdown document will appear like this:

R Markdown features and components

We can go ahead and delete everything below line 7 in the previous screenshot, as we will create our own template with our embedded code and formatting. Header levels can be generated using # in front of a title. The largest font size will have a single #, and each subsequent # added will decrease the header level font. Whenever we wish to embed actual R code into the report, we can include it inside a code chunk by clicking on the icon shown here:

Once that icon is selected, a shaded region is created between two ``` characters where R code can be written identically to how it is used in RStudio. The first header generated will be for the results, and the subsequent header will indicate the libraries used to generate the report. This can be generated using the following script:

# Results
###### Libraries used are RODBC, plotly, and forecast

Executing R code inside of R Markdown

The next step is to run the actual R code inside of the chunk snippet that calls the required libraries needed to generate the report. This can be generated using the following script:

```{r}
# We will not see the actual libraries loaded
# as it is not necessary for the end user
library('RODBC')
library('plotly')
library('forecast')
```

We can then click on the Knit HTML icon on the menu bar to generate a preview of our code results in R Markdown. Unfortunately, this output of library information is not useful to the end user.

Exporting tips for R Markdown

The report output includes all the messages and potential warnings that are the result of calling a package. This is not information that is useful to the report consumer.
Fortunately for R developers, these types of messages can be concealed by tweaking the R chunk snippets to include the following options:

```{r echo = FALSE, results = 'hide', message = FALSE}
```

We can continue embedding R code into our report to run queries against the SQL Server database and produce summary data of the dataframe, as well as the three main plots: the time series plot, observed versus fitted smoothing, and Holt-Winters forecasting:

###### Connectivity to Data Source is through ODBC
```{r echo = FALSE, results = 'hide', message = FALSE}
connection_SQLBI<-odbcConnect("SQLBI")
#Get Connection Details
connection_SQLBI
##query fetching begin##
SQL_Query_1<-sqlQuery(connection_SQLBI,
  'SELECT [WeekInYear], [DiscountCode]
   FROM [AdventureWorks2014].[dbo].[DiscountCodebyWeek]')
##query fetching end##
#begin table manipulation
colnames(SQL_Query_1)<- c("Week", "Discount")
SQL_Query_1$Weeks <- as.numeric(SQL_Query_1$Week)
SQL_Query_1<-SQL_Query_1[,-1] #removes first column
SQL_Query_1<-SQL_Query_1[c(2,1)] #reverses columns 1 and 2
#end table manipulation
```

### Preview of First 6 rows of data
```{r echo = FALSE, message= FALSE}
head(SQL_Query_1)
```

### Summary of Table Observations
```{r echo = FALSE, message= FALSE}
str(SQL_Query_1)
```

### Time Series and Forecast Plots
```{r echo = FALSE, message= FALSE}
Query1_TS<-ts(SQL_Query_1$Discount)
par(mfrow=c(3,1))
plot.ts(Query1_TS, xlab = 'Week (1-52)', ylab = 'Discount',
        main = 'Time Series of Discount Code by Week')
discountforecasts <- HoltWinters(Query1_TS, beta=FALSE, gamma=FALSE)
plot(discountforecasts)
discountforecasts_8periods <- forecast.HoltWinters(discountforecasts, h=8)
plot.forecast(discountforecasts_8periods, ylab='Discount', xlab = 'Weeks (1-60)',
              main = 'Forecasting 8 periods')
```

The final output

Before publishing the output with the results, R Markdown offers the developer opportunities to prettify the end product. One effect I like to add to a report is a logo of some kind. This can be done by adding one of the following lines anywhere in the R Markdown document:

![](http://website.com/logo.jpg) # image is on a website
![](images/logo.jpg) # image is locally on your machine

The first option adds an image from a website, and the second option adds an image stored locally. For my purposes, I will add a PacktPub logo right above the Results section in the R Markdown, as seen in the following screenshot:

To learn more about customizing an R Markdown document, visit the following website: http://rmarkdown.rstudio.com/authoring_basics.html.

Once we are ready to preview the results of the R Markdown output, we can once again select the Knit to HTML button on the menu. The new report can be seen in this screenshot:

As can be seen in the final output, even though the R code is embedded within the R Markdown document, we can suppress the unnecessary technical output and reveal the relevant tables, fields, and charts that provide the most benefit to end users and report consumers.

If you have enjoyed reading this article and want to develop the ability to think along the right lines and use more than one tool to perform analysis depending on the needs of your business, do check out Practical Business Intelligence.


Getting to know and manipulate Tensors in TensorFlow

Sunith Shetty
29 Dec 2017
5 min read
This article is a book excerpt written by Rodolfo Bonnin, titled Building Machine Learning Projects with TensorFlow. In this book, you will learn to build powerful machine learning projects to tackle complex data and gain valuable insights.

Today, you will learn everything about tensors, their properties, and how they are used to represent data.

What are tensors?

TensorFlow bases its data management on tensors. Tensors are concepts from the field of mathematics, developed as a generalization of the linear algebra terms of vectors and matrices. Talking specifically about TensorFlow, a tensor is just a typed, multidimensional array, with additional operations, modeled in the tensor object.

Tensor properties - ranks, shapes, and types

TensorFlow uses the tensor data structure to represent all data. Any tensor has a static type and dynamic dimensions, so you can change a tensor's internal organization in real time. Another property of tensors is that only objects of the tensor type can be passed between nodes in the computation graph. Let's now see what the properties of tensors are (from now on, every time we use the word tensor, we'll be referring to TensorFlow's tensor objects).

Tensor rank

Tensor rank represents the dimensional aspect of a tensor, but it is not the same as matrix rank. It represents the quantity of dimensions in which the tensor lives, and is not a precise measure of the extension of the tensor in rows/columns or spatial equivalents. A rank one tensor is the equivalent of a vector, and a rank two tensor is a matrix. For a rank two tensor you can access any element with the syntax t[i, j]. For a rank three tensor you would need to address an element with t[i, j, k], and so on. In the following example, we will create a tensor and access one of its components:

import tensorflow as tf
sess = tf.Session()
tens1 = tf.constant([[[1,2],[2,3]],[[3,4],[5,6]]])
print sess.run(tens1)[1,1,0]

Output: 5

This is a tensor of rank three, because in each element of the containing matrix, there is a vector element:

Rank | Math entity | Code definition example
0 | Scalar | scalar = 1000
1 | Vector | vector = [2, 8, 3]
2 | Matrix | matrix = [[4, 2, 1], [5, 3, 2], [5, 5, 6]]
3 | 3-tensor | tensor = [[[4], [3], [2]], [[6], [100], [4]], [[5], [1], [4]]]
n | n-tensor | ...

Tensor shape

The TensorFlow documentation uses three notational conventions to describe tensor dimensionality: rank, shape, and dimension number. The following table shows how these relate to one another:

Rank | Shape | Dimension number | Example
0 | [] | 0 | 4
1 | [D0] | 1 | [2]
2 | [D0, D1] | 2 | [6, 2]
3 | [D0, D1, D2] | 3 | [7, 3, 2]
n | [D0, D1, ... Dn-1] | n-D | A tensor with shape [D0, D1, ... Dn-1]

In the following example, we create a sample rank three tensor and print its shape.
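A minimal sketch of what such an example could look like, using the same TensorFlow 1.x session API as the rest of this excerpt (the tensor values are purely illustrative):

```python
import tensorflow as tf  # TensorFlow 1.x API, as used throughout this excerpt

sess = tf.Session()
# A rank three tensor with shape [2, 3, 2]: 2 matrices of 3 rows and 2 columns.
tens2 = tf.constant([[[1, 2], [3, 4], [5, 6]],
                     [[7, 8], [9, 10], [11, 12]]])
print(sess.run(tf.shape(tens2)))  # dynamic shape, evaluates to [2 3 2]
print(tens2.get_shape())          # static shape, prints (2, 3, 2)
```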
Tensor data types

In addition to dimensionality, tensors have a fixed data type. You can assign any one of the following data types to a tensor:

Data type | Python type | Description
DT_FLOAT | tf.float32 | 32-bit floating point.
DT_DOUBLE | tf.float64 | 64-bit floating point.
DT_INT8 | tf.int8 | 8-bit signed integer.
DT_INT16 | tf.int16 | 16-bit signed integer.
DT_INT32 | tf.int32 | 32-bit signed integer.
DT_INT64 | tf.int64 | 64-bit signed integer.
DT_UINT8 | tf.uint8 | 8-bit unsigned integer.
DT_STRING | tf.string | Variable-length byte array. Each element of a tensor is a byte array.
DT_BOOL | tf.bool | Boolean.

Creating new tensors

We can either create our own tensors or derive them from the well-known numpy library. In the following example, we create some numpy arrays and do some basic math with them:

import tensorflow as tf
import numpy as np
x = tf.constant(np.random.rand(32).astype(np.float32))
y = tf.constant([1,2,3])
x
y

Output: <tf.Tensor 'Const_2:0' shape=(3,) dtype=int32>

From numpy to tensors and vice versa

TensorFlow is interoperable with numpy, and normally the eval() function calls will return a numpy object, ready to be worked with the standard numerical tools. We must note that the tensor object is a symbolic handle for the result of an operation, so it doesn't hold the resulting values of the structures it contains. For this reason, we must run the eval() method to get the actual values, which is the equivalent of Session.run(tensor_to_eval). In this example, we build a numpy array and convert it to a tensor:

import tensorflow as tf #we import tensorflow
import numpy as np #we import numpy
sess = tf.Session() #start a new Session Object
x_data = np.array([[1.,2.,3.],[3.,2.,6.]]) # 2x3 matrix
x = tf.convert_to_tensor(x_data, dtype=tf.float32)
print (x)

Output: Tensor("Const_3:0", shape=(2, 3), dtype=float32)

Useful method: tf.convert_to_tensor converts Python objects of various types to tensor objects. It accepts tensor objects, numpy arrays, Python lists, and Python scalars.

Getting things done - interacting with TensorFlow

As with the majority of Python's modules, TensorFlow allows the use of Python's interactive console. In the previous figure, we call the Python interpreter (by simply calling python) and create a tensor of constant type. Then we invoke it again, and the Python interpreter shows the shape and type of the tensor. We can also use the IPython interpreter, which will allow us to employ a format more compatible with notebook-style tools, such as Jupyter.

When running TensorFlow sessions in an interactive manner, it's better to employ the InteractiveSession object. Unlike the normal tf.Session class, the tf.InteractiveSession class installs itself as the default session on construction. So when you try to eval a tensor or run an operation, it will not be necessary to pass a Session object to indicate which session it refers to.

To summarize, we have learned about tensors, the key data structure in TensorFlow, and the simple operations we can apply to the data. To know more about different machine learning techniques and algorithms that can be used to build efficient and powerful projects, you can refer to the book Building Machine Learning Projects with TensorFlow.


Saving backups on cloud services with Elasticsearch plugins

Savia Lobo
29 Dec 2017
9 min read
Our article is a book excerpt taken from Mastering Elasticsearch 5.x, written by Bharvi Dixit. This book guides you through the intermediate and advanced functionalities of Elasticsearch, such as querying, indexing, searching, and modifying data. In other words, you will gain all the knowledge necessary to master Elasticsearch and put it to efficient use.

This article will explain how Elasticsearch, with the help of additional plugins, allows us to push our data outside of the cluster to the cloud. There are four possibilities for where our repository can be located, at least using officially supported plugins:

The S3 repository: AWS
The HDFS repository: Hadoop clusters
The GCS repository: Google Cloud Storage
The Azure repository: Microsoft's cloud platform

Let's go through these repositories to see how we can push our backup data to the cloud services.

The S3 repository

The S3 repository is a part of the Elasticsearch AWS plugin, so to use S3 as the repository for snapshotting, we need to install the plugin first on every node of the cluster, and each node must be restarted after the plugin installation:

sudo bin/elasticsearch-plugin install repository-s3

After installing the plugin on every Elasticsearch node in the cluster, we need to alter their configuration (the elasticsearch.yml file) so that the AWS access information is available. An example configuration can look like this:

cloud:
  aws:
    access_key: YOUR_ACCESS_KEY
    secret_key: YOUR_SECRET_KEY

To create the S3 repository that Elasticsearch will use for snapshotting, we need to run a command similar to the following one:

curl -XPUT 'http://localhost:9200/_snapshot/my_s3_repository' -d '{
  "type": "s3",
  "settings": {
    "bucket": "bucket_name"
  }
}'

The following settings are supported when defining an S3-based repository:

bucket: This is the required parameter describing the Amazon S3 bucket to which the Elasticsearch data will be written and from which Elasticsearch will read the data.
region: This is the name of the AWS region where the bucket resides. By default, the US Standard region is used.
base_path: By default, Elasticsearch puts the data in the root directory. This parameter allows you to change that and alter the place where the data is placed in the repository.
server_side_encryption: By default, encryption is turned off. You can set this parameter to true in order to use the AES256 algorithm to store data.
chunk_size: By default, this is set to 1GB and specifies the size of the data chunks that will be sent. If the snapshot size is larger than chunk_size, Elasticsearch will split the data into smaller chunks that are not larger than the size specified in chunk_size. The chunk size can be specified in size notations such as 1GB, 100mb, and 1024kB.
buffer_size: The size of this buffer is set to 100mb by default. When the chunk size is greater than the value of buffer_size, Elasticsearch will split it into buffer_size fragments and use the AWS multipart API to send them. The buffer size cannot be set lower than 5MB because that disallows the use of the multipart API.
endpoint: This defaults to AWS's default S3 endpoint. Setting a region overrides the endpoint setting.
protocol: Specifies whether to use http or https. It defaults to cloud.aws.protocol or cloud.aws.s3.protocol.
compress: Defaults to false; when set to true, this option allows snapshot metadata files to be stored in a compressed format. Please note that index files are already compressed by default.
read_only: Makes the repository read-only. It defaults to false.
max_retries: This specifies the number of retries Elasticsearch will make before giving up on storing or retrieving a snapshot. By default, it is set to 3.

In addition to the preceding properties, we are allowed to set two additional properties that can overwrite the credentials stored in elasticsearch.yml, which will be used to connect to S3. This is especially handy when you want to use several S3 repositories, each with its own security settings:

access_key: This overwrites cloud.aws.access_key from elasticsearch.yml.
secret_key: This overwrites cloud.aws.secret_key from elasticsearch.yml.

Note: AWS instances resolve S3 endpoints to a public IP. If the Elasticsearch instances reside in a private subnet in an AWS VPC, then all traffic to S3 will go through that VPC's NAT instance. If your VPC's NAT instance is a smaller instance size (for example, a t1.micro) or is handling a high volume of network traffic, your bandwidth to S3 may be limited by that NAT instance's networking bandwidth limitations. So, if you are running your Elasticsearch cluster inside a VPC, make sure that you are using instances with high networking bandwidth and that there is no network congestion.

Note: Instances residing in a public subnet in an AWS VPC will connect to S3 via the VPC's Internet gateway and will not be bandwidth limited by the VPC's NAT instance.

The HDFS repository

If you use Hadoop and its HDFS (http://wiki.apache.org/hadoop/HDFS) filesystem, a good alternative for backing up the Elasticsearch data is to store it in your Hadoop cluster. As in the case of S3, there is a dedicated plugin for this. To install it, we can use the following command:

sudo bin/elasticsearch-plugin install repository-hdfs

Note: The HDFS snapshot/restore plugin is built against the latest Apache Hadoop 2.x (currently 2.7.1). If your Hadoop distribution is not protocol compatible with Apache Hadoop, you can replace the Hadoop libraries inside the plugin folder with your own (you might have to adjust the security permissions required).

Note: Even if Hadoop is already installed on the Elasticsearch nodes, for security reasons, the required libraries need to be placed under the plugin folder. Note that in most cases, if the distribution is compatible, one simply needs to configure the repository with the appropriate Hadoop configuration files.

After installing the plugin on each node in the cluster and restarting every node, we can use the following command to create a repository in our Hadoop cluster:

curl -XPUT 'http://localhost:9200/_snapshot/es_hdfs_repository' -d '{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch_snapshots/es_hdfs_repository"
  }
}'

The available settings that we can use are as follows:

uri: This is a required parameter that tells Elasticsearch where HDFS resides. It should have a format like hdfs://HOST:PORT/.
path: This is the information about the path where snapshot files should be stored. It is a required parameter.
load_defaults: This specifies whether the default parameters from the Hadoop configuration should be loaded; set it to false if the reading of the settings should be disabled. This setting is enabled by default.
chunk_size: This specifies the size of the chunks that Elasticsearch will use to split the snapshot data. If you want the snapshotting to be faster, you can use smaller chunks and more streams to push the data to HDFS. By default, it is disabled.
conf.<key>: This is an optional parameter, where key is any Hadoop configuration argument. The value provided with this property will be merged with the Hadoop configuration.

As an alternative, you can define your HDFS repository and its settings inside the elasticsearch.yml file of each node as follows:

repositories:
  hdfs:
    uri: "hdfs://<host>:<port>/"
    path: "some/path"
    load_defaults: "true"
    conf.<key>: "<value>"
    compress: "false"
    chunk_size: "10mb"

The Azure repository

Just like with Amazon S3, we are able to use a dedicated plugin to push our indices and metadata to Microsoft cloud services. To do this, we need to install the plugin on every node of the cluster, which we can do by running the following command:

sudo bin/elasticsearch-plugin install repository-azure

The configuration is also similar to the Amazon S3 plugin configuration. Our elasticsearch.yml file should contain the following section:

cloud:
  azure:
    storage:
      my_account:
        account: your_azure_storage_account
        key: your_azure_storage_key

Do not forget to restart all the nodes after installing the plugin. After Elasticsearch is configured, we need to create the actual repository, which we do by running the following command:

curl -XPUT 'http://localhost:9200/_snapshot/azure_repository' -d '{
  "type": "azure"
}'

The following settings are supported by the Elasticsearch Azure plugin:

account: The Microsoft Azure account settings to be used.
container: As with the bucket in Amazon S3, every piece of information must reside in a container. This setting defines the name of the container in the Microsoft Azure space. The default value is elasticsearch-snapshots.
base_path: This allows us to change the place where Elasticsearch will put the data. By default, the value for this setting is empty, which causes Elasticsearch to put the data in the root directory.
compress: This defaults to false; when enabled, it allows us to compress the metadata files during snapshot creation.
chunk_size: This is the maximum chunk size used by Elasticsearch (set to 64m by default, which is also the maximum value allowed). You can change it to adjust when the data should be split into smaller chunks, using size value notations such as 1g, 100m, or 5k.

An example of creating a repository using these settings follows:

curl -XPUT "http://localhost:9205/_snapshot/azure_repository" -d'
{
  "type": "azure",
  "settings": {
    "container": "es-backup-container",
    "base_path": "backups",
    "chunk_size": "100m",
    "compress": true
  }
}'

The Google cloud storage repository

Similar to Amazon S3 and Microsoft Azure, we can use a GCS repository plugin for snapshotting and restoring our indices. The settings for this plugin are almost the same as for the other cloud plugins. To learn how to work with the Google cloud repository plugin, please refer to the following URL: https://www.elastic.co/guide/en/elasticsearch/plugins/5.0/repository-gcs.htm

Thus, in this article we learned how to back up data from Elasticsearch clusters to the cloud, that is, to different cloud repositories, by making use of the additional plugin options for Elasticsearch. If you found our excerpt useful, you may explore other interesting features and advanced concepts of Elasticsearch 5.x, like aggregation, index control, sharding, replication, and clustering, in the book Mastering Elasticsearch 5.x.

25 Startups using machine learning differently in 2018: From farming to brewing beer to elder care

Fatema Patrawala
29 Dec 2017
14 min read
What really excites me about data science, and by extension machine learning, is the sheer number of possibilities! You can think of so many applications off the top of your head: robo-advisors, computerized lawyers, digital medicine, even automating VC decisions when they invest in startups. You can even venture into the automation of art and music, or algorithms writing papers that are indistinguishable from human-written papers. It's like solving a puzzle, but a puzzle that's meaningful and that has real-world implications.

The things that we can do today weren't possible five years ago, and this is largely thanks to growth in computational power, data availability, and the adoption of the cloud, which made accessing these resources economical for everyone; all of these are key enabling factors for the advancement of machine learning and AI. Having witnessed the growth of data science as a discipline, industries like finance, healthcare, education, media and entertainment, insurance, retail, and energy have left no stone unturned to harness this opportunity. Data science has the capability to offer even more, and in the future we will see a wide range of applications in places that haven't even been explored. In the years to come, we will increasingly see data-powered, AI-enabled products and services take on roles traditionally handled by humans, roles thought to require innately human qualities to perform successfully. In this article we have covered some use cases of data science being used differently, and startups that have practically implemented it.

The Nurturer: For elder care

The world is aging rather rapidly. According to the World Health Organization, nearly two billion people across the world are expected to be over 60 years old by 2050, a figure that's more than triple what it was in 2000. In order to adapt to their increasingly aging populations, many countries have raised the retirement age, reduced pension benefits, and started spending more on elderly care. Research institutions in countries like Japan, home to a large elderly population, are focusing their R&D efforts on robots that can perform tasks like lifting and moving chronically ill patients, while many startups are working on automating hospital logistics and bringing in virtual assistance. They also offer AI-based virtual assistants to serve as middlemen between nurses and patients, reducing the need for frequent in-hospital visits.

Dr Ben Maruthappu, a practising doctor, has brought change to the world of geriatric care with Cera, an AI-based app. It is an on-demand platform to aid the elderly in need. The Cera app firmly puts itself in the category of Uber and Amazon, whereby it connects elderly people in need of care with a caregiver in a matter of a few hours. The team behind this innovation also plans to use AI to track patients' health conditions and reduce the number of emergency patients admitted to hospitals. Elliq, a social companion technology created by Intuition Robotics, helps older adults stay active and engaged with a proactive social robot that overcomes the digital divide. AliveCor, a leading FDA-cleared mobile heart solution, helps save lives and money and brings modern healthcare into the 21st century.

The Teacher: Personalized education platforms for lifelong learning

With children increasingly using smartphones and tablets and coding becoming a part of national curricula around the world, technology has become an integral part of classrooms.
We have already witnessed the rise and impact of education technology, especially through a multitude of adaptive learning platforms that allow learners to strengthen their skills and knowledge: CBTs, LMSes, MOOCs, and more. Now virtual reality (VR) and artificial intelligence (AI) are gaining traction to provide a lifelong learning companion that can accompany and support individuals throughout their studies, in and beyond school. An AI-based educational platform learns the potential held by each particular student. Based on this data, tailored guidance is provided to fix mistakes and improve on the weaker areas. A detailed report can be generated for teachers to help them customize lesson plans to best suit the needs of the student.

Take Gruff Davies' Kwiziq, for example. Gruff and his team leverage AI to provide a personalised learning experience for students based on their individual needs. Students registered on the platform get the benefit of an AI-based language coach, which asks them to solve various micro quizzes. The quiz solutions provided by students are then turned into detailed "brain maps". These brain maps are further used to provide tailored instructions and feedback for improvement. Other startups like Blippar specialize in augmented reality for visual and experiential learning. Unelma Platforms, a software platform development company, provides state-of-the-art software for the higher-education, healthcare, and business markets.

The Provider: Farming made more productive, sustainable, and advanced

Though farming is considered the backbone of many national economies, especially in the developing world, there is often an outdated view of it involving small, family-owned plots where crops are harvested by hand. The reality is that modern-day farms have had to overhaul operations to meet demand and remain competitively priced while adapting to the ever-changing ways technology is infiltrating all parts of life. Climate change is a serious environmental threat farmers must deal with every season: strong storms and severe droughts have made farming even more challenging. Additionally, lack of agricultural inputs, water scarcity, over-chemicalization of fertilizers, water and soil pollution, and the shortage of storage systems have made survival for farmers all the more difficult.

To overcome these challenges, smart farming techniques are the need of the hour for farmers in order to manage resources and survive in the market. For instance, in a paper published on arXiv, a team explains how they used a technique known as transfer learning to teach an AI how to recognize crop diseases and pest damage. They utilized TensorFlow to build and train a neural network of their own, which involved showing the AI 2,756 images of cassava leaves from plants in Tanzania. Their efforts were a success, as the AI was able to correctly identify brown leaf spot disease with 98 percent accuracy.

WeFarm, a SaaS-based agritech firm headquartered in London, aims to bridge the connectivity gap in the farming community. It allows farmers to send queries related to farming via text message, which are then shared online in several languages. The farmer then receives a crowdsourced response from other farmers around the world. In this way, a farmer in Kenya can get a solution from someone sitting in Uganda, without having to leave his farm, spend additional money, or access the internet. Benson Hill Bio-systems, led by Matthew B.
WeFarm, a SaaS-based agritech firm headquartered in London, aims to bridge the connectivity gap in the farming community. It allows farmers to send farming-related queries via text message, which are then shared online and translated into several languages, and the farmer receives a crowdsourced response from other farmers around the world. In this way, a farmer in Kenya can get a solution from someone sitting in Uganda without having to leave their farm, spend additional money, or even access the internet. Benson Hill Biosystems, co-founded by Matthew B. Crisp, former President of an Agricultural Biotechnology Division, has differentiated itself by bringing the power of Cloud Biology™ to agriculture. It combines cloud computing, big data analytics, and plant biology to inspire innovation in agriculture. At the heart of Benson Hill is CropOS™, a cognitive engine that integrates crop data and analytics with the biological expertise and experience of the Benson Hill scientists. CropOS™ continuously advances and improves with every new dataset, strengthening the system's predictive power. Firms like Plenty Inc and Bowery Farming Inc are not far behind in offering smart farming solutions: Plenty Inc is an agriculture technology company that develops plant sciences for crops to flourish in a pesticide- and GMO-free environment, while Bowery Farming uses high-tech approaches such as robotics, LED lighting, and data analytics to grow leafy greens indoors.

The Saviour: For sustainability and waste management

The global energy landscape continues to evolve, sometimes by the nanosecond, sometimes by the day. The sector finds itself pulled to economize and pushed to innovate due to a surge in demand for new power and utilities offerings. Innovations in power-sector technology, such as new storage battery options, smartphone-based thermostat apps, and AI-enabled sensors, are advancing at a pace that has surprised developers and adopters alike. Consumer demand for such products has increased, and to meet it, industry leaders are integrating these innovations into their operations and infrastructure as rapidly as they can. At the same time, companies pursuing energy efficiency have two long-standing goals, gaining a competitive advantage and boosting the bottom line, and a relatively new one: environmental sustainability. Recognizing the importance of these trends, startups like SmartTrace offer a cloud-based platform to manage waste at multiple levels. This includes taking rough data from waste contractors and extrapolating it into volume, EWC, finance, and CO2 statistics. The extracted data acts as a guide to improve methodology, educate, strengthen oversight, and direct improvements to the bottom line as well as to environmental outcomes. One Concern provides damage estimates by applying artificial intelligence to the natural phenomena sciences, while AutoGrid organizes energy data and employs big data analytics to generate real-time predictions and actionable insights.

The Dreamer: For lifestyle and creative product development and design

Consumers in our modern world continually make decisions about product choice because there are so many competing products in the market. Often those choices boil down to whether a product provides better value than the others, either in terms of quality or price, or by aligning with the consumer's personal beliefs and values. Lifestyle products and brands operate off ideologies, hoping to attract a relatively large number of people and ultimately become a recognized social phenomenon. While e-commerce has leveraged data science to master the price dimension, here are some examples of startups trying to deconstruct the other two dimensions: product development and branding. Have you ever imagined your beer being brewed by AI? Well, now you can, with IntelligentX. The IntelligentX team claim to have invented the world's first beer brewed by artificial intelligence.
The team also plans to craft a premium beer using complex machine learning algorithms which improve from the feedback given by customers. Customers are given one of the company's four bottle-conditioned beers to try; after the trial, the AI asks them what they think of the beer via an online feedback messenger. The collected data is then used by an algorithm to brew the next batch. Because the AI is constantly reacting to user feedback, IntelligentX can brew beer that matches what customers want more quickly than anyone else can. In practice, this means the company gets more data and customers get a customized, fresh beer. In the lifestyle domain, we have Stitch Fix, which has brought a personal touch to the online shopping journey. It is no regular apparel e-commerce company: it has created a formula for blending human expertise with the right amount of data science to serve its customers. According to Katrina Lake, founder and CEO, "You can look at every product on the planet, but trying to figure out which one is best for you is really the challenge", and that is where Stitch Fix comes into the picture. The company is disrupting traditional retail by bridging the personalization gap that traditional retail could not close. To know how Stitch Fix uses full-stack data science, read our detailed article.

The Writer: From content creation to curation to promotion

The publishing industry has seen a digital revolution too. Echobox is one of the pioneers in building AI for the publishing industry. Antoine Amann, founder of Echobox, wrote in a blog post that they have "developed an AI platform that takes large quantity of variables into account and analyses them in real time to determine optimum post performance". Echobox prides itself on working with Facebook and Twitter to optimize social media content, perform advanced analytics with A/B testing, and curate content for the desired CTRs. With a global client base including Le Monde, The Telegraph, and The Guardian, it has made much of the traditional work of social media editors redundant. New York-based startup Agolo uses AI to create real-time summaries of information. It was initially used to curate Twitter feeds in order to focus on the conversations, tweets, and hashtags most relevant to a user's preferences. Using natural language processing, Agolo scans content, identifies relationships among the information sources, and picks out the most relevant information, producing a comprehensive summary of the original piece. Other products like Grammarly offer AI-powered solutions to help people write, edit, and formulate mistake-free content. Textio came up with augmented writing, which means that every time you write something you know ahead of time who is likely to respond; in other words, writing supported by outcome data in real time. Automated Insights, creator of Wordsmith, a natural language generation platform, enables you to produce human-sounding narratives from data.

The Matchmaker: Connecting people, skills and other entities

AI will make networking at B2B events more fun and highly productive for business professionals. Grip, a London-based startup formerly known as Network, rebranded itself in April 2016 and is using AI to make networking at events more constructive and fruitful.
It acts as a B2B matchmaking engine that accumulates data from social accounts (LinkedIn, Twitter) and smartly matches it against event registration data. Something like a Tinder for networking, Grip uses advanced algorithms to recommend the right people and presents them through an easy-to-use swiping interface. It also delivers a detailed report to the event organizer on the success of the event for every user or social segment. We are well aware that data scientist has been called the sexiest job of the 21st century. JamieAI harnesses this fact to connect technical talent with data-oriented jobs at organizations of all types and sizes. The startup has combined AI insights with human oversight to reduce hiring costs and eliminate bias, and third-party recruitment agencies are removed from the process to boost transparency and efficiency on the path to employment. Another example is Woo.io, a marketplace for matching tech professionals and companies.

The Manager: Virtual assistants of a different kind

Artificial intelligence can also predict how much your household appliances will cost on your electricity bill. Verv, a producer of smart home energy assistants, provides intelligent information on your household appliances, helping its customers significantly reduce their electricity bills and carbon footprints. The technology uses machine learning algorithms to provide real-time information by learning how much power and money each device is using. It can also suggest eco-friendly alternatives, alert homeowners to appliances that have been in use for a long time, and warn them of any dangerous activity while they aren't at home. Other examples include Maana, which manages machines and improves operational efficiency to enable fast, data-driven decisions; Gong.io, which acts as a sales representative's assistant, understanding sales conversations and turning them into actionable insights; and ObEN, which creates complete virtual identities for consumers and celebrities in the emerging digital world.

The Motivator: For personal and business productivity and growth

Perkbox, a highly cross-functional company, came up with an employee engagement platform. Saurav Chopra, founder of Perkbox, believes teams perform their best when they are happy and engaged, so Perkbox helps companies boost employee motivation and create a more inspiring atmosphere at work. The platform offers UK firms gym services, dental discounts, and rewards for top achievers, along with a wide range of perks, discounts, and tools to help organizations retain and motivate their employees. Technologies like AWS and Kubernetes allow the company to stay closely knit with its development team in order to build, scale, and support the Perkbox application for its growing user base. So, these are some use cases where we found startups using data science and machine learning differently. Do you know of others? Please share them in the comments below.


There and back again: Decrypting Bitcoin`s 2017 journey from $1000 to $20000

Ashwin Nair
28 Dec 2017
7 min read
Lately, Bitcoin has emerged as the most popular topic of discussion amongst colleagues, friends and family. The conversations more or less have a similar theme, filled with inane theories about what Bitcoin is and the growing fear of missing out on Bitcoin mania. Well, to be fair, who would want to miss out on an opportunity to grow their money by more than 1000%? That's the return posted by Bitcoin since the start of the year. At the time of writing this article, Bitcoin is at $15,000 with a market cap of over $250 billion. To put the hype in context, Bitcoin is now valued higher than 90% of the companies in the S&P 500 list. Supposedly invented by an anonymous group or individual under the alias Satoshi Nakamoto in 2009, Bitcoin has been seen as a digital currency that the internet might not need but one that it deserves. Satoshi's vision was to create a robust electronic payment system that functions smoothly without the need for a trusted third party. This was achievable with the help of blockchain, a digital ledger which records transactions in an indelible fashion and is distributed across multiple nodes, ensuring that no transaction can be altered or deleted and thus completely eliminating the need for a trusted third party. Given all the excitement around Bitcoin and the growing interest in owning one, dissecting its roller coaster journey from below $1,000 to $15,000 this year seemed pretty exciting.

Starting the year with a bang: global uncertainties lead to a Bitcoin rally (Oct 2016 to Jan 2017)

Global uncertainties played a big role in driving Bitcoin's price, and boy, 2016 was full of them! From Brexit to Trump winning the US elections, and major economic commotion in the form of the devaluation of China's yuan and India's demonetization drive, investors sought shelter in Bitcoin.

The first blip (Jan 2017 to March 2017)

China as a country has had a major impact in determining Bitcoin's fate. In early 2017, China contributed over 80% of Bitcoin transactions, indicating how much power Chinese traders and investors had in controlling Bitcoin's price. However, the People's Bank of China's closer inspection of the exchanges revealed several irregularities in their transactions and business practices, which eventually led to officials halting Bitcoin withdrawals and taking stringent steps against cryptocurrency exchanges.

(Chart source: Bloomberg)

On the path to becoming mainstream, with support from Ethereum (Mar 2017 to June 2017)

During this phase, we saw the rise of another cryptocurrency, Ethereum, a close digital cousin of Bitcoin. Like Bitcoin, Ethereum is built on top of blockchain technology, allowing it to run a decentralized public network. However, Ethereum's capabilities extend beyond being a cryptocurrency and help developers build and deploy any kind of decentralized application. Ethereum's valuation in this period rose from $20 to $375, which was quite beneficial for Bitcoin, as every reference to Ethereum mentioned Bitcoin as well, whether to explain what Ethereum is or how it could take the number one cryptocurrency spot in the future. This, coupled with the rise in blockchain's popularity, increased Bitcoin's visibility within the USA, and the media started observing politicians, celebrities and other prominent personalities speaking about Bitcoin as well. Bitcoin also received a major boost from Chinese exchanges, where withdrawals of the cryptocurrency resumed after a nearly four-month freeze.
All these factors led to Bitcoin crossing an all-time high of $2,500, up by more than 150% since the start of the year.

The curious case of the fork (June 2017 to September 2017)

The month of July saw the cryptocurrency market cap witness a sharp decline, with questions being raised about the price volatility and whether Bitcoin's rally for the year was over. We can, of course, now confidently debunk that question. Though there hasn't been any proven rationale behind the decline, one of the reasons seems to be profit booking following months of steep rally in Bitcoin's valuation. Another major factor that might have driven the price collapse was an unresolved dispute among leading members of the Bitcoin community over how to overcome the growing problem of Bitcoin being slow and expensive. With growing usage, Bitcoin's transaction times slowed down. Due to its block size limitation, Bitcoin's network could only execute around 7 transactions per second, compared to the VISA network, which could do over 1,600. This also led to transaction fees increasing substantially, to around $5.00 per transaction, with settlement often taking hours or even days. This eventually put Bitcoin's flaws in the spotlight when compared, in terms of cost and transaction time, with services offered by competitors such as PayPal.

(Chart source: Coinmarketcap)

Bitcoin's poor performance led investors to opt for other cryptocurrencies. The above graph shows how Bitcoin's dominance fell substantially compared to other cryptocurrencies such as Ethereum and Litecoin during this time. With the core community still unable to reach a consensus on how to improve performance and update the software, the prospect of a "fork" was raised. A fork is a change in Bitcoin's underlying software protocol that makes previous rules valid or invalid; there are two types of blockchain fork, the soft fork and the hard fork. Around August, the community announced it would go ahead with a hard fork in the form of Bitcoin Cash. This news was, surprisingly, taken in a positive manner, leading to Bitcoin rebounding strongly and reaching new heights of around $4,000.

Once bitten, twice shy: China again (September 2017 to October 2017)

(Chart source: Zerohedge)

The month of September saw another setback for Bitcoin due to measures taken by the People's Bank of China. This time, the PBoC banned initial coin offerings (ICOs), prohibiting the practice of building and selling cryptocurrency to investors or using it to finance startup projects within China. Based on a report by the National Committee of Experts on Internet Financial Security Technology, Chinese investors were involved in 2.6 billion yuan worth of ICOs in January-June 2017, reflecting China's exposure to Bitcoin.

My precious (October 2017 to December 2017)

(Chart source: Manmanroe)

During the last quarter, Bitcoin's surge has shocked even hardcore Bitcoin fanatics. Everything seems to be going right for Bitcoin at the moment. While at the start of the year China was the major contributor to the hike in Bitcoin's valuation, the momentum now seems to have shifted to a more sensible and responsible market in Japan, which has embraced Bitcoin in quite a positive manner. As you can see from the graph below, Japan now accounts for more than 50% of transactions, with the USA's share much smaller.
Besides Japan, we are also seeing positive developments in countries such as Russia and India, which are looking to legalize cryptocurrency usage. Moreover, interest in Bitcoin from institutional investors is at its peak. All these factors resulted in Bitcoin crossing the five-digit mark for the first time in November 2017 and touching an all-time high of close to $20,000 in December 2017. Since that record high, Bitcoin has been witnessing a crash-and-rebound phenomenon in the last two weeks of December. From a record high of $20,000 down to $11,000 and now back at $15,000, Bitcoin remains a volatile digital currency for anyone looking for quick price appreciation. Despite the valuation dilemma and the price volatility, one thing is sure: the world is warming up to the idea of cryptocurrencies, and even to owning one. There are already several predictions about how Bitcoin's astronomical growth is going to continue in 2018. However, Bitcoin needs to overcome several challenges before it can replace traditional currency and be widely accepted in banking practices. Besides the rise of other cryptocurrencies such as Ethereum, Litecoin, and Bitcoin Cash, which are looking to dethrone Bitcoin from the #1 spot, there are broader issues the Bitcoin community should prioritize: how to curb the effect of Bitcoin mining on the environment, and how to achieve smoother reforms and a regulatory roadmap from countries, so that people actually start using Bitcoin instead of just looking at it as a tool for making a quick buck.


Getting started with Spark 2.0

Sunith Shetty
28 Dec 2017
10 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book by Muhammad Asif Abbasi titled Learning Apache Spark 2. In this book, author explains how to perform big data analytics using Spark streaming, machine learning techniques and more.[/box] Today, we will learn about the basics of Spark architecture and its various components. We will also explore how to install Spark for running it in the local mode. Apache Spark architecture overview Apache Spark is an open source distributed data processing engine for clusters, which provides a unified programming model engine across different types data processing workloads and platforms. At the core of the project is a set of APIs for Streaming, SQL, Machine Learning (ML), and Graph. Spark community supports the Spark project by providing connectors to various open source and proprietary data storage engines. Spark also has the ability to run on a variety of cluster managers like YARN and Mesos, in addition to the Standalone cluster manager which comes bundled with Spark for standalone installation. This is thus a marked difference from Hadoop ecosystem where Hadoop provides a complete platform in terms of storage formats, compute engine, cluster manager, and so on. Spark has been designed with the single goal of being an optimized compute engine. This therefore allows you to run Spark on a variety of cluster managers including being run standalone, or being plugged into YARN and Mesos. Similarly, Spark does not have its own storage, but it can connect to a wide number of storage engines. Currently Spark APIs are available in some of the most common languages including Scala, Java, Python, and R. Let's start by going through various API's available in Spark. Spark-core At the heart of the Spark architecture is the core engine of Spark, commonly referred to as spark-core, which forms the foundation of this powerful architecture. Spark-core provides services such as managing the memory pool, scheduling of tasks on the cluster (Spark works as a Massively Parallel Processing (MPP) system when deployed in cluster mode), recovering failed jobs, and providing support to work with a wide variety of storage systems such as HDFS, S3, and so on. Note: Spark-Core provides a full scheduling component for Standalone Scheduling: Code is available at: https://github.com/apache/spark/tree/master/core/src/main/scala/org/apache/spark/scheduler Spark-Core abstracts the users of the APIs from lower-level technicalities of working on a cluster. Spark-Core also provides the RDD APIs which are the basis of other higher-level APIs, and are the core programming elements on Spark. Note: MPP systems generally use a large number of processors (on separate hardware or virtualized) to perform a set of operations in parallel. The objective of the MPP systems is to divide work into smaller task pieces and running them in parallel to increase in throughput time. Spark SQL Spark SQL is one of the most popular modules of Spark designed for structured and semistructured data processing. Spark SQL allows users to query structured data inside Spark programs using SQL or the DataFrame and the Dataset API, which is usable in Java, Scala, Python, and R. Because of the fact that the DataFrame API provides a uniform way to access a variety of data sources, including Hive datasets, Avro, Parquet, ORC, JSON, and JDBC, users should be able to connect to any data source the same way, and join across these multiple sources together. 
Spark streaming

More than 50% of users consider Spark Streaming to be the most important component of Apache Spark. Spark Streaming is a module of Spark that enables the processing of data arriving in passive or live streams. Passive streams can come from static files that you choose to stream to your Spark cluster, and this can include all sorts of data, ranging from web server logs and social media activity (following a particular Twitter hashtag) to sensor data from your car, phone, or home. Spark Streaming provides a set of APIs that help you create streaming applications in a way similar to how you would create a batch job, with minor tweaks. As of Spark 2.0, the philosophy behind Spark Streaming is that you should not have to reason about streaming any differently from how you would build a data application over a traditional, static data source. This means that data from sources is continuously appended to existing tables, and all the operations are run on the new window. A single API lets users create batch or streaming applications, with the only difference being that a table in a batch application is finite, while the table for a streaming job is considered to be infinite.

MLlib

MLlib is the Machine Learning library for Spark. If you remember from the preface, iterative algorithms were one of the key drivers behind the creation of Spark, and most machine learning algorithms perform iterative processing in one way or another. Note: Machine learning is a type of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed. Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data. Spark MLlib allows developers to use the Spark API and build machine learning algorithms by tapping into a number of data sources, including HDFS, HBase, Cassandra, and so on. Spark is super fast with iterative computing, and it performs up to 100 times better than MapReduce. Spark MLlib contains a number of algorithms and utilities including, but not limited to, logistic regression, Support Vector Machines (SVM), classification and regression trees, random forests and gradient-boosted trees, recommendation via ALS, clustering via K-Means, Principal Component Analysis (PCA), and many others. A small illustrative snippet follows.
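The following snippet is likewise not from the book; it is only a minimal sketch of training one of the MLlib algorithms listed above (logistic regression, via the DataFrame-based pyspark.ml API), with a tiny inline dataset invented for illustration.

from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("mllib-demo").getOrCreate()

# A DataFrame of (label, features) rows is all the estimator needs.
training = spark.createDataFrame([
    (0.0, Vectors.dense(0.0, 1.1, 0.1)),
    (1.0, Vectors.dense(2.0, 1.0, -1.0)),
    (0.0, Vectors.dense(2.0, 1.3, 1.0)),
    (1.0, Vectors.dense(0.0, 1.2, -0.5)),
], ["label", "features"])

lr = LogisticRegression(maxIter=10, regParam=0.01)
model = lr.fit(training)          # train the model
model.transform(training).show()  # predictions appear as extra columns

spark.stop()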
GraphX

GraphX is an API designed to manipulate graphs. The graphs can range from a graph of web pages linked to each other via hyperlinks, to a social network graph on Twitter connected by followers or retweets, or a Facebook friends list.

Graph theory is a study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph is made up of vertices (nodes/points), which are connected by edges (arcs/lines). – Wikipedia.org

Spark provides a built-in library for graph manipulation, which allows developers to seamlessly work with both graphs and collections by combining ETL, discovery analysis, and iterative graph manipulation in a single workflow. The ability to combine transformations, machine learning, and graph computation in a single system at high speed makes Spark one of the most flexible and powerful frameworks out there. Spark's ability to retain its speed of computation along with standard fault-tolerance features makes it especially handy for big data problems. Spark GraphX has a number of built-in graph algorithms, including PageRank, connected components, label propagation, SVD++, and triangle counting.

Spark deployment

Apache Spark runs on both Windows and Unix-like systems (for example, Linux and Mac OS). If you are starting out with Spark, you can run it locally on a single machine. Spark requires Java 7+, Python 2.6+, and R 3.1+. If you would like to use the Scala API (the language in which Spark was written), you need at least Scala version 2.10.x. Spark can also run in clustered mode, in which it can run both by itself and on several existing cluster managers. You can deploy Spark on any of the following cluster managers, and the list is growing every day due to active community support: Hadoop YARN, Apache Mesos, and the Standalone scheduler. Yet Another Resource Negotiator (YARN) is one of the key features of Hadoop 2, a redesigned resource manager that splits the scheduling and resource management capabilities out of the original MapReduce engine. Apache Mesos is an open source cluster manager that was developed at the University of California, Berkeley; it provides efficient resource isolation and sharing across distributed applications, or frameworks.

Installing Apache Spark

As mentioned earlier, while Spark can be deployed on a cluster, you can also run it in local mode on a single machine. In this chapter, we are going to download and install Apache Spark on a Linux machine and run it in local mode. Before we do anything, we need to download Apache Spark from the Apache web page for the Spark project:

1. Use your preferred browser to navigate to http://spark.apache.org/downloads.html.
2. Choose a Spark release. You'll find all previous Spark releases listed here. We'll go with release 2.0.0 (at the time of writing, only the preview edition was available).
3. You can download the Spark source code, which can be built for several versions of Hadoop, or download it pre-built for a specific Hadoop version. In this case, we are going to download one that has been pre-built for Hadoop 2.7 or later.
4. You can also choose to download directly or from among a number of different mirrors. For the purpose of our exercise, we'll use direct download and download it to our preferred location. Note: If you are using Windows, please remember to use a pathname without any spaces.
5. The file that you have downloaded is a compressed TAR archive. You need to extract the archive. Note: The TAR utility is generally used to unpack TAR files. If you don't have TAR, you might want to download it from the repository or use 7-Zip, which is also one of my favorite utilities.
6. Once unpacked, you will see a number of directories and files when you list the contents of the unpacked directory. The bin folder contains a number of executable shell scripts such as pyspark, sparkR, spark-shell, spark-sql, and spark-submit. All of these executables are used to interact with Spark, and we will be using most if not all of them.
7. In my particular download of Spark, you will find a folder called yarn, because this build of Spark was made for Hadoop version 2.7, which comes with YARN as a cluster manager.
We'll start by running the Spark shell, which is a very simple way to get started with Spark and learn the API. The Spark shell is a Scala Read-Evaluate-Print-Loop (REPL), one of several REPLs that ship with Spark, which also include Python and R. You should change to the Spark download directory and run the Spark shell as follows: ./bin/spark-shell. We now have Spark running in standalone mode. We'll discuss the details of the deployment architecture a bit later in this chapter, but for now let's kick-start some basic Spark programming to appreciate the power and simplicity of the Spark framework. We have gone through the Spark architecture overview and the steps to install Spark on your own personal machine. To know more about Spark SQL, Spark Streaming, and machine learning with Spark, you can refer to the book Learning Apache Spark 2.


18 striking AI Trends to watch in 2018 - Part 2

Sugandha Lahoti
28 Dec 2017
12 min read
We are back with Part 2 of our analysis of intriguing AI trends in 2018, as promised in our last post. We covered the first nine trends in Part 1 of this two-part prediction series. To refresh your memory, these are the trends we are betting on:

- Artificial General Intelligence may gain major traction in research.
- We will turn to AI-enabled solutions to solve mission-critical problems.
- Machine learning adoption in business will see rapid growth.
- Safety, ethics, and transparency will become an integral part of AI application design conversations.
- Mainstream adoption of AI on mobile devices.
- Major research on data-efficient learning methods.
- AI personal assistants will continue to get smarter.
- The race to conquer the AI-optimized hardware market will heat up further.
- We will see closer AI integration into our everyday lives.
- The cryptocurrency hype will normalize and pave the way for AI-powered blockchain applications.
- Advancements in AI and quantum computing will share a symbiotic relationship.
- Deep learning will continue to play a significant role in AI development progress.
- AI will be on both sides of the cybersecurity challenge.
- Augmented reality content will be brought to smartphones.
- Reinforcement learning will be applied to a large number of real-world situations.
- Robotics development will be powered by deep reinforcement learning and meta-learning.
- A rise in immersive media experiences enabled by AI.
- A large number of organizations will use digital twins.

Without further ado, let's dive straight into why we think these trends are important.

10. Neural AI: Deep learning will continue to play a significant role in AI progress

Talking about AI is incomplete without mentioning deep learning. 2017 saw a wide variety of deep learning applications emerge in diverse areas, from self-driving cars, to beating video game and Go champions, to dreaming, painting pictures, and making scientific discoveries. The year started with PyTorch posing a real challenge to TensorFlow, especially in research; TensorFlow countered by releasing dynamic computation graphs in TensorFlow Fold. As deep learning frameworks became more user-friendly and accessible, the barriers for programmers and researchers to use deep learning lowered and developer acceptance increased. This trend will continue to grow in 2018. There will also be improvements in designing and tuning deep learning networks, for which techniques such as automated hyperparameter tuning will be used widely, and we will start seeing real-world uses of automated machine learning development popping up. Deep learning algorithms will continue to evolve around unsupervised and generative learning to detect features and structure in data, and we will see high-value use cases of neural networks beyond image, audio, or video analysis, such as advanced text classification, musical genre recognition, and biomedical image analysis. 2017 also saw ONNX standardize neural network representations, an important and necessary step toward interoperability. This will pave the way for deep learning models to become more transparent, that is, to start making it possible to explain their predictions, especially when the outcomes of these models are used to influence or inform human decisions. 2017 also saw a large portion of deep learning research dedicated to GANs; in 2018, we should see implementations of some GAN ideas in real-world use cases, such as cyber threat detection.
2018 may also see more deep learning methods gain Bayesian equivalents, and probabilistic programming languages start to incorporate deep learning.

11. Autodidact AI: Reinforcement learning will be applied to a large number of real-world situations

Reinforcement learning systems learn by interacting with the environment through observations, actions, and rewards. The historic victory of AlphaGo this year was a milestone for reinforcement learning techniques. Although the technique has existed for decades, the idea of combining it with neural networks to solve complex problems (such as the game of Go) made it widely popular. In 2018, we will see reinforcement learning used in real-world situations, and we will also see the development of several simulated environments to speed up the progress of these algorithms. A notable fact about reinforcement learning algorithms is that they are trained via simulation, which eliminates the need for labeled data entirely. Given such advantages, we may see solutions that combine reinforcement learning and agent-based simulation in the coming year. We can also expect more algorithms and bots enabling edge devices to learn on their own, especially in IoT environments; these bots will push the boundaries between AI techniques such as reinforcement learning, unsupervised learning, and auto-generated training. The short sketch below illustrates the basic observe-act-reward loop.
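The toy example that follows is ours, not from the original post: a tabular Q-learning agent on a tiny five-state corridor, included only to make the observation, action, and reward loop concrete. The environment, reward, and hyperparameters are all invented for illustration.

import random

N_STATES, ACTIONS = 5, [-1, +1]          # move left or right along the corridor
alpha, gamma, epsilon = 0.1, 0.9, 0.2    # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    # Clamp to the corridor; reaching the last state yields the only reward.
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(300):
    s = 0
    for _ in range(100):                 # cap episode length
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next, r, done = step(s, a)
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        # Q-learning update: move the estimate toward reward plus discounted future value.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next
        if done:
            break

# After training, the greedy policy should point right (+1) in every state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})

No labeled data is involved anywhere: the agent improves purely from the rewards returned by the simulated environment, which is the property highlighted above.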
12. Gray Hat AI: AI will be on both sides of the cybersecurity challenge

2017 saw some high-profile ransomware attacks, the most notable being WannaCry. Cybercrime is projected to cause $6 trillion in damages by 2021, and companies now need to respond better and faster to security breaches. Since hiring, training, and reskilling people is time-consuming and expensive, companies are turning to AI to automate tasks and detect threats. 2017 saw a variety of AI-in-cybersecurity releases, from Watson AI helping companies stay ahead of hackers and cybersecurity attacks, to Darktrace, a company founded by Cambridge University mathematicians, which uses AI to spot patterns and prevent cyber crimes before they occur. In 2018 we may see AI being used to make better predictions about never-before-seen threats. We may also hear about AI being used to prevent a complex cybersecurity attack, or about the use of AI in incident management. On the research side, we can expect announcements related to securing IoT. McAfee has identified five cybersecurity trends for 2018, relating to adversarial machine learning, ransomware, serverless apps, connected home privacy, and the privacy of child-generated content.

13. AI in Robotics: Robotics development will be powered by deep reinforcement learning and meta-learning

Deep reinforcement learning was seen in a new light, especially in the field of robotics, after Pieter Abbeel's fantastic keynote speech at NIPS 2017, which talked about the implementation of deep reinforcement learning (DRL) in robotics, what challenges exist, and how those challenges can be overcome. DRL has been widely used to play games (AlphaGo and Atari). In 2018, deep reinforcement learning will be used to instill more human-like qualities of discernment and complex decision-making in robots. Meta-learning was another domain that gained widespread attention in 2017. The year started with model-agnostic meta-learning, which addresses the problem of discovering learning algorithms that generalize well from very few examples. Later in the year, more research on meta-learning for few-shot learning was published, using deep temporal convolutional networks and graph neural networks, among others. We are also now seeing meta-learning approaches that learn to do active learning, cold-start item recommendation, reinforcement learning, and much more. More research and real-world implementations of these algorithms will happen in 2018. 2018 may also see developments to overcome meta-learning's need for large amounts of computing power, so that it can be successfully applied to the field of robotics. Apart from these, there will be improvements on other significant challenges, such as safe learning and value alignment for AI in robotics.

14. AI Dapps: Within the developer community, the cryptocurrency hype will normalize and pave the way for AI-powered blockchain applications

Blockchain is expected to be the storehouse for 10% of world GDP by 2025. With such high market growth, Amazon announced the AWS Blockchain Partners Portal to support customers' integration of blockchain solutions with systems built on AWS, and following Amazon's announcement, more tech companies are expected to launch such solutions in the coming year. Blockchain in combination with AI will provide a way of maintaining immutability in a blockchain network, creating a secure ecosystem for transactions and data exchange. An AI blockchain is a digital ledger that maximizes security while remaining immutable by employing AI agents that govern the chain, and 2018 will see more such security solutions coming up. A drawback of blockchain is that mining requires a large amount of energy. Google's DeepMind has already proven that AI can help optimize energy consumption in data centers, and similar results can be achieved for blockchain as well. For example, Ethereum has come up with proof of stake, a set of algorithms which selects validators based in part on the size of their respective monetary deposits, instead of rewarding participants for spending computational resources, thus saving energy. Research is also expected in the area of using AI to reduce network latency to enable faster transactions.

15. Quantum AI: Convergence of AI and quantum computing

Quantum computing was called one of the three path-breaking technologies that will shape the world in the coming years by Microsoft CEO Satya Nadella. 2017 began with Google unveiling a blueprint for quantum supremacy; IBM edged past them by developing a quantum computer capable of handling 50 qubits; then came Microsoft with their Quantum Development Kit and a new quantum programming language; and the year ended with Rigetti Computing, a startup, announcing a new quantum algorithm for unsupervised machine learning. 2018 is expected to bring in more organizations, new and old, competing to develop quantum computers that can handle even more qubits and process data-intensive, large-scale algorithms at speeds never imagined before. As more companies successfully build quantum computers, they will also use them to make substantial progress on current efforts in AI development and to find new areas of scientific discovery. As with Rigetti, new quantum algorithms will be developed to solve complex machine learning problems. We can also expect tools, languages, and frameworks, such as Microsoft's Q# programming language, to be developed to facilitate quantum app development.
16. AI doppelgangers: A large number of organizations will use digital twins

A digital twin, as the name suggests, is a virtual replica of a product, process, or service. 2017 saw some major work going on in the field of digital twins, the most important being at GE, which now has over 551,000 digital twins built on its Predix platform, while SAP expanded its popular IoT platform, SAP Leonardo, with a new digital twin offering. Gartner has named the digital twin as one of its top 10 Strategic Technology Trends for 2018. Following this news, we can expect to see more organizations coming up with their own digital twins: first to monitor and control assets, reduce asset downtime, lower maintenance costs, and improve efficiency, and later to organize and envision more complex entities, such as cities or even human beings. These digital twins will be infused with AI capabilities to enable advanced simulation, operation, and analysis over the digital representations of physical objects. In 2018, digital twins are expected to make steady progress and benefit city architects, digital marketers, healthcare professionals, and industrial planners.

17. Experiential AI: A rise in immersive media experiences based on artificial intelligence

2017 saw the resurgence of virtual reality, thanks to advances made in AI. Facebook unveiled a standalone headset, Oculus Go, to go on sale in early 2018; Samsung added a separate controller to its Gear VR; and Google's Daydream steadily improved from the remains of Google Cardboard. 2018 will see virtual reality the way 2017 saw GANs: becoming an accepted convention with impressive use cases, but not fully deployed at commercial scale. It won't be limited to just creating untethered virtual reality headgear, but will also combine the power of virtual reality, artificial intelligence, and conversational platforms to build uniquely immersive experiences. These immersive technologies will move beyond conventional applications (read: the gaming industry) to be used in real estate, travel and hospitality, and other segments. Intel is reportedly working on a VR set dedicated to sports events. It allows viewers to experience a basketball game from any seat they choose, and it uses AI and big data to analyze different games happening at the same time, so viewers can switch between them immediately. Not only that, television will start becoming a popular source of immersive experiences: next-gen televisions will be equipped with high-definition cameras, as well as AI technology to analyze viewers' emotions as they watch shows.

18. AR AI: Augmented reality content will be brought to smartphones

Augmented reality first garnered worldwide attention with the release of Pokemon Go, after which a large number of organizations invested in the development of AR-enabled smartphones in 2017. Most notable was Apple's ARKit framework, which allowed developers to create augmented reality experiences for iPhone and iPad, after which Google launched ARCore to create augmented reality experiences at Android scale. Then came Snapchat, which released Lens Studio, a tool for creating customizable AR effects. The latest AR innovation came from Facebook, which launched AR Studio in open beta to bring AR into the everyday life of its users through the Facebook camera; for 2018, they are planning to develop 3D digital objects for people to place onto surfaces and interact with in their physical space.
2018 will further allow us to get a taste of augmented reality content through beta products set in the context of our everyday lives. A recent report published by Digi-Capital suggests that the mobile AR market will be worth an astonishing $108 billion by 2021. Following this report, more e-commerce websites will engage mobile users with some form of AR content, seeking inspiration from the likes of the IKEA Place AR app. Apart from this, more focus will be on building apps and frameworks that consume less battery and have high mobile connectivity. With this, we complete our list of 18 AI trends to watch in 2018. We would love to know which of our AI-driven predictions surprises you the most and which trends you agree with. Please feel free to leave a comment below with your views. Happy New Year!

How to Implement a Neural Network with Single-Layer Perceptron

Pravin Dhandre
28 Dec 2017
10 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book by Rodolfo Bonnin titled Machine Learning for Developers. This book is a systematic developer’s guide for various machine learning algorithms and techniques to develop more efficient and intelligent applications.[/box] In this article we help you go through a simple implementation of a neural network layer by modeling a binary function using basic python techniques. It is the first step in solving some of the complex machine learning problems using neural networks. Take a look at the following code snippet to implement a single function with a single-layer perceptron: import numpy as np import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') from pprint import pprint %matplotlib inline from sklearn import datasets import matplotlib.pyplot as plt Defining and graphing transfer function types The learning properties of a neural network would not be very good with just the help of a univariate linear classifier. Even some mildly complex problems in machine learning involve multiple non-linear variables, so many variants were developed as replacements for the transfer functions of the perceptron. In order to represent non-linear models, a number of different non-linear functions can be used in the activation function. This implies changes in the way the neurons will react to changes in the input variables. In the following sections, we will define the main different transfer functions and define and represent them via code. In this section, we will start using some object-oriented programming (OOP) techniques from Python to represent entities of the problem domain. This will allow us to represent concepts in a much clearer way in the examples. Let's start by creating a TransferFunction class, which will contain the following two methods: getTransferFunction(x): This method will return an activation function determined by the class type getTransferFunctionDerivative(x): This method will clearly return its derivative For both functions, the input will be a NumPy array and the function will be applied element by element, as follows: >class TransferFunction: def getTransferFunction(x): raise NotImplementedError def getTransferFunctionDerivative(x): raise NotImplementedError Representing and understanding the transfer functions Let's take a look at the following code snippet to see how the transfer function works: def graphTransferFunction(function): x = np.arange(-2.0, 2.0, 0.01) plt.figure(figsize=(18,8)) ax=plt.subplot(121) ax.set_title(function.  name  ) plt.plot(x, function.getTransferFunction(x)) ax=plt.subplot(122) ax.set_title('Derivative of ' + function.  name  ) plt.plot(x, function.getTransferFunctionDerivative(x)) Sigmoid or logistic function A sigmoid or logistic function is the canonical activation function and is well-suited for calculating probabilities in classification properties. Firstly, let's prepare a function that will be used to graph all the transfer functions with their derivatives, from a common range of -2.0 to 2.0, which will allow us to see the main characteristics of them around the y axis. 
The classical formula for the sigmoid function is as follows: class Sigmoid(TransferFunction): #Squash 0,1 def getTransferFunction(x): return 1/(1+np.exp(-x)) def getTransferFunctionDerivative(x): return x*(1-x) graphTransferFunction(Sigmoid) Take a look at the following graph: Playing with the sigmoid Next, we will do an exercise to get an idea of how the sigmoid changes when multiplied by the weights and shifted by the bias to accommodate the final function towards its minimum. Let's then vary the possible parameters of a single sigmoid first and see it stretch and move: ws=np.arange(-1.0, 1.0, 0.2) bs=np.arange(-2.0, 2.0, 0.2) xs=np.arange(-4.0, 4.0, 0.1) plt.figure(figsize=(20,10)) ax=plt.subplot(121) for i in ws: plt.plot(xs, Sigmoid.getTransferFunction(i *xs),label= str(i)); ax.set_title('Sigmoid variants in w') plt.legend(loc='upper left'); ax=plt.subplot(122) for i in bs: plt.plot(xs, Sigmoid.getTransferFunction(i +xs),label= str(i)); ax.set_title('Sigmoid variants in b') plt.legend(loc='upper left'); Take a look at the following graph: Let's take a look at the following code snippet: class Tanh(TransferFunction): #Squash -1,1 def getTransferFunction(x): return np.tanh(x) def getTransferFunctionDerivative(x): return np.power(np.tanh(x),2) graphTransferFunction(Tanh) Lets take a look at the following graph: Rectified linear unit or ReLU ReLU is called a rectified linear unit, and one of its main advantages is that it is not affected by vanishing gradient problems, which generally consist of the first layers of a network tending to be values of zero, or a tiny epsilon: class Relu(TransferFunction): def getTransferFunction(x): return x * (x>0) def getTransferFunctionDerivative(x): return 1 * (x>0) graphTransferFunction(Relu) Let's take a look at the following graph: Linear transfer function Let's take a look at the following code snippet to understand the linear transfer function: class Linear(TransferFunction): def getTransferFunction(x): return x def getTransferFunctionDerivative(x): return np.ones(len(x)) graphTransferFunction(Linear) Let's take a look at the following graph: Defining loss functions for neural networks As with every model in machine learning, we will explore the possible functions that we will use to determine how well our predictions and classification went. The first type of distinction we will do is between the L1 and L2 error function types. L1, also known as least absolute deviations (LAD) or least absolute errors (LAE), has very interesting properties, and it simply consists of the absolute difference between the final result of the model and the expected one, as follows: L1 versus L2 properties Now it's time to do a head-to-head comparison between the two types of loss function:   Robustness: L1 is a more robust loss function, which can be expressed as the resistance of the function when being affected by outliers, which projects a quadratic function to very high values. Thus, in order to choose an L2 function, we should have very stringent data cleaning for it to be efficient. Stability: The stability property assesses how much the error curve jumps for a large error value. L1 is more unstable, especially for non-normalized datasets (because numbers in the [-1, 1] range diminish when squared). Solution uniqueness: As can be inferred by its quadratic nature, the L2 function ensures that we will have a unique answer for our search for a minimum. 
L2 always has a unique solution, but L1 can have many solutions, due to the fact that we can find many paths with minimal length for our models in the form of piecewise linear functions, compared to the single line distance in the case of L2. Regarding usage, the summation of the past properties allows us to use the L2 error type in normal cases, especially because of the solution uniqueness, which gives us the required certainty when starting to minimize error values. In the first example, we will start with a simpler L1 error function for educational purposes. Let's explore these two approaches by graphing the error results for a sample L1 and L2 loss error function. In the next simple example, we will show you the very different nature of the two errors. In the first two examples, we have normalized the input between -1 and 1 and then with values outside that range. As you can see, from samples 0 to 3, the quadratic error increases steadily and continuously, but with non-normalized data it can explode, especially with outliers, as shown in the following code snippet: sampley_=np.array([.1,.2,.3,-.4, -1, -3, 6, 3]) sampley=np.array([.2,-.2,.6,.10, 2, -1, 3, -1]) ax.set_title('Sigmoid variants in b') plt.figure(figsize=(10,10)) ax=plt.subplot() plt.plot(sampley_ - sampley, label='L1') plt.plot(np.power((sampley_ - sampley),2), label="L2") ax.set_title('L1 vs L2 initial comparison') plt.legend(loc='best') plt.show() Let's take a look at the following graph: Let's define the loss functions in the form of a LossFunction class and a getLoss method for the L1 and L2 loss function types, receiving two NumPy arrays as parameters, y_, or the estimated function value, and y, the expected value: class LossFunction: def getLoss(y_ , y ): raise NotImplementedError class L1(LossFunction): def getLoss(y_, y): return np.sum (y_ - y) class L2(LossFunction): def getLoss(y_, y): return np.sum (np.power((y_ - y),2)) Now it's time to define the goal function, which we will define as a simple Boolean. In order to allow faster convergence, it will have a direct relationship between the first input variable and the function's outcome: # input dataset X = np.array([ [0,0,1], [0,1,1], [1,0,1], [1,1,1] ]) # output dataset y = np.array([[0,0,1,1]]).T The first model we will use is a very minimal neural network with three cells and a weight for each one, without bias, in order to keep the model's complexity to a minimum: # initialize weights randomly with mean 0 W = 2*np.random.random((3,1)) - 1 print (W) Take a look at the following output generated by running the preceding code: [[ 0.52014909] [-0.25361738] [ 0.165037 ]] Then we will define a set of variables to collect the model's error, the weights, and training results progression: errorlist=np.empty(3); weighthistory=np.array(0) resultshistory=np.array(0) Then it's time to do the iterative error minimization. In this case, it will consist of feeding the whole true table 100 times via the weights and the neuron's transfer function, adjusting the weights in the direction of the error. 
Note that this model doesn't use a learning rate, so it should converge (or diverge) quickly: for iter in range(100): # forward propagation l0 = X l1 = Sigmoid.getTransferFunction(np.dot(l0,W)) resultshistory = np.append(resultshistory , l1) # Error calculation l1_error = y - l1 errorlist=np.append(errorlist, l1_error) # Back propagation 1: Get the deltas l1_delta = l1_error * Sigmoid.getTransferFunctionDerivative(l1) # update weights W += np.dot(l0.T,l1_delta) weighthistory=np.append(weighthistory,W) Let's simply review the last evaluation step by printing the output values at l1. Now we can see that we are reflecting quite literally the output of the original function: print (l1) Take a look at the following output, which is generated by running the preceding code: [[ 0.11510625] [ 0.08929355] [ 0.92890033] [ 0.90781468]] To better understand the process, let's have a look at how the parameters change over time. First, let's graph the neuron weights. As you can see, they go from a random state to accepting the whole values of the first column (which is always right), going to almost 0 for the second column (which is right 50% of the time), and then going to -2 for the third (mainly because it has to trigger 0 in the first two elements of the table): plt.figure(figsize=(20,20)) print (W) plt.imshow(np.reshape(weighthistory[1:],(-1,3))[:40], cmap=plt.cm.gray_r, interpolation='nearest'); Take a look at the following output, which is generated by running the preceding code: [[ 4.62194116] [-0.28222595] [-2.04618725]] Let's take a look at the following screenshot: Let's also review how our solutions evolved (during the first 40 iterations) until we reached the last iteration; we can clearly see the convergence to the ideal values: plt.figure(figsize=(20,20)) plt.imshow(np.reshape(resultshistory[1:], (-1,4))[:40], cmap=plt.cm.gray_r, interpolation='nearest'); Let's take a look at the following screenshot: We can see how the error evolves and tends to be zero through the different epochs. In this case, we can observe that it swings from negative to positive, which is possible because we first used an L1 error function: plt.figure(figsize=(10,10)) plt.plot(errorlist); Let's take a look at the following screenshot: The above explanation of implementing neural network using single-layer perceptron helps to create and play with the transfer function and also explore how accurate did the classification and prediction of the dataset took place. To know how classification is generally done on complex and large datasets, you may read our article on multi-layer perceptrons. To get hands on with advanced concepts and powerful tools for solving complex computational machine learning techniques, do check out this book Machine Learning for Developers and start building smart applications in your machine learning projects.      

How to Mine Popular Trends on GitHub using Python - Part 2

Amey Varangaonkar
27 Dec 2017
1 min read
[box type="note" align="" class="" width=""]This article is an excerpt taken from the book Python Social Media Analytics, written by Siddhartha Chatterjee and Michal Krystyanczuk. In this book, you will find widely used social media mining techniques for extracting useful insights to drive your business.[/box] In Part 1 of this series, we gathered the GitHub data for analysis. Here, we will analyze that data to extract interesting insights about the most popular and trending tools and languages on GitHub. We have seen so far that the GitHub API provides interesting information about code repositories, along with metadata about user activity on those repositories. In the following sections, we will analyze this data to find the most popular repositories by analyzing their descriptions, and then drill down into the watchers, forks, and issues submitted for the emerging technologies. Since technology evolves so rapidly, this approach can help us stay on top of the latest trends. To find out which technologies are trending, we will perform the analysis in a few steps.
Identifying top technologies
First of all, we will use text analytics techniques to identify the most popular technology-related phrases in repositories from 2017. Our analysis will be focused on the most frequent bigrams. We import the nltk.collocations module, which implements n-gram search tools:
import nltk
from nltk.collocations import *
Then, we convert the clean description column into a list of tokens:
list_documents = df['clean'].apply(lambda x: x.split()).tolist()
As we perform the analysis on documents, we use the from_documents method instead of the default from_words. The difference between the two lies in the input data format: the method used in our case takes a list of token lists as its argument and searches for n-grams document-wise instead of corpus-wise. This protects against detecting bigrams composed of the last word of one document and the first word of another:
bigram_measures = nltk.collocations.BigramAssocMeasures()
bigram_finder = BigramCollocationFinder.from_documents(list_documents)
We take into account only bigrams that appear at least three times in our document set:
bigram_finder.apply_freq_filter(3)
We can use different association measures to find the best bigrams, such as raw frequency, PMI, Student's t, or chi-squared. We are mostly interested in the raw frequency measure, which is the simplest and most convenient indicator in our case. We get the top 20 bigrams according to the raw_freq measure:
bigrams = bigram_finder.nbest(bigram_measures.raw_freq, 20)
We can also obtain their scores by applying the score_ngrams method:
scores = bigram_finder.score_ngrams(bigram_measures.raw_freq)
All the other measures are implemented on BigramAssocMeasures. To try them, you can replace raw_freq with, respectively, pmi, student_t, or chi_sq. However, to create a visualization we will need the actual number of occurrences instead of scores. We create a list by using the ngram_fd.items() method and sort it in descending order.
ngram = list(bigram_finder.ngram_fd.items())
ngram.sort(key=lambda item: item[-1], reverse=True)
This gives us a list of tuples, each containing a bigram tuple and its frequency.
We transform it into a simple list of tuples in which the bigram tokens are joined:
frequency = [(" ".join(k), v) for k, v in ngram]
For simplicity, we put the frequency list into a dataframe:
df = pd.DataFrame(frequency)
And then, we plot the top 20 technologies in a bar chart:
import matplotlib.pyplot as plt
plt.style.use('ggplot')
df.set_index([0], inplace=True)
df.sort_values(by=[1], ascending=False).head(20).plot(kind='barh')
plt.title('Trending Technologies')
plt.ylabel('Technology')
plt.xlabel('Popularity')
plt.legend().set_visible(False)
plt.axvline(x=14, color='b', label='Average', linestyle='--', linewidth=3)
for custom in [0, 10, 14]:
    plt.text(14.2, custom, "Neural Networks", fontsize=12, va='center',
             bbox=dict(boxstyle='square', fc='white', ec='none'))
plt.show()
We've added an additional vertical line and text labels that help us group all the technologies related to neural networks. The grouping is done manually, by selecting elements by their indices ((0, 10, 14) in this case), and can be useful for interpretation. The preceding analysis provides us with an interesting set of the most popular technologies on GitHub. It includes topics from software engineering, programming languages, and artificial intelligence. An important thing to note is that neural network technology emerges more than once, notably as deep learning, TensorFlow, and other specific projects. This is not surprising, since neural networks, a key component of artificial intelligence, have been talked about and practiced heavily over the last few years. So, if you're an aspiring programmer interested in AI and machine learning, this is a field to dive into!
Programming languages
The next step in our analysis is to compare the popularity of different programming languages, based on samples of the top 1,000 most popular repositories per year. Firstly, we get the data for the last three years:
queries = ["created:>2017-01-01",
           "created:2015-01-01..2015-12-31",
           "created:2016-01-01..2016-12-31"]
We reuse the search_repo_paging function from Part 1 to collect the data from the GitHub API, and we concatenate the results into a new dataframe:
df = pd.DataFrame()
for query in queries:
    data = search_repo_paging(query)
    data = pd.io.json.json_normalize(data)
    df = pd.concat([df, data])
We convert the dataframe to a time series based on the created_at column:
df['created_at'] = df['created_at'].apply(pd.to_datetime)
df = df.set_index(['created_at'])
Then, we use the groupby aggregation method, which restructures the data by language and year, and we count the number of occurrences per language:
dx = pd.DataFrame(df.groupby(['language', df.index.year])['language'].count())
We represent the results on a bar chart:
fig, ax = plt.subplots()
dx.unstack().plot(kind='bar', title='Programming Languages per Year', ax=ax)
ax.legend(['2015', '2016', '2017'], title='Year')
plt.show()
The preceding graph shows a multitude of programming languages, from assembly, C, C#, Java, and web and mobile languages, to modern ones like Python, Ruby, and Scala. Comparing the three years, we see some interesting trends. We notice that HTML, the bedrock of all web development, has remained very stable over the last three years; it is not something that will be replaced in a hurry. Once very popular, Ruby is now declining in popularity. The popularity of Python, also our language of choice for this book, is going up.
Finally, Swift, the cross-platform programming language initially created by Apple but now open source, is becoming extremely popular. It will be interesting to see whether these trends change or hold true over the next few years.
Programming languages used in top technologies
Now we know the top programming languages and the technologies mentioned in repository descriptions. In this section, we will combine this information to find out the main programming languages used for each technology. We select four technologies from the previous section and print the corresponding programming languages. We look up the column containing the cleaned repository descriptions and create a set of the languages related to each technology; using a set ensures that we get unique values.
technologies_list = ['software engineering', 'deep learning', 'open source', 'exercise practice']
for tech in technologies_list:
    print(tech)
    print(set(df[df['clean'].str.contains(tech)]['language']))
software engineering
{'HTML', 'Java'}
deep learning
{'Jupyter Notebook', None, 'Python'}
open source
{None, 'PHP', 'Java', 'TypeScript', 'Go', 'JavaScript', 'Ruby', 'C++'}
exercise practice
{'CSS', 'JavaScript', 'HTML'}
Having analyzed the descriptions of the top technologies and extracted the programming languages used for them, we notice that deep learning repositories are dominated by Python and Jupyter Notebook, while general open source projects span a much wider range of languages, from PHP and Java to Go and C++. You can do a lot more analysis with this GitHub data. Want to know how? You can check out our book Python Social Media Analytics to get a detailed walkthrough of these topics.
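The code above reuses the search_repo_paging helper from Part 1, which is not reproduced in this excerpt. For readers following along without Part 1, here is a minimal sketch of what such a helper might look like, assuming the public GitHub search API and the requests library; the function name and return format mirror the usage above, but the implementation details (sorting, page count, authentication) are assumptions rather than the book's exact code:
import requests

def search_repo_paging(query, pages=10, per_page=100):
    # Collect repository records for a GitHub search query across several pages.
    results = []
    url = 'https://api.github.com/search/repositories'
    params = {'q': query, 'sort': 'stars', 'order': 'desc', 'per_page': per_page}
    for page in range(1, pages + 1):
        params['page'] = page
        response = requests.get(url, params=params)
        if response.status_code != 200:   # rate limit hit or request error: stop early
            break
        items = response.json().get('items', [])
        if not items:                      # no more results for this query
            break
        results.extend(items)
    return results
In practice you would normally also pass an authentication token in the request headers to raise the GitHub API rate limit.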

Storing Apache Storm data in Elasticsearch

Richa Tripathi
27 Dec 2017
6 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book by Ankit Jain titled Mastering Apache Storm. This book explores various real-time processing functionalities offered by Apache Storm such as parallelism, data partitioning, and more.[/box] In this article, we are going to cover how to store the data processed by Apache Storm in Elasticsearch. Elasticsearch is an open source, distributed search engine built on Lucene. It provides multitenant-capable, full-text search. Though Apache Storm is meant for real-time data processing, in most cases we need to store the processed data in a data store so that it can be used for further batch analysis and for running batch queries on the stored data. We assume that Elasticsearch is running in your environment. If you don't have a running Elasticsearch cluster, please refer to https://www.elastic.co/guide/en/elasticsearch/reference/2.3/_installation.html to install it on one of your boxes. Go through the following steps to integrate Storm with Elasticsearch:
1. Create a Maven project using com.stormadvance as the groupId and storm_elasticsearch as the artifactId.
2. Add the following dependencies to the pom.xml file:
<dependencies>
  <dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>2.4.4</version>
  </dependency>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>3.8.1</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>1.0.2</version>
    <scope>provided</scope>
  </dependency>
</dependencies>
3. Create an ElasticSearchOperation class in the com.stormadvance.storm_elasticsearch package. The ElasticSearchOperation class contains the following method:
insert(Map<String, Object> data, String indexName, String indexMapping, String indexId): This method takes the record data, indexName, indexMapping, and indexId as input, and inserts the input record into Elasticsearch.
The following is the source code of the ElasticSearchOperation class:
public class ElasticSearchOperation {

    private TransportClient client;

    public ElasticSearchOperation(List<String> esNodes) throws Exception {
        try {
            Settings settings = Settings.settingsBuilder()
                    .put("cluster.name", "elasticsearch").build();
            client = TransportClient.builder().settings(settings).build();
            for (String esNode : esNodes) {
                client.addTransportAddress(new InetSocketTransportAddress(
                        InetAddress.getByName(esNode), 9300));
            }
        } catch (Exception e) {
            throw e;
        }
    }

    public void insert(Map<String, Object> data, String indexName,
            String indexMapping, String indexId) {
        client.prepareIndex(indexName, indexMapping, indexId)
                .setSource(data).get();
    }

    public static void main(String[] s) {
        try {
            List<String> esNodes = new ArrayList<String>();
            esNodes.add("127.0.0.1");
            ElasticSearchOperation elasticSearchOperation = new ElasticSearchOperation(esNodes);
            Map<String, Object> data = new HashMap<String, Object>();
            data.put("name", "name");
            data.put("add", "add");
            elasticSearchOperation.insert(data, "indexName", "indexMapping",
                    UUID.randomUUID().toString());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
4. Create a SampleSpout class in the com.stormadvance.storm_elasticsearch package. This class generates random records and passes them to the next component (bolt) in the topology.
The following is the format of the record generated by the SampleSpout class:
["john","watson","abc"]
The following is the source code of the SampleSpout class:
public class SampleSpout extends BaseRichSpout {
    private static final long serialVersionUID = 1L;

    private SpoutOutputCollector spoutOutputCollector;

    private static final Map<Integer, String> FIRSTNAMEMAP = new HashMap<Integer, String>();
    static {
        FIRSTNAMEMAP.put(0, "john");
        FIRSTNAMEMAP.put(1, "nick");
        FIRSTNAMEMAP.put(2, "mick");
        FIRSTNAMEMAP.put(3, "tom");
        FIRSTNAMEMAP.put(4, "jerry");
    }

    private static final Map<Integer, String> LASTNAME = new HashMap<Integer, String>();
    static {
        LASTNAME.put(0, "anderson");
        LASTNAME.put(1, "watson");
        LASTNAME.put(2, "ponting");
        LASTNAME.put(3, "dravid");
        LASTNAME.put(4, "lara");
    }

    private static final Map<Integer, String> COMPANYNAME = new HashMap<Integer, String>();
    static {
        COMPANYNAME.put(0, "abc");
        COMPANYNAME.put(1, "dfg");
        COMPANYNAME.put(2, "pqr");
        COMPANYNAME.put(3, "ecd");
        COMPANYNAME.put(4, "awe");
    }

    public void open(Map conf, TopologyContext context,
            SpoutOutputCollector spoutOutputCollector) {
        // open the spout
        this.spoutOutputCollector = spoutOutputCollector;
    }

    public void nextTuple() {
        // The Storm cluster repeatedly calls this method to emit a
        // continuous stream of tuples.
        final Random rand = new Random();
        // generate a random number from 0 to 4
        int randomNumber = rand.nextInt(5);
        spoutOutputCollector.emit(new Values(FIRSTNAMEMAP.get(randomNumber),
                LASTNAME.get(randomNumber), COMPANYNAME.get(randomNumber)));
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // declares the fields firstName, lastName, and companyName
        declarer.declare(new Fields("firstName", "lastName", "companyName"));
    }
}
5. Create an ESBolt class in the com.stormadvance.storm_elasticsearch package. This bolt receives the tuples emitted by the SampleSpout class, converts them into a Map structure, and then calls the insert() method of the ElasticSearchOperation class to insert the record into Elasticsearch. The following is the source code of the ESBolt class:
public class ESBolt implements IBasicBolt {
    private static final long serialVersionUID = 2L;

    private ElasticSearchOperation elasticSearchOperation;
    private List<String> esNodes;

    /**
     * @param esNodes
     */
    public ESBolt(List<String> esNodes) {
        this.esNodes = esNodes;
    }

    public void execute(Tuple input, BasicOutputCollector collector) {
        Map<String, Object> personalMap = new HashMap<String, Object>();
        // fields: "firstName", "lastName", "companyName"
        personalMap.put("firstName", input.getValueByField("firstName"));
        personalMap.put("lastName", input.getValueByField("lastName"));
        personalMap.put("companyName", input.getValueByField("companyName"));
        elasticSearchOperation.insert(personalMap, "person", "personmapping",
                UUID.randomUUID().toString());
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
    }

    public Map<String, Object> getComponentConfiguration() {
        return null;
    }

    public void prepare(Map stormConf, TopologyContext context) {
        try {
            // create the instance of the ElasticSearchOperation class
            elasticSearchOperation = new ElasticSearchOperation(esNodes);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public void cleanup() {
    }
}
6. Create an ESTopology class in the com.stormadvance.storm_elasticsearch package. This class creates an instance of the spout and bolt classes and chains them together using a TopologyBuilder class.
The following is the implementation of the main class:
public class ESTopology {
    public static void main(String[] args) throws AlreadyAliveException,
            InvalidTopologyException {
        TopologyBuilder builder = new TopologyBuilder();

        // ES node list
        List<String> esNodes = new ArrayList<String>();
        esNodes.add("10.191.209.14");

        // set the spout class
        builder.setSpout("spout", new SampleSpout(), 2);
        // set the ES bolt class
        builder.setBolt("bolt", new ESBolt(esNodes), 2)
                .shuffleGrouping("spout");

        Config conf = new Config();
        conf.setDebug(true);

        // create an instance of the LocalCluster class for
        // executing the topology in local mode
        LocalCluster cluster = new LocalCluster();

        // ESTopology is the name of the submitted topology
        cluster.submitTopology("ESTopology", conf, builder.createTopology());
        try {
            Thread.sleep(60000);
        } catch (Exception exception) {
            System.out.println("Thread interrupted exception : " + exception);
        }
        System.out.println("Stopped Called : ");
        // kill the ESTopology topology
        cluster.killTopology("ESTopology");
        // shut down the local Storm test cluster
        cluster.shutdown();
    }
}
To summarize, we covered how to store the data processed by Apache Storm in Elasticsearch by connecting to the Elasticsearch nodes from inside the Storm bolts. If you enjoyed this post, check out the book Mastering Apache Storm to learn more about the different kinds of real-time processing techniques used to create distributed applications.
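Once the topology has run for a while, it can be handy to verify that documents are actually landing in the person index. The original chapter does not include such a check; the following is a small, hedged sketch that queries Elasticsearch's REST search endpoint using Python's requests library (the host and port are assumptions taken from the topology above; adjust them to match your cluster):
import json
import requests

# Assumed Elasticsearch HTTP endpoint; replace with one of your cluster nodes.
ES_HOST = "http://10.191.209.14:9200"

# Fetch a handful of documents from the index/type written by ESBolt.
response = requests.get(ES_HOST + "/person/personmapping/_search",
                        params={"size": 5})
hits = response.json().get("hits", {})
print("total documents:", hits.get("total"))
for hit in hits.get("hits", []):
    print(json.dumps(hit.get("_source"), indent=2))
If the topology is emitting correctly, you should see documents containing the firstName, lastName, and companyName fields.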

18 striking AI Trends to watch in 2018 - Part 1

Sugandha Lahoti
27 Dec 2017
14 min read
Artificial Intelligence is the talk of the town. It has evolved past merely being a buzzword in 2016, to be used in a more practical manner in 2017. As 2018 rolls out, we will gradually notice AI transitioning into a necessity. We have prepared a detailed report, on what we can expect from AI in the upcoming year. So sit back, relax, and enjoy the ride through the future. (Don’t forget to wear your VR headgear! ) Here are 18 things that will happen in 2018 that are either AI driven or driving AI: Artificial General Intelligence may gain major traction in research. We will turn to AI enabled solution to solve mission-critical problems. Machine Learning adoption in business will see rapid growth. Safety, ethics, and transparency will become an integral part of AI application design conversations. Mainstream adoption of AI on mobile devices Major research on data efficient learning methods AI personal assistants will continue to get smarter Race to conquer the AI optimized hardware market will heat up further We will see closer AI integration into our everyday lives. The cryptocurrency hype will normalize and pave way for AI-powered Blockchain applications. Advancements in AI and Quantum Computing will share a symbiotic relationship Deep learning will continue to play a significant role in AI development progress. AI will be on both sides of the cybersecurity challenge. Augmented reality content will be brought to smartphones. Reinforcement learning will be applied to a large number of real-world situations. Robotics development will be powered by Deep Reinforcement learning and Meta-learning Rise in immersive media experiences enabled by AI. A large number of organizations will use Digital Twin. 1. General AI: AGI may start gaining traction in research. AlphaZero is only the beginning. 2017 saw Google’s AlphaGo Zero (and later AlphaZero) beat human players at Go, Chess, and other games. In addition to this, computers are now able to recognize images, understand speech, drive cars, and diagnose diseases better with time. AGI is an advancement of AI which deals with bringing machine intelligence as close to humans as possible. So, machines can possibly do any intellectual task that a human can! The success of AlphaGo covered one of the crucial aspects of AGI systems—the ability to learn continually, avoiding catastrophic forgetting. However, there is a lot more to achieving human-level general intelligence than the ability to learn continually. For instance, AI systems of today can draw on skills it learned on one game to play another. But they lack the ability to generalize the learned skill. Unlike humans, these systems do not seek solutions from previous experiences. An AI system cannot ponder and reflect on a new task, analyze its capabilities, and work out how best to apply them. In 2018, we expect to see advanced research in the areas of deep reinforcement learning, meta-learning, transfer learning, evolutionary algorithms and other areas that aid in developing AGI systems. Detailed aspects of these ideas are highlighted in later points. We can indeed say, Artificial General Intelligence is inching closer than ever before and 2018 is expected to cover major ground in that direction. 2. Enterprise AI: Machine Learning adoption in enterprises will see rapid growth. 
2017 saw a rise in cloud offerings by major tech players, such as the Amazon Sagemaker, Microsoft Azure Cloud, Google Cloud Platform, allowing business professionals and innovators to transfer labor-intensive research and analysis to the cloud. Cloud is a $130 billion industry as of now, and it is projected to grow.  Statista carried out a survey to present the level of AI adoption among businesses worldwide, as of 2017.  Almost 80% of the participants had incorporated some or other form of AI into their organizations or planned to do so in the future. Source: https://www.statista.com/statistics/747790/worldwide-level-of-ai-adoption-business/ According to a report from Deloitte, medium and large enterprises are set to double their usage of machine learning by the end of 2018. Apart from these, 2018 will see better data visualization techniques, powered by machine learning, which is a critical aspect of every business.  Artificial intelligence is going to automate the cycle of report generation and KPI analysis, and also, bring in deeper analysis of consumer behavior. Also with abundant Big data sources coming into the picture, BI tools powered by AI will emerge, which can harness the raw computing power of voluminous big data for data models to become streamlined and efficient. 3. Transformative AI: We will turn to AI enabled solutions to solve mission-critical problems. 2018 will see the involvement of AI in more and more mission-critical problems that can have world-changing consequences: read enabling genetic engineering, solving the energy crisis, space exploration, slowing climate change, smart cities, reducing starvation through precision farming, elder care etc. Recently NASA revealed the discovery of a new exoplanet, using data crunched from Machine learning and AI. With this recent reveal, more AI techniques would be used for space exploration and to find other exoplanets. We will also see the real-world deployment of AI applications. So it will not be only about academic research, but also about industry readiness. 2018 could very well be the year when AI becomes real for medicine. According to Mark Michalski, executive director, Massachusetts General Hospital and Brigham and Women’s Center for Clinical Data Science, “By the end of next year, a large number of leading healthcare systems are predicted to have adopted some form of AI within their diagnostic groups.”  We would also see the rise of robot assistants, such as virtual nurses, diagnostic apps in smartphones, and real clinical robots that can monitor patients, take care of the elderly, alert doctors, and send notifications in case of emergency. More research will be done on how AI enabled technology can help in difficult to diagnose areas in health care like mental health, the onset of hereditary diseases among others. Facebook's attempt at detection of potential suicidal messages using AI is a sign of things to come in this direction. As we explore AI enabled solutions to solve problems that have a serious impact on individuals and societies at large, considering the ethical and moral implications of such solutions will become central to developing them, let alone hard to ignore. 4. Safe AI: Safety, Ethics, and Transparency in AI applications will become integral to conversations on AI adoption and app design. The rise of machine learning capabilities has also given rise to forms of bias, stereotyping and unfair determination in such systems. 
2017 saw some high profile news stories about gender bias, object recognition datasets like MS COCO, to racial disparities in education AI systems. At NIPS 2017, Kate Crawford talked about bias in machine learning systems which resonated greatly with the community and became pivotal to starting conversations and thinking by other influencers on how to address the problems raised.  DeepMind also launched a new unit, the DeepMind Ethics & Society,  to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI for the benefit of all. Independent bodies like IEEE also pushed for standards in it’s ethically aligned design paper. As news about the bro culture in Silicon Valley and the lack of diversity in the tech sector continued to stay in the news all of 2017, it hit closer home as the year came to an end, when Kristian Lum, Lead Statistician at HRDAG, described her experiences with harassment as a graduate student at prominent stat conferences. This has had a butterfly effect of sorts with many more women coming forward to raise the issue in the ML/AI community. They talked about the formation of a stronger code of conduct by boards of key conferences such as NIPS among others. Eric Horvitz, a Microsoft research director, called Lum’s post a "powerful and important report." Jeff Dean, head of Google’s Brain AI unit applauded Lum for having the courage to speak about this behavior. Other key influencers from the ML and statisticians community also spoke in support of Lum and added their views on how to tackle the problem. While the road to recovery is long and machines with moral intelligence may be decades away, 2018 is expected to start that journey in the right direction by including safety, ethics, and transparency in AI/ML systems. Instead of just thinking about ML contributing to decision making in say hiring or criminal justice, data scientists would begin to think of the potential role of ML in the harmful representation of human identity. These policies will not only be included in the development of larger AI ecosystems but also in national and international debates in politics, businesses, and education. 5. Ubiquitous AI: AI will start redefining life as we know it, and we may not even know it happened. Artificial Intelligence will gradually integrate into our everyday lives. We will see it in our everyday decisions like what kind of food we eat, the entertainment we consume, the clothes we wear, etc.  Artificially intelligent systems will get better at complex tasks that humans still take for granted, like walking around a room and over objects. We’re going to see more and more products that contain some form of AI enter our lives. AI enabled stuff will become more common and available. We will also start seeing it in the background for life-altering decisions we make such as what to learn, where to work, whom to love, who our friends are,  whom should we vote for, where should we invest, and where should we live among other things. 6. Embedded AI: Mobile AI means a radically different way of interacting with the world. There is no denying that AI is the power source behind the next generation of smartphones. A large number of organizations are enabling the use of AI in smartphones, whether in the form of deep learning chips, or inbuilt software with AI capabilities. The mobile AI will be a  combination of on-device AI and cloud AI. 
Intelligent phones will have end-to-end capabilities that support coordinated development of chips, devices, and the cloud. The release of iPhone X’s FaceID—which uses a neural network chip to construct a mathematical model of the user’s face— and self-driving cars are only the beginning. As 2018 rolls out we will see vast applications on smartphones and other mobile devices which will run deep neural networks to enable AI. AI going mobile is not just limited to the embedding of neural chips in smartphones. The next generation of mobile networks 5G will soon greet the world. 2018 is going to be a year of closer collaborations and increasing partnerships between telecom service providers, handset makers, chip markers and AI tech enablers/researchers. The Baidu-Huawei partnership—to build an open AI mobile ecosystem, consisting of devices, technology, internet services, and content—is an example of many steps in this direction. We will also see edge computing rapidly becoming a key part of the Industrial Internet of Things (IIoT) to accelerate digital transformation. In combination with cloud computing, other forms of network architectures such as fog and mist would also gain major traction. All of the above will lead to a large-scale implementation of cognitive IoT, which combines traditional IoT implementations with cognitive computing. It will make sensors capable of diagnosing and adapting to their environment without the need for human intervention. Also bringing in the ability to combine multiple data streams that can identify patterns. This means we will be a lot closer to seeing smart cities in action. 7. Data-sparse AI: Research into data efficient learning methods will intensify 2017 saw highly scalable solutions for problems in object detection and recognition, machine translation, text-to-speech, recommender systems, and information retrieval.  The second conference on Machine Translation happened in September 2017.  The 11th ACM Conference on Recommender Systems in August 2017 witnessed a series of papers presentations, featured keynotes, invited talks, tutorials, and workshops in the field of recommendation system. Google launched the Tacotron 2 for generating human-like speech from text. However, most of these researches and systems attain state-of-the-art performance only when trained with large amounts of data. With GDPR and other data regulatory frameworks coming into play, 2018 is expected to witness machine learning systems which can learn efficiently maintaining performance, but in less time and with less data. A data-efficient learning system allows learning in complex domains without requiring large quantities of data. For this, there would be developments in the field of semi-supervised learning techniques, where we can use generative models to better guide the training of discriminative models. More research would happen in the area of transfer learning (reuse generalize knowledge across domains), active learning, one-shot learning, Bayesian optimization as well as other non-parametric methods.  In addition, researchers and organizations will exploit bootstrapping and data augmentation techniques for efficient reuse of available data. Other key trends propelling data efficient learning research are growing in-device/edge computing, advancements in robotics, AGI research, and energy optimization of data centers, among others. 8. Conversational AI: AI personal assistants will continue to get smarter AI-powered virtual assistants are expected to skyrocket in 2018. 
2017 was filled to the brim with new releases. Amazon brought out the Echo Look and the Echo Show. Google made its personal assistant more personal by allowing linking of six accounts to the Google Assistant built into the Home via the Home app. Bank of America unveiled Erica, it’s AI-enabled digital assistant. As 2018 rolls out, AI personal assistants will find its way into an increasing number of homes and consumer gadgets. These include increased availability of AI assistants in our smartphones and smart speakers with built-in support for platforms such as Amazon’s Alexa and Google Assistant. With the beginning of the new year, we can see personal assistants integrating into our daily routines. Developers will build voice support into a host of appliances and gadgets by using various voice assistant platforms. More importantly, developers in 2018 will try their hands on conversational technology which will include emotional sensitivity (affective computing) as well as machine translational technology (the ability to communicate seamlessly between languages). Personal assistants would be able to recognize speech patterns, for instance, of those indicative of wanting help. AI bots may also be utilized for psychiatric counseling or providing support for isolated people.  And it’s all set to begin with the AI assistant summit in San Francisco scheduled on 25 - 26 January 2018. It will witness talks by world's leading innovators in advances in AI Assistants and artificial intelligence. 9. AI Hardware: Race to conquer the AI optimized hardware market will heat up further Top tech companies (read Google, IBM, Intel, Nvidia) are investing heavily in the development of AI/ML optimized hardware. Research and Markets have predicted the global AI chip market will have a growth rate of about 54% between 2017 and 2021. 2018 will see further hardware designs intended to greatly accelerate the next generation of applications and run AI computational jobs. With the beginning of 2018 chip makers will battle it out to determine who creates the hardware that artificial intelligence lives on. Not only that, there would be a rise in the development of new AI products, both for hardware and software platforms that run deep learning programs and algorithms. Also, chips which move away from the traditional one-size-fits-all approach to application-based AI hardware will grow in popularity. 2018 would see hardware which does not only store data, but also transform it into usable information. The trend for AI will head in the direction of task-optimized hardware. 2018 may also see hardware organizations move to software domains and vice-versa. Nvidia, most famous for their Volta GPUs have come up with NVIDIA DGX-1, a software for AI research, designed to streamline the deep learning workflow. More such transitions are expected at the highly anticipated CES 2018. [dropcap]P[/dropcap]hew, that was a lot of writing! But I hope you found it just as interesting to read as I found writing it. However, we are not done yet. And here is part 2 of our 18 AI trends in ‘18. 

Hitting the right notes in 2017: AI in a song for Data Scientists

Aarthi Kumaraswamy
26 Dec 2017
3 min read
A lot, I mean lots and lots of great articles have been written already about AI’s epic journey in 2017. They all generally agree that 2017 sets the stage for AI in very real terms.  We saw immense progress in academia, research and industry in terms of an explosion of new ideas (like capsNets), questioning of established ideas (like backprop, AI black boxes), new methods (Alpha Zero’s self-learning), tools (PyTorch, Gluon, AWS SageMaker), and hardware (quantum computers, AI chips). New and existing players gearing up to tap into this phenomena even as they struggled to tap into the limited talent pool at various conferences and other community hangouts. While we have accelerated the pace of testing and deploying some of those ideas in the real world with self-driving cars, in media & entertainment, among others, progress in building a supportive and sustainable ecosystem has been slow. We also saw conversations on AI ethics, transparency, interpretability, fairness, go mainstream alongside broader contexts such as national policies, corporate cultural reformation setting the tone of those conversations. While anxiety over losing jobs to robots keeps reaching new heights proportional to the cryptocurrency hype, we saw humanoids gain citizenship, residency and even talk about contesting in an election! It has been nothing short of the stuff, legendary tales are made of: struggle, confusion, magic, awe, love, fear, disgust, inspiring heroes, powerful villains, misunderstood monsters, inner demons and guardian angels. And stories worth telling must have songs written about them! Here’s our ode to AI Highlights in 2017 while paying homage to an all-time favorite: ‘A few of my favorite things’ from Sound of Music. Next year, our AI friends will probably join us behind the scenes in the making of another homage to the extraordinary advances in data science, machine learning, and AI. [box type="shadow" align="" class="" width=""] Stripes on horses and horsetails on zebras Bright funny faces in bowls full of rameN Brown furry bears rolled into pandAs These are a few of my favorite thinGs   TensorFlow projects and crisp algo models Libratus’ poker faces, AlphaGo Zero’s gaming caboodles Cars that drive and drones that fly with the moon on their wings These are a few of my favorite things   Interpreting AI black boxes, using Python hashes Kaggle frenemies and the ones from ML MOOC classes R white spaces that melt into strings These are a few of my favorite things   When models don’t converge, and networks just forget When I am sad I simply remember my favorite things And then I don’t feel so bad[/box]   PS: We had to leave out many other significant developments in the above cover as we are limited in our creative repertoire. We invite you to join in and help us write an extended version together! The idea is to make learning about data science easy, accessible, fun and memorable!    