
How-To Tutorials - Data

1204 Articles

10 reasons why data scientists love Jupyter notebooks

Aarthi Kumaraswamy
04 Apr 2018
5 min read
Over the last twenty years, Python has been increasingly used for scientific computing and data analysis. Today, the main advantage of Python, and one of the main reasons for its popularity, is that it brings scientific computing features to a general-purpose language that is used across many research areas and industries. This makes the transition from research to production much easier.

IPython is a Python library that was originally meant to improve the default interactive console provided by Python and to make it scientist-friendly. In 2011, ten years after the first release of IPython, the IPython Notebook was introduced. This web-based interface to IPython combines code, text, mathematical expressions, inline plots, interactive figures, widgets, graphical interfaces, and other rich media within a standalone, sharable web document. The platform provides an ideal gateway to interactive scientific computing and data analysis, and IPython has become essential to researchers, engineers, data scientists, teachers, and their students.

Within a few years, IPython gained incredible popularity among the scientific and engineering communities, and the Notebook started to support more and more programming languages beyond Python. In 2014, the IPython developers announced the Jupyter project, an initiative created to improve the implementation of the Notebook and make it language-agnostic by design. The name of the project reflects the importance of three of the main scientific computing languages supported by the Notebook: Julia, Python, and R. Today, Jupyter is an ecosystem in its own right that comprises several alternative notebook interfaces (JupyterLab, nteract, Hydrogen, and others), interactive visualization libraries, and authoring tools compatible with notebooks. Jupyter has its own conference, JupyterCon, and the project has received funding from several companies as well as the Alfred P. Sloan Foundation and the Gordon and Betty Moore Foundation.

Beyond this rich legacy and the ecosystem it offers developers, here are ten more reasons to start using Jupyter notebooks for your next data science project, if you aren't already:

1. All in one place: The Jupyter Notebook is a web-based interactive environment that combines code, rich text, images, videos, animations, mathematical equations, plots, maps, interactive figures and widgets, and graphical user interfaces into a single document.
2. Easy to share: Notebooks are saved as structured text files (JSON format), which makes them easily shareable.
3. Easy to convert: Jupyter comes with a special tool, nbconvert, which converts notebooks to other formats such as HTML and PDF. Another online tool, nbviewer, allows us to render a publicly available notebook directly in the browser.
4. Language independent: The architecture of Jupyter is language independent. The decoupling between the client and the kernel makes it possible to write kernels in any language.
5. Easy to create kernel wrappers: Jupyter brings a lightweight interface for kernel languages that can be wrapped in Python. Wrapper kernels can implement optional methods, notably for code completion and code inspection.
6. Easy to customize: The Jupyter interface can be used to create an entirely customized experience in the Jupyter Notebook (or in another client application such as the console).
7. Extensions with custom magic commands: You can create IPython extensions with custom magic commands to make interactive computing even easier. Many third-party extensions and magic commands exist, for example, the %%cython magic that allows you to write Cython code directly in a notebook.
8. Stress-free reproducible experiments: Jupyter notebooks can help you conduct efficient and reproducible interactive computing experiments with ease, and let you keep a detailed record of your work. The ease of use of the Jupyter Notebook means that you don't have to worry about reproducibility: do all of your interactive work in notebooks, put them under version control, and commit regularly. Don't forget to refactor your code into independent, reusable components.
9. Effective teaching-cum-learning tool: The Jupyter Notebook is not only a tool for scientific research and data analysis but also a great tool for teaching. An example is IPython Blocks, a library that allows you or your students to create grids of colorful blocks.
10. Interactive code and data exploration: The ipywidgets package provides many common user interface controls for exploring code and data interactively (a short sketch follows at the end of this article).

You enjoyed an excerpt from Cyrille Rossant's book, IPython Cookbook, Second Edition. This book contains 100+ recipes for high-performance scientific computing and data analysis, from the latest IPython/Jupyter features to the most advanced tricks, to help you write better and faster code. For free recipes from the book, head over to the IPython Cookbook GitHub page. If you loved what you saw, support Cyrille's work by buying a copy of the book today!

Related Jupyter articles:

Latest Jupyter news updates:
- Is JupyterLab all set to phase out Jupyter Notebooks?
- What's new in Jupyter Notebook 5.3.0
- 3 ways JupyterLab will revolutionize Interactive Computing

Jupyter notebooks tutorials:
- Getting started with the Jupyter notebook (part 1)
- Jupyter and Python Scripting
- Jupyter as a Data Laboratory: Part 1
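As a quick illustration of reason 10 above, here is a minimal sketch (not from the book) of what interactive exploration with ipywidgets looks like; it assumes ipywidgets and Matplotlib are installed and that the code is run inside a notebook cell:

```python
# Run inside a Jupyter notebook cell.
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact

def plot_sine(frequency=1.0, amplitude=1.0):
    x = np.linspace(0, 2 * np.pi, 200)
    plt.plot(x, amplitude * np.sin(frequency * x))
    plt.ylim(-3, 3)
    plt.show()

# Two sliders appear above the figure and re-render it on every change.
interact(plot_sine, frequency=(0.5, 5.0, 0.5), amplitude=(0.5, 3.0, 0.5))
```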


Generative Models in action: How to create a Van Gogh with Neural Artistic Style Transfer

Sunith Shetty
03 Apr 2018
14 min read
In today's tutorial, we will learn the principles behind neural artistic style transfer and show a working example that transfers the style of Van Gogh's art onto an image.

Neural artistic style transfer

An image can be considered as a combination of style and content. The artistic style transfer technique transforms an image to look like a painting with a specific painting style. We will see how to code this idea up. The loss function compares the generated image with the content of the photo and the style of the painting. Hence, the optimization is carried out on the image pixels, rather than on the weights of the network. Two values are calculated: one by comparing the content of the photo with the generated image, and the other by comparing the style of the painting with the generated image.

Content loss

Since raw pixels are not a good choice, we will use the CNN features of various layers, as they are a better representation of the content. The initial layers capture high-frequency features such as edges, corners, and textures, but the later layers represent objects, and hence are better for content. These later layers compare content object to object rather than pixel to pixel. But for this, we need to first import the required libraries, using the following code:

import numpy as np
from PIL import Image
from scipy.optimize import fmin_l_bfgs_b
from scipy.misc import imsave
from vgg16_avg import VGG16_Avg
from keras import metrics
from keras.models import Model
from keras import backend as K

Now, let's load the required image, using the following command:

content_image = Image.open(work_dir + 'bird_orig.png')

We will use a bird image for this instance. As we are using the VGG architecture for extracting the features, the mean of all the ImageNet images has to be subtracted from all the images, as shown in the following code:

imagenet_mean = np.array([123.68, 116.779, 103.939], dtype=np.float32)

def subtract_imagenet_mean(image):
    return (image - imagenet_mean)[:, :, :, ::-1]

Note that the channels are also reversed. The preprocess function takes the generated image, subtracts the mean, and then reverses the channels. The deprocess function reverses the effect of the preprocessing step, as shown in the following code:

def add_imagenet_mean(image, s):
    return np.clip(image.reshape(s)[:, :, :, ::-1] + imagenet_mean, 0, 255)

First, we will see how to create an image with the content taken from another image. This is a process of creating an image from random noise. The content used here is the sum of the activations in some layer. We will minimize the loss between the content of the random noise and that of the image, which is termed the content loss. This loss is similar to a pixel-wise loss but applied to layer activations, and hence captures the content while leaving out the noise. Any CNN architecture can be used to do forward inference on the content image and the random noise. The activations are taken and the mean squared error is calculated, comparing the activations of these two outputs. The pixels of the random image are updated while the CNN weights are frozen. We will freeze the VGG network for this case. Now, the VGG model can be loaded. Generated images are very sensitive to subsampling techniques such as max pooling, because getting back the pixel values from max pooling is not possible. Hence, average pooling is a smoother choice than max pooling.
The VGG model variant with average pooling is loaded as shown here:

vgg_model = VGG16_Avg(include_top=False)

Note that the weights of this model are the same as the original, even though the pooling type has been changed. The ResNet and Inception models are not suited for this because of their inability to provide various abstractions. We will take the activations from the last convolutional layer of the VGG model, namely block5_conv1, while the model is kept frozen. This is the third layer from the end of the VGG, with a wide receptive field. The code for the same is given here for your reference:

content_layer = vgg_model.get_layer('block5_conv1').output

Now, a new model is created with a truncated VGG, up to the layer that gives good features. Hence, the image can be loaded now and used to carry out the forward inference to get the actual activations. A TensorFlow variable is created to capture the activations, using the following code:

content_model = Model(vgg_model.input, content_layer)
content_image_array = subtract_imagenet_mean(np.expand_dims(np.array(content_image), 0))
content_image_shape = content_image_array.shape
target = K.variable(content_model.predict(content_image_array))

Let's define an evaluator class to compute the loss and gradients of the image. The following class returns the loss and gradient values at any point of the iteration:

class ConvexOptimiser(object):
    def __init__(self, cost_function, tensor_shape):
        self.cost_function = cost_function
        self.tensor_shape = tensor_shape
        self.gradient_values = None

    def loss(self, point):
        loss_value, self.gradient_values = self.cost_function([point.reshape(self.tensor_shape)])
        return loss_value.astype(np.float64)

    def gradients(self, point):
        return self.gradient_values.flatten().astype(np.float64)

The loss function can be defined as the mean squared error between the activation values at specific convolutional layers. The loss is computed between the layer activations of the generated image and those of the original content photo, as shown here:

mse_loss = metrics.mean_squared_error(content_layer, target)

The gradients of the loss can be computed with respect to the input of the model, as shown:

grads = K.gradients(mse_loss, vgg_model.input)

The input to the function is the input of the model, and the output is the array of loss and gradient values, as shown:

cost_function = K.function([vgg_model.input], [mse_loss] + grads)

This loss function is convex and hence deterministic to optimize, so SGD is not required:

optimiser = ConvexOptimiser(cost_function, content_image_shape)

It can be optimized using a simple optimizer instead. We can also save the image at every step of the iteration. We define the class in such a way that the gradients are accessible, as we are using SciPy's L-BFGS optimizer (fmin_l_bfgs_b, imported earlier) for the final optimization. The optimization loop can be defined using the following code:

def optimise(optimiser, iterations, point, tensor_shape, file_name):
    for i in range(iterations):
        point, min_val, info = fmin_l_bfgs_b(optimiser.loss, point.flatten(),
                                             fprime=optimiser.gradients, maxfun=20)
        point = np.clip(point, -127, 127)
        print('Current loss value:', min_val)
        imsave(work_dir + 'gen_' + file_name + '_{}.png'.format(i),
               add_imagenet_mean(point.copy(), tensor_shape)[0])
    return point

The optimise function takes the loss wrapper, a starting point, and the gradients, and returns the updated point.
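If the fmin_l_bfgs_b interface is unfamiliar, here is a tiny self-contained example (not from the book) of the same loss/gradients calling convention on a toy convex problem:

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

# Minimise f(x) = ||x - 3||^2 with an analytic gradient, mirroring how
# ConvexOptimiser.loss and ConvexOptimiser.gradients are passed in above.
def loss(x):
    return float(np.sum((x - 3.0) ** 2))

def gradients(x):
    return 2.0 * (x - 3.0)

x0 = np.zeros(5)
x_opt, min_val, info = fmin_l_bfgs_b(loss, x0, fprime=gradients, maxfun=20)
print(min_val)   # close to 0
print(x_opt)     # close to [3. 3. 3. 3. 3.]
```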
A random image needs to be generated so that the content loss can be minimized, using the following code:

def generate_rand_img(shape):
    return np.random.uniform(-2.5, 2.5, shape)

generated_image = generate_rand_img(content_image_shape)

The random image created this way is pure noise. The optimization can be run for 10 iterations to see the results, as shown:

iterations = 10
generated_image = optimise(optimiser, iterations, generated_image, content_image_shape, 'content')

If everything goes well, the loss should print as shown here, over the iterations:

Current loss value: 73.2010421753
Current loss value: 22.7840042114
Current loss value: 12.6585302353
Current loss value: 8.53817081451
Current loss value: 6.64649534225
Current loss value: 5.56395864487
Current loss value: 4.83072710037
Current loss value: 4.32800722122
Current loss value: 3.94804215431
Current loss value: 3.66387653351

The image that is generated now almost looks like a bird, and the optimization can be run for further iterations to improve it. The optimizer took the image and updated its pixels so that the content stays the same. Though the result is rougher than the original, it reproduces the content of the image to a certain extent. The images produced across iterations give a good intuition of how the image is generated. There is no batching involved in this process. In the next section, we will see how to create an image in the style of a painting.

Style loss using the Gram matrix

After creating an image that has the content of the original image, we will see how to create an image with just the style. Style can be thought of as a mix of the colour and texture of an image. For that purpose, we will define the style loss. First, we will load the image and convert it to an array, as shown in the following code:

style_image = Image.open(work_dir + 'starry_night.png')
style_image = style_image.resize(np.divide(style_image.size, 3.5).astype('int32'))

The style image loaded here is Van Gogh's The Starry Night. Now, we will preprocess this image by changing the channels, using the following code:

style_image_array = subtract_imagenet_mean(np.expand_dims(style_image, 0)[:, :, :, :3])
style_image_shape = style_image_array.shape

For the style, we will consider several layers of the model, which is created as follows:

model = VGG16_Avg(include_top=False, input_shape=style_image_shape[1:])
outputs = {l.name: l.output for l in model.layers}

Now, we will take multiple layers as an array of outputs from the first few convolutional blocks, using the following code:

layers = [outputs['block{}_conv1'.format(o)] for o in range(1,3)]

A new model is now created that outputs all of those layers, and the target variables are assigned, using the following code:

layers_model = Model(model.input, layers)
targs = [K.variable(o) for o in layers_model.predict(style_image_array)]

Style loss is calculated using the Gram matrix. The Gram matrix is the product of a matrix and its transpose; the activation values are simply transposed and multiplied. This matrix is then used for computing the error between the style image and the random image. The Gram matrix loses the location information but preserves the texture information.
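To see why the Gram matrix discards layout but keeps texture, here is a small self-contained NumPy check (not from the book): shuffling the spatial positions of a feature map leaves its Gram matrix unchanged.

```python
import numpy as np

# Hypothetical feature map of shape (height, width, channels)
h, w, c = 4, 4, 3
features = np.random.rand(h, w, c)

def gram(feature_map):
    # Flatten the spatial dimensions, keeping channels: shape (channels, h*w)
    flat = feature_map.reshape(-1, feature_map.shape[-1]).T
    return flat @ flat.T / flat.size

# Randomly permute the spatial positions, keeping each channel vector intact
perm = np.random.permutation(h * w)
shuffled = features.reshape(h * w, c)[perm].reshape(h, w, c)

# The channel-wise correlations (texture) are identical, even though the layout changed
print(np.allclose(gram(features), gram(shuffled)))   # True
```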
We will define the Gram matrix using the following code:

def grammian_matrix(matrix):
    flattened_matrix = K.batch_flatten(K.permute_dimensions(matrix, (2, 0, 1)))
    matrix_transpose_dot = K.dot(flattened_matrix, K.transpose(flattened_matrix))
    element_count = matrix.get_shape().num_elements()
    return matrix_transpose_dot / element_count

As you might be aware by now, it is a measure of the correlation between pairs of channels. The height and width dimensions are flattened out, so no local information is included, as the coordinate information is disregarded. The style loss computes the mean squared error between the Gram matrix of the input image and that of the target, as shown in the following code:

def style_mse_loss(x, y):
    return metrics.mse(grammian_matrix(x), grammian_matrix(y))

Now, let's compute the loss by summing it over the activations from the various layers, using the following code:

style_loss = sum(style_mse_loss(l1[0], l2[0]) for l1, l2 in zip(layers, targs))
grads = K.gradients(style_loss, model.input)
style_fn = K.function([model.input], [style_loss] + grads)
optimiser = ConvexOptimiser(style_fn, style_image_shape)

We then solve it in the same way we did before, by creating a random image (a Gaussian filter can also be applied to smooth this random starting image), as shown in the following code:

generated_image = generate_rand_img(style_image_shape)

The optimization can be run for 10 iterations to see the results, as shown below:

generated_image = optimise(optimiser, iterations, generated_image, style_image_shape, 'style')

If everything goes well, the solver should print loss values similar to the following:

Current loss value: 5462.45556641
Current loss value: 189.738555908
Current loss value: 82.4192581177
Current loss value: 55.6530838013
Current loss value: 37.215713501
Current loss value: 24.4533748627
Current loss value: 15.5914745331
Current loss value: 10.9425945282
Current loss value: 7.66888141632
Current loss value: 5.84042310715

Here, from random noise, we have created an image with a particular painting style and without any location information. In the next section, we will see how to combine the content loss and the style loss.

Style transfer

Now we know how to reconstruct an image, as well as how to construct an image that captures the style of an original image. The obvious idea is to combine these two approaches by weighting and adding the two loss functions, as shown in the following code:

w, h = style_image.size
src = content_image_array[:, :h, :w]

Like before, we're going to grab a sequence of layer outputs to compute the style loss. However, we still only need one layer output to compute the content loss. How do we know which layer to grab? As we discussed earlier, the lower the layer, the more exact the content reconstruction will be. In merging content reconstruction with style, we might expect that a looser reconstruction of the content will allow more room for the style to have an effect. Furthermore, a later layer ensures that the image still looks like the same subject, even if it doesn't have the same details.
The following code is used for this process:

style_layers = [outputs['block{}_conv2'.format(o)] for o in range(1,6)]
content_name = 'block4_conv2'
content_layer = outputs[content_name]

Now, a separate model for the style is created with the required output layers, using the following code:

style_model = Model(model.input, style_layers)
style_targs = [K.variable(o) for o in style_model.predict(style_image_array)]

We will also create another model for the content with the content layer, using the following code:

content_model = Model(model.input, content_layer)
content_targ = K.variable(content_model.predict(src))

Now, merging the two approaches is as simple as merging their respective loss functions. Note that, as opposed to our previous functions, this function produces three separate types of outputs:

- One for the original image
- One for the image whose style we're emulating
- One for the random image whose pixels we are training

One way for us to tune how the reconstructions mix is by changing the factor on the content loss, which we have here as 1/10. If we increase that denominator, the style will have a larger effect on the image, and if it's too large, the original content of the image will be obscured by an unstructured style. Likewise, if it is too small, the image will not have enough style. We will use the following weights for this process:

style_wgts = [0.05, 0.2, 0.2, 0.25, 0.3]

The loss function takes both the style and content layers, as shown here:

loss = sum(style_mse_loss(l1[0], l2[0]) * w
           for l1, l2, w in zip(style_layers, style_targs, style_wgts))
loss += metrics.mse(content_layer, content_targ) / 10
grads = K.gradients(loss, model.input)
transfer_fn = K.function([model.input], [loss] + grads)
optimiser = ConvexOptimiser(transfer_fn, src.shape)

We will run the solver for 10 iterations as before, using the following code:

iterations = 10
generated_image = generate_rand_img(src.shape)
generated_image = optimise(optimiser, iterations, generated_image, src.shape, 'transfer')

The loss values should be printed as shown here:

Current loss value: 2557.953125
Current loss value: 732.533630371
Current loss value: 488.321166992
Current loss value: 385.827178955
Current loss value: 330.915924072
Current loss value: 293.238189697
Current loss value: 262.066864014
Current loss value: 239.34185791
Current loss value: 218.086700439
Current loss value: 203.045211792

These results are remarkable. Each one of them does a fantastic job of recreating the original image in the style of the artist. This concludes the style transfer section. The operation is really slow but works with any pair of images. In the next section, we will see how to use a similar idea to create a super-resolution network. There are several ways to make this better, such as:

- Adding a Gaussian filter to the random image
- Adding different weights to the layers
- Using different layers and weights for the content
- Initializing with an image rather than random noise
- Preserving colour
- Using masks to specify what is required
- Converting a sketch to a painting
- Drawing a sketch and creating the image from it

Any image can be converted to an artistic style by training a CNN to output such an image. To summarize, we learned to transfer the style from one image to another while preserving the content as is. You read an excerpt from a book written by Rajalingappaa Shanmugamani titled Deep Learning for Computer Vision. In this book, you will learn how to model and train advanced neural networks to implement a variety of Computer Vision tasks.


Top 5 free Business Intelligence tools

Amey Varangaonkar
02 Apr 2018
7 min read
There is no shortage of business intelligence tools available to modern businesses today. But they're not always easy on the pocket. Great functionality, stylish UI and ease of use always comes with a price tag. If you can afford it, great - if not, it's time to start thinking about open source and free business intelligence tools.  Free business intelligence tools can power your business Take a look at 5 of the best free or open source business intelligence tools. They're all as effective and powerful as anything you'd pay a premium for. You simply need to know what you're doing with them. BIRT BIRT (Business Intelligence and Reporting Tools) is an open-source project that offers industry-standard reporting and BI capabilities. It's available as both a desktop and web application. As a top-level project within the umbrella of the Eclipse Foundation, it's got a good pedigree that means you can be confident in its potency. BIRT is especially useful for businesses which have a working environment built around Java and Java EE, as its reporting and charting engines can integrate seamlessly with Java. From creating a range of reports to different types of charts and graphs, BIRT can also be used for advanced analytical tasks. You can learn about the impressive reporting capabilities that BIRT offers on its official features page. Pros: The BIRT platform is one of the most popularly used open source business intelligence tools across the world, with more than 12 million downloads and 2.5 million users across more than 150 countries. With a large community of users, getting started with this tool, or getting solutions to problems that you might come across should be easy. Cons: Some programming experience, preferably in Java, is required to make the best use of this tool. The complex functions and features may not be easy to grasp for absolute beginners. Jaspersoft Community Jaspersoft, formerly known as Panscopic, is one of the leading open source suites of tools for a variety of reporting and business intelligence tasks. It was acquired by TIBCO in 2014 in a deal worth approximately $185 million, and has grown in popularity ever since. Jaspersoft began with the promise of “saving the world from the oppression of complex, heavyweight business intelligence”, and the Community edition offers the following set of tools for easier reporting and analytics: JasperReports Server: This tool is used for designing standalone or embeddable reports which can be used across third party applications JasperReports Library: You can design pixel-perfect reports from different kinds of datasets Jaspersoft ETL: This is a popular warehousing tool powered by Talend for extracting useful insights from a variety of data sources Jaspersoft Studio: Eclipse-based report designer for JasperReports and JasperReports Server Visualize.js: A JavaScript-based framework to embed Jaspersoft applications Pros: Jaspersoft, like BIRT, has a large community of developers looking to actively solve any problem they might come across. More often than not, your queries are bound to be answered satisfactorily. Cons: Absolute beginners might struggle with the variety of offerings and their applications. The suite of Jaspersoft tools is more suited for someone with an intermediate programming experience. KNIME KNIME is a free, open-source data analytics and business intelligence company that offers a robust platform for reporting and data integration. 
Used commonly by data scientists and analysts, KNIME offers features for data mining, machine learning and data visualization in order to build effective end-to-end data pipelines. There are 2 major product offerings from KNIME: KNIME Analytics Platform KNIME Cloud Analytics Platform Considered to be one of the most established players in the Analytics and business intelligence market, KNIME has customers in over 60 countries worldwide. You can often find KNIME featured as a ‘Leader’ in the Gartner Magic Quadrant. It finds applications in a variety of enterprise use-cases, including pharma, CRM, finance, and more. Pros: If you want to leverage the power of predictive analytics and machine learning, KNIME offers you just the perfect environment to build industry-standard, accurate models. You can create a wide variety of visualizations including complex plots and charts, and perform complex ETL tasks with relative ease. Cons: KNIME is not suited for beginners. It's built instead for established professionals such as data scientists and analysts who want to conduct analyses quickly and efficiently. Tableau Public Tableau Public’s promise is simple - “Visualize and share your data in minutes - for free”. Tableau is one of the most popular business intelligence tools out there, rivalling the likes of Qlik, Spotfire, Power BI among others. Along with its enterprise edition which offers premium analytics, reporting and dashboarding features, Tableau also offers a freely available Public version for effective visual analytics. Last year, Tableau released an announcement that the interactive stories and reports published on the Tableau Public platform had received more than 1 billion views worldwide. Leading news organizations around the world, including BBC and CNBC, use Tableau Public for data visualization. Pros: Tableau Public is a very popular tool with a very large community of users. If you find yourself struggling to understand or execute any feature on this platform, there are ample number of solutions available on the community forums and also on forums such as Stack Overflow. The quality of visualizations is industry-standard, and you can publish them anywhere on the web without any hassle. Cons:It’s quite difficult to think of any drawback of using Tableau Public, to be honest. Having limited features as compared to the enterprise edition of Tableau is obviously a shortcoming, though. [box type="info" align="" class="" width=""]Editor’s tip: If you want to get started with Tableau Public and create interesting data stories using it, Creating Data Stories with Tableau Public is one book you do not want to miss out on![/box] Microsoft Power BI Microsoft Power BI is a paid, enterprise-ready offering by Microsoft to empower businesses to find intuitive data insights across a variety of data formats. Microsoft also offers a stripped-down version of Power BI with limited Business Intelligence capabilities called as Power BI Desktop. In this free version, users are offered up to 1 GB of data to work on, and the ability to create different kinds of visualizations on CSV data as well as Excel spreadsheets. The reports and visualizations built using Power BI Desktop can be viewed on mobile devices as well as on browsers, and can be updated on the go. Pros: Free, very easy to use. Power BI Desktop allows you to create intuitive visualizations and reports. For beginners looking to learn the basics of Business Intelligence and data visualization, this is a great tool to use. 
You can also work with any kind of data and connect it to Power BI Desktop effortlessly. Cons: You don't get the full suite of features on Power BI Desktop which make Power BI such an elegant and wonderful Business Intelligence tool. Also, new reports and dashboards cannot be created via the mobile platform. [box type="info" align="" class="" width=""]Editor's Tip: If you want to get started with Microsoft Power BI, or want handy tips on using Power BI effectively, our Microsoft Power BI Cookbook will prove to be of great use![/box] There are a few other free and open source tools which are quite effective and deserve an honorable mention in this article. We were absolutely spoilt for choice, and choosing the top 5 tools among all these options was a lot of hard work! Some other tools which deserve an honorable mention are Dataiku Free Edition, Pentaho Community Edition, QlikView Personal Edition, and RapidMiner, among others. You may want to check them out as well. What do you think about this list? Are there any other free or open source business intelligence tools which should've made it into the list?


3 ways to deploy a QT and OpenCV application

Gebin George
02 Apr 2018
16 min read
[box type="note" align="" class="" width=""]This article is an excerpt from the book, Computer Vision with OpenCV 3 and Qt5 written by Amin Ahmadi Tazehkandi. This book covers how to build, test, and deploy Qt and OpenCV apps, either dynamically or statically.[/box]

Today, we will learn three different methods to deploy a Qt + OpenCV application. It is extremely important to provide end users with an application package that contains everything it needs to run on the target platform, and that demands very little or no effort from the users in terms of taking care of the required dependencies. Achieving this kind of works-out-of-the-box condition for an application relies mostly on the type of linking (dynamic or static) used to create the application, and also on the specifications of the target operating system.

Deploying using static linking

Deploying an application statically means that your application will run on its own, and it eliminates having to take care of almost all of the needed dependencies, since they are already inside the executable itself. It is enough to simply make sure you select the Release mode while building your application. When your application is built in the Release mode, you can simply pick up the produced executable file and ship it to your users.

If you deploy your application to Windows users, you might still face an error about missing runtime libraries when your application is executed. The reason for this error is that on Windows, even when building your Qt application statically, you still need to make sure that the Visual C++ Redistributables exist on the target system. This is required for C++ applications that are built by using Microsoft Visual C++, and the version of the required redistributables corresponds to the Microsoft Visual Studio installed on your computer. In our case, the official title of the installer for these libraries is Visual C++ Redistributables for Visual Studio 2015, and it can be downloaded from the following link: https://www.microsoft.com/en-us/download/details.aspx?id=48145. It is a common practice to include the redistributables installer inside the installer for our application and perform a silent installation of them if they are not already installed. This process happens with most of the applications you use on your Windows PCs, most of the time without you even noticing it.

We already quite briefly talked about the advantages (fewer files to deploy) and disadvantages (bigger executable size) of static linking. But in the context of deployment, there are some more complexities that need to be considered. So, here is another (more complete) list of disadvantages when using static linking to deploy your applications:

- The build takes more time and the executable size gets bigger and bigger.
- You can't mix static and shared (dynamic) Qt libraries, which means you can't use the power of plugins and extend your application without building everything from scratch.
- Static linking, in a sense, means hiding the libraries used to build an application. Unfortunately, this option is not offered with all libraries, and failing to comply with it can lead to licensing issues with your application. This complexity arises partly because the Qt Framework uses some third-party libraries that do not offer the same set of licensing options as Qt itself.
Talking about licensing issues is not a discussion suitable for this book, so we'll suffice with mentioning that you must be careful when you plan to create commercial applications using static linking of Qt libraries. For a detailed list of licenses used by third-party libraries within Qt, you can always refer to the Licenses Used in Qt web page at the following link: http://doc.qt.io/qt-5/licenses-used-in-qt.html

Static linking, even with all of the disadvantages that we just mentioned, is still an option, and a good one in some cases, provided that you can comply with the licensing options of the Qt Framework. For instance, on Linux operating systems, where creating an installer for our application requires some extra work and care, static linking can greatly reduce the effort needed to deploy applications (merely a copy and paste). So, the final decision of whether to use static linking or not is mostly up to you and how you plan to deploy your application. Making this important decision will be much easier by the end of this chapter, when you have an overview of the possible linking and deployment methods.

Deploying using dynamic linking

When you deploy an application built with Qt and OpenCV using shared libraries (or dynamic linking), you need to make sure that the executable of your application is able to reach the runtime libraries of Qt and OpenCV, in order to load and use them. This reachability or visibility of runtime libraries can have different meanings depending on the operating system. For instance, on Windows, you need to copy the runtime libraries to the same folder where your application executable resides, or put them in a folder that is appended to the PATH environment value.

The Qt Framework offers command-line tools to simplify the deployment of Qt applications on Windows and macOS. As mentioned before, the first thing you need to do is to make sure your application is built in the Release mode, and not Debug mode. Then, if you are on Windows, first copy the executable (let us assume it is called app.exe) from the build folder into a separate folder (which we will refer to as deploy_path) and execute the following commands using a command-line instance:

cd deploy_path
QT_PATH\bin\windeployqt app.exe

The windeployqt tool is a deployment helper that simplifies the process of copying the required Qt runtime libraries into the same folder as the application executable. It simply takes an executable as a parameter and, after determining the modules used to create it, copies all required runtime libraries and any additional required dependencies, such as Qt plugins, translations, and so on.

This takes care of all the required Qt runtime libraries, but we still need to take care of the OpenCV runtime libraries. If you followed all of the steps in Chapter 1, Introduction to OpenCV and Qt, for building the OpenCV libraries dynamically, then you only need to manually copy the opencv_world330.dll and opencv_ffmpeg330.dll files from the OpenCV installation folder (inside the x86\vc14\bin folder) into the same folder where your application executable resides. We didn't really go into the benefits of turning on the BUILD_opencv_world option when we built OpenCV in the early chapters of the book; however, it should be clear now that this simplifies the deployment and usage of the OpenCV libraries, by requiring only a single entry for LIBS in the *.pro file and manually copying only a single file (not counting the ffmpeg library) when deploying OpenCV applications.
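If you would rather not copy those DLLs by hand after every rebuild, a small helper script can do it for you. The sketch below uses Python purely as an illustration; the OPENCV_BIN and DEPLOY_PATH folder names are assumptions and need to be adjusted to your own setup, and a plain batch file would work just as well:

```python
# Hypothetical helper: copy the OpenCV runtime DLLs next to app.exe after a
# Release build on Windows. Adjust the two paths below to your own setup.
import shutil
from pathlib import Path

OPENCV_BIN = Path(r"C:\opencv\build\install\x86\vc14\bin")  # assumed OpenCV install dir
DEPLOY_PATH = Path(r"C:\deploy_path")                        # folder that holds app.exe

for dll in ("opencv_world330.dll", "opencv_ffmpeg330.dll"):
    shutil.copy2(OPENCV_BIN / dll, DEPLOY_PATH / dll)
    print("copied", dll, "->", DEPLOY_PATH)
```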
It should be also noted that this method has the disadvantage of copying all OpenCV codes (in a single library) along your application even when you do not need or use all of its modules in a project. Also note that on Windows, as mentioned in the Deploying using static linking section, you still need to similarly provide the end users of your application with Microsoft Visual C++ Redistributables. On a macOS operating system, it is also possible to easily deploy applications written using Qt Framework. For this reason, you can use the macdeployqt command-line tool provided by Qt. Similar to windeployqt, which accepts a Windows executable and fills the same folder with the required libraries, macdeployqt accepts a macOS application bundle and makes it deployable by copying all of the required Qt runtimes as private frameworks inside the bundle itself. Here is an example: cd deploy_path QT_PATH/bin/macdeployqt my_app_bundle Optionally, you can also provide an additional -dmg parameter, which leads to the creation of a macOS *.dmg (disk image) file. As for the deployment of OpenCV libraries when dynamic linking is used, you can create an installer using Qt Installer Framework (which we will learn about in the next section), a third-party provider, or a script that makes sure the required runtime libraries are copied to their required folders. This is because of the fact that simply copying your runtime libraries (whether it is OpenCV or anything else) to the same folder as the application executable does not help with making them visible to an application on macOS. The same also applies to the Linux operating system, where unfortunately even a tool for deploying Qt runtime libraries does not exist (at least for the moment), so we also need to take care of Qt libraries in addition to OpenCV libraries, either by using a trusted third-party provider (which you can search for online) or by using the cross-platform installer provided by Qt itself, combined with some scripting to make sure everything is in place when our application is executed. Deploy using Qt Installer Framework Qt Installer Framework allows you to create cross-platform installers of your Qt applications for Windows, macOS, and Linux operating systems. It allows for creating standard installer wizards where the user is taken through consecutive dialogs that provide all the necessary information, and finally display the progress for when the application is being installed and so on, similar to most of installations you have probably faced, and especially the installation of Qt Framework itself. Qt Installer Framework is based on Qt Framework itself but is provided as a different package and does not require Qt SDK (Qt Framework, Qt Creator, and so on) to be present on a computer. It is also possible to use Qt Installer Framework in order to create installer packages for any application, not just Qt applications. In this section, we are going to learn how to create a basic installer using Qt Installer Framework, which takes care of installing your application on a target computer and copying all the necessary dependencies. The result will be a single executable installer file that you can put on a web server to be downloaded or provide it in a USB stick or CD, or any other media type. This example project will help you get started with working your way around the many great capabilities of Qt Installer Framework by yourself. You can use the following link to download and install the Qt Installer Framework. 
Make sure to simply download the latest version when you use this link, or any other source for downloading it. At the moment, the latest version is 3.0.2: https://download.qt.io/official_releases/qt-installer-framework After you have downloaded and installed Qt Installer Framework, you can start creating the required files that Qt Installer Framework needs in order to create an installer. You can do this by simply browsing to the Qt Installer Framework, and from the examples folder copying the tutorial folder, which is also a template in case you want to quickly rename and re-edit all of the files and create your installer quickly. We will go the other way and create them manually; first because we want to understand the structure of the required files and folders for the Qt Installer Framework, and second, because it is still quite easy and simple. Here are the required steps for creating an installer: Assuming that you have already finished developing your Qt and OpenCV application, you can start by creating a new folder that will contain the installer files. Let's assume this folder is called deploy. Create an XML file inside the deploy folder and name it config.xml. This XML file must contain the following: <?xml version="1.0" encoding="UTF-8"?> <Installer> <Name>Your application</Name> <Version>1.0.0</Version> <Title>Your application Installer</Title> <Publisher>Your vendor</Publisher> <StartMenuDir>Super App</StartMenuDir> <TargetDir>@HomeDir@/InstallationDirectory</TargetDir> </Installer> Make sure to replace the required XML fields in the preceding code with information relevant to your application and then save and close this file: Now, create a folder named packages inside the deploy folder. This folder will contain the individual packages that you want the user to be able to install, or make them mandatory or optional so that the user can review and decide what will be installed. In the case of simpler Windows applications that are written using Qt and OpenCV, usually it is enough to have just a single package that includes the required files to run your application, and even do silent installation of Microsoft Visual C++ Redistributables. But for more complex cases, and especially when you want to have more control over individual installable elements of your application, you can also go for two or more packages, or even sub-packages. This is done by using domain-like folder names for each package. Each package folder can have a name like com.vendor.product, where vendor and product are replaced by the developer name or company and the application. A subpackage (or sub-component) of a package can be identified by adding. subproduct to the name of the parent package. For instance, you can have the following folders inside the packages folder: com.vendor.product com.vendor.product.subproduct1 com.vendor.product.subproduct2 com.vendor.product.subproduct1.subsubproduct1 … This can go on for as many products (packages) and sub-products (sub-packages) as we like. For our example case, let's create a single folder that contains our executable, since it describes it all and you can create additional packages by simply adding them to the packages folder. Let's name it something like com.amin.qtcvapp. Now, follow these required steps: Now, create two folders inside the new package folder that we created, the com.amin.qtcvapp folder. Rename them to data and meta. These two folders must exist inside all packages. Copy your application files inside the data folder. 
This folder will be extracted into the target folder exactly as it is (we will talk about setting the target folder of a package in the later steps). In case you are planning to create more than one package, make sure to separate their data correctly and in a way that makes sense. Of course, you won't be faced with any errors if you fail to do so, but the users of your application will probably be confused, for instance by skipping a package that should be installed at all times and ending up with an installed application that does not work. Now, switch to the meta folder, create the following two files inside that folder, and fill them with the code provided for each one of them. The package.xml file should contain the following. There's no need to mention that you must fill the fields inside the XML with values relevant to your package:

<?xml version="1.0" encoding="UTF-8"?>
<Package>
    <DisplayName>The component</DisplayName>
    <Description>Install this component.</Description>
    <Version>1.0.0</Version>
    <ReleaseDate>1984-09-16</ReleaseDate>
    <Default>script</Default>
    <Script>installscript.qs</Script>
</Package>

The script in the previous XML file, which is probably the most important part of the creation of an installer, refers to a Qt Installer Script (*.qs file), named installscript.qs, which can be used to further customize the package, its target folder, and so on. So, let us create a file with that name (installscript.qs) inside the meta folder, and use the following code inside it:

function Component()
{
    // initializations go here
}

Component.prototype.isDefault = function()
{
    // select (true) or unselect (false) the component by default
    return true;
}

Component.prototype.createOperations = function()
{
    try {
        // call the base create operations function
        component.createOperations();
    } catch (e) {
        console.log(e);
    }
}

This is the most basic component script, which customizes our package (well, it only performs the default actions), and it can optionally be extended to change the target folder, create shortcuts in the Start menu or on the desktop (on Windows), and so on. It is a good idea to keep an eye on the Qt Installer Framework documentation and learn about its scripting to be able to create more powerful installers that can put all of the required dependencies of your app in place automatically. You can also browse through all of the examples inside the examples folder of the Qt Installer Framework and learn how to deal with different deployment cases. For instance, you can try to create individual packages for the Qt and OpenCV dependencies and allow the users to deselect them, in case they already have the Qt runtime libraries on their computer.

The last step is to use the binarycreator tool to create our single, standalone installer. Simply run the following command using a Command Prompt (or Terminal) instance:

binarycreator -p packages -c config.xml myinstaller

The binarycreator tool is located inside the Qt Installer Framework bin folder. It requires two parameters that we have already prepared: -p must be followed by our packages folder and -c must be followed by the configuration (config.xml) file. After executing this command, you will get myinstaller (on Windows, you can append *.exe to it), which you can execute to install your application. This single file should contain all of the required files needed to run your application, and the rest is taken care of.
You only need to provide a download link to this file, or provide it on a CD, to your users. This default and most basic installer contains most of the usual dialogs you would expect when installing an application. If you go to the installation folder, you will notice that it contains a few more files than you put inside the data folder of your package. Those files are required by the installer to handle modifications and to uninstall your application. For instance, the users of your application can easily uninstall it by executing the maintenance tool executable, which produces another simple and user-friendly dialog to handle the uninstall process.

We saw how to deploy a Qt + OpenCV application using static linking, dynamic linking, and the Qt Installer Framework. If you found our post useful, do check out the book Computer Vision with OpenCV 3 and Qt5 to accentuate your OpenCV applications by developing them with Qt.


Analyzing Moby Dick through frequency distribution with NLTK

Richard Gall
30 Mar 2018
2 min read
What is frequency distribution and why does it matter?

In the context of natural language processing, a frequency distribution is simply a tally of the number of times each unique word is used in a text. Recording the individual word counts of a text can help us better understand not only what topics are being discussed and what information is important, but also how that information is being discussed. It's a useful method for better understanding language and different types of texts. This video tutorial has been taken from Natural Language Processing with Python.

Word frequency distribution is central to performing content analysis with NLP, and its applications are wide ranging. From understanding and characterizing an author's writing style to analyzing the vocabulary of rappers, the technique is playing a large part in wider cultural conversations. It's also used in psychological research in a number of ways to analyze how patients use language to form frameworks for thinking about themselves and the world. Trivial or serious, word frequency distribution is becoming more and more important in the world of research. Of course, manually creating such word frequency distribution models would be time consuming and inconvenient for data scientists. Fortunately for us, NLTK, Python's toolkit for natural language processing, makes life much easier.

How to use NLTK for frequency distribution

Take a look at how to use NLTK to create a frequency distribution for Herman Melville's Moby Dick in the video tutorial above. In it, you'll find a step-by-step guide to performing an important data analysis task. Once you've done that, you can try it for yourself, or have a go at performing a similar analysis on another data set. Read Analyzing Textual information using the NLTK library. Learn more about natural language processing - read How to create a conversational assistant or chatbot using Python.
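For reference, a minimal sketch of that analysis looks something like this (an assumption of the author of this cleanup, not the tutorial itself; it relies on the fact that Moby Dick ships with NLTK's Gutenberg corpus and that matplotlib is installed for the plot):

```python
import nltk
from nltk.corpus import gutenberg
from nltk import FreqDist

nltk.download('gutenberg')  # one-off download of the corpus

# Keep alphabetic tokens only and normalize case before counting
words = [w.lower() for w in gutenberg.words('melville-moby_dick.txt') if w.isalpha()]
fdist = FreqDist(words)

print(fdist.most_common(10))  # the ten most frequent words in Moby Dick
fdist.plot(30)                # frequency plot of the top 30 words
```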


How to perform full-text search (FTS) in PostgreSQL

Sugandha Lahoti
27 Mar 2018
8 min read
[box type="note" align="" class="" width=""]This article is an excerpt from the book, Mastering  PostgreSQL 10, written by Hans-Jürgen Schönig. This book provides expert techniques on PostgreSQL 10 development and administration.[/box] If you are looking up names or for simple strings, you are usually querying the entire content of a field. In Full-Text-Search (FTS), this is different. The purpose of the full-text search is to look for words or groups of words, which can be found in a text. Therefore, FTS is more of a contains operation as you are basically never looking for an exact string. In this article, we will show how to perform a full-text search operation in PostgreSQL. In PostgreSQL, FTS can be done using GIN indexes. The idea is to dissect a text, extract valuable lexemes (= "preprocessed tokens of words"), and index those elements rather than the underlying text. To make your search even more successful, those words are preprocessed. Here is an example: test=# SELECT to_tsvector('english', 'A car, I want a car. I would not even mind having many cars'); to_tsvector --------------------------------------------------------------- 'car':2,6,14 'even':10 'mani':13 'mind':11 'want':4 'would':8 (1 row) The example shows a simple sentence. The to_tsvector function will take the string, apply English rules, and perform a stemming process. Based on the configuration (english), PostgreSQL will parse the string, throw away stop words, and stem individual words. For example, car and cars will be transformed to the car. Note that this is not about finding the word stem. In the case of many, PostgreSQL will simply transform the string to mani by applying standard rules working nicely with the English language. Note that the output of the to_tsvector function is highly language dependent. If you tell PostgreSQL to treat the string as dutch, the result will be totally different: test=# SELECT to_tsvector('dutch', 'A car, I want a car. I would not even mind having many cars'); to_tsvector ----------------------------------------------------------------- 'a':1,5 'car':2,6,14 'even':10 'having':12 'i':3,7 'many':13 'mind':11 'not':9 'would':8 (1 row) To figure out which configurations are supported, consider running the following query: SELECT cfgname FROM pg_ts_config; Comparing strings After taking a brief look at the stemming process, it is time to figure out how a stemmed text can be compared to a user query. The following code snippet checks for the word wanted: test=# SELECT to_tsvector('english', 'A car, I want a car. I would not even mind having many cars') @@ to_tsquery('english', 'wanted'); ?column? ---------- t (1 row) Note that wanted does not actually show up in the original text. Still, PostgreSQL will return true. The reason is that want and wanted are both transformed to the same lexeme, so the result is true. Practically, this makes a lot of sense. Imagine you are looking for a car on Google. If you find pages selling cars, this is totally fine. Finding common lexemes is, therefore, an intelligent idea. Sometimes, people are not only looking for a single word, but want to find a set of words. With to_tsquery, this is possible, as shown in the next example: test=# SELECT to_tsvector('english', 'A car, I want a car. I would not even mind having many cars') @@ to_tsquery('english', 'wanted & bmw'); ?column? ---------- f (1 row) In this case, false is returned because bmw cannot be found in our input string. In the to_tsquery function, & means and and | means or. 
It is therefore easily possible to build complex search strings.

Defining GIN indexes

If you want to apply text search to a column or a group of columns, there are basically two choices:

- Create a functional index using GIN
- Add a column containing ready-to-use tsvectors and a trigger to keep them in sync

In this section, both options will be outlined. To show how things work, I have created some sample data:

test=# CREATE TABLE t_fts AS SELECT comment FROM pg_available_extensions;
SELECT 43

Indexing the column directly with a functional index is definitely a slower but more space-efficient way to get things done:

test=# CREATE INDEX idx_fts_func ON t_fts USING gin(to_tsvector('english', comment));
CREATE INDEX

Deploying an index on the function is easy, but it can lead to some overhead. Adding a materialized column needs more space, but will lead to better runtime behavior:

test=# ALTER TABLE t_fts ADD COLUMN ts tsvector;
ALTER TABLE

The only trouble is, how do you keep this column in sync? The answer is by using a trigger:

test=# CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE ON t_fts
       FOR EACH ROW EXECUTE PROCEDURE
       tsvector_update_trigger(ts, 'pg_catalog.english', 'comment');

Fortunately, PostgreSQL already provides a C function that can be used by a trigger to sync the tsvector column. Just pass the name of the tsvector column, the desired language, as well as a couple of columns to the function, and you are already done. The trigger function will take care of all that is needed. Note that a trigger will always operate within the same transaction as the statement making the modification. Therefore, there is no risk of being inconsistent.

Debugging your search

Sometimes, it is not quite clear why a query matches a given search string. To debug your query, PostgreSQL offers the ts_debug function. From a user's point of view, it can be used just like to_tsvector. It reveals a lot about the inner workings of the FTS infrastructure:

test=# \x
Expanded display is on.
test=# SELECT * FROM ts_debug('english', 'go to www.postgresql-support.de');
-[ RECORD 1 ]+----------------------------
alias        | asciiword
description  | Word, all ASCII
token        | go
dictionaries | {english_stem}
dictionary   | english_stem
lexemes      | {go}
-[ RECORD 2 ]+----------------------------
alias        | blank
description  | Space symbols
token        |
dictionaries | {}
dictionary   |
lexemes      |
-[ RECORD 3 ]+----------------------------
alias        | asciiword
description  | Word, all ASCII
token        | to
dictionaries | {english_stem}
dictionary   | english_stem
lexemes      | {}
-[ RECORD 4 ]+----------------------------
alias        | blank
description  | Space symbols
token        |
dictionaries | {}
dictionary   |
lexemes      |
-[ RECORD 5 ]+----------------------------
alias        | host
description  | Host
token        | www.postgresql-support.de
dictionaries | {simple}
dictionary   | simple
lexemes      | {www.postgresql-support.de}

ts_debug will list every token found and display information about each token. You will see which token the parser found, the dictionary used, as well as the type of object. In my example, blanks, words, and hosts have been found. You might also see numbers, email addresses, and a lot more. Depending on the type of string, PostgreSQL will handle things differently. For example, it makes absolutely no sense to stem hostnames and email addresses.

Gathering word statistics

Full-text search can handle a lot of data.
To give end users more insights into their texts, PostgreSQL offers the ts_stat function, which returns a list of words:

SELECT * FROM ts_stat('SELECT to_tsvector(''english'', comment)
                       FROM pg_available_extensions')
ORDER BY 2 DESC
LIMIT 3;

   word   | ndoc | nentry
----------+------+--------
 function |   10 |     10
 data     |   10 |     10
 type     |    7 |      7
(3 rows)

The word column contains the stemmed word, ndoc tells us the number of documents in which a certain word occurs, and nentry indicates how often a word was found altogether.

Taking advantage of exclusion operators

So far, indexes have been used to speed things up and to ensure uniqueness. However, a couple of years ago, somebody came up with the idea of using indexes for even more. As you have seen in this chapter, GiST supports operations such as intersects, overlaps, contains, and a lot more. So, why not use those operations to manage data integrity? Here is an example:

test=# CREATE EXTENSION btree_gist;
test=# CREATE TABLE t_reservation (
    room     int,
    from_to  tsrange,
    EXCLUDE USING GiST (room with =, from_to with &&)
);
CREATE TABLE

The EXCLUDE USING GiST clause defines an additional constraint. If you are selling rooms, you might want to allow different rooms to be booked at the same time. However, you don't want to sell the same room twice during the same period. What the EXCLUDE clause says in my example is this: if a room is booked twice at the same time, an error should pop up (the data in from_to must not overlap (&&) if it relates to the same room). The following two rows will not violate the constraint:

test=# INSERT INTO t_reservation VALUES (10, '["2017-01-01", "2017-03-03"]');
INSERT 0 1
test=# INSERT INTO t_reservation VALUES (13, '["2017-01-01", "2017-03-03"]');
INSERT 0 1

However, the next INSERT will cause a violation because the data overlaps:

test=# INSERT INTO t_reservation VALUES (13, '["2017-02-02", "2017-08-14"]');
ERROR:  conflicting key value violates exclusion constraint "t_reservation_room_from_to_excl"
DETAIL:  Key (room, from_to)=(13, ["2017-02-02 00:00:00","2017-08-14 00:00:00"]) conflicts with existing key (room, from_to)=(13, ["2017-01-01 00:00:00","2017-03-03 00:00:00"]).

The use of exclusion operators is very useful and provides you with highly advanced means to handle integrity. To summarize, we learnt how to perform full-text search operations in PostgreSQL. If you liked our article, check out the book Mastering PostgreSQL 10 to understand how to perform operations such as indexing, query optimization, concurrent transactions, table partitioning, server tuning, and more.
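As a closing illustration (not part of the book excerpt), here is a minimal Python sketch of how the exclusion constraint above surfaces in application code. It assumes the psycopg2 driver, an otherwise empty t_reservation table as created earlier, and a placeholder connection string; the overlapping booking is rejected by PostgreSQL and shows up as an IntegrityError:

# Sketch: catching an exclusion-constraint violation from Python.
import psycopg2

conn = psycopg2.connect("dbname=test user=postgres")  # placeholder connection string
conn.autocommit = True
cur = conn.cursor()

def book_room(room, period):
    """Try to reserve a room for a period given as tsrange text, e.g. '["2017-02-02","2017-08-14"]'."""
    try:
        cur.execute(
            "INSERT INTO t_reservation (room, from_to) VALUES (%s, %s::tsrange)",
            (room, period),
        )
        print("room", room, "booked for", period)
    except psycopg2.IntegrityError as err:
        # Overlapping periods for the same room violate the EXCLUDE constraint.
        print("booking rejected:", err)

book_room(10, '["2017-01-01","2017-03-03"]')
book_room(13, '["2017-01-01","2017-03-03"]')
book_room(13, '["2017-02-02","2017-08-14"]')  # overlaps the previous booking -> rejected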
Creating 2D and 3D plots using Matplotlib

Pravin Dhandre
22 Mar 2018
10 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book written by L. Felipe Martins, Ruben Oliva Ramos and V Kishore Ayyadevara titled SciPy Recipes. This book provides data science recipes for users to effectively process, manipulate, and visualize massive datasets using SciPy.[/box] In today’s tutorial, we will demonstrate how to create two-dimensional and three-dimensional plots for displaying graphical representation of data using a full-fledged scientific library -  Matplotlib. Creating two-dimensional plots of functions and data We will present the basic kind of plot generated by Matplotlib: a two-dimensional display, with axes, where datasets and functional relationships are represented by lines. Besides the data being displayed, a good graph will contain a title (caption), axes labels, and, perhaps, a legend identifying each line in the plot. Getting ready Start Jupyter and run the following commands in an execution cell: %matplotlib inline import numpy as np import matplotlib.pyplot as plt How to do it… Run the following code in a single Jupyter cell: xvalues = np.linspace(-np.pi, np.pi) yvalues1 = np.sin(xvalues) yvalues2 = np.cos(xvalues) plt.plot(xvalues, yvalues1, lw=2, color='red', label='sin(x)') plt.plot(xvalues, yvalues2, lw=2, color='blue', label='cos(x)') plt.title('Trigonometric Functions') plt.xlabel('x') plt.ylabel('sin(x), cos(x)') plt.axhline(0, lw=0.5, color='black') plt.axvline(0, lw=0.5, color='black') plt.legend() None This code will insert the plot shown in the following screenshot into the Jupyter Notebook: How it works… We start by generating the data to be plotted, with the three following statements: xvalues = np.linspace(-np.pi, np.pi, 300) yvalues1 = np.sin(xvalues) yvalues2 = np.cos(xvalues) We first create an xvalues array, containing 300 equally spaced values between -π and π. We then compute the sine and cosine functions of the values in xvalues, storing the results in the yvalues1 and yvalues2 arrays. Next, we generate the first line plot with the following statement: plt.plot(xvalues, yvalues1, lw=2, color='red', label='sin(x)') The arguments to the plot() function are described as follows: xvalues and yvalues1 are arrays containing, respectively, the x and y coordinates of the points to be plotted. These arrays must have the same length. The remaining arguments are formatting options. lw specifies the line width and color the line color. The label argument is used by the legend() function, discussed as follows. The next line of code generates the second line plot and is similar to the one explained previously. After the line plots are defined, we set the title for the plot and the legends for the axes with the following commands: plt.title('Trigonometric Functions') plt.xlabel('x') plt.ylabel('sin(x), cos(x)') We now generate axis lines with the following statements: plt.axhline(0, lw=0.5, color='black') plt.axvline(0, lw=0.5, color='black') The first arguments in axhline() and axvline() are the locations of the axis lines and the options specify the line width and color. We then add a legend for the plot with the following statement: plt.legend() Matplotlib tries to place the legend intelligently, so that it does not interfere with the plot. In the legend, one item is being generated by each call to the plot() function and the text for each legend is specified in the label option of the plot() function. 
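If you want to reuse this recipe outside of the Notebook, the following sketch (a variation on the code above, not an excerpt from the book) wraps the same calls in a function and writes the figure to disk with plt.savefig; the file name, dpi value, and explicit legend location are arbitrary choices:

# Sketch: the same 2D recipe wrapped in a reusable function that saves the
# figure to a file instead of relying on inline Notebook display.
import numpy as np
import matplotlib.pyplot as plt

def plot_trig(filename="trig_functions.png", n_points=300):
    """Plot sin(x) and cos(x) on [-pi, pi] and save the figure to filename."""
    xvalues = np.linspace(-np.pi, np.pi, n_points)
    plt.figure()
    plt.plot(xvalues, np.sin(xvalues), lw=2, color='red', label='sin(x)')
    plt.plot(xvalues, np.cos(xvalues), lw=2, color='blue', label='cos(x)')
    plt.title('Trigonometric Functions')
    plt.xlabel('x')
    plt.ylabel('sin(x), cos(x)')
    plt.axhline(0, lw=0.5, color='black')
    plt.axvline(0, lw=0.5, color='black')
    plt.legend(loc='upper right')  # an explicit location instead of the automatic one
    plt.savefig(filename, dpi=150)
    plt.close()

plot_trig()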
Generating multiple plots in a single figure Wouldn't it be interesting to know how to generate multiple plots in a single figure? Well, let's get started with that. Getting ready Start Jupyter and run the following three commands in an execution cell: %matplotlib inline import numpy as np import matplotlib.pyplot as plt How to do it… Run the following commands in a Jupyter cell: plt.figure(figsize=(6,6)) xvalues = np.linspace(-2, 2, 100) plt.subplot(2, 2, 1) yvalues = xvalues plt.plot(xvalues, yvalues, color='blue') plt.xlabel('$x$') plt.ylabel('$x$') plt.subplot(2, 2, 2) yvalues = xvalues ** 2 plt.plot(xvalues, yvalues, color='green') plt.xlabel('$x$') plt.ylabel('$x^2$') plt.subplot(2, 2, 3) yvalues = xvalues ** 3 plt.plot(xvalues, yvalues, color='red') plt.xlabel('$x$') plt.ylabel('$x^3$') plt.subplot(2, 2, 4) yvalues = xvalues ** 4 plt.plot(xvalues, yvalues, color='black') plt.xlabel('$x$') plt.ylabel('$x^3$') plt.suptitle('Polynomial Functions') plt.tight_layout() plt.subplots_adjust(top=0.90) None Running this code will produce results like those in the following screenshot: How it works… To start the plotting constructions, we use the figure() function, as shown in the following line of code: plt.figure(figsize=(6,6)) The main purpose of this call is to set the figure size, which needs adjustment, since we plan to make several plots in the same figure. After creating the figure, we add four plots with code, as demonstrated in the following segment: plt.subplot(2, 2, 3) yvalues = xvalues ** 3 plt.plot(xvalues, yvalues, color='red') plt.xlabel('$x$') plt.ylabel('$x^3$') In the first line, the plt.subplot(2, 2, 3) call tells pyplot that we want to organize the plots in a two-by-two layout, that is, in two rows and two columns. The last argument specifies that all following plotting commands should apply to the third plot in the array. Individual plots are numbered, starting with the value 1 and counting across the rows and columns of the plot layout. We then generate the line plot with the following statements: yvalues = xvalues ** 3 plt.plot(xvalues, yvalues, color='red') The first line of the preceding code computes the yvalues array, and the second draws the corresponding graph. Notice that we must set options such as line color individually for each subplot. After the line is plotted, we use the xlabel() and ylabel() functions to create labels for the axes. Notice that these have to be set up for each individual subplot too. After creating the subplots, we explain the subplots: plt.suptitle('Polynomial Functions') sets a common title for all Subplots plt.tight_layout() adjusts the area taken by each subplot, so that axes' legends do not overlap plt.subplots_adjust(top=0.90) adjusts the overall area taken by the plots, so that the title displays correctly Creating three-dimensional plots Matplotlib offers several different ways to visualize three-dimensional data. 
In this recipe, we will demonstrate the following methods: Drawing surfaces plots Drawing two-dimensional contour plots Using color maps and color bars Getting ready Start Jupyter and run the following three commands in an execution cell: %matplotlib inline import numpy as np import matplotlib.pyplot as plt How to do it… Run the following code in a Jupyter code cell: from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm f = lambda x,y: x**3 - 3*x*y**2 fig = plt.figure(figsize=(12,6)) ax = fig.add_subplot(1,2,1,projection='3d') xvalues = np.linspace(-2,2,100) yvalues = np.linspace(-2,2,100) xgrid, ygrid = np.meshgrid(xvalues, yvalues) zvalues = f(xgrid, ygrid) surf = ax.plot_surface(xgrid, ygrid, zvalues, rstride=5, cstride=5, linewidth=0, cmap=cm.plasma) ax = fig.add_subplot(1,2,2) plt.contourf(xgrid, ygrid, zvalues, 30, cmap=cm.plasma) fig.colorbar(surf, aspect=18) plt.tight_layout() None Running this code will produce a plot of the monkey saddle surface, which is a famous example of a surface with a non-standard critical point. The displayed graph is shown in the following screenshot: How it works… We start by importing the Axes3D class from the mpl_toolkits.mplot3d library, which is the Matplotlib object used for creating three-dimensional plots. We also import the cm class, which represents a color map. We then define a function to be plotted, with the following line of code: f = lambda x,y: x**3 - 3*x*y**2 The next step is to define the Figure object and an Axes object with a 3D projection, as done in the following lines of code: fig = plt.figure(figsize=(12,6)) ax = fig.add_subplot(1,2,1,projection='3d') Notice that the approach used here is somewhat different than the other recipes in this chapter. We are assigning the output of the figure() function call to the fig variable and then adding the subplot by calling the add_subplot() method from the fig object. This is the recommended method of creating a three-dimensional plot in the most recent version of Matplotlib. Even in the case of a single plot, the add_subplot() method should be used, in which case the command would be ax = fig.add_subplot(1,1,1,projection='3d'). The next few lines of code, shown as follows, compute the data for the plot: xvalues = np.linspace(-2,2,100) yvalues = np.linspace(-2,2,100) xgrid, ygrid = np.meshgrid(xvalues, yvalues) zvalues = f(xgrid, ygrid) The most important feature of this code is the call to meshgrid(). This is a NumPy convenience function that constructs grids suitable for three-dimensional surface plots. To understand how this function works, run the following code: xvec = np.arange(0, 4) yvec = np.arange(0, 3) xgrid, ygrid = np.meshgrid(xvec, yvec) After running this code, the xgrid array will contain the following values: array([[0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3]]) The ygrid array will contain the following values: array([[0, 0, 0, 0], [1, 1, 1, 1], [2, 2, 2, 2]]) Notice that the two arrays have the same dimensions. Each grid point is represented by a pair of the (xgrid[i,j],ygrid[i,j]) type. This convention makes the computation of a vectorized function on a grid easy and efficient, with the f(xgrid, ygrid) expression. The next step is to generate the surface plot, which is done with the following function call: surf = ax.plot_surface(xgrid, ygrid, zvalues, rstride=5, cstride=5, linewidth=0, cmap=cm.plasma) The first three arguments, xgrid, ygrid, and zvalues, specify the data to be plotted. 
We then use the rstride and cstride options to select a subset of the grid points. Notice that the xvalues and yvalues arrays both have length 100, so that xgrid and ygrid will have 10,000 entries each. Using all grid points would be inefficient and produce a poor plot from the visualization point of view. Thus, we set rstride=5 and cstride=5, which results in a plot containing every fifth point across each row and column of the grid. The next option, linewidth=0, sets the line width of the plot to zero, preventing the display of a wireframe. The final argument, cmap=cm.plasma, specifies the color map for the plot. We use the cm.plasma color map, which has the effect of plotting higher function values with a hotter color. Matplotlib offers a large number of built-in color maps, listed at https://matplotlib.org/examples/color/colormaps_reference.html. Next, we add the filled contour plot with the following code:

ax = fig.add_subplot(1,2,2)
plt.contourf(xgrid, ygrid, zvalues, 30, cmap=cm.plasma)

Notice that, when selecting the subplot, we do not specify the projection option, which is not necessary for two-dimensional plots. The contour plot is generated with the contourf() function. The first three arguments, xgrid, ygrid, and zvalues, specify the data points, and the fourth argument, 30, sets the number of contours. Finally, we set the color map to be the same one used for the surface plot. The final component of the plot is a color bar, which provides a visual representation of the value associated with each color in the plot; it is added with the fig.colorbar(surf, aspect=18) method call. Notice that we have to specify in the first argument which plot the color bar is associated with. The aspect=18 option is used to adjust the aspect ratio of the bar; larger values result in a narrower bar. To finish the plot, we call the tight_layout() function. This adjusts the sizes of each plot so that the axis labels are displayed correctly. We generated 2D and 3D plots using Matplotlib and represented the results of technical computations graphically. If you want to explore other types of plots, such as scatter plots or bar charts, you may read Visualizing 3D plots in Matplotlib 2.0. Do check out the book SciPy Recipes to take advantage of the other libraries in the SciPy stack and perform matrix operations, data wrangling, and advanced computations with ease.
How to secure data in Salesforce Einstein Analytics

Amey Varangaonkar
22 Mar 2018
5 min read
[box type="note" align="" class="" width=""]The following excerpt is taken from the book Learning Einstein Analytics written by Santosh Chitalkar. This book includes techniques to build effective dashboards and Business Intelligence metrics to gain useful insights from data.[/box] Before getting into security in Einstein Analytics, it is important to set up your organization, define user types so that it is available to use. In this article we explore key aspects of security in Einstein Analytics. The following are key points to consider for data security in Salesforce: Salesforce admins can restrict access to data by setting up field-level security and object-level security in Salesforce. These settings prevent data flow from loading sensitive Salesforce data into a dataset. Dataset owners can restrict data access by using row-level security. Analytics supports security predicates, a robust row-level security feature that enables you to model many different types of access control on datasets. Analytics also supports sharing inheritance. Take a look at the following diagram: Salesforce data security In Einstein Analytics, dataflows bring the data to the Analytics Cloud from Salesforce. It is important that Einstein Analytics has all the necessary permissions and access to objects as well as fields. If an object or a field is not accessible to Einstein then the data flow fails and it cannot extract data from Salesforce. So we need to make sure that the required access is given to the integration user and security user. We can configure the permission set for these users. Let’s configure permissions for an integration user by performing the following steps: Switch to classic mode and enter Profiles in the Quick Find / Search… box Select and clone the Analytics Cloud Integration User profile and Analytics Cloud Security User profile for the integration user and security user respectively: Save the cloned profiles and then edit them Set the permission to Read for all objects and fields Save the profile and assign it to users Take a look at the following diagram: Data pulled from Salesforce can be made secure from both sides: Salesforce as well as Einstein Analytics. It is important to understand that Salesforce and Einstein Analytics are two independent databases. So, a user security setting given to Einstein will not affect the data in Salesforce. There are the following ways to secure data pulled from Salesforce: Salesforce Security Einstein Analytics Security Roles and profiles Inheritance security Organization-Wide Defaults (OWD) and record ownership Security predicates Sharing rules Application-level security Sharing mechanism in Einstein All Analytics users start off with Viewer access to the default Shared App that’s available out-of-the-box; administrators can change this default setting to restrict or extend access. All other applications created by individual users are private, by default; the application owner and administrators have Manager access and can extend access to other Users, groups, or roles. The following diagram shows how the sharing mechanism works in Einstein Analytics: Here’s a summary of what users can do with Viewer, Editor, and Manager access: Action / Access level Viewer Editor Manager View dashboards, lenses, and datasets in the application. If the underlying dataset is in a different application than a lens or dashboard, the user must have access to both applications to view the lens or dashboard. Yes Yes Yes See who has access to the application. 
Yes Yes Yes Save contents of the application to another application that the user has Editor or Manager access to. Yes Yes Yes Save changes to existing dashboards, lenses, and datasets in the application (saving dashboards requires the appropriate permission set license and permission). Yes Yes Change the application’s sharing settings. Yes Rename the application. Yes Delete the application. Yes Confidentiality, integrity, and availability together are referred to as the CIA Triad and it is designed to help organizations decide what security policies to implement within the organization. Salesforce knows that keeping information private and restricting access by unauthorized users is essential for business. By sharing the application, we can share a lens, dashboard, and dataset all together with one click. To share the entire application, do the following: Go to your Einstein Analytics and then to Analytics Studio Click on the APPS tab and then the icon for your application that you want to share, as shown in the following screenshot: 3. Click on Share and it will open a new popup window, as shown in the following screenshot: Using this window, you can share the application with an individual user, a group of users, or a particular role. You can define the access level as Viewer, Editor, or Manager After selecting User, click on the user you wish to add and click on Add Save and then close the popup And that’s it. It’s done. Mass-sharing the application Sometimes, we are required to share the application with a wide audience: There are multiple approaches to mass-sharing the Wave application such as by role or by username In Salesforce classic UI, navigate to Setup|Public Groups | New For example, to share a sales application, label a public group as Analytics_Sales_Group Search and add users to a group by Role, Roles and Subordinates, or by Users (username): 5. Search for the Analytics_Sales public group 6. Add the Viewer option as shown in the following screenshot: 7. Click on Save Protecting data from breaches, theft, or from any unauthorized user is very important. And we saw that Einstein Analytics provides the necessary tools to ensure the data is secure. If you found this excerpt useful and want to know more about securing your analytics in Einstein, make sure to check out this book Learning Einstein Analytics.  
How to embed Einstein dashboards on Salesforce Classic

Amey Varangaonkar
21 Mar 2018
5 min read
[box type="note" align="" class="" width=""]The following excerpt is taken from the book Learning Einstein Analytics written by Santosh Chitalkar. This book highlights the key techniques and know-how to unlock critical insights from your data using Salesforce Einstein Analytics.[/box] With Einstein Analytics, users have the power to embed their dashboards on various third-party applications and even on their web applications. In this article, we will show how to embed an Einstein dashboard on Salesforce Classic. In order to start embedding the dashboard, let's create a sample dashboard by performing the following steps: Navigate to Analytics Studio | Create | Dashboard. Add three chart widgets on the dashboard. Click on the Chart button in the middle and select the Opportunity dataset. Select Measures as Sum of Amount and select BillingCountry under Group by. Click on Done. Repeat the second step for the second widget, but select Account Source under Group by and make it a donut chart. Repeat the second step for the third widget but select Stage under Group by and make it a funnel chart. Click on Save (s) and enter Embedding Opportunities in the title field, as shown in the following screenshot: Now that we have created a dashboard, let's embed this dashboard in Salesforce Classic. In order to start embedding the dashboard, exit from the Einstein Analytics platform and go to Classic mode. The user can embed the dashboard on the record detail page layout in Salesforce Classic. The user can view the dashboard, drill in, and apply a filter, just like in the Einstein Analytics window. Let's add the dashboard to the account detail page by performing the following steps: Navigate to Setup | Customize | Accounts | Page Layouts as shown in the following screenshot: Click on Edit of Account Layout and it will open a page layout editor which has two parts: a palette on the upper portion of the screen, and the page layout on the lower portion of the screen. The palette contains the user interface elements that you can add to your page layout, such as Fields, Buttons, Links, and Actions, and Related Lists, as shown in the following screenshot: Click on the Wave Analytics Assets option from the palette and you can see all the dashboards on the right-side panel. Drag and drop a section onto the page layout, name it Einstein Dashboard, and click on OK. Drag and drop the dashboard which you wish to add to the record detail page. We are going to add Embedded Opportunities. Click on Save. Go to any accounting record and you should see a new section within the dashboard: Users can easily configure the embedded dashboards by using attributes. To access the dashboard properties, go to edit page layout again, and go to the section where we added the dashboard to the layout. Hover over the dashboard and click on the Tool icon. It will open an Asset Properties window: The Asset Properties window gives the user the option to change the following features: Width (in pixels or %): This feature allows you to adjust the width of the dashboard section. Height (in pixels): This feature allows you to adjust the height of the dashboard section. Show Title: This feature allows you to display or hide the title of the dashboard. Show Sharing Icon: Using this feature, by default, the share icon is disabled. The Show Sharing Icon option gives the user a flexibility to include the share icon on the dashboard. Show Header: This feature allows you to display or hide the header. 
Hide on error: This feature gives you control over whether the Analytics asset appears if there is an error. Field mapping: Last but not least, field mapping is used to filter the data on the dashboard down to what is relevant to the record. To set up the dashboard to show only the data that's relevant to the record being viewed, use field mapping. Field mapping links data fields in the dashboard to the object's fields. We are using the Embedded Opportunity dashboard; let's add field mapping to it. The following is the general format for field mapping:

{
  "datasets": {
    "datasetName": [{
      "fields": ["Actual field name from object"],
      "filter": {"operator": "matches", "values": ["$datasetFieldName"]}
    }]
  }
}

Let's add field mapping for Account by using the following format:

{
  "datasets": {
    "Account": [{
      "fields": ["Name"],
      "filter": {"operator": "matches", "values": ["$Name"]}
    }]
  }
}

If your dashboard uses multiple datasets, then you can use the following format:

{
  "datasets": {
    "datasetName1": [{
      "fields": ["Actual field name from object"],
      "filter": {"operator": "matches", "values": ["$dataset1FieldName"]}
    }],
    "datasetName2": [{
      "fields": ["Actual field name from object"],
      "filter": {"operator": "matches", "values": ["$dataset2FieldName"]}
    }]
  }
}

Let's add field mapping for Account and Opportunities:

{
  "datasets": {
    "Opportunities": [{
      "fields": ["Account.Name"],
      "filter": {"operator": "matches", "values": ["$Name"]}
    }],
    "Account": [{
      "fields": ["Name"],
      "filter": {"operator": "matches", "values": ["$Name"]}
    }]
  }
}

Now that we have added the field mapping, save the page layout and go to an actual record. Observe that the dashboard is now filtered per record, as shown in the following screenshot: To summarize, we saw that it's fairly easy to embed your custom dashboards in Salesforce. Similarly, you can do so on other platforms such as Lightning, Visualforce pages, and even on your own websites and web applications. If you are keen to learn more, you may check out the book Learning Einstein Analytics.
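If you generate these mappings in code rather than typing the JSON by hand, a small helper keeps the nesting valid. The sketch below is not from the book; it uses only the Python standard library, and the dataset and field names are the ones from the example above (substitute your own):

# Sketch: build the field-mapping JSON programmatically so the braces stay balanced.
import json

def make_mapping(dataset_to_fields):
    """dataset_to_fields maps a dataset name to a list of (dataset_field, record_field) pairs."""
    datasets = {}
    for dataset, pairs in dataset_to_fields.items():
        datasets[dataset] = [
            {
                "fields": [dataset_field],
                # The "$" prefix refers to the field on the record page, as in the example above.
                "filter": {"operator": "matches", "values": ["$" + record_field]},
            }
            for dataset_field, record_field in pairs
        ]
    return {"datasets": datasets}

mapping = make_mapping({
    "Opportunities": [("Account.Name", "Name")],
    "Account": [("Name", "Name")],
})
print(json.dumps(mapping, indent=2))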
How to write high quality code in Python: 15+ tips for data scientists and researchers

Aarthi Kumaraswamy
21 Mar 2018
5 min read
Writing code is easy. Writing high quality code is much harder. Quality is to be understood both in terms of actual code (variable names, comments, docstrings, and so on) and architecture (functions, modules, and classes). In general, coming up with a well-designed code architecture is much more challenging than the implementation itself. In this post, we will give a few tips about how to write high quality code. This is a particularly important topic in academia, as more and more scientists without prior experience in software development need to code. High quality code writing first principles Writing readable code means that other people (or you in a few months or years) will understand it quicker and will be more willing to use it. It also facilitates bug tracking. Modular code is also easier to understand and to reuse. Implementing your program's functionality in independent functions that are organized as a hierarchy of packages and modules is an excellent way of achieving high code quality. It is easier to keep your code loosely coupled when you use functions instead of classes. Spaghetti code is really hard to understand, debug, and reuse. Iterate between bottom-up and top-down approaches while working on a new project. Starting with a bottom-up approach lets you gain experience with the code before you start thinking about the overall architecture of your program. Still, make sure you know where you're going by thinking about how your components will work together. How these high quality code writing first principles translate in Python? Take the time to learn the Python language seriously. Review the list of all modules in the standard library—you may discover that functions you implemented already exist. Learn to write Pythonic code, and do not translate programming idioms from other languages such as Java or C++ to Python. Learn common design patterns; these are general reusable solutions to commonly occurring problems in software engineering. Use assertions throughout your code (the assert keyword) to prevent future bugs (defensive programming). Start writing your code with a bottom-up approach; write independent Python functions that implement focused tasks. Do not hesitate to refactor your code regularly. If your code is becoming too complicated, think about how you can simplify it. Avoid classes when you can. If you can use a function instead of a class, choose the function. A class is only useful when you need to store persistent state between function calls. Make your functions as pure as possible (no side effects). In general, prefer Python native types (lists, tuples, dictionaries, and types from Python's collections module) over custom types (classes). Native types lead to more efficient, readable, and portable code. Choose keyword arguments over positional arguments in your functions. Argument names are easier to remember than argument ordering. They make your functions self-documenting. Name your variables carefully. Names of functions and methods should start with a verb. A variable name should describe what it is. A function name should describe what it does. The importance of naming things well cannot be overstated. Every function should have a docstring describing its purpose, arguments, and return values, as shown in the following example. You can also look at the conventions chosen in popular libraries such as NumPy. The exact convention does not matter, the point is to be consistent within your code. 
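To make the docstring advice concrete, here is an illustrative sketch of such a function (the function itself is just an example): a NumPy-style docstring, keyword arguments, and a defensive assertion, as recommended in the points above:

# Illustrative sketch: a small, pure function with a NumPy-style docstring,
# a keyword argument, and a defensive assertion.
def moving_average(values, window=3):
    """Compute the simple moving average of a sequence.

    Parameters
    ----------
    values : list of float
        The input numbers.
    window : int, optional
        Size of the averaging window (default is 3).

    Returns
    -------
    list of float
        One average per full window, in order.
    """
    assert window > 0, "window must be a positive integer"
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]

print(moving_average([1.0, 2.0, 3.0, 4.0], window=2))  # [1.5, 2.5, 3.5]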
You can use a markup language such as Markdown or reST to do that. Follow (at least partly) Guido van Rossum's Style Guide for Python, also known as Python Enhancement Proposal number 8 (PEP8). It is a long read, but it will help you write well-readable Python code. It covers many little things such as spacing between operators, naming conventions, comments, and docstrings. For instance, you will learn that it is considered a good practice to limit any line of your code to 79 or 99 characters. This way, your code can be correctly displayed in most situations (such as in a command-line interface or on a mobile device) or side by side with another file. Alternatively, you can decide to ignore certain rules. In general, following common guidelines is beneficial on projects involving many developers. You can check your code automatically against most of the style conventions in PEP8 with the pycodestyle Python package. You can also automatically make your code PEP8-compatible with the autopep8 package. Use a tool for static code analysis such as flake8 or Pylint. It lets you find potential errors or low-quality code statically, that is, without running your code. Use blank lines to avoid cluttering your code (see PEP8). You can also demarcate sections in a long Python module with salient comments. A Python module should not contain more than a few hundreds lines of code. Having too many lines of code in a module may be a sign that you need to split it into several modules. Organize important projects (with tens of modules) into subpackages (subdirectories). Take a look at how major Python projects are organized. For example, the code of IPython is well-organized into a hierarchy of subpackages with focused roles. Reading the code itself is also quite instructive. Learn best practices to create and distribute a new Python package. Make sure that you know setuptools, pip, wheels, virtualenv, PyPI, and so on. Also, you are highly encouraged to take a serious look at conda, a powerful and generic packaging system created by Anaconda. Packaging has long been a rapidly evolving topic in Python, so read only the most recent references. You enjoyed an excerpt from Cyrille Rossant’s latest book, IPython Cookbook, Second Edition. This book contains 100+ recipes for high-performance scientific computing and data analysis, from the latest IPython/Jupyter features to the most advanced tricks, to help you write better and faster code. For free recipes from the book, head over to the Ipython Cookbook Github page. If you loved what you saw, support Cyrille’s work by buying a copy of the book today!
25 Datasets for Deep Learning in IoT

Sugandha Lahoti
20 Mar 2018
8 min read
Deep Learning is one of the major players for facilitating the analytics and learning in the IoT domain. A really good roundup of the state of deep learning advances for big data and IoT is described in the paper Deep Learning for IoT Big Data and Streaming Analytics: A Survey by Mehdi Mohammadi, Ala Al-Fuqaha, Sameh Sorour, and Mohsen Guizani. In this article, we have attempted to draw inspiration from this research paper to establish the importance of IoT datasets for deep learning applications. The paper also provides a handy list of commonly used datasets suitable for building deep learning applications in IoT, which we have added at the end of the article. IoT and Big Data: The relationship IoT and Big data have a two-way relationship. IoT is the main producer of big data, and as such an important target for big data analytics to improve the processes and services of IoT. However, there is a difference between the two. Large-Scale Streaming data: IoT data is a large-scale streaming data. This is because a large number of IoT devices generate streams of data continuously. Big data, on the other hand, lack real-time processing. Heterogeneity: IoT data is heterogeneous as various IoT data acquisition devices gather different information. Big data devices are generally homogeneous in nature. Time and space correlation: IoT sensor devices are also attached to a specific location, and thus have a location and time-stamp for each of the data items. Big data sensors lack time-stamp resolution. High noise data: IoT data is highly noisy, owing to the tiny pieces of data in IoT applications, which are prone to errors and noise during acquisition and transmission. Big data, in contrast, is generally less noisy. Big data, on the other hand, is classified according to conventional 3V’s, Volume, Velocity, and Variety. As such techniques used for Big data analytics are not sufficient to analyze the kind of data, that is being generated by IoT devices. For instance, autonomous cars need to make fast decisions on driving actions such as lane or speed change. These decisions should be supported by fast analytics with data streaming from multiple sources (e.g., cameras, radars, left/right signals, traffic light etc.). This changes the definition of IoT big data classification to 6V’s. Volume: The quantity of generated data using IoT devices is much more than before and clearly fits this feature. Velocity: Advanced tools and technologies for analytics are needed to efficiently operate the high rate of data production. Variety: Big data may be structured, semi-structured, and unstructured data. The data types produced by IoT include text, audio, video, sensory data and so on. Veracity: Veracity refers to the quality, consistency, and trustworthiness of the data, which in turn leads to accurate analytics. Variability: This property refers to the different rates of data flow. Value: Value is the transformation of big data to useful information and insights that bring competitive advantage to organizations. Despite the recent advancement in DL for big data, there are still significant challenges that need to be addressed to mature this technology. Every 6 characteristics of IoT big data imposes a challenge for DL techniques. One common denominator for all is the lack of availability of IoT big data datasets.   
IoT datasets and why are they needed Deep learning methods have been promising with state-of-the-art results in several areas, such as signal processing, natural language processing, and image recognition. The trend is going up in IoT verticals as well. IoT datasets play a major role in improving the IoT analytics. Real-world IoT datasets generate more data which in turn improve the accuracy of DL algorithms. However, the lack of availability of large real-world datasets for IoT applications is a major hurdle for incorporating DL models in IoT. The shortage of these datasets acts as a barrier to deployment and acceptance of IoT analytics based on DL since the empirical validation and evaluation of the system should be shown promising in the natural world. The lack of availability is mainly because: Most IoT datasets are available with large organizations who are unwilling to share it so easily. Access to the copyrighted datasets or privacy considerations. These are more common in domains with human data such as healthcare and education. While there is a lot of ground to be covered in terms of making datasets for IoT available, here is a list of commonly used datasets suitable for building deep learning applications in IoT. Dataset Name Domain Provider Notes Address/Link CGIAR dataset Agriculture, Climate CCAFS High-resolution climate datasets for a variety of fields including agricultural http://www.ccafs-climate.org/ Educational Process Mining Education University of Genova Recordings of 115 subjects’ activities through a logging application while learning with an educational simulator http://archive.ics.uci.edu/ml/datasets/Educational+Process+Mining+%28EPM%29%3A+A+Learning+Analytics+Data+Set Commercial Building Energy Dataset Energy, Smart Building IIITD Energy related data set from a commercial building where data is sampled more than once a minute. http://combed.github.io/ Individual household electric power consumption Energy, Smart home EDF R&D, Clamart, France One-minute sampling rate over a period of almost 4 years http://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption AMPds dataset Energy, Smart home S. Makonin AMPds contains electricity, water, and natural gas measurements at one minute intervals for 2 years of monitoring http://ampds.org/ UK Domestic Appliance-Level Electricity Energy, Smart Home Kelly and Knottenbelt Power demand from five houses. In each house both the whole-house mains power demand as well as power demand from individual appliances are recorded. http://www.doc.ic.ac.uk/∼dk3810/data/ PhysioBank databases Healthcare PhysioNet Archive of over 80 physiological datasets. https://physionet.org/physiobank/database/ Saarbruecken Voice Database Healthcare Universitat¨ des Saarlandes A collection of voice recordings from more than 2000 persons for pathological voice detection. 
http://www.stimmdatebank.coli.uni-saarland.de/help_en.php4   T-LESS   Industry CMP at Czech Technical University An RGB-D dataset and evaluation methodology for detection and 6D pose estimation of texture-less objects http://cmp.felk.cvut.cz/t-less/ CityPulse Dataset Collection Smart City CityPulse EU FP7 project Road Traffic Data, Pollution Data, Weather, Parking http://iot.ee.surrey.ac.uk:8080/datasets.html Open Data Institute - node Trento Smart City Telecom Italia Weather, Air quality, Electricity, Telecommunication http://theodi.fbk.eu/openbigdata/ Malaga datasets Smart City City of Malaga A broad range of categories such as energy, ITS, weather, Industry, Sport, etc. http://datosabiertos.malaga.eu/dataset Gas sensors for home activity monitoring Smart home Univ. of California San Diego Recordings of 8 gas sensors under three conditions including background, wine and banana presentations. http://archive.ics.uci.edu/ml/datasets/Gas+sensors+for+home+activity+monitoring CASAS datasets for activities of daily living Smart home Washington State University Several public datasets related to Activities of Daily Living (ADL) performance in a two story home, an apartment, and an office settings. http://ailab.wsu.edu/casas/datasets.html ARAS Human Activity Dataset Smart home Bogazici University Human activity recognition datasets collected from two real houses with multiple residents during two months. https://www.cmpe.boun.edu.tr/aras/ MERLSense Data Smart home, building Mitsubishi Electric Research Labs Motion sensor data of residual traces from a network of over 200 sensors for two years, containing over 50 million records. http://www.merl.com/wmd SportVU   Sport Stats LLC   Video of basketball and soccer games captured from 6 cameras. http://go.stats.com/sportvu RealDisp Sport O. Banos   Includes a wide range of physical activities (warm up, cool down and fitness exercises). http://orestibanos.com/datasets.htm   Taxi Service Trajectory Transportation Prediction Challenge, ECML PKDD 2015 Trajectories performed by all the 442 taxis running in the city of Porto, in Portugal. http://www.geolink.pt/ecmlpkdd2015-challenge/dataset.html GeoLife GPS Trajectories Transportation Microsoft A GPS trajectory by a sequence of time-stamped points https://www.microsoft.com/en-us/download/details.aspx?id=52367 T-Drive trajectory data Transportation Microsoft Contains a one-week trajectories of 10,357 taxis https://www.microsoft.com/en-us/research/publication/t-drive-trajectory-data-sample/ Chicago Bus Traces data Transportation M. Doering   Bus traces from the Chicago Transport Authority for 18 days with a rate between 20 and 40 seconds. http://www.ibr.cs.tu-bs.de/users/mdoering/bustraces/   Uber trip data Transportation FiveThirtyEight About 20 million Uber pickups in New York City during 12 months. https://github.com/fivethirtyeight/uber-tlc-foil-response Traffic Sign Recognition Transportation K. Lim   Three datasets: Korean daytime, Korean nighttime, and German daytime traffic signs based on Vienna traffic rules. https://figshare.com/articles/Traffic_Sign_Recognition_Testsets/4597795 DDD17   Transportation J. Binas End-To-End DAVIS Driving Dataset. http://sensors.ini.uzh.ch/databases.html      
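Most of the datasets listed above arrive as plain CSV or text files of time-stamped sensor readings. As a purely hypothetical illustration (the file name and column names below are placeholders and do not correspond to any specific dataset in the table), the following sketch shows a typical first preprocessing step: loading the readings with pandas, resampling them onto a regular grid, and cutting them into fixed-length windows for a deep learning model:

# Hypothetical sketch: turn a time-stamped sensor CSV into fixed-length windows.
# File name and column names ("timestamp", "power") are placeholders.
import numpy as np
import pandas as pd

df = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])
df = df.sort_values("timestamp").set_index("timestamp")

# Resample to a regular 1-minute grid; IoT streams are often irregular and noisy.
series = df["power"].resample("1min").mean().interpolate()

def make_windows(values, window=60, step=10):
    """Return an array of overlapping windows of length `window`."""
    return np.array([
        values[i:i + window]
        for i in range(0, len(values) - window + 1, step)
    ])

windows = make_windows(series.to_numpy())
print(windows.shape)  # (number_of_windows, 60)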
Getting started with Python Web Scraping

Amarabha Banerjee
20 Mar 2018
13 min read
[box type="note" align="" class="" width=""]Our article is an excerpt from the book Web Scraping with Python, written by Richard Lawson. This book contains step by step tutorials on how to leverage Python programming techniques for ethical web scraping. [/box] The amount of data available on the web is consistently growing both in quantity and in form. Businesses require this data to make decisions, particularly with the explosive growth of machine learning tools which require large amounts of data for training. Much of this data is available via Application Programming Interfaces, but at the same time a lot of valuable data is still only available through the process of web scraping. Python is the choice of programing language for many who build systems to perform scraping. It is an easy to use programming language with a rich ecosystem of tools for other tasks. In this article, we will focus on the fundamentals of setting up a scraping environment and perform basic requests for data with several tools of trade. Setting up a Python development environment If you have not used Python before, it is important to have a working development  environment. The recipes in this book will be all in Python and be a mix of interactive examples, but primarily implemented as scripts to be interpreted by the Python interpreter. This recipe will show you how to set up an isolated development environment with virtualenv and manage project dependencies with pip . We also get the code for the book and install it into the Python virtual environment. Getting ready We will exclusively be using Python 3.x, and specifically in my case 3.6.1. While Mac and Linux normally have Python version 2 installed, and Windows systems do not. So it is likely that in any case that Python 3 will need to be installed. You can find references for Python installers at www.python.org. You can check Python's version with python --version pip comes installed with Python 3.x, so we will omit instructions on its installation. Additionally, all command line examples in this book are run on a Mac. For Linux users the commands should be identical. On Windows, there are alternate commands (like dir instead of ls), but these alternatives will not be covered. How to do it We will be installing a number of packages with pip. These packages are installed into a Python environment. There often can be version conflicts with other packages, so a good practice for following along with the recipes in the book will be to create a new virtual Python environment where the packages we will use will be ensured to work properly. Virtual Python environments are managed with the virtualenv tool. This can be installed with the following command: ~ $ pip install virtualenv Collecting virtualenv Using cached virtualenv-15.1.0-py2.py3-none-any.whl Installing collected packages: virtualenv Successfully installed virtualenv-15.1.0 Now we can use virtualenv. But before that let's briefly look at pip. This command installs Python packages from PyPI, a package repository with literally 10's of thousands of packages. We just saw using the install subcommand to pip, which ensures a package is installed. We can also see all currently installed packages with pip list: ~ $ pip list alabaster (0.7.9) amqp (1.4.9) anaconda-client (1.6.0) anaconda-navigator (1.5.3) anaconda-project (0.4.1) aniso8601 (1.3.0) Packages can also be uninstalled using pip uninstall followed by the package name. I'll leave it to you to give it a try. Now back to virtualenv. 
Using virtualenv is very simple. Let's use it to create an environment and install the code from github. Let's walk through the steps: Create a directory to represent the project and enter the directory. ~ $ mkdir pywscb ~ $ cd pywscb Initialize a virtual environment folder named env: pywscb $ virtualenv env Using base prefix '/Users/michaelheydt/anaconda' New python executable in /Users/michaelheydt/pywscb/env/bin/python copying /Users/michaelheydt/anaconda/bin/python => /Users/michaelheydt/pywscb/env/bin/python copying /Users/michaelheydt/anaconda/bin/../lib/libpython3.6m.dylib => /Users/michaelheydt/pywscb/env/lib/libpython3. 6m.dylib Installing setuptools, pip, wheel...done. This creates an env folder. Let's take a look at what was installed. pywscb $ ls -la env total 8 drwxr-xr-x 6 michaelheydt staff 204 Jan 18 15:38 . drwxr-xr-x 3 michaelheydt staff 102 Jan 18 15:35 .. drwxr-xr-x 16 michaelheydt staff 544 Jan 18 15:38 bin drwxr-xr-x 3 michaelheydt staff 102 Jan 18 15:35 include drwxr-xr-x 4 michaelheydt staff 136 Jan 18 15:38 lib -rw-r--r-- 1 michaelheydt staff 60 Jan 18 15:38 pipselfcheck. json New we activate the virtual environment. This command uses the content in the env folder to configure Python. After this all python activities are relative to this virtual environment. pywscb $ source env/bin/activate (env) pywscb $ We can check that python is indeed using this virtual environment with the following command: (env) pywscb $ which python /Users/michaelheydt/pywscb/env/bin/python With our virtual environment created, let's clone the books sample code and take a look at its structure. (env) pywscb $ git clone https://github.com/PacktBooks/PythonWebScrapingCookbook.git Cloning into 'PythonWebScrapingCookbook'... remote: Counting objects: 420, done. remote: Compressing objects: 100% (316/316), done. remote: Total 420 (delta 164), reused 344 (delta 88), pack-reused 0 Receiving objects: 100% (420/420), 1.15 MiB | 250.00 KiB/s, done. Resolving deltas: 100% (164/164), done. Checking connectivity... done. This created a PythonWebScrapingCookbook directory. (env) pywscb $ ls -l total 0 drwxr-xr-x 9 michaelheydt staff 306 Jan 18 16:21 PythonWebScrapingCookbook drwxr-xr-x 6 michaelheydt staff 204 Jan 18 15:38 env Let's change into it and examine the content. (env) PythonWebScrapingCookbook $ ls -l total 0 drwxr-xr-x 15 michaelheydt staff 510 Jan 18 16:21 py drwxr-xr-x 14 michaelheydt staff 476 Jan 18 16:21 www There are two directories. Most the the Python code is is the py directory. www contains some web content that we will use from time-to-time using a local web server. Let's look at the contents of the py directory: (env) py $ ls -l total 0 drwxr-xr-x 9 michaelheydt staff 306 Jan 18 16:21 01 drwxr-xr-x 25 michaelheydt staff 850 Jan 18 16:21 03 drwxr-xr-x 21 michaelheydt staff 714 Jan 18 16:21 04 drwxr-xr-x 10 michaelheydt staff 340 Jan 18 16:21 05 drwxr-xr-x 14 michaelheydt staff 476 Jan 18 16:21 06 drwxr-xr-x 25 michaelheydt staff 850 Jan 18 16:21 07 drwxr-xr-x 14 michaelheydt staff 476 Jan 18 16:21 08 drwxr-xr-x 7 michaelheydt staff 238 Jan 18 16:21 09 drwxr-xr-x 7 michaelheydt staff 238 Jan 18 16:21 10 drwxr-xr-x 9 michaelheydt staff 306 Jan 18 16:21 11 drwxr-xr-x 8 michaelheydt staff 272 Jan 18 16:21 modules Code for each chapter is in the numbered folder matching the chapter (there is no code for chapter 2 as it is all interactive Python). Note that there is a modules folder. Some of the recipes throughout the book use code in those modules. 
Make sure that your Python path points to this folder. On Mac and Linux you can sets this in your .bash_profile file (and environments variables dialog on Windows): Export PYTHONPATH="/users/michaelheydt/dropbox/packt/books/pywebscrcookbook/code/py/modules" export PYTHONPATH The contents in each folder generally follows a numbering scheme matching the sequence of the recipe in the chapter. The following is the contents of the chapter 6 folder: (env) py $ ls -la 06 total 96 drwxr-xr-x 14 michaelheydt staff 476 Jan 18 16:21 . drwxr-xr-x 14 michaelheydt staff 476 Jan 18 16:26 .. -rw-r--r-- 1 michaelheydt staff 902 Jan 18 16:21 01_scrapy_retry.py -rw-r--r-- 1 michaelheydt staff 656 Jan 18 16:21 02_scrapy_redirects.py -rw-r--r-- 1 michaelheydt staff 1129 Jan 18 16:21 03_scrapy_pagination.py -rw-r--r-- 1 michaelheydt staff 488 Jan 18 16:21 04_press_and_wait.py -rw-r--r-- 1 michaelheydt staff 580 Jan 18 16:21 05_allowed_domains.py -rw-r--r-- 1 michaelheydt staff 826 Jan 18 16:21 06_scrapy_continuous.py -rw-r--r-- 1 michaelheydt staff 704 Jan 18 16:21 07_scrape_continuous_twitter.py -rw-r--r-- 1 michaelheydt staff 1409 Jan 18 16:21 08_limit_depth.py -rw-r--r-- 1 michaelheydt staff 526 Jan 18 16:21 09_limit_length.py -rw-r--r-- 1 michaelheydt staff 1537 Jan 18 16:21 10_forms_auth.py -rw-r--r-- 1 michaelheydt staff 597 Jan 18 16:21 11_file_cache.py -rw-r--r-- 1 michaelheydt staff 1279 Jan 18 16:21 12_parse_differently_based_on_rules.py In the recipes I'll state that we'll be using the script in <chapter directory>/<recipe filename>. Now just the be complete, if you want to get out of the Python virtual environment, you can exit using the following command: (env) py $ deactivate py $ And checking which python we can see it has switched back: py $ which python /Users/michaelheydt/anaconda/bin/python Scraping Python.org with Requests and Beautiful Soup In this recipe we will install Requests and Beautiful Soup and scrape some content from www.python.org. We'll install both of the libraries and get some basic familiarity with them. We'll come back to them both in subsequent chapters and dive deeper into each. Getting ready In this recipe, we will scrape the upcoming Python events from https:/ / www. python. org/events/ pythonevents. The following is an an example of The Python.org Events Page (it changes frequently, so your experience will differ): We will need to ensure that Requests and Beautiful Soup are installed. We can do that with the following: pywscb $ pip install requests Downloading/unpacking requests Downloading requests-2.18.4-py2.py3-none-any.whl (88kB): 88kB downloaded Downloading/unpacking certifi>=2017.4.17 (from requests) Downloading certifi-2018.1.18-py2.py3-none-any.whl (151kB): 151kB downloaded Downloading/unpacking idna>=2.5,<2.7 (from requests) Downloading idna-2.6-py2.py3-none-any.whl (56kB): 56kB downloaded Downloading/unpacking chardet>=3.0.2,<3.1.0 (from requests) Downloading chardet-3.0.4-py2.py3-none-any.whl (133kB): 133kB downloaded Downloading/unpacking urllib3>=1.21.1,<1.23 (from requests) Downloading urllib3-1.22-py2.py3-none-any.whl (132kB): 132kB downloaded Installing collected packages: requests, certifi, idna, chardet, urllib3 Successfully installed requests certifi idna chardet urllib3 Cleaning up... pywscb $ pip install bs4 Downloading/unpacking bs4 Downloading bs4-0.0.1.tar.gz Running setup.py (path:/Users/michaelheydt/pywscb/env/build/bs4/setup.py) egg_info for package bs4 How to do it Now let's go and learn to scrape a couple events. 
For this recipe we will start by using interactive Python. Start it with the ipython command:

$ ipython
Python 3.6.1 |Anaconda custom (x86_64)| (default, Mar 22 2017, 19:25:17)
Type "copyright", "credits" or "license" for more information.
IPython 5.1.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]:

Next we import Requests:

In [1]: import requests

We now use Requests to make a GET HTTP request for the following URL: https://www.python.org/events/python-events/:

In [2]: url = 'https://www.python.org/events/python-events/'
In [3]: req = requests.get(url)

That downloaded the page content, but it is stored in our requests object req. We can retrieve the content using the .text property. The following prints the first 200 characters:

In [4]: req.text[:200]
Out[4]: '<!doctype html>\n<!--[if lt IE 7]> <html class="no-js ie6 lt-ie7 lt-ie8 lt-ie9"> <![endif]-->\n<!--[if IE 7]> <html class="no-js ie7 lt-ie8 lt-ie9"> <![endif]-->\n<!--[if IE 8]> <h'

We now have the raw HTML of the page. We can now use Beautiful Soup to parse the HTML and retrieve the event data. First import Beautiful Soup:

In [5]: from bs4 import BeautifulSoup

Now we create a BeautifulSoup object and pass it the HTML:

In [6]: soup = BeautifulSoup(req.text, 'lxml')

Now we tell Beautiful Soup to find the main <ul> tag for the recent events, and then to get all the <li> tags below it:

In [7]: events = soup.find('ul', {'class': 'list-recent-events'}).findAll('li')

And finally we can loop through each of the <li> elements, extracting the event details, and print each to the console:

In [13]: for event in events:
    ...:     event_details = dict()
    ...:     event_details['name'] = event.find('h3').find("a").text
    ...:     event_details['location'] = event.find('span', {'class': 'event-location'}).text
    ...:     event_details['time'] = event.find('time').text
    ...:     print(event_details)
    ...:
{'name': 'PyCascades 2018', 'location': 'Granville Island Stage, 1585 Johnston St, Vancouver, BC V6H 3R9, Canada', 'time': '22 Jan. – 24 Jan. 2018'}
{'name': 'PyCon Cameroon 2018', 'location': 'Limbe, Cameroon', 'time': '24 Jan. – 29 Jan. 2018'}
{'name': 'FOSDEM 2018', 'location': 'ULB Campus du Solbosch, Av. F. Roosevelt 50, 1050 Bruxelles, Belgium', 'time': '03 Feb. – 05 Feb. 2018'}
{'name': 'PyCon Pune 2018', 'location': 'Pune, India', 'time': '08 Feb. – 12 Feb. 2018'}
{'name': 'PyCon Colombia 2018', 'location': 'Medellin, Colombia', 'time': '09 Feb. – 12 Feb. 2018'}
{'name': 'PyTennessee 2018', 'location': 'Nashville, TN, USA', 'time': '10 Feb. – 12 Feb. 2018'}

This entire example is available in the 01/01_events_with_requests.py script file. The following is its content, which pulls together all of what we just did step by step:

import requests
from bs4 import BeautifulSoup

def get_upcoming_events(url):
    req = requests.get(url)
    soup = BeautifulSoup(req.text, 'lxml')
    events = soup.find('ul', {'class': 'list-recent-events'}).findAll('li')
    for event in events:
        event_details = dict()
        event_details['name'] = event.find('h3').find("a").text
        event_details['location'] = event.find('span', {'class': 'event-location'}).text
        event_details['time'] = event.find('time').text
        print(event_details)

get_upcoming_events('https://www.python.org/events/python-events/')

You can run this using the following command from the terminal:

$ python 01_events_with_requests.py
{'name': 'PyCascades 2018', 'location': 'Granville Island Stage, 1585 Johnston St, Vancouver, BC V6H 3R9, Canada', 'time': '22 Jan. – 24 Jan. 2018'}
{'name': 'PyCon Cameroon 2018', 'location': 'Limbe, Cameroon', 'time': '24 Jan. – 29 Jan. 2018'}
{'name': 'FOSDEM 2018', 'location': 'ULB Campus du Solbosch, Av. F. D. Roosevelt 50, 1050 Bruxelles, Belgium', 'time': '03 Feb. – 05 Feb. 2018'}
{'name': 'PyCon Pune 2018', 'location': 'Pune, India', 'time': '08 Feb. – 12 Feb. 2018'}
{'name': 'PyCon Colombia 2018', 'location': 'Medellin, Colombia', 'time': '09 Feb. – 12 Feb. 2018'}
{'name': 'PyTennessee 2018', 'location': 'Nashville, TN, USA', 'time': '10 Feb. – 12 Feb. 2018'}

How it works

We will dive into the details of both Requests and Beautiful Soup in the next chapter, but for now let's just summarize a few key points about how this works.

The following are important points about Requests:
Requests is used to execute HTTP requests. We used it to make a GET request of the URL for the events page.
The Requests object holds the result of the request. This is not only the page content, but also many other items about the result, such as HTTP status codes and headers.
Requests is used only to get the page; it does not do any parsing.

We use Beautiful Soup to do the parsing of the HTML and also the finding of content within the HTML. The Upcoming Events section of the page is marked up as a <ul> element containing an <li> element for each event. We used the power of Beautiful Soup to:
Find the <ul> element representing the section, which is found by looking for a <ul> with a class attribute that has a value of list-recent-events.
From that object, find all the <li> elements.

Each of these <li> tags represents a different event. We iterate over each of those, making a dictionary from the event data found in child HTML tags:
The name is extracted from the <a> tag that is a child of the <h3> tag.
The location is the text content of the <span> with a class of event-location.
The time is the text content of the <time> tag.

To summarize, we saw how to set up a Python environment for effective data scraping from the web, and also explored ways to use Requests and Beautiful Soup to perform some preliminary, responsible data scraping. If you liked this post, be sure to check out Web Scraping with Python, which consists of useful recipes for scraping data from the web with Python.
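
Before moving on, the following is a small variation on the script above (a sketch for illustration, not code from the book). It returns the events as a list of dictionaries instead of printing them, and it skips any entry whose expected child tags are missing, which can help when the layout of the events page changes over time.

# Sketch (not from the book): same scrape as 01_events_with_requests.py, but it
# returns a list of event dictionaries and tolerates missing tags.
import requests
from bs4 import BeautifulSoup

def get_upcoming_events(url):
    req = requests.get(url)
    soup = BeautifulSoup(req.text, 'lxml')
    event_list = soup.find('ul', {'class': 'list-recent-events'})
    events = []
    if event_list is None:
        # The page layout may have changed; return an empty list rather than fail.
        return events
    for item in event_list.findAll('li'):
        name_tag = item.find('h3')
        location_tag = item.find('span', {'class': 'event-location'})
        time_tag = item.find('time')
        if not (name_tag and name_tag.find('a') and location_tag and time_tag):
            # Skip entries that do not have all of the expected child tags.
            continue
        events.append({
            'name': name_tag.find('a').text,
            'location': location_tag.text,
            'time': time_tag.text,
        })
    return events

if __name__ == '__main__':
    for event in get_upcoming_events('https://www.python.org/events/python-events/'):
        print(event)

Returning a list rather than printing inside the function makes the scrape easier to reuse, for example to write the events to a CSV file or feed them into later recipes.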


The Cambridge Analytica scandal and ethics in data science

Richard Gall
20 Mar 2018
5 min read
Earlier this month, Stack Overflow published the results of its 2018 developer survey. In it, there was an interesting set of questions around the concept of 'ethical code'. The main takeaway was ultimately that the area remains a gray area. The Cambridge Analytica scandal, however, has given the issue of 'ethical code' a renewed urgency in the last couple of days. The data analytics company is alleged not only to have been involved in votes in the UK and US, but also to have illegally harvested copious amounts of data from Facebook. For whistleblower Christopher Wylie, the issue of ethical code is particularly pronounced. "I created Steve Bannon's psychological mindfuck tool" he told Carole Cadwalladr in an interview in the Guardian.

Cambridge Analytica: psyops or just market research?

Wylie is a data scientist whose experience over the last half a decade or so has been impressive. It's worth noting, however, that Wylie's career didn't begin in politics. His academic career was focused primarily on fashion forecasting. That might all seem a little prosaic, but it underlines the fact that data science never happens in a vacuum. Data scientists always operate within a given field. It might be tempting to view the world purely through the prism of impersonal data and cold statistics. To a certain extent you have to if you're a data scientist or a statistician. But at the very least this can be unhelpful; at worst it is a potential threat to global democracy.

At one point in the interview, Wylie remarks that:

...it's normal for a market research company to amass data on domestic populations. And if you're working in some country and there's an auxiliary benefit to a current client with aligned interests, well that's just a bonus.

This is potentially the most frightening thing. Cambridge Analytica's ostensible role in elections and referenda isn't actually that remarkable. For all the vested interests and meetings between investors, researchers and entrepreneurs, the scandal is really just the extension of the data mining and marketing tactics employed by just about every organization with a digital presence on the planet.

Data scientists are always going to be in a difficult position. True, we're not all going to end up working alongside Steve Bannon. But your skills are always being deployed with a very specific end in mind. It's not always easy to see the effects and impact of your work until later, but it's still essential for data scientists and analysts to be aware of whose data is being collected and used, how it's being used and why.

Who is responsible for the ethics around data and code?

There was another interesting question in the Stack Overflow survey that's relevant to all of this. The survey asked respondents who was ultimately most responsible for code that accomplishes something unethical. 57.5% claimed upper management were responsible, 22.8% said the person who came up with the idea, and 19.7% said it was the responsibility of the developer themselves.

Clearly the question is complex; the truth lies somewhere between all three. Management make decisions about what's required from an organizational perspective, but the engineers themselves are, of course, a part of the wider organizational dynamic. They should be in a position where they are able to communicate any personal misgivings or broader legal issues with the work they are being asked to do. The case of Wylie and Cambridge Analytica is unique, however.
But it does highlight that data science can be deployed in ways that are difficult to predict. And without proper channels of escalation and the right degree of transparency, it's easy for things to remain secretive, hidden in small meetings, email threads and paper trails. That's another thing that data scientists need to remember. Office politics might be a fact of life, but when you're a data scientist you're sitting at the apex of legal, strategic and political issues. To refuse to be aware of this would be naive.

What the Cambridge Analytica story can teach data scientists

But there's something else worth noting. This story also illustrates something more about the world in which data scientists are operating. This is a world where traditional infrastructure is being dismantled, and where privatization and outsourcing are viewed as the route towards efficiency and 'value for money'. Whether you think that's a good or bad thing isn't really the point here. What's important is that it makes the way we use data, even the code we write, more problematic than ever, because it's not always easy to see how it's being used.

Arguably Wylie was naive. His curiosity and desire to apply his data science skills to intriguing and complex problems led him towards people who knew just how valuable he could be. Wylie has evidently developed greater self-awareness. This is perhaps the main reason why he has come forward with his version of events.

But as this saga unfolds, it's worth remembering the value of data scientists in the modern world - for a range of organizations. It's made the concept of the 'citizen data scientist' take on an even more urgent and literal meaning. Yes, data science can help to empower the economy and possibly even toy with democracy. But it can also be used to empower people, and to improve transparency in politics and business. If anything, the Cambridge Analytica saga proves that data science is a dangerous field - not only the sexiest job of the twenty-first century, but one of the most influential in shaping the kind of world we're going to live in. That's frightening, but it's also pretty exciting.

How to create and prepare your first dataset in Salesforce Einstein

Amey Varangaonkar
19 Mar 2018
3 min read
Note: The following extract is taken from the book Learning Einstein Analytics, written by Santosh Chitalkar. This book will help you learn Salesforce Einstein Analytics to get insights faster and understand your customers better.

In this article, we see how to start your analytics journey with Salesforce Einstein by taking the first step in the process, i.e., creating and preparing your dataset!

A dataset is a set of source data, specially formatted and optimized for interactive exploration. Here are the steps to create a new dataset in Salesforce Einstein:

1. Click on the Create button in the top-right corner and then click on Dataset. You can see the following three options to create datasets:
CSV File
Salesforce
Informatica Rev
2. Select CSV File and click on Continue.
3. Select the Account_data.csv file or drag and drop the file.
4. Click on Next. The next screen loads the user interface to create a single dataset by using the external .csv file.
5. Click on Next to proceed.
6. Change the dataset name if you want. You can select an application to store the dataset. You can also replace the CSV file from this screen.
7. Click on the button in the Data Schema File section and select the Replace File option to change the file. You can also download the uploaded .csv file from here.
8. Click on Next. In the next screen, you can change field attributes such as column name, dimensions, field type, and so on.
9. Click on the Next button and it will start uploading the file into Analytics and queuing it in dataflow. Once done, click on the Got it button.
10. Wait for 10-15 minutes (depending on the data, it may take a longer time to create the dataset).
11. Go to Analytics Studio and open the DATASETS tab. You can see the Account_data dataset.

Congrats!!! You have created your first dataset. Let's now update this dataset with the same information but with some additional columns.

Updating datasets

We need to update the dataset to add new fields, change application settings, remove fields, and so on. Einstein Analytics gives users the flexibility to update the dataset. Here are the steps to update an existing dataset:

1. Create a CSV file that includes some new fields and name it Account_Data_Updated. Save the file to a location that you can easily remember.
2. In Salesforce, go to the Analytics Studio home page and find the dataset.
3. Hover over the dataset and click on the button, then click on Edit.
4. Salesforce displays the dataset editing screen. Click on the Replace Data button in the top-right corner of the page.
5. Click on the Next button and upload your new CSV file using the upload UI.
6. Click on the Next button again to get to the next screen for editing, and click on Next again.
7. Click on Replace.

Voila! You've successfully updated your dataset. As you can see, it's fairly easy to create and then update a dataset using Einstein, without any hassle.

If you found this post useful, make sure to check out our book Learning Einstein Analytics for more tips and techniques on using Einstein Analytics effectively to uncover unique insights from your data.
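
The steps above assume that Account_data.csv and Account_Data_Updated.csv already exist. As a small illustration (the column names below are hypothetical; the excerpt does not show the real schema of these files), such a CSV could be prepared with a few lines of Python before uploading it through the Einstein Analytics UI:

# Hypothetical sketch: prepare an updated CSV with extra columns before upload.
# The column names are made up for illustration; use your organization's real fields.
import csv

fieldnames = ["AccountId", "AccountName", "Industry", "AnnualRevenue"]  # Industry and AnnualRevenue are the assumed new fields
rows = [
    {"AccountId": "001", "AccountName": "Acme Corp", "Industry": "Manufacturing", "AnnualRevenue": 1200000},
    {"AccountId": "002", "AccountName": "Globex", "Industry": "Technology", "AnnualRevenue": 850000},
]

with open("Account_Data_Updated.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()   # Einstein derives field names from the header row
    writer.writerows(rows)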


Perform CRUD operations on MongoDB with PHP

Amey Varangaonkar
17 Mar 2018
6 min read
Note: This article is an excerpt from the book Mastering MongoDB 3.x, authored by Alex Giamas. This book covers the key concepts, and tips & tricks needed to build fault-tolerant applications in MongoDB. It gives you the power to become a true expert when it comes to the world's most popular NoSQL database.

In today's tutorial, we will cover the CRUD (Create, Read, Update and Delete) operations using the popular PHP language with the official MongoDB driver.

Create and delete operations

To perform the create and delete operations, run the following code:

$document = array( "isbn" => "401", "name" => "MongoDB and PHP" );
$result = $collection->insertOne($document);
var_dump($result);

This is the output:

MongoDB\InsertOneResult Object
(
    [writeResult:MongoDB\InsertOneResult:private] => MongoDB\Driver\WriteResult Object
        (
            [nInserted] => 1
            [nMatched] => 0
            [nModified] => 0
            [nRemoved] => 0
            [nUpserted] => 0
            [upsertedIds] => Array ( )
            [writeErrors] => Array ( )
            [writeConcernError] =>
            [writeConcern] => MongoDB\Driver\WriteConcern Object ( )
        )
    [insertedId:MongoDB\InsertOneResult:private] => MongoDB\BSON\ObjectID Object
        (
            [oid] => 5941ac50aabac9d16f6da142
        )
    [isAcknowledged:MongoDB\InsertOneResult:private] => 1
)

The rather lengthy output contains all the information that we may need. We can get the ObjectId of the inserted document; the number of inserted, matched, modified, removed, and upserted documents from the fields prefixed with n; and information about writeErrors or writeConcernError.

There are also convenience methods in the $result object if we want to get the information:
$result->getInsertedCount(): to get the number of inserted objects
$result->getInsertedId(): to get the ObjectId of the inserted document

We can also use the ->insertMany() method to insert many documents at once, like this:

$documentAlpha = array( "isbn" => "402", "name" => "MongoDB and PHP, 2nd Edition" );
$documentBeta = array( "isbn" => "403", "name" => "MongoDB and PHP, revisited" );
$result = $collection->insertMany([$documentAlpha, $documentBeta]);
print_r($result);

The result is:

(
    [writeResult:MongoDB\InsertManyResult:private] => MongoDB\Driver\WriteResult Object
        (
            [nInserted] => 2
            [nMatched] => 0
            [nModified] => 0
            [nRemoved] => 0
            [nUpserted] => 0
            [upsertedIds] => Array ( )
            [writeErrors] => Array ( )
            [writeConcernError] =>
            [writeConcern] => MongoDB\Driver\WriteConcern Object ( )
        )
    [insertedIds:MongoDB\InsertManyResult:private] => Array
        (
            [0] => MongoDB\BSON\ObjectID Object
                (
                    [oid] => 5941ae85aabac9d1d16c63a2
                )
            [1] => MongoDB\BSON\ObjectID Object
                (
                    [oid] => 5941ae85aabac9d1d16c63a3
                )
        )
    [isAcknowledged:MongoDB\InsertManyResult:private] => 1
)

Again, $result->getInsertedCount() will return 2, whereas $result->getInsertedIds() will return an array with the two newly created ObjectIds:

array(2) {
  [0]=>
  object(MongoDB\BSON\ObjectID)#13 (1) {
    ["oid"]=>
    string(24) "5941ae85aabac9d1d16c63a2"
  }
  [1]=>
  object(MongoDB\BSON\ObjectID)#14 (1) {
    ["oid"]=>
    string(24) "5941ae85aabac9d1d16c63a3"
  }
}

Deleting documents is similar to inserting, but with the deleteOne() and deleteMany() methods; an example of deleteMany() is shown here:

$deleteQuery = array( "isbn" => "401" );
$deleteResult = $collection->deleteMany($deleteQuery);
print_r($deleteResult);
print($deleteResult->getDeletedCount());

Here is the output:

MongoDB\DeleteResult Object
(
    [writeResult:MongoDB\DeleteResult:private] => MongoDB\Driver\WriteResult Object
        (
            [nInserted] => 0
            [nMatched] => 0
            [nModified] => 0
            [nRemoved] => 2
            [nUpserted] => 0
            [upsertedIds] => Array ( )
            [writeErrors] => Array ( )
            [writeConcernError] =>
            [writeConcern] => MongoDB\Driver\WriteConcern Object ( )
        )
    [isAcknowledged:MongoDB\DeleteResult:private] => 1
)
2

In this example, we used ->getDeletedCount() to get the number of affected documents, which is printed out in the last line of the output.

Bulk write

The new PHP driver supports the bulk write interface to minimize network calls to MongoDB:

$manager = new MongoDB\Driver\Manager('mongodb://localhost:27017');
$bulk = new MongoDB\Driver\BulkWrite(array("ordered" => true));
$bulk->insert(array( "isbn" => "401", "name" => "MongoDB and PHP" ));
$bulk->insert(array( "isbn" => "402", "name" => "MongoDB and PHP, 2nd Edition" ));
$bulk->update(array("isbn" => "402"), array('$set' => array("price" => 15)));
$bulk->insert(array( "isbn" => "403", "name" => "MongoDB and PHP, revisited" ));
$result = $manager->executeBulkWrite('mongo_book.books', $bulk);
print_r($result);

The result is:

MongoDB\Driver\WriteResult Object
(
    [nInserted] => 3
    [nMatched] => 1
    [nModified] => 1
    [nRemoved] => 0
    [nUpserted] => 0
    [upsertedIds] => Array ( )
    [writeErrors] => Array ( )
    [writeConcernError] =>
    [writeConcern] => MongoDB\Driver\WriteConcern Object ( )
)

In the preceding example, we executed two inserts, one update, and a third insert in an ordered fashion. The WriteResult object reports a total of three inserted documents and one modified document. The main difference compared to simple create/delete queries is that executeBulkWrite() is a method of the MongoDB\Driver\Manager class, which we instantiate on the first line.

Read operation

Querying uses an interface similar to inserting and deleting, with the findOne() and find() methods used to retrieve the first result or all results of a query:

$document = $collection->findOne( array("isbn" => "101") );
$cursor = $collection->find( array( "name" => new MongoDB\BSON\Regex("mongo", "i") ) );

In the second example, we are using a regular expression to search for a key name with the value mongo (case-insensitive).

Embedded documents can be queried using the . notation, as with the other languages that we examined earlier in this chapter:

$cursor = $collection->find( array('meta.price' => 50) );

We do this to query for an embedded field price inside the meta key field.

Similarly to Ruby and Python, in PHP we can query using comparison operators, like this:

$cursor = $collection->find( array( 'price' => array('$gte' => 60) ) );

Querying with multiple key-value pairs is an implicit AND, whereas queries using $or, $in, $nin, or AND ($and) combined with $or can be achieved with nested queries:

$cursor = $collection->find( array( '$or' => array(
    array("price" => array( '$gte' => 60)),
    array("price" => array( '$lte' => 20))
)));

This finds documents that have price >= 60 OR price <= 20.

Update operation

Updating documents has a similar interface, with the ->updateOne() or ->updateMany() methods. The first parameter is the query used to find documents and the second one will update our documents. We can use any of the update operators explained at the end of this chapter to update in place, or specify a new document to completely replace the document in the query:

$result = $collection->updateOne(
    array( "isbn" => "401" ),
    array( '$set' => array( "price" => 39 ) )
);

We can use single quotes or double quotes for key names, but if we have special operators starting with $, we need to use single quotes. We can use array( "key" => "value" ) or ["key" => "value"]. We prefer the more explicit array() notation in this book.
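
As a quick extension of the update example (a sketch for illustration, not from the book excerpt), updateMany() uses the same query and update syntax to modify every matching document, and it returns a result object whose inspection methods are discussed next:

<?php
// Sketch for illustration (not from the book): apply a 10% discount to every
// book priced at 60 or more, using the same query style as the examples above.
require 'vendor/autoload.php'; // assumes the mongodb/mongodb library is installed via Composer

$client = new MongoDB\Client('mongodb://localhost:27017');
$collection = $client->mongo_book->books;

$result = $collection->updateMany(
    array( 'price' => array( '$gte' => 60 ) ),   // filter: price >= 60
    array( '$mul' => array( 'price' => 0.9 ) )   // update: multiply price by 0.9
);

// Matched counts the documents selected by the filter; modified counts only
// those whose stored value actually changed.
print($result->getMatchedCount() . " matched, " . $result->getModifiedCount() . " modified\n");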
The ->getMatchedCount() and ->getModifiedCount() methods will return the number of documents matched by the query part and the number actually modified by the update, respectively. If the new value is the same as the existing value of a document, it will not be counted as modified.

We saw that it is fairly easy and advantageous to use PHP as a language and tool for performing CRUD operations in MongoDB and handling data efficiently. If you are interested in more information on how to effectively handle data using MongoDB, you may check out the book Mastering MongoDB 3.x.