
Tech News - Data

1208 Articles

Apple steals AI chief from Google

Richard Gall
04 Apr 2018
2 min read
Google are the leaders when it comes to artificial intelligence. Apple have somewhat fallen behind - where Google are perhaps known more for technical innovation and experimentation, Apple's success is built on its focus on design and customer experience. But that might be changing thanks to a high-profile coup: Apple have hired Google's chief of search and artificial intelligence, the New York Times reports. John Giannandrea, after 8 years working at Google, will be joining Apple to help drive the organization's machine learning and artificial intelligence projects forward.

Anyone who has used Siri will know that Apple have some catching up to do in terms of conversational UI - Amazon's Alexa and Google Assistant have captured the marketplace and seem to be defining the future.

One of the reasons Apple has struggled to keep pace with the likes of Google and Facebook, as noted by a number of news sites, is that they take a completely different approach to user data. As we've seen in recent weeks, Facebook have a huge wealth of data on users that extends beyond the limits of the platform - Google, in defining the foundations of many people's experiences of search, also has a huge amount of data on users. As the New York Times explains:

"Apple has taken a strong stance on protecting the privacy of people who use its devices and online services, which could put it at a disadvantage when building services using neural networks. Researchers train these systems by pooling enormous amounts of digital data, sometimes from customer services. Apple, however, has said it is developing methods that would allow it to train these algorithms without compromising privacy."

Giannandrea's perspective on AI would seem to be well aligned with Apple's philosophy. In a number of interviews and conference talks, he has played down talk of automation and humans becoming obsolete, instead urging people to consider the biases and ethical considerations of artificial intelligence.

Read more: Apple Recruits Google's Search and AI Chief John Giannandrea to Help Improve Siri [Gizmodo]; Apple hires Google's former AI boss to help improve Siri [The Verge]


The 5 biggest announcements from TensorFlow Developer Summit 2018

Sugandha Lahoti
02 Apr 2018
4 min read
The second TensorFlow Developer Summit was filled with exciting product announcements and technical talks from the TensorFlow team and guest speakers. Here are the 5 biggest features added to the TensorFlow machine learning framework, announced at the Summit.

TensorFlow.js: Machine Learning brought to your browser

Using TensorFlow.js, developers can now define, train, and run machine learning models entirely in the browser. This open-source library can be used from JavaScript and offers a high-level layers API. What does this mean from a developer's perspective? TensorFlow.js allows you to import an existing, pre-trained model - say a TensorFlow or Keras model - into the TensorFlow.js format. Developers can use transfer learning to re-train an imported model using only a small amount of data. What does this mean from a user's perspective? No need to install any libraries or drivers: just open a webpage, and your program is ready to run. TensorFlow.js automatically supports WebGL, so it will accelerate your code when a GPU is available. With TensorFlow.js, users may also open a webpage from a mobile device, where the model will take advantage of sensor data from the device's gyroscope or accelerometer. All the data stays on the client, making TensorFlow.js useful for privacy-preserving, low-latency inference. You can see TensorFlow.js in action by trying out the Emoji Scavenger Hunt game from a browser on your mobile phone.

TensorFlow Hub: A library for reusable Machine Learning modules in TensorFlow

The next major announcement at the TensorFlow Developer Summit was TensorFlow Hub. This platform is an aggregator to publish, discover, and reuse parts of machine learning models in TensorFlow. A module here refers to a self-contained piece of a TensorFlow graph, along with its weights, that can be reused across similar tasks. Reusing modules helps a developer train a model with a smaller dataset, improve generalization, or speed up training. TensorFlow Hub comes with two tools that help in finding potential issues in neural networks: the first is a graphical debugger for inspecting the artificial neurons of an AI, and the other visualizes how well the model as a whole analyzes large amounts of data.

TensorFlow Model Analysis

TFMA is an open-source library that combines the power of TensorFlow and Apache Beam to compute and visualize evaluation metrics. TFMA ensures that ML models meet specific quality thresholds and behave as expected for all relevant slices of data. TFMA uses Apache Beam to do a full pass over the specified evaluation dataset. This allows more accurate calculation of metrics and also scales up to massive evaluation datasets. TFMA lets developers visualize model metrics over time in a time series graph, showing metrics computed for a single model over multiple versions of the exported SavedModel. It also uses slicing metrics to analyze the performance of a model at a more granular level.

TensorFlow is now available in more languages and platforms

The TensorFlow Developer Summit also brought good news for Swift programmers: as of April 2018, TensorFlow for Swift will be open sourced. TensorFlow for Swift is more than just a language binding for TensorFlow. It integrates first-class compiler and language support, providing the full power of graphs with the usability of eager execution. TensorFlow Lite, TensorFlow's cross-platform solution for deploying trained ML models on mobile, also received major updates. It now features full support for Raspberry Pi and increased support for ops/models (including custom ops). The TensorFlow Lite core interpreter is now only 75 KB in size (vs 1.1 MB for TensorFlow), with speedups of up to 3x when running quantized image classification models.

New applications and domains opened using TensorFlow

The TensorFlow Developer Summit also made announcements pertaining to sectors beyond core deep learning and neural network models. The TensorFlow Probability API provides state-of-the-art methods for Bayesian analysis. This library contains building blocks like probability distributions, sampling methods, and new metrics and losses. The team has also released Nucleus, a library for reading, writing, and filtering common genomics file formats for use in TensorFlow. It was released along with DeepVariant, an open-source TensorFlow-based tool for genome variant discovery. Both tools are intended to spur new research and advances in genomics. The TensorFlow Developer Summit also showcased a new blog, YouTube channel, and other community resources.
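To make the TensorFlow.js workflow concrete, here is a small sketch of the Python side of importing a pre-trained Keras model into the TensorFlow.js format. It assumes the tensorflowjs pip package is installed; the toy model architecture and the output directory name are placeholders.

```python
import tensorflow as tf
import tensorflowjs as tfjs  # pip install tensorflowjs

# Build (or load) a Keras model in Python as usual.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Convert it to the TensorFlow.js web format: this writes a model.json file
# plus binary weight shards that the TensorFlow.js layers API can load, run,
# or fine-tune via transfer learning entirely in the browser.
tfjs.converters.save_keras_model(model, "tfjs_model_dir")
```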


NVIDIA open sources NVVL, a library for machine learning training

Sugandha Lahoti
30 Mar 2018
2 min read
NVIDIA has open sourced NVVL, a library that provides GPU-accelerated video decoding for deep learning training.

Quick rundown of NVIDIA NVVL: The NVVL library uses hardware acceleration to load sequences of video frames, speeding up the training of machine learning algorithms on video data. It uses FFmpeg's libraries to parse and read the compressed packets from video files, and the video decoding hardware available on NVIDIA GPUs to off-load and accelerate the decoding of those packets, providing a ready-for-training tensor in GPU device memory. NVVL can additionally perform data augmentation while loading the frames: frames can be scaled, cropped, and flipped horizontally using the GPU's dedicated texture mapping units. By using compressed video files instead of individual frame image files, NVVL significantly reduces the demands on the storage and I/O systems during training, saving up to 40x on storage space and bandwidth and reducing CPU load by 2x when training on video datasets.

NVVL dependencies: the CUDA Toolkit (NVVL works with versions 8.0 and above, and performs better with CUDA 9.0 or later), plus FFmpeg's libavformat, libavcodec, libavfilter, and libavutil. The FFmpeg libraries can be installed from source as in the example Dockerfiles, or from the Ubuntu 16.04 packages libavcodec-dev, libavfilter-dev, libavformat-dev, and libavutil-dev.

NVIDIA has also provided a super-resolution example project which quantifies the performance advantage of using NVVL. When training this example project on an NVIDIA DGX-1, the CPU load when using NVVL was 50-60% of the load seen when using a normal data loader for .png files. A wrapper for PyTorch is available, as most users will want to use the deep learning framework wrappers rather than the library directly. For a complete list of details and code files, visit the NVIDIA GitHub.
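As a rough illustration of what the PyTorch wrapper enables, here is a hypothetical usage sketch. The class names, argument names, and file paths below are illustrative assumptions rather than NVVL's documented API - check the NVIDIA GitHub for the actual wrapper interface.

```python
import nvvl  # PyTorch wrapper shipped with the NVVL repository (hypothetical import)

# Decode 16-frame training sequences directly from compressed .mp4 files on
# the GPU, instead of reading thousands of individual .png frame images.
dataset = nvvl.VideoDataset(           # hypothetical class name
    ["clip_000.mp4", "clip_001.mp4"],  # compressed video files
    sequence_length=16,                # frames per training sample
    device_id=0,                       # decode on GPU 0
)
loader = nvvl.VideoLoader(dataset, batchsize=8, shuffle=True)  # hypothetical loader

for batch in loader:
    frames = batch["input"]  # a tensor already resident in GPU device memory
    # ... run the model's forward/backward pass on `frames` here ...
    break
```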


Data Science News Daily Roundup – 29th March 2018

Packt Editorial Staff
29 Mar 2018
3 min read
TensorFlow 1.7.0 is out, the March release of SQL Operations Studio is now available, Google announced the integration of NVIDIA TensorRT and TensorFlow, HP introduces its new Z8 workstation for machine learning, and more in today's top stories around machine learning, deep learning, and data science news.

Top Data Science news of the Day

TensorFlow 1.7.0 released! Peep in to see new updates and improvements.

Other Data Science News at a Glance

Google announced the integration of NVIDIA TensorRT and TensorFlow. This new integration simplifies the path to use TensorRT from within TensorFlow with world-class performance. Read more on the Google Developers Blog.

Google Cloud Platform introduces Cloud Text-to-Speech, powered by DeepMind WaveNet technology. Cloud Text-to-Speech lets you choose from 32 different voices across 12 languages and variants. It can correctly pronounce complex text such as names, dates, times, and addresses for authentic-sounding speech right out of the gate. Read more on the Google Cloud Platform blog.

HP introduced its new HP Z8 workstation for machine learning development. The machine is based on Nvidia's Quadro GV100 platform, the latest version of its Quadro graphics processing unit (GPU) for workstations. Read more on VentureBeat.

NVIDIA and Arm are partnering to integrate NVIDIA's open-source Deep Learning Accelerator (NVDLA) architecture into Arm's Project Trillium platform for machine learning. Read more on Forbes.

Announcing the release of TopN, an open source PostgreSQL extension that returns the top values in a database. The TopN extension enables you to serve instant and approximate results to TopN queries. Read more on PostgreSQL.

Nvidia open sources NVVL, a library that provides GPU-accelerated video decoding for deep learning training. With NVVL, one can save 40x on storage space and bandwidth and reduce CPU load by 2x when training on video datasets. It's great for GPU-dense systems like the DGX-2. Read more on GitHub.

Google uses machine learning to discover neural network optimizers. Neural Optimizer Search makes use of a recurrent neural network controller which is given access to a list of simple primitives that are typically relevant for optimization. Read more on the Google Research Blog.

The March release of SQL Operations Studio is now available. Read more on the SQL Server Blog.

Ripple joins the Hyperledger blockchain consortium. Through this partnership with Hyperledger, Ripple developers will be able to access the Interledger Protocol (ILP) in Java for enterprise use. Read more on Coindesk.

pgbedrock is a new open source tool for managing access in one's PostgreSQL cluster. pgbedrock is an application for managing the roles, memberships, schema ownership, and, most importantly, the permissions for tables, sequences, and schemas in a Postgres database. Read more on GitHub.

Bing announces improvements and new scenarios for its intelligent search features, which tap into advances in AI to provide people with more comprehensive answers, faster. Read more on Bing's Blog.


TensorFlow 1.7.0 released: updates and improvements

Savia Lobo
29 Mar 2018
2 min read
Early this month, TensorFlow released its major version 1.6.0. Soon after that, they announced rc-0 and rc-1 for TensorFlow 1.7.0. And to our surprise, TensorFlow 1.7.0 has arrived much sooner than expected! Clearly, moving quickly is essential to the TensorFlow team. Both rc-0 and rc-1 gave us a taste of what to expect in TF 1.7.0. This release comes with some major improvements, features, bug fixes, and other changes.

Major features and improvements in TensorFlow 1.7.0: Eager mode is moving out of contrib - try tf.enable_eager_execution(). Graph rewrites emulating fixed-point quantization compatible with TensorFlow Lite are now supported by the new tf.contrib.quantize package. Easily customized gradient computation is now available with tf.custom_gradient. The TensorBoard Debugger Plugin, the graphical user interface (GUI) of the TensorFlow Debugger (tfdbg), is now in alpha. There is experimental support for reading a SQLite database as a Dataset with the new tf.contrib.data.SqlDataset. A distributed mutex / critical section has been added as tf.contrib.framework.CriticalSection. Better text processing comes with tf.regex_replace, and easy, efficient sequence input with tf.contrib.data.bucket_by_sequence_length.

Bug fixes in the TF 1.7.0 release include: Added MaxPoolGradGrad support for XLA and disabled the CSE pass from TensorFlow in XLA. Added support for building C++ Dataset op kernels as external libraries, using the tf.load_op_library() mechanism. Added support for scalars in tf.contrib.all_reduce. Deprecated tf.contrib.learn.

Additional changes are: Added a library for statistical testing of samplers, and helpers to stream data from the GCE VM to a Cloud TPU. Added TensorSpec to represent the specification of Tensors. Integrated TPUClusterResolver with GKE's integration for Cloud TPUs, and ClusterResolvers with TPUEstimator. Fixed the MomentumOptimizer lambda. The constant folding pass is now deterministic. Added support for the float16 dtype in tf.linalg.*.

Read the full release notes on TensorFlow's GitHub repository.
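As a quick illustration of the headline feature, here is a minimal sketch of eager execution, assuming a TensorFlow 1.7 environment; the tf.contrib.eager helper used for gradients is from memory and may sit under a slightly different path in your install.

```python
import tensorflow as tf
import tensorflow.contrib.eager as tfe

tf.enable_eager_execution()  # ops now run immediately, no Session required

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.matmul(x, x))  # the result prints right away, like NumPy

def loss(w):
    # A toy scalar loss so we can ask for gradients directly.
    return tf.reduce_sum(tf.square(tf.matmul(x, w) - 1.0))

w = tf.ones([2, 1])
grad_fn = tfe.gradients_function(loss)  # d(loss)/dw, computed eagerly
print(grad_fn(w))
```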


How Google’s DeepMind is creating images with artificial intelligence

Sugandha Lahoti
28 Mar 2018
2 min read
The research team at DeepMind has been using deep reinforcement learning agents to generate images the way humans do. DeepMind's AI agents understand how digits, characters, and portraits are actually constructed, instead of analyzing the pixels that represent them on a screen. The agents interact with a computer paint program, placing strokes on a digital canvas and changing the brush size, pressure, and color.

How does DeepMind generate images? As part of the initial training process, the agent starts by drawing random strokes with no visible intent or structure. Following the reinforcement learning approach, the agent is then 'rewarded', which 'encourages' it to produce meaningful drawings. To monitor the performance of the first network, DeepMind trained a second neural network, called the discriminator. This discriminator predicts whether a particular drawing was produced by the agent, or whether it was sampled from a dataset of real photographs. The painting agent is rewarded by how much it manages to "fool" the discriminator into thinking that the drawings are real.

Most importantly, DeepMind's AI agents produce images by writing graphics programs to interact with a paint environment. This is different from how a GAN works, where the generator directly outputs pixels. Moreover, the model can also apply what it has learned in the simulated paint program to re-create characters in other similar environments. This is because the framework is interpretable, in the sense that it produces a sequence of motions that control a simulated brush.

Training DeepMind's AI agents: The agent was trained to generate images resembling MNIST digits: it was shown what the digits look like, but not how they are drawn. By attempting to generate images that fool the discriminator, the agent learned to control the brush and to maneuver it to fit the style of different digits. The model was also trained to reproduce specific images on real datasets. When trained to paint celebrity faces, the agent is capable of capturing the main traits of the face, such as shape, tone, and hairstyle, much like a street artist would when painting a portrait with a limited number of brush strokes.

Source: DeepMind Blog. For further details on methodology and experimentation, read the research paper.
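To make the reward signal concrete, here is a minimal, hypothetical PyTorch sketch of the idea described above: a discriminator scores the agent's canvas, and that score becomes the reinforcement-learning reward for the drawing agent. The network shape, canvas size, and the policy-gradient update it would feed into are illustrative assumptions, not DeepMind's implementation.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Predicts the probability that a 64x64 grayscale canvas is a real image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 1),
        )

    def forward(self, canvas):
        return torch.sigmoid(self.net(canvas))

discriminator = Discriminator()

def agent_reward(canvas):
    # The painting agent is rewarded by how strongly the discriminator believes
    # its canvas came from the real dataset; this per-canvas scalar would feed
    # a policy-gradient update of the agent's stroke-placing policy.
    with torch.no_grad():
        return discriminator(canvas).squeeze(1)

# Example: a batch of 8 blank canvases receives a (presumably low) reward.
blank_canvases = torch.zeros(8, 1, 64, 64)
print(agent_reward(blank_canvases))
```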

Data Science News Daily Roundup – 27th March 2018

Packt Editorial Staff
27 Mar 2018
2 min read
Scala 2.12.5 releases, the Reticulate package for interfacing R and Python, JSON comes to CockroachDB, and more in today's top stories around machine learning, deep learning, and data science news.

Top Data Science News Stories of the Day

Scala 2.12.5 is here! R interface to Python via the Reticulate package.

Other Data Science News at a Glance

TensorFlow 1.7.0-rc1 has been released. With TensorFlow 1.7.0-rc1, the TensorBoard Debugger Plugin, the graphical user interface (GUI) of the TensorFlow Debugger (tfdbg), is now in alpha. Also, eager mode is moving out of contrib. See the full release notes on GitHub.

CockroachDB has announced support for JSON in its most recent 2.0 beta release. Developers can now use both structured and semi-structured data within the same database. Read more on the CockroachDB Blog.

Microsoft has announced the general availability of Clustered and NonClustered Columnstore indexes in Standard tier Azure SQL Databases. Application vendors can now develop an application which leverages columnstore functionality and deploy it on both Standard and Premium performance tiers. Read more on the Microsoft Azure Blog.

IBM has launched the Model Asset eXchange. MAX is effectively an app store for free machine learning models, helping developers and data scientists easily discover, rate, and deploy AI. Read more on InsideHPC.

MongoDB is now available on Google Cloud Platform (GCP) through MongoDB's simple-to-use, fully managed Database as a Service (DBaaS) product, MongoDB Atlas. Read more on the Google Cloud Platform Blog.

Kaggle API v1.1 released, to programmatically create and maintain datasets via the command line. Read more on the Kaggle Blog.

The Linux Foundation has launched the LF Deep Learning Foundation. The new organization is designed to support and maintain open source innovation in artificial intelligence, machine learning, and deep learning. Read more on SDTimes.

Microsoft announced that it uses Brainwave, specialized hardware for AI computation, to get more than 10 times faster performance for a machine learning model that powers functionality of its Bing search engine. Read more on VentureBeat.

H2O.ai unveils H2O4GPU and Driverless AI for the latest NVIDIA CUDA 9 and Tesla V100 platforms. Read more on BusinessWire.

AMAX plans to showcase its deep learning and AI solutions at NVIDIA's GPU Technology Conference (GTC) from March 27-29 at the San Jose McEnery Convention Center. Read more on PRNewswire.


R interface to Python via the Reticulate Package

Savia Lobo
27 Mar 2018
2 min read
Announcing the Reticulate package, an R interface to Python. This package consists of a comprehensive set of tools for interoperability between Python and R. With this new package, one can: call Python from R in several ways, including R Markdown, sourcing Python scripts, importing Python modules, and using Python interactively within an R session; translate between R and Python objects (for example, between R and Pandas data frames, or between R matrices and NumPy arrays); and bind to different versions of Python, including virtual environments and Conda environments, in a flexible manner.

Reticulate embeds a Python session within one's R session, enabling seamless, high-performance interoperability. It can dramatically streamline the workflow for R developers who use Python for their experiments, or for members of data science teams that use both languages.

Python in R Markdown

The reticulate package also includes a Python engine for R Markdown with the following features: It runs Python chunks in a single Python session embedded within one's R session (shared variables/state between Python chunks). It prints Python output, including graphical output from matplotlib. It gives access to objects created within Python chunks from R using the py object (e.g. py$x would access an x variable created within Python from R), and access to objects created within R chunks from Python using the r object (e.g. r.x would access the x variable created within R from Python). Built-in conversion is provided for many Python object types, including NumPy arrays and Pandas data frames. One can also use Pandas to read and manipulate data, and easily plot the Pandas data frame using ggplot2.

Read more about the Reticulate package in detail on the RStudio GitHub repo.


Scala 2.12.5 is here!

Sugandha Lahoti
27 Mar 2018
2 min read
Scala 2.12.5 has been released. Scala is a popular programming language for data scientists, mostly favored by aspiring or seasoned data scientists who plan to work with Apache Spark for Big Data analysis. The new 2.12.5 version brings four major highlights. Most importantly, Scala 2.12.5 is binary compatible with the whole Scala 2.12 series.

Major highlights: When compiling on Java 9 or higher, the new -release N flag changes the compilation classpath to match JDK version N. This works for the JDK itself and for multi-release JARs on the classpath. With the new -Ybackend-parallelism N compiler flag, the backend can now run bytecode serialization, classfile writing, and method-local optimizations (-opt:l:method) in parallel on N threads. The raw"" and s"" string interpolators are now intercepted by the compiler to produce more efficient bytecode. The -Ycache-plugin-class-loader and -Ycache-macro-class-loader flags enable caching of classloaders for compiler plugins and macro definitions, which can lead to significant performance improvements.

Other features include: The apply method on the PartialFunction companion object is now deprecated. Scala JARs (library, reflect, compiler) now have an Automatic-Module-Name attribute in their manifests. Enabling unused warnings now leads to fewer false positives. Explicit eta-expansion (foo _) of a nullary method no longer gives a deprecation warning.

Scala releases are available through a variety of channels, including: bumping the scalaVersion setting in your sbt-based project, downloading a distribution from scala-lang.org, or obtaining JARs via Maven Central. However, there is a regression since 2.12.4 when compiling code on Java 9 or 10 that uses macros; users must either compile on Java 8 or wait for 2.12.6. You can check out all closed bugs and merged PRs for further details.


Paper in Two minutes: A novel method for resource efficient image classification

Sugandha Lahoti
23 Mar 2018
4 min read
This ICLR 2018 accepted paper, Multi-Scale Dense Networks for Resource Efficient Image Classification, introduces a new model to perform image classification with limited computational resources at test time. The paper is authored by Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, and Kilian Weinberger. The 6th annual ICLR conference is scheduled to take place between April 30 and May 3, 2018.

Using a multi-scale convolutional neural network for resource-efficient image classification

What problem is the paper attempting to solve? Recent years have witnessed a surge in demand for applications of visual object recognition, for instance in self-driving cars and content-based image search. This demand has been driven by the astonishing progress of convolutional networks (CNNs), where state-of-the-art models may have even surpassed human-level performance. However, most of these are complex models with high computational demands at inference time. In real-world applications, computation is never free; it directly translates into power consumption, which should be minimized for environmental and economic reasons. Ideally, a system should automatically use small networks when test images are easy or computational resources are limited, and use big networks when test images are hard or computation is abundant.

In order to develop resource-efficient image recognition, the authors aim to build CNNs that slice the computation and process these slices one by one, stopping the evaluation once the CPU time is depleted or the classification is sufficiently certain. Unfortunately, CNNs learn the data representation and the classifier jointly, which leads to two problems: first, the features in the last layer are extracted directly to be used by the classifier, whereas earlier features are not; second, the features in different layers of the network may have a different scale. Typically, the first layers of deep nets operate on a fine scale (to extract low-level features), whereas later layers transition to coarse scales that allow global context to enter the classifier. The authors propose a novel network architecture that addresses both problems through careful design changes, allowing for resource-efficient image classification.

Paper summary: The model is based on a multi-scale convolutional neural network similar to the neural fabric, but with dense connections and with a classifier at each layer. This novel network architecture, called Multi-Scale DenseNet (MSDNet), addresses both of the problems described above (classifiers altering the internal representation, and the lack of coarse-scale features in early layers) for resource-efficient image classification. The network uses a cascade of intermediate classifiers throughout the network. The first problem is addressed through the introduction of dense connectivity: by connecting all layers to all classifiers, features are no longer dominated by the most imminent early exit, and the trade-off between early or later classification can be performed elegantly as part of the loss function. The second problem is addressed by adopting a multi-scale network structure: at each layer, features of all scales (fine to coarse) are produced, which facilitates good classification early on but also extracts low-level features that only become useful after several more layers of processing.

Key takeaways: MSDNet is a novel convolutional network architecture optimized to incorporate CPU budgets at test time. The design is based on two high-level design principles: generating and maintaining coarse-level features throughout the network, and interconnecting the layers with dense connectivity. The final network design is a two-dimensional array of horizontal and vertical layers, which decouples depth and feature coarseness. Whereas in traditional convolutional networks features only become coarser with increasing depth, the MSDNet generates features of all resolutions from the first layer on and maintains them throughout. Through experiments, the authors show that their network outperforms all competitive baselines on an impressive range of budgets, from highly limited CPU constraints to almost unconstrained settings.

Reviewer feedback summary: Overall score: 25/30. Average score: 8.33. The reviewers found the approach to be natural and effective, with good results. They found the presentation clear and easy to follow, and the structure of the network clearly justified. The reviewers found the use of dense connectivity to avoid the loss of performance from using early-exit classifiers interesting. They appreciated the results and found them quite promising, with 5x speed-ups and the same or better accuracy than previous models. However, some reviewers pointed out that the results about the more efficient densenet* could be shown in the main paper.
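To illustrate the cascade-of-intermediate-classifiers idea from the paper summary above, here is a minimal PyTorch sketch (not the authors' MSDNet implementation): classifiers are attached to intermediate layers, and inference stops early once a prediction is confident enough, so easy images cost less computation. Layer sizes, the confidence threshold, and the exit rule are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        # One classifier per block: a "cascade of intermediate classifiers".
        self.exit1 = nn.Linear(32, num_classes)
        self.exit2 = nn.Linear(64, num_classes)

    def forward(self, x, confidence_threshold=0.9):
        feats = self.block1(x)
        pooled = F.adaptive_avg_pool2d(feats, 1).view(x.size(0), -1)
        logits1 = self.exit1(pooled)
        probs1 = F.softmax(logits1, dim=1)
        # If every image in the batch is already classified confidently,
        # skip the remaining (more expensive) layers entirely.
        if probs1.max(dim=1).values.min() >= confidence_threshold:
            return logits1
        feats = self.block2(feats)
        pooled = F.adaptive_avg_pool2d(feats, 1).view(x.size(0), -1)
        return self.exit2(pooled)

model = EarlyExitNet()
images = torch.randn(4, 3, 32, 32)
print(model(images).shape)  # torch.Size([4, 10])
```

In MSDNet the intermediate classifiers are additionally fed densely connected, multi-scale features and are trained jointly through a weighted loss; this sketch only shows the budgeted, early-exit inference side of the idea.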

Microsoft spring updates for PowerBI and PowerApps

Savia Lobo
23 Mar 2018
2 min read
Microsoft announced some significant spring updates to its products, including Common Data Service updates to Power BI and PowerApps.

Common Data Service for Analytics comes to Power BI

The CDS for Analytics capability will reduce the complexity of driving business analytics across data from business apps and other sources. Its features include:

Common data schema - Common Data Service for Analytics expands Power BI by introducing an extensible business application schema on which organizations can integrate data from multiple sources.

Accelerated access to insights - With the new CDS for Analytics capability, customers will have the opportunity to purchase apps from Microsoft and its partners built on Power BI.

App customization and extension - Users will be able to custom-tailor reports or build new ones uniquely relevant to their needs, using data services like Azure Machine Learning and Azure Databricks, through self-service customizations in low-code/no-code experiences in Power BI, irrespective of their skillset.

Bringing the power of Dynamics 365 to PowerApps

Microsoft plans to amalgamate PowerApps with the Business Application Platform, the platform that powers Dynamics 365. Microsoft took the Common Data Service, merged its features with the Dynamics 365 platform, and renamed it the Common Data Service for Apps to reflect the new functionality. The Common Data Service (CDS) will be adding capabilities such as server-side logic, business processes, advanced security, and pro developer support to PowerApps. This update will also introduce a new style of app building known as model-driven apps. These apps will automatically generate rich user experiences based on the data and processes in the Common Data Service. PowerApps built on the canvas get new capabilities for working with the Common Data Service as well.

For these and the other updates rolled out by Microsoft, read the Microsoft Blog.


D3.js v5.0 released!

Savia Lobo
23 Mar 2018
2 min read
D3.js version 5.0 has been released. D3.js is a JavaScript library for creating dynamic, interactive data visualizations in web browsers. The new version 5.0 includes only a few non-backwards-compatible changes. D3 now uses Promises instead of asynchronous callbacks to load data. Promises simplify the structure of asynchronous code, especially in modern browsers that support async and await.

Let us look at some of the changes in this version: The d3-request module has been replaced by d3-fetch, due to the adoption of Promises. D3 5.0 also deprecates and removes the d3-queue module; one can use Promise.all to run a batch of asynchronous tasks in parallel, or a helper library such as p-queue to control concurrency. D3 now includes d3-scale-chromatic, which implements excellent schemes from ColorBrewer, including categorical, diverging, sequential single-hue, and sequential multi-hue schemes. Version 5.0 also provides implementations of marching squares and density estimation via d3-contour. There are two new d3-selection methods: selection.clone for inserting clones of the selected nodes, and d3.create for creating detached elements. D3's package.json no longer pins exact versions of the dependent D3 modules; this fixes an issue with duplicate installs of D3 modules.

To read more about the changes in detail, visit the GitHub repo.


Data Science News Daily Roundup – 22nd March 2018

Sugandha Lahoti
22 Mar 2018
2 min read
DeepMind's neuron deletion research, MachineLabs goes open source, IBM's Snap Machine Learning, and more in today's top stories around machine learning, deep learning, and data science news.

Top Data Science News Stories of the Day

MachineLabs, the browser-based machine learning platform, goes open source. Snap Machine Learning: the 46x-faster-than-TensorFlow ML library by IBM.

Other Data Science News at a Glance

In a recent paper, DeepMind has used neuron deletion to analyze neural networks. They found that interpretable neurons are no more important than confusing neurons for image classification, and that networks that generalize better are harder to break. Read more on the DeepMind Blog.

Bokeh partners with NumFOCUS to launch another strong open-source visualization ecosystem. With the addition of Bokeh, the NumFOCUS fiscal sponsorship program now encompasses 22 open source scientific computing projects. Read more on the NumFOCUS Blog.

Paperspace launches an AI Platform-as-a-Service offering called Gradient, based on the serverless delivery model. Gradient removes the friction involved in launching and configuring GPU-backed VMs to train machine learning and deep learning models. Read more on Forbes.

There are new features in InfluxDB open source backup and restore. The open source backup utility now provides the option to run both backup and restore functions on a live database. Read more on the InfluxData blog.

BigchainDB has announced the name of its next release as BigchainDB 2.0. BigchainDB 2.0 uses version 2.0 transactions, as documented in the IPDB Transaction Spec version 2.0. The release of BigchainDB 2.0 means that BigchainDB 1.x has reached the end of the line. Read more on the BigchainDB Blog.

MachineLabs, the browser based machine learning platform, goes open source

Sugandha Lahoti
22 Mar 2018
2 min read
MachineLabs has released the entire code base of its machine learning platform as open source under the MIT license. Following this announcement, all work at the organization will happen completely publicly, giving everyone the chance to join the effort and fork and fiddle with the code.

MachineLabs is an open source online platform for machine learning. It is accessible, shareable, and explorable via the web. It has ready-to-use environments for the main ML frameworks, including TensorFlow, Theano, PyTorch, and Caffe. It also has an online code editor and access to blazingly fast GPU hardware. Moreover, it can expose generated assets via an API, enabling users to request trained models for browser-based front-end apps.

Their core mission, MachineLabs says, "has always been to empower the Machine Learning community", and "we believe it is critical for us to be as open and transparent as possible." The open source announcement is MachineLabs' first move to make their ML platform 100% open and transparent, and entirely owned and governed by the community. In addition, the community that creates, runs, and funds the ML platform will essentially also own and steer the revenue-generating MachineLabs service. In order for this to happen, they are looking at platforms such as Aragon and Habour to implement a decentralized governance model.

MachineLabs researchers have also started working on a Machine Learning online course with a radical stance on simplicity, to further help people get started in the machine learning field. They have created a Developer Guide to make it as easy as possible for everyone to contribute to the platform, and they ask contributors to go through the Contributing Guidelines to make every contribution as smooth as possible.

To learn more about the open source announcement, read the official MachineLabs blog.


Snap machine learning: The 46x faster than TensorFlow ML library by IBM

Savia Lobo
22 Mar 2018
2 min read
IBM claims that its new Snap Machine Learning library is 46x faster than TensorFlow. IBM's Snap Machine Learning (Snap ML) is an efficient, scalable machine-learning library that enables very fast training of generalized linear models. IBM demonstrated that the new library can eliminate training time as a bottleneck for machine-learning workloads, paving the way to a range of new applications. Most recently, the Snap ML library set a new benchmark by training an ML model 46 times faster than TensorFlow. (Source: IBM Research)

Using the online advertising dataset released by Criteo Labs, which includes more than 4 billion training examples, IBM trained a logistic regression classifier in 91.5 seconds. Prior to this, the best result for training the same model belonged to TensorFlow, which trained it in 70 minutes on Google Cloud Platform.

The Snap ML library allows more agile development, faster and more fine-grained exploration of the hyper-parameter space, scaling to massive datasets, and frequent retraining of models in order to adapt to events as they occur. (Source: Snap Machine Learning research paper)

Let's take a look at the three distinct features of the Snap ML library:

Distributed training: This feature builds a data-parallel framework, allowing one to scale out and train on massive datasets that exceed the memory capacity of a single machine, which is crucial for large-scale applications.

GPU acceleration: Specialized solvers are implemented to leverage the massively parallel architecture of GPUs while respecting the data locality in GPU memory, in order to avoid large data transfer overheads.

Sparse data structures: Many machine learning datasets are sparse, so new optimizations have been added to the algorithms used in IBM's system when applied to sparse data structures.

Read more about this exciting news in detail on IBM Research.
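For a sense of how such a library might be used, here is a hypothetical Python sketch assuming Snap ML exposes a scikit-learn-style estimator with a GPU switch; the import path, class name, and parameters below are assumptions for illustration rather than IBM's documented API.

```python
import numpy as np
from snap_ml import LogisticRegression  # hypothetical import path

# A small synthetic stand-in for a click-prediction dataset (the real Criteo
# dataset has billions of rows and is sparse).
X = np.random.rand(100_000, 50).astype(np.float32)
y = np.random.randint(0, 2, size=100_000)

# use_gpu is an assumed flag for the GPU-accelerated solver described above.
clf = LogisticRegression(use_gpu=True, max_iter=20)
clf.fit(X, y)
print(clf.predict(X[:5]))
```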