
Tech News - Data

1208 Articles

Data Science News Daily Roundup – 21st March 2018

Packt Editorial Staff
21 Mar 2018
2 min read
Microsoft SQL Server Management Studio 17.6, IBM's Deep Learning as a Service program, Intel's nGraph, and more in today's top stories around machine learning, deep learning, and data science news.

Top Data Science News Stories of the Day

- Azure Database services are now generally available for MySQL and PostgreSQL.
- Microsoft announces the release of SSMS, SQL Server Management Studio 17.6.
- IBM rolls out Deep Learning as a Service (DLaaS) program for AI developers.

Other Data Science News at a Glance

- Intel AI open sources nGraph, a framework-neutral Deep Neural Network (DNN) model compiler. With nGraph, data scientists can focus on data science rather than worrying about how to adapt their DNN models to train and run efficiently on different devices. Read more on the Intel AI Blog.
- Databricks introduces a new millisecond low-latency mode of streaming called continuous mode in Apache Spark 2.3, now available in Databricks Runtime 4.0 as part of the Databricks Unified Analytics Platform. Read more on the Databricks Blog.
- IBM announces the launch of IBM Watson Data Kits, which are designed to accelerate the development of AI applications. Watson Data Kits will provide companies with pre-enriched, machine-readable, industry-specific data for building AI projects. Read more on PR Newswire.
- DeepL introduces the DeepL API as part of DeepL Pro. The DeepL API is an interface that allows other computer programs to send texts to DeepL servers and receive high-quality translations. Read more on the DeepL Blog.
- The Altair 2.0 release candidate is now available, with full support for interactive Vega-Lite visualizations in Python. It features a declarative grammar of interactions for building sophisticated interactive data views from easy-to-understand building blocks (see the sketch below). Read more on the Altair GitHub.
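The Altair item is easiest to appreciate with a few lines of code. Here is a minimal sketch of an interactive Vega-Lite chart built with Altair's declarative grammar of interactions; the toy DataFrame and the brush-to-colour encoding are illustrative choices, not taken from the Altair announcement:

```python
import altair as alt
import pandas as pd

# A small toy dataset; any tidy DataFrame works the same way.
df = pd.DataFrame({
    "x": [1, 2, 3, 4, 5, 6],
    "y": [3, 1, 4, 1, 5, 9],
    "category": ["a", "a", "b", "b", "c", "c"],
})

# An interval selection ("brush") is one of Altair's interaction building blocks.
brush = alt.selection_interval()

chart = (
    alt.Chart(df)
    .mark_point(size=80)
    .encode(
        x="x:Q",
        y="y:Q",
        # Points inside the brush keep their category colour;
        # points outside fall back to light grey.
        color=alt.condition(brush, "category:N", alt.value("lightgray")),
    )
    .add_selection(brush)
)

# Writes an HTML file embedding the generated Vega-Lite spec.
chart.save("interactive_scatter.html")
```

Opening the saved HTML file in a browser gives a scatter plot you can brush with the mouse, with the colour encoding updating live.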


IBM rolls out Deep Learning as a Service (DLaaS) program for AI developers

Savia Lobo
21 Mar 2018
2 min read
On March 20, 2018, IBM launched its brand new Deep Learning as a Service (DLaaS) program for AI developers. Deep Learning as a Service, which runs on IBM Watson, is an experiment-centric model training environment. This means users don't have to worry about getting bogged down with planning and managing training runs themselves. Instead, the entire training life-cycle is managed automatically, and the results can be viewed in real time and revisited later.

DLaaS lets data scientists train models using the resources they need and pay only for the GPU time they use. Users can train their neural networks with a range of deep learning frameworks such as TensorFlow, PyTorch, and Caffe, without having to buy and maintain the hardware themselves. To use the service, users simply prepare their data, upload it, begin training, and then download the training results. This can potentially snip days or weeks off training times. For instance, if a single-GPU setup takes nearly a week to train a visual image-processing neural network on a couple of million pictures, the same job can be cut down to mere hours with this new cloud solution.

Maintaining deep learning systems also requires manpower: scaling a project beyond a cluster of just a few GPUs demands an entirely different skill set than training neural networks, and DLaaS takes that operational burden off developers' hands.

To read more about this in detail, check out IBM's blog post.


Microsoft announces the release of SSMS, SQL Server Management Studio 17.6.

Sugandha Lahoti
21 Mar 2018
2 min read
Microsoft has released SSMS 17.6, the latest generation of SQL Server Management Studio, with support for SQL Server 2017.

SQL Server Management Studio (SSMS) can be used to query, design, and manage databases and data warehouses, whether on the local computer or in the cloud. It provides tools to configure, monitor, and administer instances of SQL Server, and can be used to deploy, monitor, and upgrade the data-tier components of applications, as well as to build queries and scripts.

SQL Server Management Studio 17.6 adds support for Azure SQL Database Managed Instance. This new flavor of Azure SQL Database provides:

- Near 100% compatibility with SQL Server on-premises.
- A native virtual network (VNet) implementation that addresses common security concerns.
- A business model favorable for on-premises SQL Server customers.
- Support for common management scenarios such as creating, altering, backing up, and restoring databases; importing, exporting, extracting, and publishing Data-tier Applications; viewing and altering server properties; and support for the full Object Explorer, SQL Agent jobs, Linked Servers, and more.

SSMS 17.6 also features changes in the Object Explorer. Microsoft has added a setting not to force brackets around names when dragging and dropping from Object Explorer to the Query Window. It has also improved Integration Services by adding support for deploying packages to a SQL Database Managed Instance. In addition, SSMS 17.6 is labeled as Microsoft SQL Server Management Studio 17 and has a new icon. (Source: Microsoft SSMS docs)

Version 17.6 of SSMS works with all supported versions of SQL Server, from SQL Server 2008 through SQL Server 2017, and provides support for working with the cloud features in Azure SQL Database and Azure SQL Data Warehouse. Additionally, SSMS 17.6 can be installed side by side with SSMS 16.x, SQL Server 2014 SSMS, and earlier versions.

You can read the entire release notes on the Microsoft SSMS docs.


Azure Database services are now generally available for MySQL and PostgreSQL

Savia Lobo
21 Mar 2018
1 min read
Microsoft recently announced that its Azure Database services for MySQL and PostgreSQL are now generally available. General availability brings:

- The community versions of MySQL and PostgreSQL with built-in high availability
- A 99.99% availability SLA
- Elastic scaling for performance
- Industry-leading security and compliance on Azure

Some key features of Azure Database for MySQL and PostgreSQL:

- A fully managed, enterprise-ready community MySQL or PostgreSQL database-as-a-service for developing and deploying applications. Users can easily lift and shift their workloads to the cloud and use the languages and frameworks of their choice (see the connection sketch below).
- Built-in high availability and the ability to scale in seconds, enabling easy adjustment to changes in customer demand.
- Unparalleled security and compliance, including Azure IP Advantage, as well as Azure's industry-leading reach with more data centers.
- A flexible pricing model with choices in resources for one's workload.

Read more about this news in Microsoft's official blog post.
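Since "lift and shift" here largely means pointing an existing driver at a managed endpoint, a short hedged sketch may help: connecting to an Azure Database for MySQL server from Python with the standard MySQL Connector. The server name, credentials, and CA file below are placeholders for illustration; check the Azure portal for the exact values and SSL requirements of your own server.

```python
import mysql.connector  # pip install mysql-connector-python

# All connection details below are hypothetical placeholders.
conn = mysql.connector.connect(
    host="mydemoserver.mysql.database.azure.com",  # hypothetical server name
    user="myadmin@mydemoserver",                   # Azure typically expects user@servername
    password="<your-password>",
    database="appdb",
    ssl_ca="BaltimoreCyberTrustRoot.crt.pem",      # CA bundle for the SSL connection
)

cursor = conn.cursor()
cursor.execute("SELECT VERSION()")
print("Connected to MySQL version:", cursor.fetchone()[0])

cursor.close()
conn.close()
```

The same pattern applies to PostgreSQL with the usual psycopg2 driver; only the host, port, and driver change.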


What is Meta Learning?

Sugandha Lahoti
21 Mar 2018
5 min read
Meta learning, originally a concept from cognitive psychology, is now applied to machine learning techniques. By the social psychology definition, meta learning is the state of being aware of and taking control of one's own learning. When a similar concept is applied to machine learning, it means that a meta learning algorithm uses prior experience to change certain aspects of another algorithm, such that the modified algorithm is better than the original. To put it simply, meta learning is how an algorithm learns how to learn.

Meta Learning: making a versatile AI agent

Current AI systems excel at mastering a single skill: playing Go, holding human-like conversations, predicting a disaster, and so on. However, now that AI and machine learning are being integrated into everyday tasks, we need a single AI system that can solve a variety of problems. Currently, a Go-playing agent cannot navigate the roads or find new places, and an AI navigation controller cannot hold a perfect human-like conversation. What machine learning algorithms need to develop is versatility: the capability of doing many different things.

Versatility is achieved by intelligently combining meta learning with related techniques such as reinforcement learning (finding suitable actions to maximize a reward), transfer learning (re-purposing a model trained on one task for a second, related task), and active learning (where the learning algorithm chooses the data it wants to learn from). Together, these techniques give an AI agent the brains to do multiple tasks without having to learn every new task from scratch, making it capable of adapting intelligently to a wide variety of new, unseen situations.

Apart from creating versatile agents, recent research also focuses on using meta learning for hyperparameter and neural network optimization, fast reinforcement learning, finding good network architectures, and specific cases such as few-shot image recognition. Using meta learning, AI agents learn how to learn new tasks by reusing prior experience, rather than examining each new task in isolation.

Various approaches to Meta Learning algorithms

A wide variety of approaches come under the umbrella of meta learning. Let's have a quick glance at these algorithms and techniques.

Algorithm learning (selection)
Algorithm selection chooses a learning algorithm on the basis of the characteristics of an instance. For example, suppose you have a set of ML algorithms (Random Forest, SVM, DNN), data sets as the instances, and the error rate as the cost metric. The goal of algorithm selection is then to predict which machine learning algorithm will have a small error on each data set.

Hyper-parameter optimization
Many machine learning algorithms have numerous hyper-parameters that can be optimized, and the choice of these hyper-parameters determines how well the algorithm learns. A recent paper, "Evolving Deep Neural Networks", presents a meta learning algorithm for optimizing deep learning architectures through evolution.

Ensemble methods
Ensemble methods combine several models or approaches to achieve better predictive performance. There are three basic types: bagging, boosting, and stacked generalization. In bagging, each model runs independently and the outputs are aggregated at the end without preference for any model. Boosting refers to a group of algorithms that use weighted averages to turn weak learners into stronger learners; boosting is all about "teamwork". Stacked generalization has a layered architecture: each set of base classifiers is trained on a dataset, successive layers receive as input the predictions of the immediately preceding layer, and a single classifier at the topmost level produces the final prediction.

Dynamic bias selection
In dynamic bias selection, the bias of the learning algorithm is adjusted dynamically to suit the new problem instance. The performance of a base learner can trigger the need to explore additional hypothesis spaces, normally through small variations of the current hypothesis space. The bias selection can either be a form of data variation or a time-dependent feature.

Inductive transfer
Inductive transfer describes learning using previous knowledge from related tasks, done by transferring meta-knowledge across domains or tasks. The goal here is to incorporate the meta-knowledge into the new learning task rather than matching meta-features against a meta-knowledge base.

Adding enhancements to Meta Learning algorithms

Supervised meta-learning: the meta-learner is trained with supervised learning. In supervised learning we have both input and output variables, and the algorithm learns the mapping function from the input to the output.

RL meta-learning: this approach uses standard deep RL techniques to train a recurrent neural network in such a way that the recurrent network can then implement its own reinforcement learning procedure.

Model-agnostic meta-learning: MAML trains over a wide range of tasks for a representation that can be quickly adapted to a new task via a few gradient steps. The meta-learner seeks an initialization that is not only useful for adapting to various problems but can also be adapted quickly (see the sketch below).

The ultimate goal of any meta learning algorithm and its variations is to be fully self-referential, meaning it can automatically inspect and improve every part of its own code. A regenerative meta learning algorithm, along the lines of how a lizard regenerates its limbs, would not only blur the distinction between the variations described above but would also lead to better future performance and versatility of machine learning algorithms.
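To make the MAML idea concrete, here is a minimal, self-contained sketch of the inner/outer gradient loop on a toy family of 1-D linear-regression tasks. It uses plain NumPy and the first-order MAML approximation (the Hessian term is dropped); the task distribution, model, and step sizes are invented for illustration and this is not the authors' reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A task is a 1-D regression problem y = a * x with a random slope a."""
    a = rng.uniform(0.5, 2.5)
    def batch(n=10):
        x = rng.uniform(-1.0, 1.0, size=n)
        return x, a * x
    return batch

def loss_grad(w, x, y):
    """Mean squared error of the 1-parameter model y_hat = w * x, plus its gradient."""
    err = w * x - y
    return np.mean(err ** 2), 2.0 * np.mean(err * x)

w = 0.0                   # meta-parameter: the shared initialisation being learned
alpha, beta = 0.5, 0.05   # inner and outer learning rates (illustrative values)

for step in range(2000):
    meta_grad = 0.0
    for _ in range(5):                        # a small meta-batch of tasks
        batch = sample_task()
        x_tr, y_tr = batch()
        _, g_tr = loss_grad(w, x_tr, y_tr)
        w_adapted = w - alpha * g_tr          # one inner gradient step on the task
        x_val, y_val = batch()                # fresh samples from the same task
        _, g_val = loss_grad(w_adapted, x_val, y_val)
        meta_grad += g_val                    # first-order MAML meta-gradient
    w -= beta * meta_grad / 5.0               # outer (meta) update of the initialisation

print("meta-learned initialisation:", round(w, 3))
```

The meta-learned initialisation settles near the centre of the task distribution, which is exactly the point from which a few gradient steps adapt fastest to any individual task.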


Data Science News Daily Roundup – 20th March 2018

Packt Editorial Staff
20 Mar 2018
2 min read
Windows ML for game development, NASA's blockchain in deep space, IBM's new computer smaller than a grain of salt, and more in today's top stories around machine learning, deep learning, and data science news.

Top Data Science News Stories of the Day

- Microsoft plans to use Windows ML for game development
- Watson-CoreML: IBM and Apple's new machine learning collaboration project

Other Data Science News at a Glance

1. NASA is using blockchain to help build intelligent computer networks in deep space, far from a centralized computer hub. Read more on WOSU Radio.
2. IBM has created a computer smaller than a grain of salt. The computer will cost less than ten cents to manufacture and will also pack several hundred thousand transistors. Read more on Mashable.
3. Io-Tahoe recently announced the launch of its smart data discovery platform. The new version includes the addition of Data Catalog, a new feature designed to allow data owners and stewards to use a machine learning-based smart catalog to create, maintain, and search business rules. It will allow data-driven enterprises to enhance information about data automatically, regardless of the underlying technology, and build a data catalog. Read more on IoT Evolution.
4. Splice Machine is launching a connector that aims to boost IoT and machine learning applications. The connector will enable data engineers, data scientists, and developers to use Spark directly without excessive data transfers in and out of Splice Machine. Read more on Database Trends & Applications.
5. Datawatch Corporation today announced the general availability of Datawatch Panopticon 16.6. This new version gives users easier data access and faster methods for deploying and using the software's analytical capabilities. Read more on Globe Newswire.

Microsoft plans to use Windows ML for Game development

Sugandha Lahoti
20 Mar 2018
2 min read
Microsoft unveiled plans to use Windows ML and DirectX for both game development and gameplay at the 2018 Game Developers Conference. Announced earlier this month, Windows ML is a runtime framework for neural networks on Windows 10. The platform enables developers to build machine learning models, trained in Azure, right into their apps using Visual Studio and run them on their PCs. In this game-oriented reveal, Windows ML will primarily be used to enhance the game development process, while DirectX Raytracing, a new feature of the DirectX API, aims to make games look more realistic.

Custom-made gameplay

Microsoft will use Windows ML to help developers leverage deep neural networks (DNNs) to enhance their games. It will also use ML to adapt a game naturally to a player's gaming style, learning a player's in-game habits and changing things on the fly. As an example, Microsoft says, "If you're someone who likes to find treasures in game but don't care to engage in combat, DNNs could prioritize and amplify those activities while reducing the amount or difficulty of battles."

A better game development process

Apart from gameplay, Microsoft will also use Windows ML to improve the game development process. This includes using neural networks to perform some of the more difficult parts of creating assets and graphics, leaving artists and developers free to focus on other areas. Microsoft says, "The time and money that studios could save with tools like these could get passed down to gamers in the form of earlier release dates, more realistic games, or more content to play."

Visual quality improvements

Windows ML will also be used to create and enhance visuals. For example, aliasing around objects in games can be smoothed out by tapping into machine learning models to determine the best color for each pixel.

Microsoft also showcased a new part of the DirectX API: DirectX Raytracing (DXR) will allow developers to use DXR in DirectX 12 to bring real-time raytracing to their games. At present it can be used to enhance certain aspects of visual quality, while working toward being a full replacement for rasterization in the future.

The full details are available on the official Microsoft blog.


Watson-CoreML: IBM and Apple's new machine learning collaboration project

Savia Lobo
20 Mar 2018
2 min read
Apple and IBM, not the most usual of partners, have each contributed their renowned technology, Watson and Core ML, to a collaboration that aims to make business applications running on Apple devices more intelligent.

Developers can build machine learning models using IBM Watson, training them on data held in an enterprise repository. After creating a model, they can run it through the Core ML converter tools and embed it in their Apple app. This allows the two partners to make the apps created under the partnership even smarter with machine learning. Apple developers get a quick and easy way to build these apps while leveraging the cloud where the model is delivered. IBM also announced a cloud console that simplifies the connection between the Watson model-building process and inserting that model into the application running on the Apple device.

The app can share data back with Watson and improve the machine learning algorithm running on the edge device, in a classic device-cloud partnership. The app runs in real time and does not need to be connected to Watson; however, as items are classified on the device, data is collected, and when a lower-bandwidth connection to Watson is available, that data can be fed back to train the machine learning model and make it even better.

As an example, consider an app that lets field technicians use their iPhones or iPads to scan the electrical equipment they are inspecting and automatically detect anomalies. This eliminates the need to send that data to IBM's cloud computing data centers for processing, cutting the time it takes to detect equipment faults to near real time.

To know more about this project in detail, visit Apple's official blog post.
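The Core ML side of that workflow looks roughly like the following hedged sketch, which assumes the Watson-trained model has been exported as a Keras model and uses the Keras converter that coremltools shipped at the time. The model file name and feature names are hypothetical, and the Watson export step itself is not shown; IBM's cloud console handles that part differently.

```python
import coremltools

# Hypothetical Keras model exported from the Watson training environment.
coreml_model = coremltools.converters.keras.convert(
    "visual_inspection_model.h5",
    input_names=["image"],
    output_names=["anomaly_probability"],
    image_input_names=["image"],   # treat the input as an image so iOS can feed camera frames
)

# Optional metadata that shows up in Xcode's model inspector.
coreml_model.author = "Example Corp"
coreml_model.short_description = "Flags anomalies in photos of electrical equipment."

# The .mlmodel file is what gets dropped into the Xcode project.
coreml_model.save("VisualInspection.mlmodel")
```

On the device, the generated model class is then called from Swift through the Core ML and Vision frameworks, and the collected classifications can be queued for upload back to Watson when bandwidth allows.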


IBM Cloud Private for Data: IBM’s new machine learning and data science platform

Sugandha Lahoti
19 Mar 2018
2 min read
IBM has unveiled a new data science and machine learning platform for accelerating AI adoption. Termed IBM Cloud Private for Data, it is an integrated data science, data engineering, and app-building platform that can ingest and analyze massive amounts of data, up to one million events per second. According to the official press release, "The platform is built to enable users to build and exploit event-driven applications capable of analyzing the torrents of data from things like IoT sensors, online commerce, mobile devices, and more."

The core features of IBM Cloud Private for Data are an in-memory database, a real-time processing engine, and the ability to ingest and analyze massive amounts of data. The platform can ingest up to 1 million rows per second and 250 billion events per day for both transactional and analytical processing. Another feature is an ML-powered intelligent data catalog that automates the process of creating meta-tags, and the platform also has a real-time ingestion engine based on Apache Spark and the Apache Parquet column-oriented data store. The solution further includes key capabilities from IBM's Data Science Experience, Information Analyzer, Information Governance Catalogue, DataStage, Db2, and Db2 Warehouse, providing a data infrastructure layer for AI behind the firewall.

Architecturally, IBM Cloud Private for Data is an application layer deployed on the Kubernetes open-source container software, forming an integrated environment for data science and application development using microservices. In the future, Cloud Private for Data will run on all clouds and will be available in industry-specific solutions for financial services, healthcare, manufacturing, and more.

In addition, IBM has announced the formation of the Data Science Elite Team, a team of consultants who will help solve clients' real-world data science problems at no charge. "These two data science efforts," according to Rob Thomas, General Manager, IBM Analytics, "will bring the AI destination closer and give access to powerful machine learning and data science technologies that can turn data into game-changing insight."

Have a look at the IBM Cloud Private for Data official announcement for further details.


Data Science News Daily Roundup – 19th March 2018

Packt Editorial Staff
19 Mar 2018
2 min read
Baidu open sources ApolloScape and collaborates with Berkeley DeepDrive, Uber AI Labs launch VINE, Skope-rules, a Python machine learning module built on top of scikit-learn, and more in today's top stories around machine learning, deep learning, and data science news.

Top Data Science News Stories of the Day

- Baidu open sources ApolloScape and collaborates with Berkeley DeepDrive to further machine learning in automotives
- Apache Ignite 2.4 rolls out with Machine Learning and Spark DataFrames capabilities
- IBM Cloud Private for Data: IBM's new machine learning and data science platform

Other Data Science News at a Glance

1. Uber AI Labs launch VINE, an open-source interactive data visualization tool for neuroevolution. VINE can illuminate both evolution strategies (ES) and genetic algorithm (GA) style approaches, which help to train deep neural networks to solve difficult reinforcement learning (RL) problems. Read more on the Uber Engineering blog.
2. XMPro to participate in the Industrial Internet Consortium (IIC) Smart Factory Machine Learning for Predictive Maintenance Testbed. The testbed aims to evaluate and validate machine learning techniques for predictive maintenance on high-volume production machinery. Read more on PR Newswire.
3. Introducing Skope-rules, a Python machine learning module built on top of scikit-learn and distributed under the 3-Clause BSD license. Skope-rules aims at learning logical, interpretable rules for "scoping" a target class, i.e. detecting instances of this class with high precision. Read more on Sebastian Raschka's Twitter post.
4. Fimmic launches Aiforia Cloud, bringing self-service deep learning AI to digital pathology. The platform's new Aiforia Create tools bring unique self-service capabilities by allowing users to generate their own deep learning algorithms by training convolutional neural networks (CNNs) to learn, detect, and quantify specific features of interest in tissue images. Read more on WLNS.com.

Apache Ignite 2.4 rolls out with Machine Learning and Spark DataFrames capabilities

Sugandha Lahoti
19 Mar 2018
2 min read
The Apache Ignite community has announced the latest version of Apache Ignite, its open-source distributed database. Apache Ignite 2.4 features new machine learning capabilities, Spark DataFrames support, and the introduction of a low-level binary client protocol.

Machine Learning APIs were first teased at the launch of Apache Ignite 2.0, approximately eight months ago. With Apache Ignite 2.4, the ML Grid is now production ready. With the new ML features, Ignite users can tackle fraud detection and predictive analytics, build recommendation systems, solve regression and classification tasks, and avoid ETL from Ignite to other systems. In future releases, the ML Grid will also incorporate genetic algorithm software donated by NetMillennium Inc. This software helps solve optimization problems by simulating the process of biological evolution, which in turn can be applied to real-world applications including automotive design, computer gaming, robotics, investments, traffic/shipment routing, and more.

There is also good news for Spark users: Spark DataFrames are now officially supported. In addition, Apache Ignite can now be installed from the official RPM repository.

Apache Ignite 2.4 also introduces a new low-level binary client protocol. This allows all developers, including but not limited to Java, C#, and C++ developers, to use Ignite APIs in their applications. The protocol communicates with an existing Ignite cluster without starting a full-fledged Ignite node; an application can connect to the cluster through a raw TCP socket from any programming language.

Apache Ignite 2.4 took five months in total to develop, whereas a new version is normally rolled out every three months. You can read the complete list of additions in the release notes.


Baidu open sources ApolloScape and collaborates with Berkeley DeepDrive to further machine learning in automotives

Savia Lobo
19 Mar 2018
2 min read
Baidu has two announcements to make: first, the launch of ApolloScape, a massive self-driving dataset for the autonomous driving industry; second, its collaboration with the Berkeley DeepDrive (BDD) Industry Consortium, a research alliance exploring state-of-the-art technologies in computer vision and machine learning for automotive applications.

ApolloScape: the largest self-driving dataset for autonomous driving technology

The ApolloScape dataset is released under Baidu's autonomous driving platform, Apollo. The dataset eliminates the time spent on manual data collection and, because it is open source, developers can now use ApolloScape as a base for building self-driving vehicles. Some of the striking features of the ApolloScape dataset:

- Its data volume is 10 times greater than any other open-source autonomous driving dataset, including KITTI and Cityscapes. This data can be used for perception, simulation scenes, road networks, and more, and enables autonomous vehicles to be trained in more complex environments, weather, and traffic conditions.
- It defines 26 different semantic items, e.g. cars, bicycles, pedestrians, buildings, streetlights, etc., with a pixel-by-pixel semantic segmentation technique (see the sketch below).
- It can save researchers and developers a huge amount of time on real-world sensor data collection.
- Beyond data, ApolloScape will also facilitate advanced research on cutting-edge simulation technology, aiming to create a simulation platform that aligns with real-world experience.

Baidu's collaboration with Berkeley DeepDrive

Prior to Baidu, the Berkeley DeepDrive (BDD) Industry Consortium has partnered with many other famous brands, including Ford, Nvidia, Qualcomm, and General Motors (GM). The key research focuses of BDD include deep reinforcement learning, cross-modal transfer learning, and clockwork FCNs for fast video processing. The collaboration between Baidu and BDD will combine Apollo's industrial resources and Berkeley's top academic team to ramp up innovation in theoretical research, applied technology, and commercial applications. The Apollo Open Platform and BDD will also jointly conduct a Workshop on Autonomous Driving at CVPR 2018 (IEEE Conference on Computer Vision and Pattern Recognition) this June in Salt Lake City, where they will organize task competitions based on ApolloScape.

To know more about ApolloScape, visit the official website.
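Pixel-by-pixel semantic segmentation labels of the kind ApolloScape ships are easy to inspect programmatically. Below is a small hedged sketch that tallies the class distribution of a single label image, assuming its pixel values are class IDs; the file name and the ID-to-name mapping are invented for illustration, so consult the ApolloScape documentation for the real 26-class definition.

```python
import numpy as np
from PIL import Image

# Hypothetical mapping of a few label IDs to class names (not the official ApolloScape IDs).
CLASS_NAMES = {0: "void", 1: "car", 2: "bicycle", 3: "pedestrian", 4: "building"}

# A label image in which every pixel stores the class ID of that pixel.
labels = np.array(Image.open("road_scene_label.png"))

ids, counts = np.unique(labels, return_counts=True)
total = labels.size

# Print the share of the image covered by each class.
for class_id, count in zip(ids, counts):
    name = CLASS_NAMES.get(int(class_id), f"class_{class_id}")
    print(f"{name:>12}: {100.0 * count / total:5.1f}% of pixels")
```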


AmoebaNets: Google’s new evolutionary AutoML

Savia Lobo
16 Mar 2018
2 min read
Artificial neural networks that can detect objects within an image require careful design by experts over years of difficult research, and each one then addresses one specific task, such as finding what's in a photograph, calling a genetic variant, or helping diagnose a disease. Google believes one approach to generating these ANN architectures is the use of evolutionary algorithms, and has introduced AmoebaNets, the product of its new evolutionary AutoML approach, which achieves state-of-the-art results on datasets such as ImageNet and CIFAR-10.

Google offers AmoebaNets as an answer to questions such as: by using computational resources to programmatically evolve image classifiers at unprecedented scale, can one achieve solutions with minimal expert participation? How good can today's artificially evolved neural networks be?

These questions are addressed in two papers:

- "Large-Scale Evolution of Image Classifiers", presented at ICML 2017. In this paper, the authors set up an evolutionary process with simple building blocks and trivial initial conditions. The idea was to "sit back" and let evolution at scale do the work of constructing the architecture.
- "Regularized Evolution for Image Classifier Architecture Search" (2018). This paper scales up the computation using Google's new TPUv2 chips. The combination of modern hardware, expert knowledge, and evolution worked together to produce state-of-the-art models on CIFAR-10 and ImageNet, two popular benchmarks for image classification.

One important feature of the evolutionary algorithm used in the second paper is a form of regularization: instead of letting the worst neural networks die, the oldest ones are removed, regardless of how good they are. This improves robustness to changes in the task being optimized and tends to produce more accurate networks in the end. Since weight inheritance is not allowed, all networks must train from scratch, so this form of regularization selects for networks that remain good when they are retrained.

These models achieve state-of-the-art results for CIFAR-10 (mean test error = 2.13%), mobile-size ImageNet (top-1 accuracy = 75.1% with 5.1M parameters), and ImageNet (top-1 accuracy = 83.1%).

Read more about AmoebaNets on the Google Research Blog.
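The "remove the oldest, not the worst" idea is simple enough to sketch in a few lines. Here is a toy version of regularized (aging) evolution with a stand-in fitness function in place of actually training networks; the population size, tournament size, and the bit-string "architecture" encoding are illustrative choices, not the paper's setup.

```python
import random
from collections import deque

random.seed(0)

def fitness(arch):
    """Stand-in for 'train the candidate network and measure its accuracy'."""
    return sum(arch) + random.gauss(0.0, 0.5)

def mutate(arch):
    """Flip one randomly chosen bit of the architecture encoding."""
    child = list(arch)
    i = random.randrange(len(child))
    child[i] = 1 - child[i]
    return child

POP_SIZE, TOURNAMENT, CYCLES, ARCH_LEN = 20, 5, 200, 10

# The population is a queue: new models are appended, the OLDEST is removed.
population = deque()
for _ in range(POP_SIZE):
    arch = [random.randint(0, 1) for _ in range(ARCH_LEN)]
    population.append((arch, fitness(arch)))

for _ in range(CYCLES):
    # Tournament selection: sample a few individuals, take the best as parent.
    sample = random.sample(list(population), TOURNAMENT)
    parent = max(sample, key=lambda ind: ind[1])
    child = mutate(parent[0])
    population.append((child, fitness(child)))
    population.popleft()   # age-based removal, regardless of fitness

best = max(population, key=lambda ind: ind[1])
print("best architecture:", best[0], "fitness:", round(best[1], 2))
```

Because survivors must keep being rediscovered by mutation rather than simply outliving everyone, the search favours architectures that score well consistently, which is the regularizing effect the paper describes.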

Data Science News Daily Roundup – 16th March 2018

Packt Editorial Staff
16 Mar 2018
2 min read
Unity releases ML-Agents v0.3, Magenta introduces MusicVAE, CockroachDB's new enterprise feature named geo-partitioning, and more in today's top stories around machine learning, deep learning, and data science news.

Top Data Science News Stories of the Day

- Unity releases ML-Agents v0.3: Imitation Learning, Memory-Enhanced Agents and more
- AmoebaNets: Google's new evolutionary AutoML

Other Data Science News at a Glance

1. AI chip startup SambaNova raises $56M from Google Ventures. SambaNova is a startup that makes chips specifically for artificial intelligence. The Series A funding round was led by Walden International and GV, Alphabet Inc.'s venture capital arm, and also included participation from Redline Capital and Atlantic Bridge Ventures. Read more on SiliconANGLE.
2. Magenta introduces MusicVAE, a machine learning model that lets us create palettes for blending and exploring musical scores. The technical goal of this class of models is to represent the variation in a high-dimensional dataset using a lower-dimensional code, making it easier to explore and manipulate intuitive characteristics of the data. Read more on the Magenta Blog.
3. CockroachDB introduces a new enterprise feature named geo-partitioning, which aims at improving performance by reducing latency. Geo-partitioning grants developers row-level replication control. Read more on the Cockroach Labs blog.
4. Princeton announces blockchain analysis tool BlockSci 0.4.5. BlockSci is a fast and expressive tool to analyze public blockchains. The 0.4.5 version has a large number of feature enhancements and bug fixes, including a 5x speed improvement over the initial version. Read more on Freedom to Tinker.
5. Earnix Ltd announced the introduction of its Integrated Machine Learning technology as an enhancement to its existing insurance software suite. This new capability is designed for demanding, high-performance real-time enterprise production systems and will deliver a new level of market responsiveness and analytical sophistication to insurers. Read more on Digital Journal.


Unity releases ML-Agents v0.3: Imitation Learning, Memory-Enhanced Agents and more

Sugandha Lahoti
16 Mar 2018
2 min read
The Unity team has released version 0.3 of its ML-Agents toolkit. The new release is jam-packed with features such as Imitation Learning, Multi-Brain Training, On-Demand Decision-Making, and Memory-Enhanced Agents. Here's a quick look at what each of these features brings to the table.

Behavioral cloning, an imitation learning algorithm

ML-Agents v0.3 uses imitation learning for training agents. Imitation learning uses demonstrations of the desired behavior to provide a learning signal to the agents. For v0.3, the team uses behavioral cloning as its imitation learning algorithm of choice. This works by collecting training data from a teacher agent and then simply using it to directly learn a behavior (see the sketch below).

Multi-Brain Training

Using Multi-Brain Training, one can train more than one brain at a time, each with its own observation and action space. At the end of training, there is only one binary (.bytes) file, which contains one neural network model per brain.

On-Demand Decision-Making

Agents ask for decisions in an on-demand fashion, rather than making decisions at every step or every few steps of the engine. Users can enable and disable On-Demand Decision-Making for each agent independently with the click of a button.

Learning under partial observability

The Unity team has included two methods for dealing with partial observability within learning environments through Memory-Enhanced Agents. The first memory enhancement is observation stacking, which allows an agent to keep track of up to the past ten observations within an episode and feed them all to the brain for decision-making. The second form of memory is the inclusion of an optional recurrent layer for the neural network being trained; these Recurrent Neural Networks (RNNs) can learn to keep track of important information over time in a hidden state.

Apart from these features, there is the addition of a Docker image, changes to API semantics, and a major revamp of the documentation, all to make setup and usage simpler and more intuitive. Users can check the GitHub page to download the new version and learn all the details on the release page.
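Behavioral cloning and observation stacking are both conceptually tiny, so a hedged, framework-free sketch may help. It mimics the ideas rather than the ML-Agents toolkit's API: the "teacher" is just a scripted policy on a made-up 2-D observation, and the learner is plain logistic regression trained by gradient descent.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

def teacher_policy(obs):
    """Scripted 'teacher': pick action 1 if the first feature is positive, else 0."""
    return int(obs[0] > 0)

# 1. Collect demonstrations from the teacher (the behavioral cloning dataset).
observations = rng.normal(size=(5000, 2))
actions = np.array([teacher_policy(o) for o in observations])

# 2. 'Clone' the behaviour by fitting a logistic-regression policy to the demonstrations.
X = np.hstack([observations, np.ones((len(observations), 1))])  # add a bias column
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted P(action = 1)
    w -= 0.1 * X.T @ (p - actions) / len(X)   # cross-entropy gradient step

agent_actions = (X @ w > 0).astype(int)
print("agreement with teacher:", np.mean(agent_actions == actions))

# 3. Observation stacking: keep the last N observations and feed them together.
N = 10
stack = deque(maxlen=N)
for obs in observations[:N]:
    stack.append(obs)
stacked_input = np.concatenate(stack)          # shape (N * obs_dim,) goes to the brain
print("stacked observation size:", stacked_input.shape)
```

In the real toolkit the cloned policy is a neural network and the stacked observations (or an RNN hidden state) stand in for memory, but the data flow is the same: teacher demonstrations in, supervised policy out.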