
Tech News - Data

1208 Articles

Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine)

Pravin Dhandre
23 May 2018
2 min read
Almost a year after the release of Cloud ML Engine, Google has announced support for Cloud TPUs for faster training and serving of machine learning models on the Cloud Machine Learning Engine. The beta release allows Cloud ML and Google Cloud Platform customers to use TPUs to accelerate their TensorFlow-based machine learning models.

Key features of Cloud TPU:
- High-level performance: each Cloud TPU offers up to 180 teraflops of computing performance and 64 gigabytes of ultra-high-bandwidth memory.
- Availability of reference models: tackle image classification and object detection challenges on Cloud TPUs with access to models such as RetinaNet and ResNet-50.
- Access to custom machine types: balance processor speed, memory, and storage resources by connecting to Cloud TPUs from a variety of custom virtual machine types.

Key benefits:
- Speed up machine learning workloads: Cloud TPUs are designed to accelerate machine learning workloads built with TensorFlow. With 180 teraflops of computational power per device, they provide the headroom needed for cutting-edge models and could help drive the next research breakthrough in machine learning and AI.
- On-demand machine learning supercomputing: you can access powerful, high-performance machine learning accelerators on demand, with zero capital investment.
- Easy ramp-up on Google Cloud: because TensorFlow is open source, you can simply move your TensorFlow workloads onto Cloud TPUs. Using TensorFlow's high-level APIs, you can port machine learning models across CPUs, GPUs, and TPUs with only a few lines of code (a brief sketch follows below). Cloud TPU also offers reference models and a training environment that readily cover image classification and machine translation needs.

Read more about Cloud TPU features on the official Cloud TPU page.

Tensor Processing Unit (TPU) 3.0: Google's answer to cloud-ready Artificial Intelligence
Nvidia Tesla V100 GPUs publicly available in beta on Google Compute Engine and Kubernetes Engine
How to Build TensorFlow Models for Mobile and Embedded devices
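The "few lines of code" portability mentioned above centers on TensorFlow's high-level estimator APIs. The following is a minimal, illustrative sketch of pointing a TensorFlow 1.x TPUEstimator at a Cloud TPU; the tiny model, the TPU name, and the Cloud Storage path are placeholders invented for the example rather than details from Google's announcement, and argument names vary slightly across 1.x releases.

```python
import tensorflow as tf

def model_fn(features, labels, mode, params):
    # Hypothetical model: a single dense layer, for illustration only.
    logits = tf.layers.dense(features["x"], units=10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    optimizer = tf.train.AdamOptimizer()
    if params["use_tpu"]:
        # Aggregate gradients across the TPU cores.
        optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.contrib.tpu.TPUEstimatorSpec(mode=mode, loss=loss, train_op=train_op)

# Resolve the Cloud TPU by name ("my-tpu" and the bucket path are placeholders).
resolver = tf.contrib.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
config = tf.contrib.tpu.RunConfig(
    cluster=resolver,
    model_dir="gs://my-bucket/model",
    tpu_config=tf.contrib.tpu.TPUConfig(iterations_per_loop=100),
)

estimator = tf.contrib.tpu.TPUEstimator(
    model_fn=model_fn,
    config=config,
    use_tpu=True,
    train_batch_size=1024,
    params={"use_tpu": True},
)
# estimator.train(input_fn=my_input_fn, max_steps=1000)
```

Setting use_tpu=False (and dropping the CrossShardOptimizer wrapper) falls back to CPU or GPU execution, which is the kind of portability the announcement refers to.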


TensorFlow.js 0.11.1 releases!

Sunith Shetty
21 May 2018
3 min read
The TensorFlow team has released a new version of TensorFlow.js, its browser-based JavaScript library for training and deploying machine learning models. Version 0.11.1 brings notable features that ease WebGL-accelerated, browser-based machine learning.

TensorFlow.js is an open source JavaScript library that allows you to build machine learning models in the browser. It provides flexible and intuitive high-level APIs to build, train, and run models from scratch, which means you can also run and retrain pre-existing TensorFlow and Keras models right in the browser.

Some of the noteworthy changes in TensorFlow.js 0.11:

You can now save and load tf.Models through several media, thanks to newly added support for:
- Browser IndexedDB
- Browser local storage
- HTTP requests
- Browser file downloads and uploads

To learn more about each medium used to save and load models in TensorFlow.js, refer to the tutorials page.

A set of new features has been added to both the TensorFlow.js Core API and the TensorFlow.js Layers API.

TensorFlow.js Core API (0.8.3 ==> 0.11.0)

The TensorFlow.js Core API provides low-level, hardware-accelerated linear algebra operations, along with an eager API for automatic differentiation.

Breaking changes:
- ES5 tf-core.js bundle users must now use the symbol tf instead of tfc.
- GPGPUContext is now exported, and getCanvas() has been added to the WebGLBackend.

Performance and development changes:
- conv2dDerInput has been optimized on CPU, making it up to 100x faster.
- Support for loading quantized weights has been added, reducing model size and improving model download time.
- A new serialization infrastructure has been added to the Core API.
- New helper methods and basic types have been added to support model exporting.

New features in the Core API:
- tf.losses.logLoss adds a log-loss term to the training procedure.
- tf.losses.cosineDistance adds a cosine-distance loss to the training procedure.
- tensor.round() rounds the values of a tensor to the nearest integer, element-wise.
- tf.cumsum computes the cumulative sum of a tensor x along an axis.
- tf.losses.hinge_loss adds a hinge loss to the training procedure.

For the complete list of new features, documentation changes, the plethora of bug fixes, and other miscellaneous changes added to the Core API, refer to the release notes.

TensorFlow.js Layers API (0.5.2 ==> 0.6.1)

The TensorFlow.js Layers API is a high-level machine learning model API built on TensorFlow.js Core. It can be used to build, train, and execute deep learning models in the browser.

Breaking changes:
- ES5 tf-core.js bundle users must now use the symbol tf instead of tfl.
- Exporting of the backend symbols has been removed.
- The default number of epochs in Model.fit() has been changed to 1.

Feature changes:
- A version string has been added to the keras_version field of JSONs produced by model serialization.
- tf.layers.cropping2D adds a cropping layer for 2D input (e.g. images).

For the complete list of documentation changes, bug fixes, and other miscellaneous changes added to the Layers API, refer to the release notes.

Emoji Scavenger Hunt showcases TensorFlow.js
You can now make music with AI thanks to Magenta.js
The 5 biggest announcements from TensorFlow Developer Summit 2018


pandas 0.23 released

Pravin Dhandre
17 May 2018
2 min read
Following its previous major release, v0.22, the pandas contributors have released the next major version, 0.23.0, with numerous new features, enhancements, API changes, and deprecations. This release adds pivotal support for operations on custom types and extends support for arguments and conversion tasks. The upgraded version also brings performance improvements along with a large number of bug fixes.

New feature highlights in v0.23:
- Round-trippable JSON format with the 'table' orient.
- Instantiation from dicts respects insertion order on Python 3.6+.
- Dependent column arguments for assign.
- Merging/sorting on a combination of columns and index levels.
- Extending pandas with custom types.
- Excluding unobserved categories from groupby.
- Changes to make the output shape of DataFrame.apply consistent.

Two of these features, the round-trippable 'table' orient and dependent column arguments for assign, are illustrated in the short example below.

Bug fixes:
- Bugs related to categorical operations such as merge, the Index constructor, and factorize have been resolved.
- Bugs in numeric operations such as the Series constructor, Index multiplication, and DataFrame flex arithmetic have been fixed.
- Other bugs related to strings, indexing, timezones, and Timedelta are also fixed in this version.

Python's pandas package provides developers with fast, flexible, and expressive data structures that make it easy and intuitive to work with "relational" and "labeled" data. With its continued stream of feature-packed releases, pandas could soon become the most powerful and flexible open source data analysis and manipulation tool for your data science project. To learn more about the API changes, deprecations, and performance improvements, read the release documentation on GitHub.

"Pandas is an effective tool to explore and analyze data": An interview with Theodore Petrou
Working with pandas DataFrames
Up and Running with pandas
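Returning to the two highlighted features: here is a minimal sketch of the 'table' JSON orient and the dependent-column behaviour of assign. The frame, column names, and values are invented for the example.

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]}, index=pd.Index(["x", "y", "z"], name="id"))

# Round-trippable JSON: the 'table' orient embeds a schema, so dtypes and
# the index survive a write/read cycle.
payload = df.to_json(orient="table")
restored = pd.read_json(payload, orient="table")
print(restored.equals(df))  # expected: True

# Dependent column arguments for assign (Python 3.6+): later keyword
# arguments may refer to columns created earlier in the same call.
result = df.assign(b=lambda d: d["a"] * 10,
                   c=lambda d: d["b"] + 1)
print(result)
#     a   b   c
# id
# x   1  10  11
# y   2  20  21
# z   3  30  31
```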


Introducing Intel's OpenVINO computer vision toolkit for edge computing

Pravin Dhandre
17 May 2018
2 min read
About a week after Microsoft announced its plan to develop a computer vision developer kit for edge computing, Intel introduced its latest offering in the Internet of Things (IoT) and Artificial Intelligence (AI) space: OpenVINO. The toolkit is a comprehensive computer vision solution that brings computer vision and deep learning capabilities to edge devices.

The OpenVINO (Open Visual Inference and Neural Network Optimization) toolkit supports popular open source frameworks such as OpenCV, Caffe, and TensorFlow. It works with Intel's traditional CPUs, AI chips, field programmable gate array (FPGA) chips, and the Movidius vision processing unit (VPU).

The toolkit has the potential to address a wide range of challenges developers face in delivering distributed, end-to-end intelligence. With OpenVINO, developers can streamline their deep learning inference and deploy high-performance computer vision solutions across a wide range of use cases. Computer vision limitations related to bandwidth, latency, and storage are expected to be eased to an extent. The toolkit should also help developers optimize AI-integrated computer vision applications and scale distributed vision applications, something that generally requires a complete redesign of the solution.

Until now, edge computing has been more of a prospect for the IoT market. With OpenVINO, Intel stands as the only industry leader delivering IoT solutions from the edge, providing an unparalleled solution to meet the AI needs of businesses. OpenVINO is already being used by companies such as GE Healthcare, Dahua, Amazon Web Services, and Honeywell across their digital imaging and IoT solutions.

To explore more information on its capabilities and performance, visit Intel's official OpenVINO product documentation.

A gentle note to readers: OpenVINO is not to be confused with Openvino, an open-source winery and wine-backed cryptoasset.

Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?
AWS Greengrass brings machine learning to the edge
Cognitive IoT: How Artificial Intelligence is remoulding Industrial and Consumer IoT


Google employees quit over company’s continued Artificial Intelligence ties with the Pentagon

Amey Varangaonkar
16 May 2018
2 min read
About a dozen Google employees have reportedly resigned, raising ethical concerns over Google's continued involvement in developing Artificial Intelligence for military and warfare purposes.

Since its inception, many Googlers have been against Project Maven, Google's project with the Pentagon to supply machine learning technology for image recognition and object detection in military drones. Earlier in April, Google employees signed a petition urging Google CEO Sundar Pichai to dissociate the company from the Department of Defense by pulling out of Project Maven. They were of the opinion that humans, not AI algorithms, should be responsible for sensitive and potentially life-threatening military work, and that Google should invest in the betterment of human lives, not in war.

Google had reassured its employees that the technology would be used in a non-offensive manner, and that policies were in effect regarding the use of AI in military projects. However, the resigning employees are of the view that these policies were not being strictly followed. The employees also felt that Google was less transparent about communicating controversial business decisions and not as receptive to employee feedback as before. One of the employees who resigned said, "Over the last couple of months, I've been less and less impressed with Google's response and the way our concerns are being listened to."

The resignations reflect poorly on Google's employee retention strategy and on its reputation as a whole. They might encourage more employees to evaluate their position within the company, given the lack of grievance redressal from Google's end. Surrounded by fierce competition, losing talent to its rivals should be the last thing on Google's agenda right now, and it will be interesting to see what Google's plan of action will be in this regard.

On the other hand, rivals Microsoft and Amazon have also signed partnerships with the US government, offering the infrastructure and services required to improve defense capabilities. While there have been no reports of protests by their employees, Google seems to have found itself in a soup on ethical and moral grounds.

Google Employees Protest against the use of Artificial Intelligence in Military
Google News' AI revolution strikes balance between personalization and the bigger picture
Google announce the largest overhaul of their Cloud Speech-to-Text


Microsoft Open Sources ML.NET, a cross-platform machine learning framework

Pravin Dhandre
10 May 2018
2 min read
At its three-day Build conference in Seattle, Washington, Microsoft announced the preview release of a machine learning framework called ML.NET. Developed by its research subsidiary, Microsoft Research, the framework will help .NET developers build their own models for their web apps across Windows, Linux, and macOS. Developers can infuse custom machine learning models into applications without much prior experience in building them.

The current release, 0.1, is the debut preview and is compatible with any platform that supports .NET Core 2.0 or the .NET Framework. Developers can access the framework directly from GitHub. Beyond its machine learning capabilities, this debut preview of ML.NET also exposes a draft of the .NET APIs planned for training models and making predictions, along with a range of machine learning algorithms and core ML data structures. Although this is the first public release, Microsoft and its teams have been using the framework internally in product groups such as Azure, Bing, and Windows.

Microsoft has also stated that ML.NET will soon cover more advanced machine learning scenarios such as recommendation systems and anomaly detection. Popular concepts like deep learning, and support for libraries such as TensorFlow, CNTK, and Caffe2, will be added, and support for general machine learning libraries such as the Accord.NET framework is planned for an upcoming release. The framework is also expected to add support for ONNX, scaling out on Azure, a better GUI to simplify ML tasks, and integration with Visual Studio tools.

To follow the progress of the framework, visit the .NET Blog on Microsoft's official site.

Azure meets Artificial Intelligence
Microsoft's Azure Container Service (ACS) is now Azure Kubernetes Services (AKS)
Google I/O 2018 conference Day 1 Highlights: Android P, Android Things, ARCore, ML kit and Lighthouse

Google News' AI revolution strikes balance between personalization and the bigger picture

Richard Gall
10 May 2018
4 min read
Google has launched a major revamp of its news feature at Google I/O 2018. Fifteen years after its launch, Google News is to offer more personalization with the help of AI.

Perhaps that's surprising - surely Google has always been using AI across every feature? Well yes, to some extent. But this update brings artificial intelligence fully into the fold.

It may feel strange talking about AI and news at the moment. Concern over 'echo chambers' and 'fake news' has become particularly pronounced recently, and the Facebook and Cambridge Analytica scandal has thrown the spotlight on the relationship between platforms, publishers, and our data. That might explain why Google seems to be trying to counterbalance the move towards greater personalization with a new feature called Full Coverage. Full Coverage has been designed by Google as a means to tackle current concerns around 'echo chambers' and polarization in discourse. Such a move highlights a greater awareness of the impact the platform can have on politics and society. It suggests that by using AI in context, there's a way to get the balance right.

"In order to make it easier to keep up and make sense of [today's constant flow of news and information from different sources and media], we set out to bring our news products into one unified experience," explained Trystan Upstill in a blog post.

Personalizing Google News with AI

By making use of advanced machine learning and AI techniques, Google will now offer you a more personalized way to read the news. With a new 'For You' tab, Google will organize a feed of news based on everything the search engine knows about you, from your browsing habits to your location. "The more you use the app, the better the app gets," Upstill explains.

In a new feature called 'Newscasts', Google News will use natural language processing techniques to bring together a wide range of sources on a single topic. It seems strange to think that Google wasn't doing this before, but in actual fact it says a lot about how the platform dictates how we understand the scope of a debate or the way a news cycle is reported and presented. With Newscasts it should be easier to illustrate the sheer range of voices currently out there.

Fundamentally, Google is making its news feature smarter. Where previously it relied upon keywords, there is now an added dimension whereby Google's AI algorithms become much more adept at understanding how different news stories evolve and how different things relate to one another.

https://www.youtube.com/watch?v=wArETCVkS4g

Tackling the impact of personalization

With Full Coverage, Google News will provide a range of perspectives on a given news story. This seems to be a move to directly challenge the increased concern around online 'echo chambers.' Here's what Upstill says:

"Having a productive conversation or debate requires everyone to have access to the same information. That's why content in Full Coverage is the same for everyone - it's an unpersonalized view of events from a range of trusted news sources."

Essentially, it's about ensuring people have access to a broad overview of stories. Of course, Google is here acting a lot like a publisher or curator of news - even when giving a broad picture around a news story there will still be an element of editorializing (whether that's human or algorithmic). However, it nevertheless demonstrates that Google has some awareness of the issues around online discourse and how its artificial intelligence systems can lead to a certain degree of polarization.

It's now easier to subscribe and follow your favorite news sources

The evolution of digital publishing has seen the rise of subscription models for many publishers, but that hasn't always been well aligned with how readers search Google. It will now be easier to read and follow your favorite news sources on Google News. Not only will you be able to subscribe to news sources through your Google account, you'll also be able to see paywalled content you're subscribed to in your Google News feeds. That will certainly make for a better reading experience.

In turn, that means Google is helping to cement itself as the go-to place for news. Of course, Google could hardly be said to be under threat. But as native applications and social media platforms have come to define the news experience for many readers in recent years, this is a way of Google staking a claim in an area in which it may be ever so slightly vulnerable.


What we learned from Qlik Qonnections 2018

Amey Varangaonkar
09 May 2018
4 min read
Qlik's new CEO, Mike Capone, keynoted the recently held Qlik Qonnections 2018 with some interesting feature rollouts and announcements. He also shed light on the evolution of Qlik's two premium products, QlikView and Qlik Sense, and shared the roadmap for the coming year. Close to 4,000 developers and Business Intelligence professionals were in attendance, and they were very receptive to the announcements made in the keynote. Let us take a quick look at some of the important ones:

Qlik continues to be the market leader

Capone began the keynote by sharing some interesting performance metrics from the past year, which have led to Qlik being listed as a 'Leader' in the 2017 Gartner Magic Quadrant. Chief among these is the impressive customer base that Qlik boasts, including:
- 9 of the 10 major banks
- 8 of the 10 major insurance companies
- 11 of the 15 major global investment and securities companies

With an impressive retention rate of 94%, Qlik have also managed to add close to 4,000 new customers over the last year and have doubled their developer community to over 25,000 members. These numbers mean only one thing: Qlik will continue to dominate.

Migration from QlikView to Qlik Sense

There has been a lot of talk (and confusion) of late about Qlik supposedly looking to shift its focus from QlikView to Qlik Sense. In the keynote, Capone gave much-needed clarity on the licensing and migration options for those looking to move from QlikView's guided analytics to Qlik Sense's self-service analytics. The important announcements in this regard:
- Migration from QlikView to Qlik Sense is optional: acknowledging the loyal customers who don't want to move away from QlikView, Capone said that migration is optional. For those who do want to migrate, Qlik have assured that the transition will be made as smooth as possible and that they would make this a priority.
- A single license to use both QlikView and Qlik Sense: Qlik have made it possible for customers to get the most out of both products without having to buy multiple licenses. For an additional maintenance fee, customers will be able to enjoy the premium features of both tools seamlessly.

Qlik ventures into cognitive analytics

One of the most notable announcements of the conference was the incorporation of Artificial Intelligence into the Business Intelligence capabilities of Qlik's products. Qlik are aiming to improve the core associative engine so that it works with the available data more intelligently. They have also announced the Insight Advisor feature, which auto-generates the best possible visualizations and reports.

Hybrid and multi-cloud support added

Qlik's vision going forward is quite simple and straightforward: to support deployment of their applications and services in hybrid-cloud or multi-cloud environments. Going forward, users will be able to move their Qlik Sense applications, which run on a microservices-based architecture on Linux, into either public or private clouds, and self-manage these applications with the support features provided by Qlik.

New tools for Qlik developers

Qonnections 2018 saw two important announcements made to make the lives of Qlik developers easier. Along with Qlik Branch, a platform to collaborate on projects and share innovations and new developments, Qlik also announced a new platform for developers called Qlik Core. This new platform will allow Qlik developers to leverage IoT, edge analytics, and more to design and drive innovative business models and strategies. Qlik Core is currently in beta and is expected to be generally available very soon.

Interesting times ahead for Qlik

In recent times, Qlik has faced stiff competition from other popular Business Intelligence tools such as Tableau, Spotfire, and Microsoft's own Power BI, as well as from freely available tools that give customers fast, effective business intelligence. With all these tools delivering on a similar promise and none coming out with groundbreaking blue-ocean features, it will be interesting to see how Qlik's new offerings fare against these sharks. The recent restructuring of Qlik's management and the downsizing of the past few years can make one wonder whether the company is struggling to keep up. However, the announcements at Qonnections 2018 indicate the company is moving in a positive direction with its products, and should restore public faith and dispel any doubts Qlik's customers may have.

How Qlik Sense is driving self-service Business Intelligence
Overview of a Qlik Sense® Application's Life Cycle
QlikView Tips and Tricks


Microsoft Build 2018 Day 1: Azure meets Artificial Intelligence

Amey Varangaonkar
08 May 2018
5 min read
Microsoft kicked off Build 2018, its annual developer conference, in style, with some interesting announcements about coupling its existing products - mainly Microsoft Azure - with trending technologies such as Artificial Intelligence, IoT, and Blockchain. Held in Seattle, Washington on 7 and 8 May, the tech extravaganza promises some exciting insights into Microsoft's strategy for the coming year. Here is a quick recap of the major announcements from Day 1:

Azure IoT Edge open-sourced

Microsoft open-sourced its Azure IoT Edge Runtime platform in a bid to deliver cloud intelligence across IoT devices. Combining the power of Artificial Intelligence and Azure services through this platform, Microsoft plans to connect over 20 billion devices globally by 2020. Microsoft also announced a partnership with Qualcomm to create smart camera-based IoT solutions, combining hardware from the Qualcomm Vision Intelligence Platform with Azure services. It will also team up with the China-based drone giant DJI to build a new drone SDK for Windows 10; Azure will be the official cloud for DJI's commercial solutions, which will use Microsoft's AI services to analyze data for their customers. Last but not least, Microsoft announced the first Azure Cognitive Service for the edge, which developers will be able to use to deploy AI algorithms and build powerful applications that run on these devices.

Kinect returns - this time on Azure

The Kinect is back! This time, however, things are a bit different. Built on technology that also runs the Microsoft HoloLens, Kinect will now run on Azure rather than on the Xbox. The new project will have a next-generation camera and a dedicated processor designed to handle AI tasks, processing significant amounts of data before it is sent to the Azure cloud. It is worth remembering that Microsoft had attempted to bring Kinect to the enterprise back in 2011, but the project failed miserably and was suspended in 2017. With a different approach this time, Microsoft is hoping to make it a resounding success.

New Blockchain tools for Azure

In a bid to simplify Blockchain app development on the cloud, Microsoft announced a new Azure service called Azure Blockchain Workbench. The service connects decentralized applications to Azure cloud services such as Azure Active Directory, dramatically reducing overall development time. The workbench will equip Blockchain developers with the tools needed to develop end-to-end, Azure-based Blockchain applications without hassle.

Project Brainwave for deep learning acceleration

Microsoft released a public preview of the highly anticipated Project Brainwave, a deep neural network processing solution designed to make Azure the fastest cloud for running deep learning and other AI workloads. According to CEO Satya Nadella, who made the announcement, Brainwave will give developers the processing power to deploy machine learning and deep learning models with performance well beyond what they can expect from a CPU or GPU. The project is also expected to deliver five times lower hardware latency than Google's TPU (Tensor Processing Unit). Is Brainwave Microsoft's answer to Google's TPU? Check out our detailed comparison of the two chips.

Other important announcements on Day 1

Apart from these significant enhancements to Azure, Microsoft also announced a host of features to improve its other offerings, including Microsoft 365, SharePoint, Excel, and the Microsoft Bot Framework, among others. The major announcements in this regard:
- Integration of Amazon's Alexa and Microsoft's Cortana on Windows devices
- Windows Machine Learning, a new AI platform for developers to build machine learning models on the cloud
- An updated Microsoft Bot Framework, with richer voice customization
- Integration of Azure Search with Cognitive Services, for quicker searches powered by Artificial Intelligence
- Integration of Azure Kubernetes Service with Azure IoT Edge for better container orchestration
- Support in Microsoft Excel for visualizations designed in Microsoft Power BI
- Easier customization and publishing of business applications in Microsoft 365, with the help of the Microsoft Teams API

In the midst of a major three-way power struggle between Amazon, Google, and Microsoft, the announcements made at Build 2018 feel like a breath of fresh air, especially for Microsoft developers. The company's vision of transforming its cloud platform and other services into an intelligent cloud powered by Artificial Intelligence and other trending technologies seems to be well underway. It will be interesting to see Google's response at the Google I/O conference, which gets underway tonight, not to mention Amazon re:Invent 2018 later in the year. Here are the highlights of day 1 of Microsoft Build in under 15 minutes, if you're interested.

Serverless computing wars: AWS Lambdas vs Azure Functions
Introducing Azure Sphere – A secure way of running your Internet of Things devices
How to get started with Azure Stream Analytics and 7 reasons to choose it


You can now make music with AI thanks to Magenta.js

Richard Gall
04 May 2018
3 min read
Google Brain's Magenta project has released Magenta.js, a tool that could open up new opportunities in developing music and art with AI. The Magenta team have been exploring a range of ways to create with machine learning, but with Magenta.js, they have developed a tool that's going to open up the very domain they've been exploring to new people. Let's take a look at how the tool works, what the aims are, and how you can get involved.

How does Magenta.js work?

Magenta.js is a JavaScript suite that runs on TensorFlow.js, which means it can run machine learning models in the browser. The team explains that JavaScript has been a crucial part of their project, as they have been eager to make sure they bridge the gap between the complex research they are doing and their end users. They want their research to result in tools that can actually be used. As they've said before:

"...we often face conflicting desires: as researchers we want to push forward the boundaries of what is possible with machine learning, but as tool-makers, we want our models to be understandable and controllable by artists and musicians."

As they note, JavaScript has informed a number of projects that have preceded Magenta.js, such as Latent Loops, Beat Blender and Melody Mixer. These tools were all built using MusicVAE, a machine learning model that forms an important part of the Magenta.js suite.

The first package you'll want to pay attention to in Magenta.js is @magenta/music. This package features a number of Magenta's machine learning models for music including MusicVAE and DrumsRNN. Thanks to Magenta.js you'll be able to quickly get started. You can use a number of the project's pre-trained models which you can find on GitHub here.

What next for Magenta.js?

The Magenta team are keen for people to start using the tools they develop. They want a community of engineers, artists and creatives to help them drive the project forward. They're encouraging anyone who develops using Magenta.js to contribute to the GitHub repo. Clearly, this is a project where openness is going to be a huge bonus.

We're excited to not only see what the Magenta team come up with next, but also the range of projects that are built using it. Perhaps we'll begin to see a whole new creative movement emerge?

Read more on the project site here.

Can a production ready Pytorch 1.0 give TensorFlow a tough time?

Sunith Shetty
03 May 2018
5 min read
PyTorch has announced a preview of the blueprint for PyTorch 1.0, the next major release of the framework. This version is expected to bring more stability, integration support, and complete production backing, allowing developers to move from core research to production smoothly, without having to deal with migration challenges.

PyTorch is an open-source, Python-based scientific computing package that provides powerful GPU acceleration. PyTorch is known for advanced indexing and functions, its imperative style, integration support, and API simplicity, which is one of the key reasons developers prefer it for research and hackability. To know more about how Facebook's PyTorch competes with Google's TensorFlow, read our take on this deep learning war.

Some of the noteworthy changes in the roadmap for PyTorch 1.0:

Production support

One of the biggest challenges developers face with PyTorch is production support. A number of issues arise when trying to run models efficiently in production environments. Even though PyTorch provides excellent simplicity and flexibility, its tight coupling to Python makes performance at production scale a challenge.

To counter these challenges, the PyTorch team has decided to bring PyTorch and Caffe2 together to give developers production-scale readiness. However, adding production support brings complexity and configurable options for models into the API. The PyTorch team will stick to the goal of keeping the platform a favorable choice for researchers and developers. Hence, they are introducing a new just-in-time (JIT) compiler, named torch.jit. The torch.jit compiler rewrites PyTorch models at runtime in order to achieve scalability and efficiency in production environments. It can also export PyTorch models to run in a C++ environment (a runtime based on Caffe2 bits). Note: in PyTorch version 1.0, your existing code will continue to work as-is.

Let's go through how the JIT compiler can be used to export models to a Python-less environment in order to improve their performance.

torch.jit: the go-to compiler for your PyTorch models

Building models in Python code gives maximum productivity and makes PyTorch very simple and easy to use. However, it also means PyTorch finds it difficult to know which operation you will run next. This can be frustrating during model export and automatic performance optimization, because the framework needs to know what the computation will look like before it is even executed. To deal with this, PyTorch provides two ways of recovering information from Python code. Each method is useful in a different context, and you can mix them as needed (a minimal sketch of both modes appears at the end of this piece):
- Tracing the native Python code
- Compiling a subset of the Python language

Tracing mode

The torch.jit.trace function allows you to record the native PyTorch operations performed, along with the data dependencies between them. PyTorch version 0.3 already had a tracer function, which is used to export models through ONNX. The new version uses a high-performance C++ runtime that allows PyTorch to re-execute programs for you. The key advantage of this method is that it doesn't have to deal with how your Python code is structured, since it only traces through native PyTorch operations.

Script mode

The PyTorch team has come up with a solution called script mode, made specially for models, such as RNNs, that make use of control flow. You will have to write a regular Python function (avoiding complex language features), and to get the function compiled you apply the @script decorator, which converts your Python function directly into high-performance C++ at runtime.

Advantages in optimization and export techniques

Whether you use a trace or a script function, the technique allows you to optimize and export the model for use in production environments (i.e. a Python-free representation of the model). You can now move bigger segments of the model into an intermediate representation to work with sophisticated models, and you can use the high-performance backends available in Caffe2 to run the models efficiently.

Usability

If you don't need to export or optimize your model, you do not need to use this set of new features. These modes will be included in the core of the PyTorch ecosystem, allowing you to mix and match them with your existing code seamlessly, as your needs dictate.

Additional changes and improvements

In addition to the major update in production support for 1.0, the PyTorch team will continue optimizing and stabilizing the interface and fixing other modules in the PyTorch ecosystem. PyTorch 1.0 will see some changes on the backend side which might affect user-written C and C++ extensions: in order to incorporate new features and optimization techniques from Caffe2, the PyTorch team is replacing (and optimizing) the backend ATen library.

The PyTorch team is planning to release 1.0 during the summer. For a detailed preview of the roadmap, you can refer to the official PyTorch blog.

Top 10 deep learning frameworks
The Deep Learning Framework Showdown: TensorFlow vs CNTK
Why you should use Keras for deep learning
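As promised above, here is a minimal sketch of tracing and scripting using the torch.jit API as it shipped in the released version of PyTorch 1.0 (details differed slightly in the preview discussed in this article). The tiny model and the inputs are invented for the example.

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    # Hypothetical two-layer model, defined only for illustration.
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# Tracing mode: record the operations executed for an example input.
model = TinyModel()
example = torch.rand(1, 4)
traced = torch.jit.trace(model, example)

# Script mode: compile a Python function that contains control flow.
@torch.jit.script
def clipped_sum(x):
    total = torch.zeros(1)
    for i in range(x.size(0)):
        row = x[i].sum()
        if bool(row > 1.5):  # data-dependent branch, preserved by scripting
            total += row
    return total

# Both artifacts can be serialized for a Python-free runtime.
traced.save("tiny_model.pt")
print(clipped_sum(torch.rand(3, 4)))
```

Tracing would silently bake in one path of the if statement, which is why control-flow-heavy models such as RNNs are the intended audience for script mode.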


Neo4j 3.4 aims to make connected data even more accessible

Richard Gall
03 May 2018
3 min read
The graph database project Neo4j has announced the release of Neo4j 3.4. The update is poised to strengthen Neo4j's position as the market leader in the world of connected data, and could make it more accessible to more organizations and users.

Alongside the updates and improvements in Neo4j 3.4, the team have also announced Neo4j Bloom, a data visualization tool designed to make it easier to communicate and present insights to key stakeholders. It's a well-established fact that communication is key if you're going to do data science and data analysis well, so it's clear that Neo4j are directly responding to a key issue for their customers. It's also a valuable hook for any potential new customers.

Emil Eifrem (@emileifrem), founder and CEO of Neo4j, had this to say about the updates: "Our investments in the Neo4j database extend its ability to scale and drive new use cases for our current customers... Graph databases are an enterprise standard, and the introduction of Neo4j 3.4 and Neo4j Bloom means more people will discover the power and value of connected data."

What's new in Neo4j 3.4?

There are a number of important new features in Neo4j 3.4. Let's take a look at some of them.
- There have been some impressive improvements in performance. Neo4j has always been best in class when it comes to database performance, but with 3.4 it consolidates its position: Cypher now executes 70% faster, data loading is 30%-50% faster, and backups are apparently 100% faster.
- Multi-clustering brings fully sharded horizontal scaling one step closer.
- 3D geospatial and date/time search functions extend the range of potential use cases, giving the graph databases users build additional dimensions to search (a short query sketch follows below).

Neo4j Bloom: making connected data accessible

Neo4j Bloom is a tool designed by the project to make complex graph databases easier to visualize and explore. Abstract relationships that may be difficult for an outsider to interpret and understand will become much clearer. That's good news for data scientists and data analysts, but it's also good news for Neo4j: key stakeholders and decision makers who don't have technical expertise will now also be paying attention to Neo4j.

It's important to note what makes Neo4j Bloom distinctive. It is able to present the connections and context around various data points, which gives it an extra dimension over other popular data visualization tools. "Neo4j Bloom is specifically designed to illuminate connections between data points in an intuitive way, especially for executives and stakeholders who might not be very technical," explains Eifrem.

You can find out more about Neo4j 3.4 here.

From Graph Database to Graph Company: Neo4j's Native Graph Platform addresses evolving needs of customers
Creating a graph application with Python, Neo4j, Gephi & Linkurious.js
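To give a flavour of the new spatial and temporal types, here is a minimal sketch that runs a Cypher query using Neo4j 3.4's point(), distance(), and datetime() functions through the official Python driver of that era. The connection details, labels, and properties are invented for the example, and newer driver versions import from neo4j rather than neo4j.v1.

```python
from neo4j.v1 import GraphDatabase  # 1.x driver namespace used in the Neo4j 3.x era

# Placeholder connection details.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (s:Store)
WHERE s.opened >= datetime('2018-01-01T00:00:00Z')                          // temporal filter
  AND distance(s.location, point({latitude: 51.5, longitude: 0.0})) < 10000 // spatial filter, metres
RETURN s.name AS name
ORDER BY name
"""

with driver.session() as session:
    for record in session.run(query):
        print(record["name"])

driver.close()
```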


Swift for TensorFlow is now open source

Richard Gall
01 May 2018
3 min read
TensorFlow has continued its success in 2017 well into 2018. It's quickly expanding its capabilities, and we're beginning to see it used by engineers who aren't data specialists. We've seen that in the launch of TensorFlow.js, which allows you to bring machine learning to the browser. But Swift for TensorFlow is a slightly different proposition. In fact, it does two things: on the one hand it offers a new way of approaching TensorFlow, but it also helps to redefine Swift.

Let's be honest - Swift has come a long way since it was first launched by Apple at WWDC 2014. Back then it was a new language created to reinvigorate iOS development, meant to make Apple mobile developers happier and more productive. That is, of course, a noble aim, and by and large it seems to have worked; if it hadn't, we probably wouldn't still be talking about it. But Swift for TensorFlow marks Swift as a powerful modern programming language that can be applied to some of the most complex engineering problems.

What is Swift for TensorFlow?

Swift for TensorFlow was first unveiled at the TensorFlow Dev Summit in March 2018. Now that it's open source, it's going to be interesting to see how it shapes the way engineers use TensorFlow - and, of course, how the toolchain might shift. But what is it exactly? Watch the video below, recorded at the TensorFlow Dev Summit, to find out more.

https://www.youtube.com/watch?v=Yze693W4MaU

Here's what the TensorFlow team had to say about Swift for TensorFlow in a detailed post on Medium:

"Swift for TensorFlow provides a new programming model that combines the performance of graphs with the flexibility and expressivity of Eager execution, with a strong focus on improved usability at every level of the stack. This is not just a TensorFlow API wrapper written in Swift - we added compiler and language enhancements to Swift to provide a first-class user experience for machine learning developers."

Why did TensorFlow choose Swift?

This is perhaps the key question: why did the TensorFlow team decide to use Swift for this project? The team note that they are often asked this question. Considering many of the features of Swift for TensorFlow could be implemented in other programming languages, it's a reasonable question to ask. To properly understand why TensorFlow chose Swift, you need to go back to the aims of the project - and they're actually quite simple: the team want to make TensorFlow more usable. They explain:

"We quickly realized that our core static analysis-based Graph Program Extraction algorithm would not work well for Python given its highly dynamic nature. This led us down the path of having to pick another language to work with, and we wanted to approach this methodically."

The post on GitHub is well worth reading. It provides a detailed insight into how best to evaluate the advantages and disadvantages of one programming language over another. Incidentally, the TensorFlow team say the final shortlist of languages was Swift, Rust, Julia, and C++. Swift ended up winning out: there were 'usability concerns' around C++ and Rust, and compared to Julia it not only had a larger and more active community, it is also much more similar to Python in terms of syntax.

Splunk leverages AI in its monitoring tools

Richard Gall
30 Apr 2018
2 min read
Just weeks after the announcement of Splunk IAI (Industrial Asset Intelligence), Splunk has revealed it will be enhancing machine learning across many of its products, including Splunk Enterprise, IT Service Intelligence, and User Behavior Analytics. Clearly, the company is using Spring 2018 to build a solid foundation to future-proof its products.

Splunk has also added an 'Experiment Management Interface' to its Machine Learning Toolkit. This is a crucial update that will make tracking machine learning and AI 'experiments', and monitoring a range of issues alongside them, much easier.

Splunk's goal here is to reduce what it calls "event noise." The machine learning and AI algorithms will help to cut through the volume of data and information at users' disposal, allowing them to identify the issues that are most business-critical. It's about more than just analytics: it's the additional dimension that makes prioritization much more straightforward, and that's what distinguishes what Splunk is doing from its competitors. Typically, machine learning in BI software lets users monitor issues but doesn't have the capacity to place those issues in a wider business context.

There is a wide range of applications for this technology. It could be used to identify security issues within a given system, monitor application performance, or even support operational management.

Tim Tully, CTO, had this to say: "Our latest wave of innovation is intended to arm customers with the tools needed to translate AI into actionable intelligence. While AI and machine learning often seem like unattainable and expensive pipe dreams, Splunk Cloud and Splunk Enterprise now make it easier and more affordable to monitor, analyze and visualize machine data in real time."

Of course, while Tully's words contain an element of marketing speak made for a press release, it's worth noting that Splunk's goal here is to make AI and machine learning more accessible. Clearly the company knows what its customers want. That suggests that, for all the discussion around the machine learning revolution, there are still many businesses that regard machine learning as a considerable challenge.


Tableau 2018.1 brings new features to help organizations easily scale analytics

Sunith Shetty
24 Apr 2018
3 min read
Tableau Software has introduced new packages that combine new and existing analytical capabilities to scale data analytics across organizations. With Tableau 2018.1, you can enable an effective data-driven enterprise by giving the entire workforce easy access to data.

Tableau is one of the leading business intelligence tools used to derive quality insights. With the remarkable growth in data that customers are experiencing, the demand to analyze and interact with data is the need of the hour. This is where Tableau's range of products helps people interact with data visually to make critical decisions. Some of the noteworthy offerings in Tableau 2018.1:

Tableau Creator
- Provides full analytical capabilities to data analysts, BI professionals, and other power users, who can take advantage of Tableau's suite of products to uncover data insights quickly and effectively.
- Combines a range of Tableau products for powerful data analytics on the web and desktop. The products included in the suite are Tableau Desktop (at no additional cost), Tableau Prep (a data preparation tool to help customers ready their data for analysis), and a license for Tableau Server (to publish and share reports and dashboards).

Tableau Explorer
- Perform governed self-service data analytics to analyze data quickly and easily.
- Collaborate with others using governed data sources, create new dashboards, and get timely updates with new subscriptions and alerting.

Tableau Viewer
- Extends the value of data across organizations in a cost-effective manner.
- Enables better data-driven decisions by letting people interact with dashboards and reports created by others; users can view and filter dashboards, subscriptions, and data-driven alerts on mobile and the web.

In addition to the above products, Tableau has also released Tableau Prep, a new data preparation application, an improved version of Tableau Desktop, and new web authoring capabilities. These tailored offerings allow people to leverage the power of data analytics in a way that is flexible, easy to adopt, and simple to scale.

You can migrate existing Tableau Server and Desktop installations to the new offerings in the Tableau 2018.1 release. Once the migration is complete, administrators will be able to assign a specific option - Creator, Explorer, or Viewer - to each user in their organization, completing the transition.

Read More
Top 5 free Business Intelligence tools
What Tableau Data Handling Engine has to offer
Hands on Table Calculation Techniques with Tableau