Tech News - Data

BitTorrent’s traffic surges as the number of streaming services explode

Pavan Ramchandani
02 Oct 2018
2 min read
Peer-to-peer file-sharing services like BitTorrent saw their traffic decline over the last decade as affordable streaming services like Netflix grew in popularity. The drop in piracy was widely attributed to convenient on-demand video services and stricter anti-piracy laws. However, a recent report from Sandvine, which has been keeping a close eye on file-sharing traffic, suggests that this traffic is now growing again. The report also notes that BitTorrent, after visibly losing market share in earlier years, has re-emerged as the leading file-sharing protocol, with a 97% share of upstream file-sharing traffic.

Over the past few years, Netflix emerged as the leader in the streaming market and single-handedly hosted a wide variety of content. During that period, the market saw a decrease in upstream and downstream sharing of content over torrents, suggesting that streaming was rising and piracy downloads were falling. Netflix, however, is no longer the only popular streaming service. Video streaming services like Amazon Prime Video, Hulu, and others have fragmented the market, with popular content spread across multiple services. This appears to have revived the traditional practice of file sharing over the internet, and BitTorrent has gained from it. Sandvine's report states:

“More sources than ever are producing ‘exclusive’ content available on a single streaming or broadcast service – think Game of Thrones for HBO, House of Cards for Netflix, The Handmaid’s Tale for Hulu, or Jack Ryan for Amazon. To get access to all of these services, it gets very expensive for a consumer, so they subscribe to one or two and pirate the rest.”

File sharing involves uploading and downloading parts of a file over a peer-to-peer network.
Reportedly, file sharing accounts for 3% of all download traffic and 22% of all upload traffic, and BitTorrent enjoys the lion's share with a massive 97% of all upstream file-sharing traffic. Evidently, fragmentation in the subscription video-on-demand market is playing the major role. To know more about this in detail, check out the discussion thread on Hacker News.

Read more:
The cryptocurrency-based firm Tron acquires BitTorrent
Facebook Watch is now available worldwide, challenging video streaming rivals YouTube, Twitch, and more
Implementing fault-tolerance in Spark Streaming data processing applications with Apache Kafka


Neural Network Intelligence: Microsoft’s open source automated machine learning toolkit

Amey Varangaonkar
01 Oct 2018
2 min read
Google’s Cloud AutoML now has competition; Microsoft has released an open source automated machine learning toolkit of its own. Dubbed Neural Network Intelligence, the toolkit lets data scientists and machine learning developers perform tasks such as neural architecture search and hyperparameter tuning with relative ease. Per Microsoft’s official page, it provides data scientists, machine learning developers, and AI researchers with the tools needed to customize their AutoML models across various training environments. The toolkit was announced in November 2017 and spent a considerable period in the research phase before its recent public release.

Who can use the Neural Network Intelligence toolkit?

Microsoft’s highly anticipated toolkit for automated machine learning is perfect for you if:

You want to try out different AutoML algorithms for training your machine learning model
You want to run AutoML jobs in different training environments, including remote servers and the cloud
You want to implement your own AutoML algorithms and compare their performance with other algorithms
You want to incorporate your AutoML models in your own custom platform

With the Neural Network Intelligence toolkit, data scientists and machine learning developers can train and customize their machine learning models more effectively. The tool is expected to go head to head with Auto-Keras, another open source AutoML library for deep learning. Auto-Keras has quickly gained traction, with more than 3,000 stars on GitHub, reflecting the growing popularity of automated machine learning. You can download and learn more about this AutoML toolkit on its official GitHub page.

Read more:
What is Automated Machine Learning (AutoML)?
Top AutoML libraries for building your ML pipelines
Anatomy of an automated machine learning algorithm (AutoML)
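At its very simplest, the hyperparameter tuning that such a toolkit automates is a trial loop: sample a configuration, train, score, keep the best. The sketch below shows a minimal random search in plain Python. This is an illustration of the idea only, not the Neural Network Intelligence API; `train_and_score` is a hypothetical stand-in for a real training run.

```python
import random

def train_and_score(lr, batch_size):
    # Hypothetical stand-in for a real training run: a real AutoML
    # trial would train a model and return validation accuracy.
    return 1.0 - abs(lr - 0.01) * 10 - abs(batch_size - 64) / 1000

def random_search(trials=20, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        lr = 10 ** rng.uniform(-4, -1)            # sample learning rate log-uniformly
        batch_size = rng.choice([16, 32, 64, 128])  # sample batch size from a grid
        score = train_and_score(lr, batch_size)
        if best is None or score > best[0]:
            best = (score, lr, batch_size)
    return best

best_score, best_lr, best_bs = random_search()
print(best_lr, best_bs)
```

An AutoML system layers smarter samplers (Bayesian optimization, evolutionary search) and distributed trial execution on top of exactly this loop.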


Stable release of CUDA 10.0 out, with Turing support, tools and library changes

Prasad Ramesh
01 Oct 2018
3 min read
CUDA 10.0 was released in mid-September, bringing updates to the compiler, tools, and libraries. Support has also been added for the Turing architecture targets compute_75 and sm_75.

Compiler changes in CUDA 10.0

The paths of some compiler components have changed. nvcc, the CUDA C and CUDA C++ compiler, is now located in the bin/ directory. nvcc is built on top of the NVVM optimizer, which is itself built on top of the LLVM compiler infrastructure. If you want to target NVVM directly, use the Compiler SDK available in the nvvm/ directory. The following files are compiler-internal and can change without prior notice:

Any files in include/crt and bin/crt
Files such as include/common_functions.h, include/device_double_functions.h, include/device_functions.h, include/host_config.h, include/host_defines.h, and include/math_functions.h
nvvm/bin/cicc
bin/cudafe++, bin/bin2c, and bin/fatbinary

The following compilers are supported as host compilers in nvcc: Clang 6.0, Microsoft Visual Studio 2017 (RTW, Update 8 and later), Xcode 9.4, XLC 16.1.x, ICC 18, and PGI 18.x (with -std=c++14 mode). Note that starting with CUDA 10.0, nvcc supports all updates of Visual Studio 2017, both previous versions and newer updates.

There is a new libNVVM API function called nvvmLazyAddModuleToProgram in CUDA 10.0. It is used to add the libdevice module, and other similar modules, to a program more efficiently. The --extensible-whole-program (or -ewp) option has been added to nvcc; it enables whole-program optimizations, letting you use CUDA device parallelism features without having to use separate compilation. Warp matrix functions (wmma), first introduced in PTX ISA version 6.0, are now fully supported from PTX ISA version 6.0 onwards.

Tool changes

Except for Nsight Visual Studio Edition (VSE), which is installed as a plug-in to Microsoft Visual Studio, the following tools are available in the bin/ directory:
IDEs: nsight (Linux, Mac), Nsight VSE (Windows)
Debuggers: cuda-memcheck, cuda-gdb (Linux), Nsight VSE (Windows)
Profilers: nvprof, nvvp, Nsight VSE (Windows)
Utilities: cuobjdump, nvdisasm, gpu-library-advisor

CUDA 10.0 now includes Nsight Compute, a set of developer tools for profiling and debugging. It is supported on Windows, Linux, and Mac. nvprof now supports the OpenMP tools interface, and NVIDIA Tools Extension API (NVTX) v3 is now supported by the profiler.

Changes have also been made to the libraries nvJPEG, cuFFT, cuBLAS, NVIDIA Performance Primitives (NPP), and cuSOLVER. CUDA 10.0 has libraries optimized for the Turing architecture, and there is a new library called nvJPEG for GPU-accelerated hybrid JPEG decoding. For a complete list of changes, visit the NVIDIA website.

Read more:
Microsoft Azure now supports NVIDIA GPU Cloud (NGC)
NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
NVIDIA announces pre-orders for the Jetson Xavier Developer Kit, an AI chip for autonomous machines, at $2,499
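To see how the pieces above fit together on the command line, here is a sketch of compiling for the new Turing target with the new whole-program option. This is a hedged illustration only: `app.cu` and the output name are hypothetical, and the exact flags you need depend on your project.

```shell
# Compile a CUDA source file for Turing (compute capability 7.5)
# with the new extensible whole-program option from CUDA 10.0:
nvcc -arch=sm_75 --extensible-whole-program app.cu -o app

# The same flag also has a short form:
nvcc -arch=sm_75 -ewp app.cu -o app
```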


Microsoft’s new neural text-to-speech service lets machines speak like people

Natasha Mathur
28 Sep 2018
2 min read
Microsoft has come out with a production system that performs text-to-speech (TTS) synthesis using deep neural networks, making it hard to distinguish the computer's voice from human voice recordings. Neural text-to-speech synthesis significantly reduces listening fatigue in interactions with AI systems: it gives the system a human-like, natural-sounding voice that makes interacting with chatbots and virtual assistants more engaging. The Microsoft team demonstrated this neural-network-powered text-to-speech system at the Microsoft Ignite conference in Orlando, Florida, this week. Additionally, neural text-to-speech can convert digital texts such as e-books into audiobooks and enhance in-car navigation systems.

Deep neural networks are good at overcoming the limits of traditional text-to-speech systems. They accurately match the patterns of stress and intonation in spoken language, called prosody, and are effective at synthesizing units of speech into a computer voice.

Neural TTS

Traditional text-to-speech systems generally break prosody down into separate linguistic-analysis and acoustic-prediction steps governed by independent models, which usually results in muffled, buzzy voice synthesis. Neural networks, by contrast, perform prosody prediction and voice synthesis simultaneously, producing a more fluid and natural-sounding voice.

Microsoft uses the computational power of Azure to offer real-time streaming, which makes the system useful in situations such as interacting with a chatbot or virtual assistant. The TTS capability is served from the Azure Kubernetes Service to ensure high scalability and availability. Only a preview of the text-to-speech service is available currently; it comes with two pre-built neural text-to-speech voices in English, Jessa and Guy.
Microsoft will be making more languages available soon. It will also offer customization services in 49 languages for customers who want to build branded voices optimized for their specific needs. For more information, check out the official Microsoft blog post.

Read more:
Microsoft acquires AI startup Lobe, a no-code visual interface tool to build deep learning models easily
DoWhy: Microsoft’s new Python library for causal inference
Say hello to FASTER: a new key-value store for large state management by Microsoft


IBM Watson announces pre-trained AI tools to accelerate IoT operations

Savia Lobo
28 Sep 2018
3 min read
Yesterday, IBM Watson announced the launch of a set of new pre-trained AI tools for Connected Manufacturing. The offering includes a method and approach to help clients accelerate their IoT transformation, from strategy, implementation, and security to managed services and ongoing operations. This new approach will help IBM’s clients connect all their manufacturing equipment, sensors, and systems for business improvement across OEE, quality, lead times, and productivity.

What is this new Watson-enabled IoT approach all about?

The new IoT offering focuses on industries that are heavily IoT-dependent: industrial equipment, automotive (smart vehicles), and buildings (smart spaces). IBM's AI Watson solution, combined with its Industrial IoT platform, provides 'things' with the data they need to understand the physical world as everything becomes connected. This helps customers take advantage of the vast amounts of data generated by IoT.

Kareem Yusuf, GM, IBM Watson IoT, says, "We decided to release this largest-ever AI toolset pre-trained for industries and professions to help businesses re-imagine how they work. A key business advantage lies in tapping into organizational insights, historical customer data, internal reporting, past transactions, and client interactions. These elements are too often underutilized."

Rob Enderle, Principal Analyst at tech analyst firm The Enderle Group, said, "Training is where AI deployments get hung up. Much of the initial work with developed AI is to create this training, which then, through machine learning, can be passed on to new systems, significantly lowering the deployment cost and time to value. This is a critical phase to maturing the platform and getting it closer to its operational and sales potential." With the heavy lifting completed during the training period, Watson is ready to start producing targeted, industry-specific insights right away.
Enderle further added, "Getting the system to this phase is anything but trivial. Once there, machine learning can allow the replication of an unlimited number of systems."

Areas where IBM is pre-training Watson for industries and functions

Agriculture: AI-powered visual recognition capabilities let growers decide where to spray pesticides, determine the severity of damage from pests and diseases, and forecast water usage. Farmers also gain insights from temperature and moisture levels, as well as crop distress.

Human Resources: IBM Watson Talent lets recruiters analyze the backgrounds of top-performing employees to find candidates for new positions. In fact, AI could help reduce bias in hiring decisions, according to IBM. Psychologists helped IBM produce an AI scoring system, which lets recruiters quickly sort through candidates.

Marketing: IBM Watson Assistant for Marketing is a component of the Watson Campaign Automation SaaS. The assistant allows companies to evaluate their marketing campaigns, engage in more direct conversations with customers, and create a personalized customer experience.

Manufacturing: The Watson toolset for the manufacturing industry provides visual and acoustic inspection capabilities. AI technology will also allow manufacturers to predict when equipment failures might occur, as well as energy waste and product-quality issues, and will help them deal with workforce attrition, skills gaps, and rising raw-material costs.

Pre-training is also taking place in the advertising, commercial, and transportation spaces. To know more about this in detail, visit IBM’s official website.

Read more:
How IBM Watson is paving the road for Healthcare 3.0
Watson-CoreML: IBM and Apple’s new machine learning collaboration project
Stack skills, not degrees: Industry-leading companies Google, IBM, Apple no longer require degrees


IPython 7.0 releases with AsyncIO Integration and new Async libraries

Natasha Mathur
28 Sep 2018
2 min read
The IPython team released version 7.0 of IPython yesterday. IPython is a powerful Python interactive shell with features such as advanced tab completion, syntactic coloration, and more. IPython 7.0 introduces features such as asyncio integration, support for new async libraries, and async support in notebooks. IPython (Interactive Python) provides a rich toolkit for interactive computing in multiple programming languages, and it is the Jupyter kernel for Python used by millions of users. Let’s discuss the key features of the IPython 7.0 release.

AsyncIO integration

IPython 7.0 integrates IPython with asyncio, which means you no longer have to manage the event loop yourself at the prompt. asyncio is a library that lets you write concurrent code using the async/await syntax; it is used as a foundation for multiple Python asynchronous frameworks providing high-performance network clients and web servers, database connection libraries, distributed task queues, and more. Just remember that asyncio won't magically make your code faster: it makes concurrent code easier to write.

New async libraries (Curio and Trio integration)

Python's async and await keywords simplify the use of asynchronous programming and the standardization around asyncio. They also allow experimentation with new paradigms for asynchronous libraries. Support for two such libraries, Curio and Trio, has been added in IPython 7.0. Both explore ways to write asynchronous programs, and how to use async, await, and coroutines when starting from a blank slate. Curio is a library for performing concurrent I/O and common system programming tasks using Python coroutines and the explicit async/await syntax. Trio is an async/await-native I/O library for Python that lets you write programs that do multiple things at the same time with parallelized I/O.
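The async/await syntax in question looks like this as a self-contained Python script. This is a minimal sketch of asyncio itself, not of IPython internals; at an IPython 7.0 prompt you could type the `await` line directly, without the `asyncio.run` wrapper a plain script needs.

```python
import asyncio

async def fetch(n):
    # A coroutine: awaiting yields control to the event loop,
    # letting other coroutines run concurrently.
    await asyncio.sleep(0)
    return n * 2

async def main():
    # gather runs the three coroutines concurrently and
    # returns their results in order.
    return await asyncio.gather(fetch(1), fetch(2), fetch(3))

print(asyncio.run(main()))  # [2, 4, 6]
```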
Async support in notebooks

Async code will now work in a notebook when using ipykernel, for Jupyter users. With IPython 7.0, async works with all frontends that support the Jupyter protocol, including the classic Notebook, JupyterLab, Hydrogen, nteract desktop, and nteract web. By default the code runs in the existing asyncio/tornado loop that runs the kernel. For more information, check out the official release notes.

Read more:
Make Your Presentation with IPython
How to connect your Vim editor to IPython
Increase your productivity with IPython

TensorFlow 1.11.0 releases

Pravin Dhandre
28 Sep 2018
2 min read
It’s been just a month since the release of TensorFlow 1.10, and the TensorFlow community has introduced version 1.11 with a few major additions, lots of bug fixes, and numerous performance improvements.

Major features of TensorFlow 1.11.0:

Prebuilt binaries built for Nvidia GPUs
Experimental tf.data integration for Keras
Preview support for eager execution on Google Cloud TPUs
Multi-GPU DistributionStrategy support in tf.keras for model distribution
Multi-worker DistributionStrategy support in Estimator
C, C++, and Python functions for querying kernels
Simple Tensor and DataType classes added to TensorFlow Lite Java

Bug fixes and other changes:

Default values for the tf.keras RandomUniform, RandomNormal, and TruncatedNormal initializers have changed
Added a pruning mode for boosted trees
Old checkpoints are no longer deleted by default
Total disk space for dumped tensor data is limited to 100 GB
Added experimental IndexedDatasets

Performance improvements:

Enhanced performance for StringSplitOp and StringSplitV2Op
Regex replace operations improved for maximum performance
Toco compilation/execution fixed for Windows
Added a GoogleZoneProvider class for detecting Google Cloud Engine zones
Enabled importing tensor.proto.h in tensorflow
Added documentation clarifying the differences between tf.fill and tf.constant
Added a selective registration target using the lite proto runtime
Support for bitcasting to and from uint32 and uint64
An Estimator subclass can now be created from a SavedModelEstimator
Added argument leaf index modes

Please see the full release notes for complete details on the added features and changes. You can also check the GitHub repository for various interesting use cases of TensorFlow.

Read more:
Top 5 Deep Learning Architectures
A new Model Optimization Toolkit for TensorFlow can make models 3x faster
Intelligent mobile projects with TensorFlow: Build your first Reinforcement Learning model on Raspberry Pi

Google, Amazon, AT&T met the U.S Senate Committee to discuss consumer data privacy, yesterday

Savia Lobo
27 Sep 2018
3 min read
Yesterday, U.S. Senator John Thune (R-S.D.), chairman of the Senate Committee on Commerce, Science, and Transportation, convened a hearing titled 'Examining Safeguards for Consumer Data Privacy'. Executives from AT&T, Amazon, Google, Twitter, Apple, and Charter Communications provided their testimonies to the committee. The hearing took place to examine the privacy policies of top technology and communications firms, review the current state of consumer data privacy, and offer members the opportunity to discuss possible approaches to safeguarding privacy more effectively.

John Thune opened the meeting by saying, “This hearing will provide leading technology companies and internet service providers an opportunity to explain their approaches to privacy, how they plan to address new requirements from the European Union and California, and what Congress can do to promote clear privacy expectations without hurting innovation.”

The two biggest issues surrounding the hearing were questions of jurisdiction and enforcement. The other issues discussed included:

whether privacy policies should be legally mandated as opt-in or opt-out
the ability to download data and withdraw consent for data collection
whether ad-based business models can truly protect customers' information
how privacy policies translate to companies' work overseas (particularly in China)

Tech industry chooses federal law over state laws

A few months back, Europe and California passed strong laws governing online privacy and data, which experts call the most stringent and comprehensive protections to date. Because of these stringent laws, the tech industry is leaning toward a common federal law that overrides state rules rather than multiple state laws. The hearing included a lively argument between the two sides over whether online privacy is better handled at the federal level than at the state level. Federal commerce laws address inter-state commerce, and online information and business flow between states.
The tech companies expressed interest in having the opportunity to weigh in on the contents of a federal law, should one be considered. Many expressed concern that state-by-state legislation could result in a patchwork of laws. However, AT&T's representative did not actually explain why a federal law would be beneficial, saying only that if state law prevailed over federal law, the industry would be forced to comply with the most restrictive aspects of each state’s law. In reply, Senator Brian Schatz (D-HI) said, "Your holy grail is 'preemption', and we’re not going to replace a strong California law with a weaker federal one."

On the second big issue, enforcement, multiple senators asked the tech companies whether they believed enforcement of consumer data protection should rest with the FTC (Federal Trade Commission), and whether the FTC's power to enforce laws should be expanded. At present, the FTC levies fines based on a complex order of operations that involves tech companies agreeing that they have done something wrong before the FTC can enforce anything. The tech companies had a mixed reaction to this. Senator Richard Blumenthal (D-CT) said, "Voluntary rules have proved insufficient to protect privacy."

The congressional hearing reinforced the idea that the tech industry has accepted that privacy regulation is coming. Once that happens, these companies would rather have a say in the regulation than oppose it. To learn about the hearing in detail, watch the complete video on the U.S. Senate Committee website.

Read more:
Google’s Senate testimony, “Combating disinformation campaigns requires efforts from across the industry.”
Facebook COO Sandberg’s Senate testimony: On combating foreign influence, fake news, and upholding election integrity
Google CEO Sundar Pichai won’t be testifying to Senate on election interference

Scikit Learn 0.20.0 is here!

Natasha Mathur
27 Sep 2018
3 min read
Yesterday, the scikit-learn community released version 0.20.0 of scikit-learn, a popular machine learning library for Python. Scikit-learn 0.20.0 introduces new features and enhancements across the library. Scikit-learn is one of the most popular open source machine learning libraries for Python: it provides algorithms for machine learning tasks such as classification, regression, dimensionality reduction, and clustering, and offers modules for extracting features, processing data, and evaluating models.

Major features in scikit-learn 0.20.0

New features

There is a new impute module in scikit-learn 0.20.0 that offers estimators for learning despite missing data. String or pandas Categorical columns can now be encoded with OneHotEncoder or OrdinalEncoder. PowerTransformer and KBinsDiscretizer join QuantileTransformer as non-linear transformations. sample_weight support has been added to several estimators, including KMeans, BayesianRidge, and KernelDensity. This is also the first release that includes the Glossary of Common Terms and API Elements developed by Joel Nothman.

Other changes

There are many changes in sklearn.cluster, sklearn.compose, sklearn.covariance, sklearn.datasets, sklearn.decomposition, and other modules in scikit-learn 0.20.0. Let’s look at them in detail.

sklearn.cluster

cluster.AgglomerativeClustering now supports single-linkage clustering via linkage='single'. cluster.KMeans and cluster.MiniBatchKMeans support sample weights through the new sample_weight parameter of the fit function. cluster.KMeans, cluster.MiniBatchKMeans, and cluster.k_means, when passed algorithm='full', now enforce row-major ordering, improving runtime.

sklearn.compose

compose.ColumnTransformer is a new feature that applies different transformers to different columns of arrays or pandas DataFrames.
compose.TransformedTargetRegressor has also been added in this version; it transforms the target y before fitting a regression model.

sklearn.covariance

covariance.graph_lasso, covariance.GraphLasso, and covariance.GraphLassoCV have been renamed to covariance.graphical_lasso, covariance.GraphicalLasso, and covariance.GraphicalLassoCV. The old names will finally be removed in version 0.22.

sklearn.datasets

datasets.fetch_openml has been added to fetch datasets from OpenML, a free, open data-sharing platform. In datasets.make_blobs, you can now pass a list to the n_samples parameter to indicate the number of samples to generate per cluster. A filename attribute has been added to datasets that have a CSV file, and a new return_X_y parameter has been added to several dataset loaders.

sklearn.decomposition

The decomposition.dict_learning functions and models now offer support for positivity constraints, applying to both the dictionary and the sparse code. decomposition.SparsePCA now exposes normalize_components.

For more information, check out the official release notes.

Read more:
Machine Learning in IPython with scikit-learn
Why you should learn scikit-learn
Implementing 3 Naive Bayes classifiers in scikit-learn
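The new ability to encode string columns with OneHotEncoder is easier to appreciate with a picture of what one-hot encoding actually produces. The sketch below is a pure-Python illustration of the idea itself, not the scikit-learn API: each distinct category becomes an indicator column.

```python
def one_hot(column):
    # Map each distinct category to an indicator vector, mirroring
    # what a one-hot encoder produces for a string column.
    categories = sorted(set(column))
    index = {c: i for i, c in enumerate(categories)}
    return categories, [
        [1 if index[value] == i else 0 for i in range(len(categories))]
        for value in column
    ]

cats, encoded = one_hot(["red", "green", "red", "blue"])
print(cats)     # ['blue', 'green', 'red']
print(encoded)  # [[0, 0, 1], [0, 1, 0], [0, 0, 1], [1, 0, 0]]
```

scikit-learn's OneHotEncoder additionally remembers the learned categories so that the same mapping can be applied to new data at prediction time.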


Unity and Deepmind partner to develop Virtual worlds for advancing Artificial Intelligence

Sugandha Lahoti
27 Sep 2018
2 min read
Unity has announced a collaboration with DeepMind to develop virtual environments for advancing artificial intelligence. The two companies will create virtual environments for developing and testing experimental algorithms. The announcement is essentially a broad agreement, with little disclosed about actual plans at this point.

Unity is the most widely used real-time development platform, powering 60% of all AR/VR content and 50% of all mobile games worldwide. With this partnership, Unity is taking the initial steps toward becoming a general platform for developing intelligent agents and creating simulation environments. These virtual environments will be used to generate and capture synthetic data for different automotive and industrial verticals.

Unity has been exploring artificial intelligence for quite some time. Earlier this month, it released a new version of its ML-Agents toolkit to, among other things, more easily integrate ML-Agents environments into training workflows. Unity also has a TensorFlow-based algorithm that lets game developers easily train intelligent agents for 2D, 3D, and VR/AR games; these trained agents are then used to control NPC behavior within games.

DeepMind is no stranger to games either. Demis Hassabis, co-founder and CEO of DeepMind, says, “Games and simulations have been a core part of DeepMind’s research programme from the very beginning and this approach has already led to significant breakthroughs in AI research.” In 2016, DeepMind’s AlphaGo won a Go match 4-1 against South Korean Go champion Lee Sedol. Another of its programs, AlphaGo Zero, perfected its Go and chess skills simply by playing against itself iteratively.
Alongside its work training AI agents to play games, DeepMind has also developed AI for spotting over 50 sight-threatening eye diseases and recently released Dopamine, a TensorFlow-based framework for reinforcement learning.

Read more:
Why DeepMind made Sonnet open source
Key Takeaways from the Unity Game Studio Report 2018
Best game engines for Artificial Intelligence game development

Google's Stories to use artificial intelligence to create stories like Snapchat and Instagram

Sunith Shetty
26 Sep 2018
3 min read
Google is exploring ways to bring more immersive visual content to Search with 'Stories'. It will start curating stories using artificial intelligence algorithms to construct AMP Stories and surface them in Search, providing a glimpse of facts and important moments about notable people in a rich, visual format. The format makes it easy to tap through for more information and to discover unique content from the web.

What are Stories and why do they matter?

Snapchat was the first major vendor to create photo and video montages, with a tool called Stories, in October 2013. People liked the idea of getting a quick gist of day-to-day events, which drove its growing popularity. Facebook popularized the format further by copying the feature into all of its major apps: WhatsApp, Instagram, and Messenger. Stories have become the latest social media marketing tool, extensively used both personally and by businesses and startups. Their extensive use across social media channels has left many confused, for good reason. A key question on everyone's mind is: is it worth putting in the time and effort to create something that will just disappear in the next 24 hours?

Google plays catch-up with Snapchat and Instagram

Google is now in the race, pushing deeper into Stories. Since February, it has allowed some publishers to create AMP visual storytelling for delivering news and the latest updates using tap-through stories. The Google team will start using artificial intelligence techniques to create stories that appear in Google searches and image results. According to the company’s blog post, the stories will first be built around well-known people, such as celebrities and athletes, before being scaled and expanded to other categories. Google's stories will be curated using artificial intelligence techniques, but human moderators will still review them to ensure there aren't any severe mistakes or issues.
Google will be doubling its efforts to bring Stories into the search engine, with more updates and announcements to come in the next few months, said Cathy Edwards, Google's head of engineering for image search.

What's next for Stories?

Google's big step and confidence in the Stories format is yet another signal that it is here to stay. With all major vendors adopting the format within their channels, it will be interesting to see what new techniques and features get added. Stories has become incredibly popular on Snapchat and the Facebook-owned Instagram and WhatsApp: more than 400 million people use Instagram Stories daily, and 450 million use WhatsApp's version of stories daily. Expect more companies to start advertising in Stories to turn all that user interest into actual revenue. For now, the format is more popular with users than with advertisers, mainly because Stories ads are vertical video ads, a relatively new format that takes companies considerable time to create. We can expect Google to start putting ads in its stories at some point down the line.

Read more:

Facebook's AI algorithm finds 20 Myanmar Military Officials guilty of spreading hate and misinformation, leads to their ban

Apple bans Facebook's VPN app from the App Store for violating its data collection rules

Snapchat is losing users – but revenue is up

Introducing Missinglink.ai, a platform for accelerating Deep Learning Lifecycle

Sugandha Lahoti
26 Sep 2018
2 min read
Yesterday, MissingLink.ai launched as a platform whose purpose is to accelerate deep learning progress. Machine learning has long shown immense promise across many use cases, from helping to diagnose diseases to powering autonomous vehicles, but the actual process of delivering business outcomes currently takes too long and costs too much. MissingLink.ai was born out of a desire to fix that, says the MissingLink team. It enables data scientists to spend less time on grunt work by automating and streamlining the entire deep learning cycle, including data, code, experiments, and resources. By automating repetitive, time-consuming tasks, it shortens model training times and accelerates learning cycles, giving data scientists more time to apply the actionable insights their data provides.

Here are its top features:

The system uses version-aware data management, eliminating the need to copy files by syncing only changes to data.
It offers real-time monitoring and tracking of experiments via visual dashboards.
It automatically tracks data, experiments, and code, so engineers can easily reproduce any experiment at any time.
Users can run experiments on a training machine without the need to copy or move data. Once the data is integrated, MissingLink can manage it on a machine on premise.
MissingLink.ai allows teams to manage both local and public cloud resources as a single environment, growing and shrinking compute resources elastically as needed.
MissingLink.ai provides data management at scale, allowing companies to keep their data onsite and adjust experiment usage as needs change.
MissingLink comes with support for popular frameworks such as TensorFlow, PyTorch, Caffe, and Keras.

Aidoc is an AI-powered healthcare company that uses deep learning for medical imaging. With MissingLink, it has been able to help radiologists prioritize life-threatening cases and expedite patient care.
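The "version-aware" sync feature above can be sketched in a few lines. This is an illustration of the general technique only — content hashing against a previous sync index so that only changed files are transferred — not MissingLink's actual implementation; `file_digest` and `changed_files` are hypothetical names.

```python
# Illustrative sketch only — not MissingLink's implementation.
# Sync only the files whose content changed since the last sync,
# by comparing content hashes against a previously saved index.
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Content hash (SHA-256) of a file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_files(root: Path, last_index: dict) -> list:
    """Return relative paths whose content differs from the previous
    sync index, and update the index in place for the next run."""
    changed = []
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digest = file_digest(path)
            key = str(path.relative_to(root))
            if last_index.get(key) != digest:
                changed.append(key)  # new or modified since last sync
            last_index[key] = digest
    return changed
```

On the first run every file is "changed" (the index is empty); afterwards only files with modified content are returned, so unchanged data never needs to be copied again.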
Another MissingLink customer, Nanit, has developed a smart baby camera that uses deep learning and computer vision to monitor children. For more information, visit the MissingLink website. You may also read the blog from founder Yosi Taguri if you'd like to learn more.

Facebook's Glow, a machine learning compiler, to be supported by Intel, Qualcomm and others

Baidu releases EZDL – a platform that lets you build AI and machine learning models without any coding knowledge

Hortonworks Data Platform 3.0 is now generally available

Microsoft Ignite 2018: Highlights from day 1

Savia Lobo
25 Sep 2018
7 min read
Microsoft Ignite 2018 got started yesterday, the 24th of September 2018, in Orlando, Florida. The event will run until the 28th of September 2018 and will host more than 26,000 Microsoft developers from more than 100 countries. Day 1 of Microsoft Ignite was full of exciting news and announcements, including Microsoft Authenticator, AI-enabled updates to Microsoft 365, and much more! Let's take a look at some of the most important announcements from Orlando.

Microsoft puts an end to passwords via its Microsoft Authenticator app

Microsoft security helps protect hundreds of thousands of line-of-business and SaaS apps as they connect to Azure AD. Microsoft plans to deliver new support for password-less sign-in to Azure AD-connected apps via Microsoft Authenticator. The Microsoft Authenticator app replaces your password with a more secure multi-factor sign-in that combines your phone and your fingerprint, face, or PIN. Using a multi-factor sign-in method, users can reduce compromise by 99.9%. Not only is it more secure, but it also improves the user experience by eliminating passwords. The age of the password might be reaching its end, thanks to Microsoft.

Azure IoT Central is now generally available

Microsoft announced the public preview of Azure IoT Central in December 2017. At Ignite yesterday, Azure IoT Central became generally available. Azure IoT Central is a fully managed software-as-a-service (SaaS) offering that enables customers and partners to provision an IoT solution in seconds, customize it in just a few hours, and go to production the same day, all without requiring any cloud solution development expertise. Azure IoT Central is built on the hyperscale, enterprise-grade services provided by Azure IoT. In theory, it should match the security and scalability needs of Azure users.
Microsoft has also collaborated with MultiTech, a leading provider of communications hardware for the Internet of Things, to integrate IoT Central functionality into the MultiConnect Conduit programmable gateway. This integration enables out-of-the-box connectivity from Modbus-connected equipment directly into IoT Central, for unparalleled simplicity from proof of concept through wide-scale deployment. To know more about Azure IoT Central, visit its blog.

Microsoft Azure introduces Azure Digital Twins, the next evolution in IoT

Azure Digital Twins allows customers and partners to create a comprehensive digital model of any physical environment, including people, places, and things, as well as the relationships and processes that bind them. Azure Digital Twins uses Azure IoT Hub to connect the IoT devices and sensors that keep the digital model up to date with the physical world. This enables two powerful capabilities:

Users can respond to changes in the digital model in an event-driven and serverless way to implement business logic and workflows for the physical environment. For instance, when a presentation is started in PowerPoint in a conference room, the environment could automatically dim the lights and lower the blinds. After the meeting, when everyone has left, the lights are turned off and the air conditioning is lowered.
Azure Digital Twins also integrates seamlessly with Azure data and analytics services, enabling users to track the past and predict the future of their digital model.

Azure Digital Twins will be available for preview on October 15 with additional capabilities. To know more, visit its webpage.

Azure Sphere, a solution for creating highly secure MCU devices

To help organizations seize connected device opportunities while meeting the challenge of IoT risks, Microsoft developed Azure Sphere, a solution for creating highly secure MCU devices.
At Ignite 2018, Microsoft announced that Azure Sphere development kits are universally available and that the Azure Sphere OS, Azure Sphere Security Service, and Visual Studio development tools have entered public preview. Together, these tools provide everything needed to start prototyping new products and experiences with Azure Sphere. Azure Sphere allows manufacturers to build highly secure, internet-enabled MCU devices that stay protected even in an evolving threat landscape. Azure Sphere's unique mix of three components works in unison to reduce risk, no matter how the threats facing organizations change:

The Azure Sphere MCU includes built-in hardware-based security.
The purpose-built Azure Sphere OS adds a four-layer defense-in-depth software environment.
The Azure Sphere Security Service renews security to protect against new and emerging threats.

Adobe, Microsoft, and SAP announce the Open Data Initiative

At the Ignite conference, the CEOs of Adobe, Microsoft, and SAP introduced the Open Data Initiative to help companies connect, understand, and use all their data to create amazing experiences for their customers with AI. Together, the three long-standing partners are reimagining customer experience management (CXM) by empowering companies to derive more value from their data and deliver world-class customer experiences in real time. The Open Data Initiative is based on three guiding principles:

Every organization owns and maintains complete, direct control of all its data.
Customers can enable AI-driven business processes to derive insights and intelligence from unified behavioral and operational data.
A broad partner ecosystem should be able to easily leverage an open and extensible data model to extend the solution.

Microsoft now lets businesses rent a virtual Windows 10 desktop in Azure

Until now, virtual Windows 10 desktops were the domain of third-party service providers. From now on, Microsoft itself will offer these desktops.
The company argues that this is the first time users will get a multiuser virtualized Windows 10 desktop in the cloud. Most employees don't always work from the same desktop or laptop. This virtualized solution will allow organizations to offer them a full Windows 10 desktop in the cloud, with all the Office apps they know, without the cost of provisioning and managing a physical machine.

A universal search feature across Bing and Office.com

Microsoft announced that it is rolling out a universal search feature across Bing and Office.com, with support coming later in Edge, Windows, and Office. The search feature will be able to index internal documents to make it easier to find files. Search is being moved to a prominent and consistent place across the apps used every day, whether Outlook, PowerPoint, Excel, or Teams. Personalized results will also appear in the search box, so users can see documents they worked on recently. Here's a short video on the universal search feature: https://youtu.be/mtjJdltMoWU

New AutoML capabilities in Azure Machine Learning service

Microsoft also announced new capabilities for its Azure Machine Learning service, a technology that allows anyone to build and train machine learning models to make predictions from data. These models can then be deployed anywhere: in the cloud, on-premises, or at the edge. At the center of the update is automated machine learning, an AI capability that automatically selects, tests, and tweaks the machine learning models that power many of today's AI systems. The capability is aimed at making AI development accessible to a broader set of customers.

Preview announcement of SQL Server 2019

Microsoft announced the first public preview of SQL Server 2019 at Ignite 2018.
With this new release of SQL Server, businesses will be able to manage their relational and non-relational data workloads in a single database management system. A few expectations for SQL Server 2019:

Microsoft SQL Server 2019 will run either on-premises or on the Microsoft Azure stack.
Microsoft announced the Azure SQL Database Managed Instance, which will allow businesses to port their databases to the cloud without any code changes.
Microsoft announced new database connectors that will allow organizations to integrate SQL Server with other databases such as Oracle, Cosmos DB, MongoDB, and Teradata.

To know more about SQL Server 2019, read 'Microsoft announces the first public preview of SQL Server 2019 at Ignite 2018'.

Microsoft Ignite 2018: New Azure announcements you need to know

Azure Functions 2.0 launches with better workload support for serverless

Microsoft, Adobe and SAP announce Open Data Initiative, a joint vision to reimagine customer experience, at Ignite 2018
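The automated model selection at the heart of the AutoML announcement above can be illustrated with a tiny, framework-free sketch. Nothing here is Azure Machine Learning's API — `auto_select` and the candidate fitters are hypothetical names — it only shows the core loop: fit each candidate model on training data, score it on held-out data, and keep the best.

```python
# Generic illustration of automated model selection (not Azure ML's API):
# fit several candidate models, score each on a validation set, keep the best.

def fit_mean(xs, ys):
    """Baseline model: always predict the training mean."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Simple least-squares line through the training points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    return lambda x: my + slope * (x - mx)

def auto_select(train, valid, candidates):
    """Fit each (name, fitter) candidate on `train`, score it on `valid`
    by mean squared error, and return the best (name, model) pair."""
    def mse(model, data):
        return sum((model(x) - y) ** 2 for x, y in data) / len(data)

    best_name, best_model, best_err = None, None, float("inf")
    for name, fit in candidates:
        model = fit([x for x, _ in train], [y for _, y in train])
        err = mse(model, valid)
        if err < best_err:
            best_name, best_model, best_err = name, model, err
    return best_name, best_model
```

Real AutoML systems extend this loop with hyperparameter tweaking and far richer model families, but the select-test-keep structure is the same.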
Microsoft, Adobe and SAP announce Open Data Initiative, a joint vision to reimagine customer experience, at Ignite 2018

Bhagyashree R
25 Sep 2018
2 min read
Yesterday at the Microsoft Ignite conference, Microsoft, Adobe, and SAP came together to announce the Open Data Initiative. This initiative aims to help companies better govern their data and support privacy and security initiatives.

What is the Open Data Initiative?

The Open Data Initiative is an open alliance that aims to eliminate silos and enable a seamless flow of customer data. For this initiative, the trio (Microsoft, Adobe, and SAP) is enhancing interoperability and data exchange between their applications and platforms through a single data model. These applications and platforms include Adobe Experience Cloud and Adobe Experience Platform, Microsoft Dynamics 365, and SAP C/4HANA and S/4HANA.

How will this initiative help companies?

This initiative will help companies in the following ways:

Companies will be able to build and adopt intelligent applications that understand data, relationships, and metadata spanning multiple services from Adobe, SAP, Microsoft, and their partners.
It will help companies use the information trapped in internal and external silos to extract more value from their own data in real time and better serve customers.
Based on their preferences or needs, companies will be able to move transactional, operational, customer, or IoT data to and from the common data lake.
It will enable companies to create data-powered digital feedback loops for greater business impact.

Top retail companies are showing support and excitement for the Open Data Initiative. Barry Simpson, chief information officer at the Coca-Cola Company, said: “This initiative from Adobe, Microsoft and SAP is an important and strategic development for the Coca-Cola System. Our digital growth plans centered around our customers are fueled by these platforms and open standards. A more unified approach to the management and control of our data strengthens our ability to support our growth agenda and our ability to satisfy security, privacy and GDPR compliance requirements.
The industry needs to follow these leaders.” To know more about the Open Data Initiative, check out the press release on Microsoft's official website.

Microsoft announces the first public preview of SQL Server 2019 at Ignite 2018

SAP creates AI ethics guidelines and forms an advisory panel

Adobe set to acquire Marketo putting Adobe Experience Cloud at the heart of all marketing

Meet Pypeline, a simple python library for building concurrent data pipelines

Natasha Mathur
25 Sep 2018
2 min read
Pypeline, a new, simple, and powerful Python library for creating concurrent data pipelines, came out last week. Pypeline is designed for solving simple-to-medium data tasks that require concurrency and parallelism, and can be used in places where frameworks such as Spark or Dask feel unnatural. Pypeline offers an easy-to-use, familiar, functional API. It enables building data pipelines using processes, threads, and asyncio tasks via the exact same API. With Pypeline, you also have control over the memory and CPU resources used at each stage of your pipeline.

Pypeline basic usage

Using Pypeline, you can easily create multi-stage data pipelines with the help of functions such as map, flat_map, and filter. To do so, you define a computational graph specifying the operations to be performed at each stage, the number of resources, and the type of workers you want to use. Pypeline comes with three main modules, each using a different type of worker. To build multi-stage data pipelines, you can use three types of workers: processes, threads, and tasks.

Processes

You can create a pipeline based on multiprocessing.Process workers with the help of the process module, and then specify the number of workers at each stage. The maxsize parameter limits the maximum number of elements that a stage can hold simultaneously.

Threads and tasks

Create a pipeline using threading.Thread workers with the thread module. To create a pipeline based on asyncio.Task workers, use the asyncio_task module. Besides building pipelines from function calls, Pypeline also lets you compose pipelines with the pipe | operator. For more information, check out the official documentation.
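The stage/worker model described above can be approximated with the standard library alone. The sketch below is not Pypeline's API — `stage` and `run_pipeline` are hypothetical helper names — but it shows the same idea: each stage gets its own pool of thread workers, and a bounded queue between stages plays the role of Pypeline's `maxsize` back-pressure limit.

```python
# Conceptual sketch (hypothetical helpers, not Pypeline's actual API):
# a multi-stage concurrent pipeline built from the standard library.
import threading
from queue import Queue

_DONE = object()  # sentinel marking the end of the stream

def stage(fn, inbox, outbox, workers):
    """Run `workers` threads applying `fn` to items flowing inbox -> outbox."""
    def worker():
        while True:
            item = inbox.get()
            if item is _DONE:
                inbox.put(_DONE)  # re-post so sibling workers also stop
                return
            outbox.put(fn(item))

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()

    def closer():
        # Once every worker in this stage has finished, tell the next stage.
        for t in threads:
            t.join()
        outbox.put(_DONE)

    threading.Thread(target=closer).start()

def run_pipeline(data, fns, workers=2, maxsize=4):
    """Chain one stage per function; `maxsize` bounds each inter-stage
    queue, providing back-pressure like Pypeline's maxsize parameter."""
    first = Queue(maxsize=maxsize)
    q = first
    for fn in fns:
        nxt = Queue(maxsize=maxsize)
        stage(fn, q, nxt, workers)
        q = nxt

    def feed():
        for item in data:
            first.put(item)
        first.put(_DONE)

    threading.Thread(target=feed).start()

    results = []  # arrival order is nondeterministic under concurrency
    while True:
        item = q.get()
        if item is _DONE:
            return results
        results.append(item)

# Two-stage pipeline: double each number, then add one.
out = sorted(run_pipeline(range(5), [lambda x: x * 2, lambda x: x + 1]))
# -> [1, 3, 5, 7, 9]
```

Pypeline wraps this pattern behind `map`/`filter`-style functions and lets you swap thread workers for processes or asyncio tasks without changing the pipeline definition.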
How to build a real-time data pipeline for web developers – Part 1 [Tutorial]

How to build a real-time data pipeline for web developers – Part 2 [Tutorial]

Create machine learning pipelines using unsupervised AutoML [Tutorial]