
Tech News - Data

1208 Articles

Video-to-video synthesis method: A GAN by NVIDIA & MIT CSAIL is now Open source

Fatema Patrawala
23 Aug 2018
2 min read
Nvidia and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) have open-sourced their video-to-video synthesis model. The method uses a generative adversarial learning framework to generate high-resolution, photorealistic and temporally coherent results from a variety of input formats, including segmentation masks, sketches and poses.

Video-to-video synthesis has received far less research attention than image-to-image translation. It aims to solve the low visual quality and temporal incoherence of video results produced by existing image synthesis approaches. The research group proposed a novel video-to-video synthesis approach capable of synthesizing 2K-resolution videos of street scenes up to 30 seconds long. The authors performed extensive experimental validation on various datasets, and the model outperformed existing approaches both quantitatively and qualitatively. When the method was extended to multimodal video synthesis with identical input data, it produced new visual properties while retaining high resolution and coherence.

The researchers suggest the model could be improved in the future by adding 3D cues such as depth maps to better synthesize turning cars, by using object tracking to ensure an object maintains its colour and appearance throughout the video, and by training with coarser semantic labels to address issues in semantic manipulation.

The video-to-video synthesis paper is on arXiv, and the team's model and data can be found on its GitHub page.

Read next
NVIDIA shows off GeForce RTX, real-time raytracing GPUs, as the holy grail of computer graphics to gamers
Nvidia unveils a new Turing architecture: "The world's first ray tracing GPU"
Baidu announces ClariNet, a neural network for text-to-speech synthesis
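To make the adversarial setup in this piece concrete, here is a minimal, hypothetical PyTorch sketch of one conditional-GAN training step: a generator maps a segmentation map to a frame, and a discriminator scores (segmentation, frame) pairs. This is not NVIDIA's vid2vid code; the tiny networks, shapes and stand-in data are made up purely for illustration.

```python
import torch
import torch.nn as nn

# Toy conditional GAN: generator turns a 1-channel segmentation map into a
# 3-channel frame; discriminator scores (segmentation, frame) pairs.
G = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

seg = torch.rand(8, 1, 64, 64)    # batch of segmentation maps (stand-in data)
real = torch.rand(8, 3, 64, 64)   # corresponding real frames (stand-in data)

# Discriminator step: real pairs should score 1, generated pairs 0.
fake = G(seg).detach()
d_loss = bce(D(torch.cat([seg, real], dim=1)), torch.ones(8, 1, 64, 64)) + \
         bce(D(torch.cat([seg, fake], dim=1)), torch.zeros(8, 1, 64, 64))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to fool the discriminator on generated pairs.
g_loss = bce(D(torch.cat([seg, G(seg)], dim=1)), torch.ones(8, 1, 64, 64))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

The actual vid2vid model adds sequential generation, flow-based warping and multi-scale discriminators on top of this basic pattern to keep frames temporally coherent.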

Google Brain’s Universal Transformers: an extension to its standard translation system

Fatema Patrawala
22 Aug 2018
4 min read
Last year in August, Google released the Transformer, a novel neural network architecture based on a self-attention mechanism that is particularly well suited to language understanding.

Before the Transformer, most neural network based approaches to machine translation relied on recurrent neural networks (RNNs), which operate sequentially using recurrence. In contrast, the Transformer uses no recurrence: it processes all words or symbols in the sequence at once and lets each word attend to every other word over multiple processing steps, using self-attention to incorporate context from words farther away. This approach let the Transformer train much faster than recurrent models and yield better translation results than RNNs.

"However, on smaller and more structured language understanding tasks, or even simple algorithmic tasks such as copying a string (e.g. to transform an input of "abc" to "abcabc"), the Transformer does not perform very well," say Stephan Gouws and Mostafa Dehghani from the Google Brain team.

Hence, this year the team has come up with Universal Transformers, an extension of the standard Transformer that is computationally universal, using a novel and efficient flavor of parallel-in-time recurrence. The Universal Transformer is built to yield stronger results across a wider range of tasks.

How does the Universal Transformer function?

The Universal Transformer builds on the parallel structure of the Transformer to retain its fast training speed. It replaces the Transformer's fixed stack of different transformation functions with several applications of a single, parallel-in-time recurrent transformation function. Crucially, where an RNN processes a sequence symbol by symbol (left to right), the Universal Transformer processes all symbols at the same time (like the Transformer), but then refines its interpretation of every symbol in parallel over a variable number of recurrent processing steps using self-attention. This parallel-in-time recurrence mechanism is both faster than the serial recurrence used in RNNs and makes the Universal Transformer more powerful than the standard feedforward Transformer.

Source: Google AI Blog

At each step, information is communicated from each symbol (e.g. each word in the sentence) to all other symbols using self-attention, just as in the original Transformer. However, the number of times this transformation is applied to each symbol (i.e. the number of recurrent steps) can either be set manually ahead of time (e.g. to some fixed number or to the input length), or it can be decided dynamically by the Universal Transformer itself. To achieve the latter, the team added an adaptive computation mechanism to each position, which allocates more processing steps to symbols that are ambiguous or require more computation.

On a diverse set of challenging language understanding tasks, the Universal Transformer generalizes significantly better and achieves a new state of the art on the bAbI linguistic reasoning task and the challenging LAMBADA language modeling task. But perhaps the larger feat is that the Universal Transformer also improves translation quality by 0.9 BLEU over a base Transformer with the same number of parameters, trained in the same way on the same training data.

"Putting things in perspective, this almost adds another 50% relative improvement on top of the previous 2.0 BLEU improvement that the original Transformer showed over earlier models when it was released last year," says the Google Brain team.

The code to train and evaluate Universal Transformers can be found in the open-source Tensor2Tensor repository. Read more about Universal Transformers on the Google AI blog.

Read next
Create an RNN based Python machine translation system [Tutorial]
FAE (Fast Adaptation Engine): iOlite's tool to write Smart Contracts using machine translation
Setting up the Basics for a Drupal Multilingual site: Languages and UI Translation
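The parallel-in-time recurrence described above can be pictured as a single shared block applied repeatedly to all positions at once. The sketch below is a simplification, not Google's Tensor2Tensor implementation: it uses a fixed number of recurrent steps, whereas the adaptive-computation variant would decide the step count per symbol.

```python
import torch
import torch.nn as nn

class UniversalTransformerBlock(nn.Module):
    """One shared self-attention + transition block, applied `steps` times to every position."""
    def __init__(self, d_model=64, n_heads=4, steps=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.transition = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                        nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.steps = steps  # fixed here; adaptive computation time would choose this per symbol

    def forward(self, x):
        # Unlike a stack of distinct layers, the *same* parameters are reused at each step.
        for _ in range(self.steps):
            attn_out, _ = self.attn(x, x, x)   # every symbol attends to every other symbol
            x = self.norm1(x + attn_out)
            x = self.norm2(x + self.transition(x))
        return x

tokens = torch.randn(2, 10, 64)                   # (batch, sequence length, model dim)
print(UniversalTransformerBlock()(tokens).shape)  # torch.Size([2, 10, 64])
```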

Facebook and NYU are working together to make MRI scans 10x faster

Richard Gall
22 Aug 2018
3 min read
Facebook is working with NYU in a bid to transform the speed at which MRI scans can be performed. Using artificial intelligence trained on 3 million MRI scans, fastMRI can supposedly work ten times as fast as a traditional MRI scan.

MRI scans can offer medical professionals a level of detail that other scans cannot, but they can take some time compared to, say, an X-ray. This is because MRI scans work by gathering a sequence of views which can then be turned into cross sections of a patient's internal tissue; to get a detailed picture, more data needs to be gathered in the scan.

With this project, however, the aim is to reduce the amount of data that needs to be collected. It will do this by using neural networks to build up the foundational components within a scan, so the scan can instead focus on what is unique to that specific patient. This is a bit like how humans process information by filtering out what they already know or find familiar and focusing on what is important. Researchers at NYU have already done some work on getting neural networks to produce high quality images from limited data.

What makes Facebook's and NYU's MRI research unique?

In a post published on Monday, Larry Zitnick from Facebook's AI Research lab and Daniel Sodickson from the NYU School of Medicine explained why this project is different from similar artificial intelligence research in medicine:

"Unlike other AI-related projects, which use medical images as a starting point and then attempt to derive anatomical or diagnostic information from them (in emulation of human observers), this collaboration focuses on applying the strengths of machine learning to reconstruct the most high-value images in entirely new ways. With the goal of radically changing the way medical images are acquired in the first place, our aim is not simply enhanced data mining with AI, but rather the generation of fundamentally new capabilities for medical visualization to benefit human health."

Facebook and NYU are ambitious about the scale of the project. They plan to open source the research to encourage wider participation in the area and potentially push the boundaries of AI-informed medical research even further. The teams say that "its long-term impact could extend to many other medical imaging applications", such as CT scans.

Read next
HUD files complaint against Facebook over discriminatory housing ads
Four 2018 Facebook patents to battle fake news and improve news feed
Facebook launches a 6-part Machine Learning video series
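The core idea, collecting less data and letting a model fill in the rest, can be illustrated in a few lines of NumPy. The sketch below is not the fastMRI code; it simply simulates an undersampled scan by keeping a fraction of an image's frequency-space ("k-space") measurements, which is the degraded input a reconstruction network would be trained to repair.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((128, 128))            # stand-in for a fully sampled MRI slice

# MRI machines acquire data in frequency space; fewer samples means a faster scan.
kspace = np.fft.fft2(image)
mask = rng.random(kspace.shape) < 0.25    # keep roughly 25% of the measurements
undersampled = np.fft.ifft2(kspace * mask).real

# 'undersampled' is a fast-but-degraded reconstruction; fastMRI's premise is that a
# network trained on pairs (undersampled, image) can recover the missing detail.
print("reconstruction error without AI:", np.abs(undersampled - image).mean())
```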

MIT’s Duckietown Kickstarter project aims to make learning how to program self-driving cars affordable

Savia Lobo
22 Aug 2018
4 min read
MIT is known for offering out-of-the-box, interesting courses such as pirate training, street-fighting math, and so on. Duckietown, the April 2016 spring course held at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), tops them all. The Duckietown Kickstarter project, launched on 7 August 2018, teaches how to program self-driving cars at an affordable cost. Duckietown is designed to be an affordable, modular, scalable, duckie-filled introduction to autonomous vehicles.

https://youtu.be/b0B6S2Ca75Q

At present, Duckietown kits are being used to teach practical self-driving robotics to students around the world. The idea behind the Kickstarter is to help the project scale up, providing robots and classroom kits that can do more for less money. Asked via email why the initial Duckietown class turned into a much larger project, Andrea Censi, president of the Duckietown Foundation, told IEEE Spectrum: "We found ourselves receiving emails from both independent learners and educational institutions all over the world showing interest in different forms. We realized that without 'scaling up' our methods, it would have soon been impossible to manage all the people that wanted to be involved."

Why did Duckietown go to Kickstarter?

Andrea explained the need to establish a one-click solution for obtaining the hardware. The bottleneck in distributing the Duckietown platform was the accessibility of the hardware, which included (a) the time and effort necessary to obtain Duckiebots and Duckietowns, (b) the price, and (c) the inconsistent availability of components from the sources, their geographical location and related shipping limitations. Most of the components in the Duckietown kit are off-the-shelf, and links to the vendors are provided, but the shipping process is time-consuming and cost-ineffective. Vendors do not guarantee the availability of the components, only ship to some parts of the world, and might at any time run out of inventory for specific components or change their prices.

The Kickstarter is a way to solve this problem by raising the funds to create the pipeline needed to make hardware available anywhere, at any time, with the ease of a single click. Once the hardware distribution pipeline is established, components can be purchased in bulk at lower prices.

No prior coding experience required with the Duckietown kit

Students or teachers who wish to use the Duckietown kit can follow the step-by-step instructions in the open-source Duckietown book (the Duckiebook). One of the highlights of this learning experience is that not much robotics or coding experience is necessary to follow these instructions. Andrea said, "By just following the instructions, learners will experience the hardware assembly of a robot (need for sensing, actuation, power and computation), the basics of Linux and ROS (Robot Operating System) operations, the need to calibrate the camera, and be able to 'play around' (tune high level parameters) with fundamental car behaviors like lane following, obstacle (i.e. duckie and Duckiebot) avoidance, intersection navigation, and stopping at a red light."

From an educational perspective, MIT envisions Duckietown becoming a milestone learning experience in robotics and autonomy education, providing an educational experience automatically tailored to each learner. From a research perspective, Duckietown could become a standardized research testbed for embodied autonomy. This is the main goal of the AI Driving Olympics (AI-DO), with its first edition at NIPS 2018 and its second at ICRA 2019.

To learn more about how Duckietown can be used to program self-driving cars, read Andrea Censi's complete email interview in the IEEE Spectrum post.

Read next
What the IEEE 2018 programming languages survey reveals to us
Tesla is building its own AI hardware for self-driving cars
Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics
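For a flavour of the "tune high-level parameters" experience Censi describes, here is a hypothetical, Duckietown-independent sketch of the simplest lane-following behaviour: a proportional-derivative controller that steers based on the robot's lateral offset from the lane centre. The `read_lane_offset` and `set_wheel_speeds` functions are placeholders, not part of the Duckietown software stack.

```python
import time

KP, KD = 2.0, 0.5          # the kind of high-level gains a learner would tune
BASE_SPEED = 0.3

def read_lane_offset():
    """Placeholder: lateral offset from the lane centre (metres), from the camera pipeline."""
    return 0.0

def set_wheel_speeds(left, right):
    """Placeholder: send wheel commands to the robot's motor driver."""
    pass

prev_error, prev_t = 0.0, time.time()
for _ in range(100):                              # a short demo run of the control loop
    error = read_lane_offset()
    now = time.time()
    derivative = (error - prev_error) / max(now - prev_t, 1e-3)
    steer = KP * error + KD * derivative          # PD control on the lane offset
    set_wheel_speeds(BASE_SPEED - steer, BASE_SPEED + steer)
    prev_error, prev_t = error, now
    time.sleep(0.05)                              # roughly 20 Hz control loop
```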

OpenAI set their eyes to beat Professional Dota 2 team at The International

Fatema Patrawala
21 Aug 2018
3 min read
Back in June, OpenAI Five smashed amateur humans at the video game Dota 2. Then, earlier this month, OpenAI Five beat semi-professional Dota 2 players. Now OpenAI Five is set to claim the Dota 2 throne with plans to beat the world's best professional Dota 2 players.

The Elon Musk backed non-profit AI research company OpenAI is pitting its team of five neural networks, called OpenAI Five, against a team of top professional Dota 2 players at The International esports tournament. The International 2018 is being held this week (August 20-25) at the Rogers Arena in Vancouver, Canada.

With this challenge, the team is using Dota 2 as a testbed for general-purpose AI systems that start to capture the messiness and continuous nature of the real world: teamwork, long time horizons, and hidden information. The Dota training system showed that current AI algorithms can learn long-term planning at a large but achievable scale. The system is not specific to Dota 2; the team has also used it to control a robotic hand, a previously unsolved problem in robotics. OpenAI's mission is to ensure that artificial general intelligence benefits all of humanity.

How does OpenAI Five work?

OpenAI Five is a team of five artificial neural networks, a kind of simulated "brain" designed to be well shaped for learning Dota 2. OpenAI Five sees the world as a list of 20,000 numbers that encode the visible game state (limited to the information a human player is permitted to see), and chooses an action by emitting a list of 8 numbers. The OpenAI team writes the code that maps between game state/actions and these lists of numbers. Once trained, the neural networks are creatures of pure instinct: they implement memory but do not otherwise keep learning. They play as a team, but without any specially designed communication structure; the researchers only provide an incentive to cooperate.

OpenAI Five training

The five neural networks start with random parameters and use a general-purpose training system, Rapid, to learn better parameters. Rapid has OpenAI Five play copies of itself, generating 180 years of gameplay data each day across thousands of simultaneous games and consuming 128,000 CPU cores and 256 GPUs. At each game frame, Rapid computes a numeric reward which is positive when something favorable happens (e.g. an allied hero gains experience) and negative when something unfavorable happens (e.g. an allied hero is killed). Rapid then applies the Proximal Policy Optimization algorithm to update the parameters of the neural networks, making actions which occurred soon before a positive reward more likely and those soon before a negative reward less likely.

"Our team is focused on making the goal. We don't know if it will be achievable, but we believe that with hard work (and some luck) we have a real shot," says the OpenAI team.

For further details, read the OpenAI blog.

Read next
OpenAI Five bots beat a team of former pros at Dota 2
OpenAI builds reinforcement learning based system giving robots human like dexterity
Extending OpenAI Gym environments with Wrappers and Monitors [Tutorial]
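The interface described in this piece, a 20,000-number observation in, an 8-number action out, plus a per-frame reward, can be made concrete with a small hypothetical sketch. The toy policy, event names and reward values below are illustrative only and are not OpenAI's actual reward shaping.

```python
import numpy as np

OBS_SIZE, ACT_SIZE = 20_000, 8

def policy(observation, weights):
    """Toy stand-in for one of the five networks: observation in, 8 action numbers out."""
    return np.tanh(weights @ observation)

def frame_reward(events):
    """Per-frame reward in the spirit of Rapid: positive for good events, negative for bad."""
    table = {"hero_gained_xp": +0.002, "allied_hero_killed": -1.0, "enemy_tower_destroyed": +1.0}
    return sum(table.get(e, 0.0) for e in events)

weights = np.random.randn(ACT_SIZE, OBS_SIZE) * 0.01
obs = np.random.randn(OBS_SIZE)               # stand-in for the encoded game state
action = policy(obs, weights)                 # 8 numbers sent back to the game
reward = frame_reward(["hero_gained_xp"])     # PPO would then nudge the weights toward
print(action.shape, reward)                   # actions that preceded positive reward
```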

Did quantum computing just take a quantum leap? A two-qubit chip by UK researchers makes controlled quantum entanglements possible

Natasha Mathur
21 Aug 2018
2 min read
A team led by Xiaogang Qiang from the Quantum Engineering Technology Labs at the University of Bristol in the UK has designed a fully programmable silicon chip that controls two qubits of information simultaneously within a single integrated chip, taking us closer to the quantum computing era.

Read also: Quantum Computing is poised to take a quantum leap with industries and governments on its side

The researchers built a silicon chip that guides single particles of light (photons) along optical tracks called waveguides to produce quantum bits of information, or "qubits". This small device can be used to perform a wide range of quantum information experiments. It can also be used to demonstrate how fully functional quantum computers could be engineered from large-scale fabrication processes.

"We programmed the device to implement 98 different two-qubit unitary operations (with an average quantum process fidelity of 93.2 ± 4.5%), a two-qubit quantum approximate optimization algorithm, and efficient simulation of Szegedy directed quantum walks … [which] fosters further use of the linear-combination architecture with silicon photonics for future photonic quantum processors," write the researchers in the paper titled "Large-scale silicon quantum photonics implementing arbitrary two-qubit processing".

The new design addresses one of the major problems faced in quantum computer development. With current technology, it is possible to carry out operations that require just a single qubit (a unit of information that is in a superposition of simultaneous "0" and "1") effectively. Adding a second qubit, however, enables quantum entanglement, which makes the device far harder to control. Qiang and colleagues have found a solution to this problem: their new quantum processor is capable of controlling two qubits. As mentioned in the paper, "by using large-scale silicon photonic circuits to implement … a linear combination of quantum operators scheme … we realize a fully programmable two-qubit quantum processor, enabling universal two-qubit quantum information processing in optics". The paper also notes that the quantum processor has been fabricated with mature CMOS-compatible processing and consists of more than 200 photonic components.

"It's a very primitive processor because it only works on two qubits, which means there is still a long way before we can do useful computations with this technology," says lead author Dr. Xiaogang Qiang.

Read next
What is Quantum Entanglement?
Google AI releases Cirq and Open Fermion-Cirq to boost Quantum computation
"The future is quantum" — Are you excited to write your first quantum computing code using Microsoft's Q#?
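Two-qubit entanglement, the capability the Bristol chip adds over single-qubit devices, is easy to show numerically. The sketch below builds the textbook Bell state with a Hadamard gate followed by a CNOT; it illustrates the mathematics the photonic processor implements in hardware, not the chip's programming interface.

```python
import numpy as np

# Single-qubit Hadamard gate, identity, and the two-qubit CNOT gate.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)   # start in |00>
state = CNOT @ np.kron(H, I) @ state            # H on the first qubit, then CNOT

# Result: (|00> + |11>)/sqrt(2); measuring one qubit fixes the other.
print(np.round(state.real, 3))                  # [0.707 0.    0.    0.707]
```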

NVIDIA shows off GeForce RTX, real-time raytracing GPUs, as the holy grail of computer graphics to gamers

Sugandha Lahoti
21 Aug 2018
2 min read
NVIDIA has shocked (in a good way, of course) gamers all over the globe by introducing GeForce RTX, the world's first real-time ray-tracing gaming GPUs. Jensen Huang introduced the GeForce RTX series of gaming processors at Gamescom 2018, calling it the "biggest leap in performance in NVIDIA's history". These gaming GPUs are based on the NVIDIA Turing architecture and the NVIDIA RTX platform, which fuses shaders with real-time ray tracing and AI capabilities. Turing delivers 6x more performance than its predecessor, Pascal, and delivers 4K HDR gaming at 60 frames per second on even the most advanced titles.

Ray tracing models the behavior of light in real time as it intersects objects in a scene, making life-like simulations possible in games and other animations.

The new GeForce RTX 2080 Ti, 2080 and 2070 GPUs are packed with features including:

- New RT Cores to enable real-time ray tracing of objects and environments with physically accurate shadows, reflections, refractions and global illumination.
- Turing Tensor Cores to perform lightning-fast deep neural network processing.
- A new NGX neural graphics framework which integrates AI into the overall graphics pipeline for image enhancement and generation.
- A new Turing shader architecture with Variable Rate Shading, which allows shaders to focus processing power on areas of rich detail, boosting overall performance.
- A new memory system featuring ultra-fast GDDR6 with over 600 GB/s of memory bandwidth for high-speed, high-resolution gaming.
- NVIDIA NVLink, a high-speed interconnect that provides higher bandwidth (up to 100 GB/s) and improved scalability for multi-GPU configurations.
- Hardware support for USB Type-C and VirtualLink.

The world's top game publishers, developers and engine creators have announced support for the NVIDIA RTX platform. Supported titles include Battlefield V, Shadow of the Tomb Raider, Metro Exodus, Control, and Assetto Corsa Competizione; developers include EA, Square Enix, Epic Games, and more.

GeForce RTX graphics cards will be available worldwide, across 238 countries and territories, at a starting price of $499. You can pre-order on nvidia.com. For more coverage of the news, including what motivated NVIDIA to develop these GPUs, read the NVIDIA blog.

Read next
NVIDIA unveils a new Turing architecture: "The world's first ray tracing GPU"
NVIDIA's Volta Tensor Core GPU hits performance milestones. But is it the best?
NVIDIA open sources its material definition language, MDL SDK
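Ray tracing as defined above comes down to geometry: for every ray, find what it hits. A minimal, illustrative ray-sphere intersection test (nothing to do with the RTX hardware or its APIs) looks like this:

```python
import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None if it misses."""
    oc = origin - center
    # Solve |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    a = np.dot(direction, direction)
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                      # the ray misses the sphere entirely
    t = (-b - np.sqrt(disc)) / (2 * a)
    return t if t > 0 else None          # only hits in front of the camera count

# A ray from the origin straight down the z-axis toward a unit sphere at z = 5.
print(ray_hits_sphere(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                      np.array([0.0, 0.0, 5.0]), 1.0))   # 4.0
```

Dedicated RT Cores accelerate exactly this kind of intersection work (against full scene hierarchies) at rates fast enough for real-time rendering.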

A new Stanford artificial intelligence camera uses a hybrid optical-electronic CNN for rapid decision making

Prasad Ramesh
21 Aug 2018
3 min read
Stanford University researchers have devised a new type of camera powered by artificial intelligence. The camera system is driven by two computers and can classify images faster while being more energy efficient.

The image recognition technology underlying today's autonomous vehicles teaches itself to recognize the objects around it. The problem with current systems is that the computers running the artificial intelligence algorithms are too large and slow for future handheld applications. For such applications to be viable and reach production, the computers need to be much smaller.

The hybrid optical-electronic system

Assistant professor Gordon Wetzstein and Julie Chang, a graduate student and first author of the paper "Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification", published in Nature Scientific Reports, married two types of computers into one. This created an optical-electronic hybrid computer aimed at image analysis.

The prototype camera's first layer is an optical computer, which does not require power-intensive mathematical computing. The second layer is a conventional electronic computer. The optical computer physically preprocesses the image data, filtering it in multiple ways that an electronic computer would otherwise have had to compute mathematically. Because the filtering happens naturally as light passes through the optics, this layer operates with zero input power. A lot of time and power that would otherwise be consumed by image computation is saved in this hybrid model.

"We've outsourced some of the math of artificial intelligence into the optics," Chang said. This results in fewer calculations, which in turn means fewer calls to memory and far less time to complete the process. Skipping these preprocessing steps gives the digital computer a head start on the remaining analysis. "Millions of calculations are circumvented and it all happens at the speed of light. Some future version of our system would be especially useful in rapid decision-making applications, like autonomous vehicles," Wetzstein said.

Fast decision-making

The prototype rivals existing electronic-only computing processors in speed and accuracy, but with substantial computational cost savings that translate to time saved. The current prototype is arranged on a lab bench and could hardly be classified as handheld, but the researchers say the system can one day be made small enough to hold in your hand. Wetzstein, Chang and the researchers at the Stanford Computational Imaging Lab are now working on ways to make the optical component do even more of the preprocessing. This would result in a smaller, faster AI camera system that could replace the trunk-sized computers currently used in cars and drones. Notably, the system successfully identified objects in both simulations and real-world experiments.

For more information, visit the official Stanford news website and the research paper.

Read next
Tesla is building its own AI hardware for self-driving cars
AI powered Robotics: Autonomous machines in the making
AutoAugment: Google's research initiative to improve deep learning performance
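One way to picture the division of labour is a network whose first convolutional layer is frozen, standing in for the zero-power optical filtering, while only the "electronic" layers that follow are trained. This PyTorch sketch is a conceptual analogy, not the researchers' diffractive optical design.

```python
import torch
import torch.nn as nn

# "Optical" front end: a convolution with fixed, untrained filters.
optical_layer = nn.Conv2d(3, 8, kernel_size=5, padding=2, bias=False)
for p in optical_layer.parameters():
    p.requires_grad_(False)            # the optics do this filtering for free, so nothing to learn

# "Electronic" back end: the only part that spends power on math and gets trained.
electronic = nn.Sequential(nn.ReLU(), nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                           nn.Linear(8 * 8 * 8, 10))

images = torch.randn(4, 3, 32, 32)     # stand-in camera frames
logits = electronic(optical_layer(images))
print(logits.shape)                    # torch.Size([4, 10])

# Only the electronic parameters reach the optimizer.
optimizer = torch.optim.Adam(electronic.parameters())
```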

Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018

Melisha Dsouza
21 Aug 2018
9 min read
Edge services and edge computing have been discussed since at least the 90s. When edge computing is extended to the cloud, it can be managed and consumed as if it were local infrastructure. The logic is simple: it is the same as how humans find it hard to interact with infrastructure that is too far away.

Edge analytics is an exciting area of data analytics that is gaining a lot of attention these days. While traditional analytics answers questions like what happened, why it happened, what is likely to happen, and what you should do about it, edge analytics is data analytics in real time. It deals with the operations performed on data at the edge of a network, either at or close to a sensor, a network switch or some other connected device. This saves time and reduces overhead and latency problems. As they rightly say, time is money! Now imagine using AI to facilitate edge analytics.

What does AI in edge computing mean?

Smart applications rely on sending tons of information to the cloud. Data can be compromised in such situations, so security and privacy challenges arise. Application developers have to consider whether the bulk of information sent to the cloud contains personally identifiable information (PII) and whether storing it is in breach of privacy laws. They also have to take the necessary measures to secure the information they store and prevent it from being stolen, accessed or shared illegally. Now that is a lot of work! Enter "intelligent edge" computing to save the day.

Edge computing by itself will not replace the power of the cloud. It can, however, reduce cloud payloads drastically when used in collaboration with machine learning, transforming the AI's operation model into something like the human brain's: perform routine and time-critical decisions at the edge and only refer to the cloud where more intensive computation and historical analysis is needed.

Why use AI in edge computing?

Most mobile apps, IoT devices and other applications that work with AI and machine learning algorithms rely on the processing power of the cloud or of a data center situated thousands of miles away. They have little or no intelligence to apply processing at the edge. Even if you show your favorite pet picture to your smart device a thousand times, it will still have to look it up on its cloud server to recognize whether it is a dog or a cat for the 1001st time.

OK, who cares if it takes a couple of minutes more for my device to differentiate between a dog and a cat? Now consider a robot surgeon that needs to perform a sensitive operation on a patient. It will need to analyze images and make decisions dozens of times per second. A round trip to the cloud would cause lags that could have severe consequences, and God forbid there is a cloud outage or poor internet connectivity. To perform such tasks efficiently and faster, and to reduce the back-and-forth communication between the cloud and the device, implementing AI at the edge is a good idea. A minimal sketch of this edge-first, cloud-fallback pattern follows at the end of this piece.

Top 7 AI for edge computing use cases that caught our attention

Now that you are convinced that the intelligent edge, or AI-powered edge computing, does have potential, here are some recent advancements in edge AI and some of the ways it is being used in the real world.

#1 Consumer IoT: Microsoft's $5 billion investment in IoT to empower the intelligent cloud and the intelligent edge

One of the central design principles of Microsoft's intelligent edge products and services is to secure data no matter where it is stored. Azure Sphere is one of their intelligent edge solutions to power and protect connected microcontroller unit (MCU)-powered devices. There are 9 billion MCU-powered devices shipping every year, which power everything from household stoves and refrigerators to industrial equipment. That's intelligent edge for you on the consumer end of the application spectrum. Let's look at an industrial application next.

#2 Industrial IoT: GE adds edge analytics, AI capabilities to its industrial IoT suite

To make a mark in the field of the industrial internet of things (IIoT), GE Digital is adding features to its Predix platform as a service (PaaS). This will let industrial enterprises run predictive analytics as close as possible to data sources, whether they be pumps, valves, heat exchangers, turbines or even machines on the move. The main idea behind edge computing here is to analyze data in near real time, optimize network traffic and cut costs. GE Digital has been working to integrate the company's field service management (FSM) software with GE products and third-party tools. For example, artificial-intelligence-enabled predictive analytics now integrate the Apache Spark AI engine to improve service time estimates. New application integration features let service providers launch and share FSM data with third-party mobile applications installed on the same device. Read the whole story on Network World.

#3 Embedded computing and robotics: Defining (artificial) intelligence at the edge for IoT systems

Machine intelligence has largely been the domain of computer vision (CV) applications such as object recognition. While artificial intelligence technology is thus far still in its infancy, its benefits for advanced driver assistance systems (ADAS), collaborative robots (cobots), sense-and-avoid drones, and a host of other embedded applications are obvious. Related to the origins of AI technology is the fact that most, if not all, machine learning frameworks were developed to run on data center infrastructure. As a result, the software and tools required to create CNNs/DNNs for embedded targets have been lacking. In embedded machine learning, this has meant that intricate knowledge of both embedded processing platforms and neural network creation has been a prerequisite for bringing AI to the embedded edge, a luxury most organizations do not have, or an extremely time-consuming effort if they do. Thanks to embedded silicon vendors, this paradigm is set to shift. Based on power consumption benchmarks, AI technology is quickly approaching deeply embedded levels. Read the whole article on Embedded Computing Design to learn more about how the intelligent edge is changing our outlook on embedded systems.

#4 Smart grids: Grid edge control and analytics

Grid edge controllers are intelligent servers deployed as an interface between the edge nodes and the utility's core network. The smart grid, as we know, is essentially the concept of establishing two-way communication between distribution infrastructure, the consumer and the utility head end using the Internet Protocol. From residential rooftops to solar farms, commercial solar, electric vehicles and wind farms, smart meters are generating a ton of data. This helps utilities view the amount of energy available and required, allowing their demand response to become more efficient, avoiding peaks and reducing costs. This data is first processed in the grid edge controllers, which perform local computation and analysis and send only necessary, actionable information over a wireless network to the utility.

#5 Predictive maintenance: Oil and gas remote monitoring

Using Internet of Things devices such as temperature, humidity, pressure and moisture sensors, alongside internet protocol (IP) cameras and other technologies, oil and gas monitoring operations produce an immense amount of data which provides key insights into the health of their specific systems. Edge computing allows this data to be analysed, processed and then delivered to end users in real time. This, in turn, enables control centers to access data as it occurs in order to foresee and prevent malfunctions or incidents before they happen.

#6 Cloudless autonomous vehicles

Self-driving cars and intelligent traffic management systems are already the talk of the town, and the integration of edge AI could be the next big step. When it comes to autonomous systems, safety is paramount: any delay, malfunction or anomaly within an autonomous vehicle can prove fatal. By calculating a number of parameters at the same time, edge computing and AI enable safe and fast transportation with quick decision-making capabilities.

#7 Intelligent traffic management

Edge computing is able to analyse and process data on the traffic hardware itself and find ways to remove unnecessary traffic. This reduces the overall amount of data that needs to be transmitted across a given network and helps to reduce both operating and storage costs.

What's next for AI-enabled edge computing?

The intelligent edge will allow humans to simplify multi-faceted processes by replacing the manual process of sorting and identifying complex data, key insights and actionable plans. This can help organizations gain a competitive edge through better decision-making, improved ROI, operational efficiency and cost savings. On the flip side, there are also cons to machine-learning-based edge computing. The cost of deploying and managing an edge will be considerable, and, as with all rapidly evolving technologies, evaluating, deploying and operating edge computing solutions has its risks. A key risk area is security: tons of data need to be made available for processing at the edge, and where there is data, there is always a fear of a data breach. Performing so many operations on the data at the edge can also be challenging.

All in all, even though the concept of incorporating AI into edge computing is exciting, some work still needs to be done to get intelligent-edge-based solutions fully set up, functional and running smoothly in production. What's your take on this digital transformation?

Read next
Reinforcement learning model optimizes brain cancer treatment
Tesla is building its own AI hardware for self-driving cars
OpenAI builds reinforcement learning based system giving robots human like dexterity
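As promised earlier, here is a sketch of the edge-first, cloud-fallback pattern: run a small local model and only call out to the cloud when it is unsure. The model and the `cloud_inference` call below are hypothetical placeholders, not any particular vendor's API.

```python
CONFIDENCE_THRESHOLD = 0.85

def edge_model(frame):
    """Placeholder: a small on-device model returning (label, confidence)."""
    return "cat", 0.91

def cloud_inference(frame):
    """Placeholder: expensive, higher-accuracy cloud model (network round trip)."""
    return "cat"

def classify(frame):
    label, confidence = edge_model(frame)      # fast, local, works offline
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                           # routine case: the data never leaves the device
    return cloud_inference(frame)              # ambiguous case: defer to the cloud

print(classify(frame=None))
```

The same split also limits the privacy exposure discussed above, since only the ambiguous fraction of the data ever travels to the cloud.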

HUD files complaint against Facebook over discriminatory housing ads

Richard Gall
20 Aug 2018
3 min read
The Department of Housing and Urban Development (HUD) filed a complaint against Facebook on Friday (17 August), alleging the platform is selling ads that discriminate against users based on race, religion and sexuality. This is a problem Facebook has been struggling to deal with for at least two years, and it suggests a lack of seriousness on the part of the Facebook product teams responsible. It also suggests that the solutions Facebook has tried to employ, a mixture of policy and algorithms, have failed to make a real impact.

Facebook's discriminatory housing ads: a timeline

All the way back in November 2016, Erin Egan, Chief Privacy Officer at Facebook, published a post saying: "Recently, policymakers and civil rights leaders have expressed concerns that advertisers could misuse some aspects of our affinity marketing segments. Specifically, they've raised the possibility that some advertisers might use these segments to run ads that discriminate against people, particularly in areas where certain groups have historically faced discrimination — housing, employment and the extension of credit."

Since then, the issue has failed to go away. In February 2017, Facebook claimed to have put in place a number of measures that would deal with the issue once and for all, updating its policies and offering new tools. However, in November 2017, an investigation by ProPublica found that the measures Facebook claimed to be putting in place were having no impact whatsoever. The website purchased "dozens of rental housing ads on Facebook, but asked that they not be shown to certain categories of users." HUD told ProPublica at the time that it was satisfied with the inquiry it had done with Facebook on discriminatory ads, but that now seems to have changed.

HUD's case against Facebook

In a statement, Anna Maria Farías, Assistant Secretary for Fair Housing and Equal Opportunity, explained that "the Fair Housing Act prohibits housing discrimination including those who might limit or deny housing options with a click of a mouse... When Facebook uses the vast amount of personal data it collects to help advertisers to discriminate, it's the same as slamming the door in someone's face."

Facebook responded to the complaint, offering a comment to the Washington Post: "Over the past year we've strengthened our systems to further protect against misuse. We're aware of the statement of interest filed and will respond in court; we'll continue working directly with HUD to address their concerns."

It's clear that Facebook's "systems" are struggling. There are certainly important questions to be answered about algorithmic problem solving, and it's likely to be some time before we see the conclusion of this story. Perhaps it might just require a little more human intervention.

Read next
Four 2018 Facebook patents to battle fake news and improve news feed
Facebook, Apple, Spotify pull Alex Jones content
Time for Facebook, Twitter and other social media to take responsibility or face regulation

Expect two nights of programmes made by artificial intelligence with BBC 4.1 AI TV

Natasha Mathur
20 Aug 2018
3 min read
The BBC is set to embrace the AI revolution with open arms. Last week, it announced BBC 4.1 AI TV, which aims to bring "two nights of experimental programming" featuring new and classic programmes that explore AI. The programming will run on BBC Four over two nights, 4-5 September, with Dr. Hannah Fry as the presenter alongside a "virtual co-presenter".

[BBC 4.1 AI TV promo video]

BBC 4.1 AI TV will feature "Made by Machine: When AI Met The Archive", an experimental programme partly made by artificial intelligence trained on information from well over 250,000 TV programmes dating back to 1953. Doing this manually would have been impractical, as it would have taken hundreds of hours; with the help of the latest AI technology from BBC Research & Development, BBC Four got a far more manageable selection of shows. "The AI learnt what BBC Four audiences might like, based on the channel's previous schedules and programme attributes, and then ranked programmes it thought were most relevant," says the BBC. The channel will be broadcasting a selection of programmes that haven't been seen in years.

The programme features four sections of archive clips edited together, following the sequence below:

- In the first segment, the AI learns to detect different attributes of a scene, such as what it consists of, the type of landscape, the objects present, whether people are featured, and what they are wearing. This helps it learn how a compilation is created, with each scene following on from the last.
- For the second segment, the subtitles of archive programmes are scanned to put together footage by looking for links between words, topics and themes.
- In the third segment, the AI analyses activity levels on screen (whether they are high or not) and attempts to create a compilation that moves back and forth between high-energy and low-energy scenes.
- The fourth sequence combines everything it has learned to create an altogether new piece of content.

According to Cassian Harrison, Channel Editor, BBC Four, "In collaboration with the BBC's world-beating R&D department, AI TV will explore … and demonstrate just how AI and machine learning might inform and influence programme-making and scheduling, while also resurfacing some gems from the BBC Four archive along the way."

"Helping BBC Four scour the BBC's vast archives more efficiently is exactly why we're developing this kind of AI … and has massive benefits for BBC programme makers and audiences … Made By Machine: When AI Met The Archive gives people an unprecedented look under the hood," says George Wright, Head of Internet Research and Future Services, BBC R&D.

For more coverage of this news, check out the official BBC announcement.

Read next
Baidu announces ClariNet, a neural network for text-to-speech synthesis
Nvidia and AI researchers create AI agent Noise2Noise that can denoise images
How Amazon is reinventing Speech Recognition and Machine Translation with AI

Say hello to FASTER: a new key-value store for large state management by Microsoft

Natasha Mathur
20 Aug 2018
3 min read
The Microsoft research team announced a new key-value store named FASTER at SIGMOD 2018 in June. FASTER offers support for fast and frequent lookups of data, and it helps with updating large volumes of state information, which poses a problem for cloud applications today. Consider IoT as a scenario, where billions of devices report and update state such as per-device performance counters; this leads to applications underutilizing resources such as storage and networking on the machine. FASTER helps solve this problem because it exploits the temporal locality in these applications to control the in-memory footprint of the system.

According to Microsoft, "FASTER is a single-node shared memory key-value store library". A key-value store is a NoSQL database which uses a simple key/value method for data storage. FASTER consists of two important innovations:

- A cache-friendly, concurrent and latch-free hash index that maintains logical pointers to records in a log. The hash index is an array of cache-line-sized hash buckets, each with 8-byte entries that hold hash tags and logical pointers to records stored separately.
- A new concurrent, hybrid log record allocator that backs the index and spans fast storage (such as cloud storage and SSD) and main memory.

What makes FASTER different?

Traditional key-value stores use log-structured record organizations. FASTER is different: its hybrid log combines log-structuring with read-copy-updates (good for external storage) and in-place updates (good for in-memory performance). The head of the hybrid log, which lies in storage, uses read-copy-update, whereas the tail of the log, in main memory, uses in-place updates. Between these two regions lies a read-only region in memory that gives records another chance to be copied back to the tail. This captures the temporal locality of the updates and allows a natural clustering of hot records in memory. As a result, FASTER is capable of outperforming even pure in-memory data structures such as the Intel TBB hash map, and it performs far better than today's popular key-value stores and caching systems such as RocksDB and Redis, says Microsoft.

FASTER also provides support for failure recovery: it has a recovery strategy that brings the system back to a recent consistent state at low cost. This differs from the recovery mechanism in traditional database systems in that it does not involve blocking or a separate write-ahead log.

For more information, check out the official research paper.

Read next
Google, Microsoft, Twitter, and Facebook team up for Data Transfer Project
Microsoft Azure's new governance DApp: An enterprise blockchain without mining
Microsoft announces the general availability of Azure SQL Data Sync
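The two pieces FASTER combines, a hash index of logical pointers and a record log whose in-memory tail is mutable while older regions are not, can be sketched in toy form. This is a conceptual Python illustration of the idea only, not FASTER's implementation or its latch-free concurrency machinery.

```python
class ToyHybridLogStore:
    """Hash index maps keys to log offsets; recent records update in place, older ones append."""

    def __init__(self, mutable_region=4):
        self.log = []                 # append-only list of [key, value] records
        self.index = {}               # key -> offset of the key's latest record
        self.mutable_region = mutable_region

    def upsert(self, key, value):
        offset = self.index.get(key)
        in_tail = offset is not None and offset >= len(self.log) - self.mutable_region
        if in_tail:
            self.log[offset][1] = value          # hot record: in-place update (fast path)
        else:
            self.index[key] = len(self.log)      # cold or new record: read-copy-update by
            self.log.append([key, value])        # appending a fresh copy at the tail

    def read(self, key):
        offset = self.index.get(key)
        return None if offset is None else self.log[offset][1]

store = ToyHybridLogStore()
for i in range(10):
    store.upsert("counter", i)        # repeated updates hit the same in-memory slot
store.upsert("device-42", "online")   # new keys always append at the tail
print(store.read("counter"), len(store.log))   # 9 2
```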

Google gives Artificial Intelligence full control over cooling its data centers

Sugandha Lahoti
20 Aug 2018
2 min read
Google, in collaboration with DeepMind, is giving control of the cooling of several of its data centers completely to an AI algorithm. Since 2016, the companies have been using an AI-powered recommendation system (developed by Google and DeepMind) to improve the energy efficiency of Google's data centers. This system made recommendations to data center managers, leading to energy savings of around 40 percent in those cooling systems. Now, Google is handing control over entirely to cloud-based AI systems.

https://twitter.com/mustafasuleymn/status/1030442412861218817

How Google's safety-first AI system works

Google's previous AI engine required too much operator effort and supervision to implement its recommendations, so the companies explored a new system that could deliver similar energy savings without manual implementation. Here's how the algorithm does it. A large number of sensors are embedded in the data centers. Every five minutes, the cloud-based AI system pulls a snapshot of the data center and feeds it into deep neural networks, which predict how different combinations of potential actions will affect future energy consumption. The AI system then identifies which actions will minimize energy consumption while satisfying safety constraints. Those actions are sent back to the data center, where they are verified by the local control system and then implemented.

To ensure safety and reliability, the system uses eight different mechanisms to ensure it behaves as intended at all times while improving energy savings. It is already delivering consistent energy savings of around 30 percent on average, with further improvements expected.

Source: DeepMind Blog

In the long term, Google wants to apply this technology in other industrial settings and help tackle climate change on an even grander scale. You can read more about the safety-first AI on DeepMind's blog.

Read next
DeepMind Artificial Intelligence can spot over 50 sight-threatening eye diseases with expert accuracy
Why DeepMind made Sonnet open source
How Google's DeepMind is creating images with artificial intelligence
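The loop described above, snapshot the plant, score candidate actions with a learned model, keep only the safe ones, pick the cheapest, is easy to sketch. Everything below (the sensor snapshot, the energy predictor and the safety check) is a hypothetical stand-in; DeepMind's actual system is not public.

```python
def snapshot_sensors():
    """Placeholder: readings from the data center's temperature, pressure and pump sensors."""
    return {"outside_temp_c": 21.0, "load_kw": 900.0}

def predict_energy(state, action):
    """Placeholder for the deep neural networks predicting future energy consumption."""
    return state["load_kw"] * (1.0 + 0.1 * action["fan_speed"]) - 50 * action["use_free_cooling"]

def is_safe(state, action):
    """Placeholder safety constraints, checked before anything is actuated."""
    return 0.2 <= action["fan_speed"] <= 1.0

candidate_actions = [{"fan_speed": f, "use_free_cooling": c}
                     for f in (0.2, 0.5, 0.8, 1.0) for c in (0, 1)]

# One control cycle; in production this repeats every five minutes.
state = snapshot_sensors()
safe = [a for a in candidate_actions if is_safe(state, a)]
best = min(safe, key=lambda a: predict_energy(state, a))   # least predicted energy use
print("sending to local control system for verification:", best)
```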

Intel acquires Vertex.ai to join it under their artificial intelligence unit

Prasad Ramesh
17 Aug 2018
2 min read
After acquiring Nervana, Mobileye, and Movidius, Intel has now bought Vertex.ai and is merging it into its artificial intelligence group. Vertex.ai is a Seattle-based startup with the vision of developing deep learning for every platform through its PlaidML deep learning engine. The terms of the deal are undisclosed, but the 7-person Vertex.ai team, including founders Choong Ng, Jeremy Bruestle, and Brian Retford, will become part of Movidius in Intel's Artificial Intelligence Products Group. Vertex.ai was founded in 2015 and initially funded by Curious Capital and Creative Destruction Lab, among others.

Intel said in a statement, "With this acquisition, Intel gained an experienced team and IP (intellectual property) to further enable flexible deep learning at the edge." The chipmaker intends to continue developing PlaidML as an open source project and will shortly transition it from the existing AGPLv3 license to the Apache 2.0 license. The priority for PlaidML will continue to be an engine that supports a variety of hardware, with an Intel nGraph backend.

"There's a large gap between the capabilities neural networks show in research and the practical challenges in actually getting them to run on the platforms where most applications run," Ng stated at Vertex.ai's launch in 2016. "Making these algorithms work in your app requires fast enough hardware paired with precisely tuned software compatible with your platform and language. Efficient plus compatible plus portable is a huge challenge—we can help."

Intel is among many other giants in the tech industry making heavy investments in AI. Its AI chip business currently brings in $1 billion a year, while its PC/chip business makes $8.8 billion and its data-centric business $7.2 billion. "After 50 years, this is the biggest opportunity for the company," executive vice president Navin Shenoy said at Intel's 2018 Data Centric Innovation Summit. "We have 20 percent of this market today… Our strategy is to drive a new era of data center technology."

The official announcement is on the Vertex.ai website.

Read next
Intel acquires eASIC, a custom chip (FPGA) maker for IoT, cloud and 5G environments
Intel's Spectre variant 4 patch impacts CPU performance
SingularityNET and Mindfire unite talents to explore artificial intelligence

Salesforce Einstein team open sources TransmogrifAI, their automated machine learning library

Sugandha Lahoti
17 Aug 2018
2 min read
Salesforce has open sourced TransmogrifAI, its end-to-end automated machine learning library for structured data. The library is currently used in production to help power the Salesforce Einstein AI platform. TransmogrifAI enables data scientists at Salesforce to transform customer data into meaningful, actionable predictions. Now, the project has been open sourced to enable other developers and data scientists to build machine learning solutions at scale, fast.

TransmogrifAI is written in Scala, built on Spark ML, and automates data cleansing, feature engineering, and model selection to arrive at a performant model. It encapsulates five main components of the machine learning process (source: Salesforce Engineering):

- Feature inference: TransmogrifAI allows users to specify a schema for their data to automatically extract the raw predictor and response signals as "Features". In addition to allowing user-specified types, TransmogrifAI also does inference of its own. The strongly-typed features allow developers to catch a majority of errors at compile time rather than run time.
- Transmogrification, or automated feature engineering: TransmogrifAI comes with a myriad of techniques for all the supported feature types, ranging from phone numbers, email addresses and geo-location to text data. It also optimizes the transformations to make it easier for machine learning algorithms to learn from the data.
- Automated feature validation: TransmogrifAI has algorithms that perform automatic feature validation to remove features with little to no predictive power. These algorithms are useful when working with high-dimensional and unknown data. They apply statistical tests based on feature types and, additionally, make use of feature lineage to detect and discard bias.
- Automated model selection: The TransmogrifAI Model Selector runs several different machine learning algorithms on the data and uses the average validation error to automatically choose the best one. It also automatically deals with imbalanced data by appropriately sampling the data and recalibrating predictions to match true priors.
- Hyperparameter optimization: TransmogrifAI automatically tunes hyperparameters and offers advanced tuning techniques.

This large-scale automation has brought the total time taken to train models down from weeks and months to a few hours, with just a few lines of code. You can check out the project to get started with TransmogrifAI. For detailed information, read the Salesforce Engineering blog.

Read next
Salesforce Spring 18 – New features to be excited about in this release!
How to secure data in Salesforce Einstein Analytics
How to create and prepare your first dataset in Salesforce Einstein
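The automated model selection step, try several algorithms and keep the one with the best validation score, can be illustrated generically with scikit-learn. TransmogrifAI itself is a Scala/Spark library; this Python sketch only mirrors the idea, not its API.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a prepared, structured dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score every candidate with cross-validation and keep the best, as an automated
# model selector would.
scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in candidates.items()}
best_name = max(scores, key=scores.get)
print(scores)
print("selected model:", best_name)
```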