
How-To Tutorials - Artificial Intelligence

86 Articles

How PyTorch is bridging the gap between research and production at Facebook: PyTorch team at F8 conference

Vincy Davis
04 Dec 2019
7 min read
PyTorch, the machine learning library originally developed as a research framework by a Facebook intern in 2017, has grown into a popular deep learning workflow. One of Facebook's most loved products, PyTorch is free, open source, and used for applications such as computer vision and natural language processing (NLP).

At this year's F8 conference, the PyTorch team, consisting of Joe Spisak, product manager for PyTorch at Facebook AI, and Dmytro Dzhulgakov, tech lead at Facebook AI, gave a talk on how Facebook is developing and scaling AI experiences with PyTorch.

Spisak describes PyTorch as combining eager and graph-based execution in a define-by-run model: when a user executes Python code, the computation graph is generated on the fly. Execution is dynamic by nature, while still allowing compilation into a static graph. Because dynamic neural networks are so accessible, users can change parameters very quickly, which comes in handy for applications such as control flow in NLP. Another important feature of PyTorch, according to Spisak, is the ability to accurately train distributed models with close to a billion parameters, including cutting-edge ones. It also has a simple, intuitive API, which Spisak claims is one of the qualities that has endeared PyTorch to many developers.
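As a rough illustration of the define-by-run behaviour Spisak describes, the following minimal sketch builds the graph on the fly with ordinary Python control flow. The model, shapes, and data are made up for the example; this is a sketch of the general idea, not code from the talk.

```python
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """A toy model whose forward pass uses ordinary Python control flow."""
    def __init__(self, in_features: int = 16, hidden: int = 32):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden)
        self.fc2 = nn.Linear(hidden, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.fc1(x))
        # Data-dependent branching: the graph is recorded as the code runs,
        # so this condition is just regular Python.
        if h.mean() > 0:
            h = h * 2
        return self.fc2(h)

model = TinyClassifier()
x = torch.randn(4, 16)      # a batch of 4 made-up feature vectors
logits = model(x)           # the graph for this call is built on the fly
logits.sum().backward()     # autograd walks the recorded graph
print(logits.shape)         # torch.Size([4, 2])
```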
Become a pro at Deep Learning with PyTorch! If you want to become an expert in building and training neural network models with high speed and flexibility in text, vision, and advanced analytics using PyTorch 1.x, read our book Deep Learning with PyTorch 1.x - Second Edition, written by Sri. Yogesh K., Laura Mitchell, et al. It will give you an insight into solving real-world problems using CNNs, RNNs, and LSTMs, along with discovering state-of-the-art modern deep learning architectures, such as ResNet, DenseNet, and Inception.

How PyTorch is bridging the gap between research and production at Facebook

Dzhulgakov points out that general advances in AI are driven by innovative research in academia and industry, and explains why it is necessary to close the long lag between research and production. He says, "If you have a new idea and you want to take it all the way through to deployment, you usually need to go through multiple steps - figure out what the approach is, then find the training data, maybe prepare and massage it a little bit, actually build and train your model, and after that there is this painful step of transferring your model to a production environment, which historically often involved reimplementation of a lot of code so you can actually take it, deploy it, and scale up."

According to Dzhulgakov, PyTorch tries to minimize this gap by encouraging advances and experimentation in the field, so that research can be brought into production in days instead of months.

Challenges in bringing research to production

According to the PyTorch team, bringing research to production raises several classes of challenges:

Hardware efficiency: In a tightly latency-constrained environment, users have to fit the workload into a fixed performance budget, while underused hardware drives up cost.
Scalability: In Facebook's recent work, Dzhulgakov says, they have trained on billions of public images, showing significant accuracy gains compared to standard datasets like ImageNet. Similarly, at inference time, billions of inferences per second run with many diverse models sharing the same hardware.
Cross-platform: Neural networks are rarely isolated; they have to be deployed inside a target application and have many interdependencies with the surrounding code. This imposes constraints such as not being able to run Python code, or having to work within the very limited compute capabilities of a mobile device.
Reliability: Many PyTorch jobs run for multiple weeks on hundreds of GPUs, so the software must be designed to tolerate hardware failures and still deliver results.

How PyTorch is tackling these challenges

To tackle these challenges, Dzhulgakov says, Facebook builds systems that take a training job and apply performance optimizations to the performance-critical pieces, along with "recipes for reliability", so that developer-written modeling code is transformed automatically. This is where the JIT package comes in: it is built to capture the structure of a Python program with minimal changes, and its main goal is to make the process almost seamless. Dzhulgakov asserts that PyTorch has been successful because it feels like regular Python programming, and most users start by writing and prototyping in traditional PyTorch (eager) mode. For the subset of promising models that need to be brought to production or scaled up, the techniques provided by the JIT can be applied to the existing model code, annotating it so that it runs as so-called script code.

The JIT operates on a subset of Python with restricted semantics, which lets it apply transparent transformations to eager-mode code. Annotating a model means adding a few lines of Python on top of a function, and it can be done incrementally, function by function or module by module, so the model keeps working along the way. These tools let users share the same code base between research and production environments.
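For illustration, here is a minimal sketch of that incremental annotation workflow using the public torch.jit API. The module itself is a made-up example, not code from the talk.

```python
import torch
import torch.nn as nn

class Gate(nn.Module):
    """A small module with control flow that we want to run as script code."""
    def __init__(self, features: int = 8):
        super().__init__()
        self.linear = nn.Linear(features, features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.linear(x)
        # Control flow like this is preserved by the TorchScript compiler.
        if y.sum() > 0:
            return torch.relu(y)
        return y

eager_model = Gate()                       # prototype in eager mode
scripted = torch.jit.script(eager_model)   # compile to TorchScript

x = torch.randn(2, 8)
assert torch.allclose(eager_model(x), scripted(x))  # same behaviour

scripted.save("gate.pt")        # portable artifact that no longer needs Python
loaded = torch.jit.load("gate.pt")
```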
Dzhulgakov concludes that the common factor between research and production is that both teams of developers work on the same code base built on top of PyTorch. Teams working in a common domain, such as text classification, object detection, or reinforcement learning, share code: developers prototype models, train new algorithms, and address new tasks, and that functionality transitions quickly to the other environment. Watch the full talk to see Dzhulgakov's examples of PyTorch bridging the gap between research and production at Facebook.

If you want to become an expert at implementing deep learning applications in PyTorch, check out our latest book Deep Learning with PyTorch 1.x - Second Edition, written by Sri. Yogesh K., Laura Mitchell, et al. This book will show you how to apply neural networks to domains such as computer vision and NLP. It will also guide you to build, train, and scale a model with PyTorch, and covers complex neural networks such as GANs and autoencoders for producing text and images.

NVIDIA releases Kaolin, a PyTorch library to accelerate research in 3D computer vision and AI
Introducing ESPRESSO, an open-source, PyTorch based, end-to-end neural automatic speech recognition (ASR) toolkit for distributed training across GPUs
Facebook releases PyTorch 1.3 with named tensors, PyTorch Mobile, 8-bit model quantization, and more
Transformers 2.0: NLP library with deep interoperability between TensorFlow 2.0 and PyTorch, and 32+ pretrained models in 100+ languages
PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility

NIPS 2017 Special: 6 Key Challenges in Deep Learning for Robotics by Pieter Abbeel

Aaron Lazar
13 Dec 2017
10 min read
Pieter Abbeel is a professor at UC Berkeley and a former Research Scientist at OpenAI. His current research focuses on robotics and machine learning, with a particular focus on deep reinforcement learning, deep imitation learning, deep unsupervised learning, meta-learning, learning-to-learn, and AI safety. This article walks through Pieter's keynote speech at NIPS 2017, which covers the implementation of deep reinforcement learning in robotics, the challenges that exist, and how those challenges can be overcome. Once you've been through this article, we're certain you'll want to watch the entire video on the NIPS Facebook page. All images in this article come from his presentation slides and do not belong to us.

Robotics in ML has been growing in leaps and bounds, with several companies investing huge amounts to tie the two technologies together in the best way possible. However, several aspects of AI robotics are still far from solved. Here are a few of them:

Maximize signal extracted from real-world experience
Faster / data-efficient reinforcement learning
Long horizon reasoning
Taskability (imitation learning)
Lifelong learning (continuous adaptation)
Leverage simulation

Maximise signal extracted from real world experience

Real-world data is hard to come by, so we need to extract as much signal from it as possible. The diagram below shows the different layers of machine learning that engineers perform. Some engineers look at the entire cake and train the agent to learn both from the reward and from auxiliary signals, because reinforcement learning alone does not give you much signal. Is there, then, a way to design a reward signal that ties more reinforcement learning into the system?

This is where Hindsight Experience Replay comes in. The idea is to get a reward signal from any experience by assuming the goal was whatever actually happened, rather than only rewarding success as in usual RL. In other words, we treat whatever the agent did as a success. We use Q-learning, but instead of a standard Q-function we condition on multiple goals, even though they were not the actual goal while the agent was acting. A replay buffer collects experience, Q-learning is applied, and a hindsight replay relabels the experience to infuse a new reward for everything the agent has done. For various robotic tasks like pushing, sliding, and pick-and-place, this works very well.
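To make the relabeling idea concrete, here is a minimal, self-contained sketch of hindsight relabeling for a goal-conditioned replay buffer. It illustrates the general technique only; the transition format and the simple reward rule are assumptions, not details from the talk.

```python
import random
from dataclasses import dataclass
from typing import List

import numpy as np

@dataclass
class Transition:
    state: np.ndarray
    action: int
    next_state: np.ndarray
    goal: np.ndarray          # the goal the agent was pursuing
    reward: float

def hindsight_relabel(episode: List[Transition]) -> List[Transition]:
    """Relabel an episode as if its final achieved state had been the goal."""
    achieved = episode[-1].next_state
    relabeled = []
    for t in episode:
        # Assumed reward rule: 1 if the step reaches the (new) goal, else 0.
        reward = float(np.allclose(t.next_state, achieved, atol=1e-3))
        relabeled.append(Transition(t.state, t.action, t.next_state,
                                    goal=achieved, reward=reward))
    return relabeled

class ReplayBuffer:
    def __init__(self, capacity: int = 100_000):
        self.buffer: List[Transition] = []
        self.capacity = capacity

    def add_episode(self, episode: List[Transition]) -> None:
        # Store both the original and the hindsight-relabeled transitions,
        # so even "failed" episodes contribute reward signal.
        for t in episode + hindsight_relabel(episode):
            if len(self.buffer) >= self.capacity:
                self.buffer.pop(0)
            self.buffer.append(t)

    def sample(self, batch_size: int) -> List[Transition]:
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```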
Faster reinforcement learning

When we talk about faster RL, we really mean much more data-efficient RL. Here is a diagram that illustrates standard RL: an agent makes a robot perform an action in a particular environment in order to achieve a reward, and the goal is to maximise that reward. Unlike supervised learning, there is no supervision telling the agent whether its actions were right or wrong. That brings a few additional challenges in RL:

Credit assignment: a major problem, and the source of the learning signal in RL
Stability: because of the feedback loop, the system could destabilise and destroy itself
Exploration: doing things you have never done before, when the only way to learn is from what you have done before

Despite this, there have been great improvements in reinforcement learning over the past few years, enabling AI systems to play games like Go and Dota, and it has been used by NASA in building robots for planetary exploration.

But the question remains: how good is the learning? In the game of Pong, a human takes roughly two hours to learn what a Deep Q-Network (DQN) learns in 40 hours. A more careful study reveals that after 15 minutes, humans tend to outperform a DDQN that has trained for 115 hours. That is a tremendous gap in learning efficiency.

So how do we overcome this challenge? Several fully general algorithms are available, such as Trust Region Policy Optimization (TRPO), DQN, Asynchronous Advantage Actor-Critic (A3C), and Rainbow, meaning they can be applied to any kind of environment. However, only a very small subset of environments is actually encountered in the real world. Can we develop fast RL algorithms that take advantage of this? RL algorithms can be reused to train various policies: an algorithm developed to train a policy for environment A can be applied to environment B, and so on. Humans develop the RL algorithm and then rely on it to train the policy. Even so, none of these algorithms learn as quickly as a human learner.

Do we have an alternative? Indeed, yes: why not let the system learn not just the policy but the algorithm as well, or in other words the entire agent?

Enter meta-reinforcement learning. In meta-RL, the learning algorithm itself is being learnt. You can relate this to meta-programming, where one program is written to write another. The process helps a system learn about the world so that it can pick up a new situation more quickly. The system is exposed to many environments, learns an algorithm from them, and outputs a faster RL agent; when faced with a new environment, it adapts to it quickly.

To evaluate actual performance, consider the multi-armed bandits problem. The setting: each bandit has its own distribution over payouts, and in each episode you can choose one bandit. A good RL agent should explore a sufficient number of bandits and exploit the best ones, favouring arms with a higher probability of payoff over those with a low probability. Several asymptotically optimal algorithms, such as the Gittins index, UCB1, and Thompson sampling, already exist for this problem. Comparing some of them with the meta-RL algorithm, the result is quite impressive: the meta-RL algorithm is competitive with Gittins.
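As a reference point for the bandit setting described above, here is a compact, self-contained UCB1 implementation; the Bernoulli payout probabilities are made up for the example and are not from the talk.

```python
import math
import random

def ucb1(payout_probs, n_rounds=10_000, seed=0):
    """Play a Bernoulli multi-armed bandit with the UCB1 index rule."""
    rng = random.Random(seed)
    n_arms = len(payout_probs)
    counts = [0] * n_arms      # pulls per arm
    values = [0.0] * n_arms    # running mean reward per arm
    total_reward = 0.0

    for t in range(1, n_rounds + 1):
        if t <= n_arms:
            arm = t - 1        # pull each arm once to initialise
        else:
            # UCB1 index: empirical mean plus an exploration bonus.
            arm = max(range(n_arms),
                      key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < payout_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        total_reward += reward
    return total_reward, counts

# Example: three arms with hidden payout probabilities.
reward, pulls = ucb1([0.2, 0.5, 0.7])
print(reward, pulls)   # most pulls should go to the 0.7 arm
```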
In a case where the task is to reach a target running direction while attaining maximum speed, an agent dropped into a new environment can master the task almost instantly. However, meta-learning succeeds only about two-thirds of the time. The failures have two main causes:

Overfitting: the learner tends to overfit to the current situation rather than fitting situations generically
Underfitting: the learner does not get enough signal to earn any rewards

The solution is to put a different structure underneath the system: instead of an RNN, use a WaveNet-like architecture, or the Simple Neural Attentive Meta-Learner (SNAIL). SNAIL performs a bit better than RL2 on the same bandits problem.

Longer horizon reasoning

We need to learn to reason over longer horizons than canonical algorithms do, and for that we need hierarchy. For example, suppose a robot has to perform 10 tasks in a day, giving it 10 time steps per day. Each of those tasks has subtasks, which might bring the total to 1,000 time steps. Performing those tasks requires footstep planning, which brings it to roughly 100,000 time steps, and footsteps in turn require commands sent to motors, which brings it to around 100,000,000 time steps. That is a very long horizon. We can formulate this as a meta-learning problem: the agent has to solve a distribution of related long-horizon tasks, with the goal of learning new tasks in the distribution quickly. With that objective, hierarchy falls out naturally.

Taskability (imitation learning)

There are several things we want from robots. We need to be able to tell them what to do, and one way is to give them examples. This is imitation learning, and it can be applied successfully to a variety of use cases. The idea is to collect many demonstrations, train a policy from them, and then deploy the learned policy. The problem is that every time there is a new task, you start from scratch. The solution is to draw on experience from several demonstrations, as humans do. Instead of training the agent on many demos of the new task, it is trained fully on one task, then shown a single frame of a second demo and asked to predict the outcome. This is one-shot imitation learning, a form of supervised learning in which many demonstrations are used to train a system that can handle any new environment it is put into.

Lifelong learning (continuous adaptation)

What we usually do in ML can be divided into two broad steps: run machine learning, then deploy it. In this canonical setup, all the learning happens ahead of time, before deployment. In real-world settings, however, what you learn from past data might not work in the future, so there is a need to keep learning during deployment, in a lifelong-learning spirit. This brings us to continuous adaptation: can we train an agent to be good in non-stationary environments? We need to check whether, at meta-training time, the agent can adapt to a new or changing task. Since it is hard to do ML training in the real world, we can change the dynamics in simulation, and we can also use competitive environments, where your agent shares the environment with other agents trying to beat it; the only way to succeed is to continuously adapt more quickly than the others.

Leverage simulation

Simulation is very helpful and not that expensive: it is fast, scalable, and makes labelling easier. The challenge is getting useful things out of the simulator. One approach is to build realistic simulators, which is quite expensive. Another is to use a close-enough simulator combined with real-world data through domain confusion or domain adaptation; this allows learning from a small amount of real-world data and is quite successful. A further approach is domain randomisation, which is also working well in practice: if the model sees enough simulated variation, the real world may appear to be just the next simulator. This has worked in the context of using simulator data to train a quadcopter to avoid collisions. Moreover, whether pre-trained from ImageNet or trained purely in simulation, performance was similar after around 8,000 examples.
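As an illustration of domain randomisation, the sketch below samples fresh physical and visual parameters for every training episode. The parameter names, ranges, and the stubbed-out training call are hypothetical placeholders, not details from the talk.

```python
import random

def sample_sim_params(rng: random.Random) -> dict:
    """Draw a random simulator configuration for one training episode."""
    return {
        "friction":      rng.uniform(0.5, 1.5),    # physics randomisation
        "mass_scale":    rng.uniform(0.8, 1.2),
        "motor_latency": rng.uniform(0.00, 0.03),  # seconds
        "light_level":   rng.uniform(0.3, 1.0),    # visual randomisation
        "texture_id":    rng.randrange(100),
    }

def run_training_episode(params: dict) -> float:
    """Stand-in for one episode of RL training in a simulator built from
    `params`; a real system would construct the environment here."""
    return 0.0

def train_with_domain_randomisation(n_episodes: int = 1000, seed: int = 0):
    rng = random.Random(seed)
    for _ in range(n_episodes):
        params = sample_sim_params(rng)   # a fresh "simulator" every episode
        run_training_episode(params)

train_with_domain_randomisation(n_episodes=5)
```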
To conclude, the beauty of meta-learning is that it enables the discovery of algorithms that are data-driven, as opposed to algorithms created from pure human ingenuity. This requires more compute power, but companies like Nvidia and Intel are working hard to overcome that constraint, which should help meta-learning reach the heights needed for robotics. While we work through the technical challenges of incorporating AI in robotics described above, other significant challenges we must focus on in parallel include safe learning and value alignment, among others.

Alarming ways governments are using surveillance tech to watch you

Neil Aitken
14 Jun 2018
12 min read
Mapquest, part of the Verizon company, is the second largest provider of mapping services in the world after Google Maps, supplying advanced cartography services to companies like Snap and Papa John's Pizza. The company is about to release an app that users can install on their smartphones. The new application will record and transmit video of what is happening in front of your vehicle as you travel; data can be sent from any phone with a camera using nothing more than an ordinary mobile data plan. In exchange, you get live traffic updates, among other things. Mapquest will use the video data it gathers to provide more accurate and up-to-date maps to its partners. The real world changes all the time: roads get added, cities re-route traffic. The new AI-based technology Mapquest employs could well improve the reliability of driverless cars, which have to engage safely with this ever-changing landscape. No one disagrees with safety improvements.

Mapquest's solution is impressive technology. The fact that it can use AI to interpret the images it sees and upload the information to update maps is incredible, and in this regard the company is just one of the myriad daily news stories that excite and astound us. These stories do, however, often have another side that is rarely acknowledged: in the wrong hands, Mapquest's solution could create a surveillance database that tracks people in real time.

Surveillance technology involves the use of data and information products to capture details about individuals, usually with a view to achieving a goal. The principle is simple: the more 'they' know about you, the easier it is to influence you towards their ends. Surveillance information can be used to find you, apprehend you or, potentially, to change your mind without you ever realising you had been watched. Mapquest's innovation is just a single example of surveillance technology in government hands, a field whose capabilities have expanded far beyond what most people realise.

Read also: What does the US government know about you? The truth beyond the Facebook scandal

Facebook's share price fell 14% in early 2018 as a result of public outcry over the Cambridge Analytica revelations. The idea that a private company had allowed detailed information about individuals to be provided to a third party without their consent appeared to genuinely shock and appall people. Yet technology tools like Mapquest's tracking capabilities and Facebook's profiling techniques are being taken up by police forces and corporate entities around the world. The reality of current private and public surveillance is that facilities exist, and are in use, to collect and analyse data on most people in the developed world. The known limits of these services may surprise even those on the cutting edge of technology. The examples from around the world listed below may genuinely make you want to consider going off grid!

Innovative, ingenious overlords: US companies have a flair for surveillance

The US is the centre of information-based technology companies, and much of what they develop is exported as well as used domestically.
The police are using human genome matching to track down criminals and can find 'any family in the country'

There have been two recent examples of police arresting a suspect after using human genome databases to investigate crimes. A growing number of private individuals have used publicly available services such as 23andMe to sequence their genome (DNA), either to investigate their family tree further or to determine a potential genetic predisposition to a disease. In one example, the Golden State Killer, an ex-cop, was arrested 32 years after the last reported rape in a series of 45 (in addition to 12 murders) committed between 1976 and 1986. To track him down, police approached sites like 23andMe with DNA found at crime scenes, established a family match, and then progressed the investigation using conventional means. More than 12 million Americans have now used a genetic sequencing service, and it is believed that investigators could find a family match for the DNA of anyone who has committed a crime in America. In simple terms, whether you want it or not, law enforcement effectively has access to the DNA of every individual in the country.

Domain Awareness Centers (DAC) bring the Truman Show to life

The 400,000 residents of Oakland, California discovered in 2012 that they had been the subject of an undisclosed mass surveillance project run by the local police force for many years. Feeds from CCTV cameras installed in Oakland's suburbs were augmented with weather information feeds, social media feeds, and extracted email conversations, as well as a variety of other sources. The scheme began at Oakland's port with federal funding, as part of the national response to the events of September 11, 2001, but was extended to cover the nearly half a million residents of the city. Hundreds of additional video cameras were installed, along with gunshot-recognition microphones and some of the other surveillance technologies described in this article. The police force conducting the surveillance had no policy on what information was recorded or how long it was kept.

Internet-connected toys spy on children

The FBI has warned Americans that children's toys connected to the internet 'could put the privacy and safety of children at risk.' The children's toy Hello Barbie was specifically admonished for poor privacy controls in the FBI's press release. Internet-connected toys could be used to record video of children at any point in the day or, conceivably, to relay a human voice, making it appear to the child that the toy is talking to them.

Oracle suggests Google's Android operating system routinely tracks users' position even when maps are turned off

In Australia, two American companies have been involved in a disagreement about the potential monitoring of Android phones. Oracle accused Google of monitoring users' location (including altitude) even when mapping software is turned off on the device, with the tracking performed in the background of the phone. In Australia alone, Oracle suggested that Google's monitoring could involve around 1GB of additional mobile data every month, costing users nearly half a billion dollars a year collectively.

Amazon facial recognition in real time helps US law enforcement services

Amazon is providing facial recognition services, which take feeds from public video cameras, to a number of US police forces.
Amazon can match images taken in real time against a database containing 'millions of faces.' Are there any state or federal rules in place to govern police facial recognition? Wired reported that there are 'more or less none.' Amazon's scheme is a trial taking place in Florida, and there are at least two other companies offering similar schemes to US law enforcement services.

Big glass microphone can help agencies keep an ear on the ground

Project 'Big Glass Microphone' uses the vibrations that the movements of cars (among other things) cause in buried fiber-optic telecommunications links. A successful test of the technology was undertaken on the fiber-optic cables that run underground on the Stanford University campus, to record vehicle movements. Fiber-optic links now make up the backbone of much data transport infrastructure: they are the way your phone and computer connect to the internet. Big Glass Microphone as it stands is a first step towards 'invisible' monitoring of people and their assets.

It appears the FBI now has the ability to crack/access any phone

Those in the know suggest that Apple's iPhone is the smart device most secure against government surveillance. In 2016, this was put to the test. The Justice Department came into possession of an iPhone allegedly belonging to one of the San Bernardino shooters and ultimately sued Apple in an attempt to force the company to grant access to it as part of the investigation. The case was eventually dropped, leading some to speculate that NAND mirroring techniques were used to gain access to the phone without Apple's assistance, implying that even the most secure phones can now be accessed by the authorities.

Cornell University's lie-detecting algorithm

Groundbreaking work by Cornell University promises 'at a distance' access to information that previously required close personal access to an accused subject. Cornell's solution interprets feeds from a number of video cameras trained on subjects and analyses the results to judge their heart rate; the researchers believe the system can be used to determine whether someone is lying from behind a screen.

University of Southern California can anticipate social unrest with social media feeds

Researchers at the University of Southern California have developed an AI tool to study social media posts and determine whether those writing them are likely to cause social unrest. The software claims to have identified an association between the volume and content of tweets and protests turning physical. The researchers can now advise law enforcement on the likelihood of a protest turning violent, so that officers can prepare accordingly.

The UK, an epicenter of AI progress, is not far behind in tracking people

The UK has a similarly impressive array of tools at its disposal to watch the people its government representatives feel need watching. Given the close cooperation between the UK and US governments, it is likely that many of these UK facilities are shared with the US and other NATO partners.

Project Stingray: fake cell phone towers to intercept communications

Stingray is a brand name for an IMSI catcher (the IMSI being the unique identifier on a SIM card). These devices 'spoof' real towers, presenting themselves as the closest mobile phone tower and 'fooling' phones into connecting to them. The technology has been used to spy on criminals in the UK, but it is not just the UK government which uses Stingray or its equivalents.
The Washington Post reported in June 2018 that a number of domestically compiled intelligence reports suggest that foreign governments acting on US soil, including China and Russia, have been eavesdropping on the White House using the same technology.

UK-developed spyware is being used by authoritarian regimes

Gamma International is a company based in Hampshire, UK, which provided the (notably authoritarian) Egyptian government with a facility to install what was effectively spyware, delivered with a virus, onto computers in the country. Once installed, the software permitted the government to monitor private digital interactions without needing to involve the phone company or ISP offering those services. Any internet-based technology could be tracked, assisting in tracking down individuals who might harbour negative feelings about the Egyptian government.

Individual arrested when his fingerprint was taken from a WhatsApp picture of his hand

A drug dealer in the UK was pictured holding an assortment of pills two months ago. The image of his hand was used to extract an image of his fingerprint, from which forensic scientists working with UK police confirmed that officers had arrested the correct person and associated him with the drugs.

AI solutions to speed up evidence processing, including scanning laptops and phones

UK police forces are trialling AI software to speed up the processing of evidence from digital devices. A dozen departments around the UK are using software called Cellebrite, which employs AI algorithms to search through data found on devices, including phones and laptops. Cellebrite can recognise images that contain child abuse, combine feeds from multiple devices to see when multiple owners were in the same physical location at the same time, and read text from screenshots. Officers can even feed it photos of suspects to see whether a picture of them shows up on someone's hard drive.

China takes the surveillance biscuit and may show us a glimpse of the future

There are 600 million mobile phone users in China, each producing a great deal of information. China has a notorious record of human rights abuses, and the ruling Communist Party takes a controlling interest (a board seat) in many of the country's largest technology companies, to ensure their work serves the interests of the party as well as being profitable. As a result, China is on the front foot when it comes to both AI and surveillance technology, and China's surveillance tools could be a harbinger of the future in the Western world.

Chinese cities will be run by a private company

Alibaba, China's equivalent of Amazon, already has control over the traffic lights in one Chinese city, Hangzhou, and it is far from shy about its ambitions. It has 120,000 developers working on the problem and intends to commercialise and sell the data it gathers about citizens. The AI-based product it is using is called CityBrain. In the future, all Chinese cities could well be run by AI from the Alibaba corporation; the idea is to use this trial as a template for every city, and the technology is likely to be deployed in Kuala Lumpur next. In the areas under CityBrain's control, traffic speeds have already increased by 15%. However, some observers have expressed concerns not just about the (lack of) oversight of CityBrain's current capabilities but also about the potential for future abuse.

What to make of this incredible list of surveillance capabilities

Facilities like Mapquest's new mapping service are beguiling.
They are clever ideas that create a better world. Similar technology, however, is being adopted behind the scenes by law enforcement bodies in an ever-growing list of countries. Even for someone who understands cutting-edge technology, the sum of those capabilities may be surprising. Practically any aspect of your behaviour, from the way you walk, to your face, your heat map and, of course, the contents of your phone and laptop, can now be monitored. Law enforcement can review these information feeds with artificial intelligence software to process and summarise findings quickly, and in some cases this is being done without the need for a warrant. Most concerning of all, these advances seem to be arriving without policy or, in many cases, any form of oversight.

We must change how we think about AI, urge AI founding fathers

Kaggle's Rachel Tatman on what to do when applying deep learning is overkill 

Vincy Davis
11 Dec 2019
8 min read
Deep learning, an emerging branch of machine learning, has garnered a lot of recognition in the field of technology over the last decade. It is regarded as a game-changer in AI, with distinct progress in computer vision, natural language processing (NLP), speech, and other areas of machine learning. This year, an Indeed survey found 'deep learning engineer' to be the best tech job in the USA. But although deep learning has many benefits and a very appealing track record, not everybody can afford it: it comes with downsides such as large data requirements, high cost, and long computing times. Below is a breakdown of Rachael Tatman's talk "Put down the deep learning: When not to use neural networks and what to do instead" at the PyCon 2019 conference, which delved into the problems with deep learning. Tatman is a data science advocate at Kaggle.

Deep learning models require a very large amount of data in order to perform better than other techniques. According to Tatman, the compute for even a simple image generation model can cost around $60,000, and the cost rises with the complexity of the data and models. Deep learning additionally requires expensive GPUs and hundreds of machines, which again increases the cost to the user. Many less experienced practitioners also find deep learning difficult to adopt, because there is no standard theory for choosing deep learning tools: the choice depends on the user's knowledge of topology, training methods, and other parameters. Finally, deep learning also takes a lot of time to train large models.

As the talk progresses, Tatman proposes three types of models that can be used instead of deep learning: regression-based models, tree-based models, and distance-based models.

The three proposed models instead of deep learning

The most interpretable: regression-based models

The biggest advantage of a regression-based model is that it rests on a "well-principled" understanding of the problem, and there are many kinds of regression models to choose from, unlike deep learning. Users can simply work through a flowchart and decide on the best type of regression model for their data. Other advantages include that regression models are fast to fit, much faster than a neural network, especially "if you're working with a well-optimized library; the Python regression libraries tend to vary wildly, so you might want to do a little bit of shopping around". Regression also works well with small data: Tatman said she has worked with as few as eight dozen data points, and because regression models are easy to interpret, she was able to learn many useful and interesting things from that data.

A few drawbacks of regression models are that they need a bit more data preparation than some other methods, and they require validation, because they rest on strong assumptions about the distribution of the data points or of the errors.

Tatman also said that if she had to use a single machine learning model for the rest of her life, it would be a mixed-effects regression model. Mixed-effects models are extensions of linear regression for data that is collected and summarised in groups, and they are mainly used to estimate the expected or mean values for the subject population.
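As a small illustration of the kind of model Tatman favours, here is a sketch of fitting a mixed-effects regression with statsmodels on a made-up grouped dataset; the column names and data are invented for the example and are not from the talk.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Made-up grouped data: repeated reaction-time measurements per subject.
rng = np.random.default_rng(0)
n_subjects, n_obs = 12, 8
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_obs),
    "days": np.tile(np.arange(n_obs), n_subjects),
})
subject_offsets = rng.normal(0, 20, n_subjects)          # per-group intercepts
df["reaction"] = (250 + 10 * df["days"]
                  + subject_offsets[df["subject"]]
                  + rng.normal(0, 15, len(df)))

# Fixed effect for `days`, random intercept per subject.
model = smf.mixedlm("reaction ~ days", df, groups=df["subject"])
result = model.fit()
print(result.summary())   # interpretable coefficients, unlike a neural net
```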
She notes that "you need to do a little bit more hands-on stuff, you need to do your validation, you probably need to do some additional data cleaning", but a little extra time buys a lot of computation for less money and data.

Want to know more about regression? [box type="shadow" align="" class="" width=""]With so many benefits to regression-based models, you should definitely give them a try. Read our book 'Python Machine Learning By Example', written by Yuxi (Hayden) Liu, to learn about regression algorithms and their evaluation. You can also master the art of building your own machine learning systems using other models, such as Support Vector Machines and text analysis algorithms, with this example-based practical guide.[/box]

The most user-friendly: tree-based models

The next type of model that can replace deep learning is the tree-based model, which works like a decision tree: each node checks a feature, and the value of that feature decides which path to follow. Going down a path, it checks further nodes, recursively cutting the decision region into smaller chunks. Tatman noted that developers generally opt for a random forest rather than a single tree: a random forest is an ensemble model that combines many different decision trees into a single model. Per Tatman, "If you're in the machine learning community you might actually associate random forests with Kaggle; from 2010 to 2016, about two-thirds of all Kaggle competition winners used random forests", whereas "less than half used some form of deep learning; random forests continue to do very well today."

For classification, random forests deliver better performance than logistic regression. They do not need a lot of data cleaning or model validation, nor do they require the user to convert categorical variables: they simply take the values and produce a corresponding output. They are also supported by many easy-to-use packages such as XGBoost, LightGBM, and CatBoost. In short, tree-based models are the most user-friendly, especially for classification. Their drawbacks are that they can easily overfit and are more sensitive to differences between datasets; they are also less interpretable and require more compute and training time than regression models. Thus, tree-based models require little money, but they do need some data and time to train on big data sets.

The most lightweight: distance-based models

The last family of models that, according to Tatman, can replace deep learning is a loose grouping of methods such as K-nearest neighbours, Gaussian mixture models, and support vector machines. These models share the basic idea that "points closer together to each other in a particular feature space are more likely to be in the same group." The K-nearest neighbours model decides the value of a point based on the majority of its nearest neighbours. Gaussian mixture models treat the data as a mixture of different Gaussian distributions. Support vector machines try to place the decision boundary as far away from all the data points as possible. Distance-based models, particularly support vector models, work very well with small data sets and tend to train about 10 times faster than a regression model on the same data. A short comparison of a random forest and these distance-based models on the same small dataset is sketched below.
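To make the comparison concrete, here is a small scikit-learn sketch that cross-validates a random forest, a K-nearest-neighbours classifier, and a support vector machine on the same small tabular dataset; the dataset choice and settings are ours, not Tatman's.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # a small tabular dataset

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    # Distance-based models care about feature scale, so standardise first.
    "k-nearest neighbours": make_pipeline(StandardScaler(),
                                          KNeighborsClassifier(n_neighbors=5)),
    "support vector machine": make_pipeline(StandardScaler(), SVC(C=1.0)),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:>22}: mean accuracy = {scores.mean():.3f}")
```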
In terms of raw accuracy, distance-based models lag behind the other models, but for quick-and-dirty modelling they perform well. They are good at classification, though a little slower than regression-based models. Overall, distance-based models take very little time, require very little money, and are extremely lightweight.

To conclude, Tatman says that the choice of model should depend on how much time and money the individual or organisation has, and, most importantly, on the performance required. She adds, "based on empirical evidence right now it looks like deep learning will perform the best on a given data set given sufficient time, money and compute." Watch Tatman's full talk for a detailed comparison of the three models.

You can learn more about all the above machine learning models from our book, 'Python Machine Learning By Example', written by Yuxi (Hayden) Liu. The book will help you implement machine learning classification and regression algorithms from scratch in Python, and shows how to optimise the performance of a machine learning model for your application.

François Chollet, creator of Keras, on TensorFlow 2.0 and Keras integration, tricky design decisions in deep learning, and more
Baidu adds Paddle Lite 2.0, new development kits, EasyDL Pro, and other upgrades to its PaddlePaddle deep learning platform
Why use JVM (Java Virtual Machine) for deep learning
Prof. Rowel Atienza discusses the intuition behind deep learning, advances in GANs & techniques to create cutting-edge AI models
Why Intel is betting on BFLOAT16 to be a game changer for deep learning training? Hint: Range trumps Precision.

Conversational AI in 2018: An arms race of new products, acquisitions, and more

Bhagyashree R
21 Jan 2019
5 min read
Conversational AI is one of the most interesting applications of artificial intelligence in recent years. While the trend isn't yet as ubiquitous as recommendation systems (perhaps unsurprisingly), it has been successfully productized by a number of tech giants, in the form of Google Home and Amazon Echo (which is 'powered by' Alexa).

The conversational AI arms race

Arguably, 2018 saw a bit of an arms race in conversational AI. As well as Google and Amazon, the likes of IBM, Microsoft, and Apple wanted a piece of the action. Here are some of the new conversational AI tools and products these companies introduced this year:

Google

Google worked on enhancing its conversational interface development platform, Dialogflow. In July, at the Google Cloud Next event, it announced several improvements and new capabilities for Dialogflow, including Text-to-Speech via DeepMind's WaveNet and Dialogflow Phone Gateway for telephony integration. It also launched a new product called Contact Center AI, which comes with Dialogflow Enterprise Edition and additional capabilities to assist live agents and perform analytics. Google Assistant became better at back-and-forth conversation with the help of Continued Conversation, unveiled at the Google I/O conference. The Assistant became multilingual in August, meaning users can speak to it in more than one language at a time without adjusting their language settings; users enable this by selecting two of the supported languages. Following in Amazon's footsteps, Google also launched its own smart display, the Google Home Hub, at the 'Made by Google' event in October.

Microsoft

Microsoft introduced and improved various bot-building tools for developers in 2018. In May, at the Build conference, Microsoft announced major updates to its conversational AI tools: Azure Bot Service, Microsoft Cognitive Services Language Understanding, and QnA Maker. To enable intelligent bots to learn from example interactions and handle common small talk, it launched new experimental projects named Conversation Learner and Personality Chat. At Microsoft Ignite, Bot Framework SDK V4.0 was made generally available, and in November Microsoft announced the general availability of the Bot Framework Emulator V4 and the Web Chat control. In May, to drive more research and development in its conversational AI products, Microsoft acquired Semantic Machines and established a conversational AI center of excellence in Berkeley. In November, the company's acquisition of Austin-based bot startup XOXCO was a clear indication that it wants to get serious about using artificial intelligence for conversational bots, and its guidelines on developing 'responsible' conversational AI further confirmed that Microsoft wants to play a big part in the future evolution of the area. Microsoft was also the tech partner chosen by UK-based conversational AI startup ICS.ai, whose team uses Azure and LUIS in public sector AI chatbots aimed at higher education, healthcare trusts, and county councils.

Amazon

Amazon, aiming to improve Alexa's capabilities, released the Alexa Skills Kit (ASK), which consists of APIs, tools, documentation, and code samples developers can use to build new skills for Alexa. In September, it announced a preview of a new design language named Alexa Presentation Language (APL).
With APL, developers can build visual skills that include graphics, images, slideshows, and video, and customize them for different device types. Amazon's smart speaker Echo Dot became the best seller in the smart speaker category on Amazon. At its 2018 hardware event in Seattle, Amazon announced a redesigned Echo Dot and a new addition to its Alexa-powered A/V devices, the Echo Plus. Alongside the continuing success of Alexa and the Amazon Echo, Amazon's decision to launch the Alexa Fellowship at a number of leading academic institutions highlights that, for the biggest companies, conversational AI is as much about research and exploration as it is about products. Like Microsoft, Amazon appears well aware that conversational AI is an area still in its infancy; as much as great products, it requires clear thinking and cutting-edge insight to ensure that it develops in a way that is both safe and impactful.

What's next?

This huge array of products is the result of advances in deep learning research. Conversational AI is no longer limited to small tasks like setting an alarm or finding the best restaurant; we can now have a back-and-forth conversation with a conversational agent. But, needless to say, it still needs work. Conversational agents have yet to meet user expectations around sensing and responding with emotion. In the coming years, these systems will get better at understanding and generating natural language, and will be able to hold reasonably natural, context-grounded conversations with humans in certain domains. Continuous development in IoT will also provide AI systems with more context.

Apple has introduced Shortcuts for iOS 12 to automate your everyday tasks
Microsoft amplifies focus on conversational AI: Acquires XOXCO; shares guide to developing responsible bots
Amazon is supporting research into conversational AI with Alexa fellowships

What the US-China tech and AI arms race means for the world - Frederick Kempe at Davos 2019

Sugandha Lahoti
24 Jan 2019
6 min read
Atlantic Council CEO Frederick Kempe spoke at the World Economic Forum (WEF) in Davos, Switzerland. In his presentation, Future Frontiers of Technology Control, he talked about the cold war between the US and China and why the two countries need to cooperate, rather than compete, in the tech arms race. He began by posing a question set forth by former US National Security Advisor Stephen Hadley: "Can the incumbent US and insurgent China become strategic collaborators and strategic competitors in this tech space at the same time?"

Read also: The New AI Cold War Between China and the USA

Kempe's three framing arguments

Geopolitical competition: The fusion of technological breakthroughs blurring the lines between the physical, digital, and biological spheres is reaching an inflection point. It is already clear that they will usher in a revolution that will shape the global economy and determine which nations and political constructs assume the commanding heights of global politics in the coming decade.

Technological superiority: Over the course of history, societies that dominated economic innovation and progress have dominated international relations, from military superiority to societal progress and prosperity. On balance, technological progress has contributed to higher standards of living in most parts of the world; however, the benefit goes disproportionately to first movers.

Commanding heights: The technological arms race for supremacy in the fourth industrial revolution has essentially become a two-horse contest between the United States and China. We are in the early stages of this race, but how it unfolds and is conducted will do much to shape global human relations. The shift in 2018 in US-China relations, from a period of strategic engagement to greater strategic competition, has also significantly accelerated the tech arms race.

China vs the US: why China has the edge

It was Vladimir Putin, President of the Russian Federation, who said that "the one who becomes the leader in Artificial Intelligence will rule the world." In 2017, DeepMind's AlphaGo defeated a Chinese master at Go, a traditional Chinese game. Following this defeat, China launched an ambitious roadmap, the Next Generation AI Plan, with the goal of becoming the global leader in AI by 2030 in theory, technology, and application. On current trajectories, China will emerge the winner in the four primary areas of AI over the next five years.

Kempe also quotes Kai-Fu Lee, author of the book AI Superpowers, who argues that harnessing the power of AI today, the electricity of the 21st century, requires abundant data, hungry entrepreneurs, AI scientists, and AI-friendly policy, and that China has the edge in all of these. AI has moved from out-of-the-box research, where the US has the expertise, to actual implementation, where China has the edge. Per Kai-Fu Lee, China already leads in entrepreneurship, data, and government support, and is rapidly catching up to the US in expertise. The world has moved from the age of world-leading expertise, where the US dominates, to the age of data, where China wins hands down. Economists call China the Saudi Arabia of data, and with data as the fuel for AI, it has an enormous advantage. The Chinese government, unconstrained by privacy restrictions, can collect and use data in a manner that is out of reach of any democracy.
Kempe concludes that the nature of this technological arms contest may favour insurgent China rather than the incumbent US.

What are the societal implications of this tech cold war?

He also touched on the societal implications of AI and the cold war between the US and China. A large number of jobs will be lost by 2030. Quoting from Kai-Fu Lee's book, Kempe says that job displacement caused by artificial intelligence and advanced robotics could displace up to 54 million US workers, roughly 30% of the US labor force, and up to 100 million Chinese workers, about 12% of the Chinese labor force. Given these huge societal implications of the bilateral race underway, what is the way forward? Kempe sees three possibilities.

A sloppy status quo: China and the US continue to cooperate but increasingly view each other with suspicion. They manage their rising differences and distrust imperfectly, never bridging them entirely, but also never burning bridges between researchers, corporations, or others.

Techno cold war: China and the US turn the global tech contest into more of a zero-sum battle for global domination. They organise themselves in a way that separates their tech sectors from each other and ultimately divides up the world.

A collaborative future, the one we hope for: Nicholas Thompson and Ian Bremmer argued in a Wired interview that despite the two countries' societal differences, the US should wrap China in a tech embrace. The two countries should work together to establish international standards ensuring that the algorithms governing people's lives and livelihoods are transparent and accountable. They should recognise that while the geopolitics of technological change is significant, even more important are the challenges AI poses to all societies across the world, in terms of job automation and the social disruption that may come with it. It may sound utopian to expect the US and China to cooperate in this manner, but this is what we should hope for. To do otherwise would be self-defeating, and would come at the cost of a global community that needs our best thinking to navigate the challenges of the fourth industrial revolution.

Kempe concludes his presentation with a quote from Henry Kissinger, former US Secretary of State and National Security Advisor: "We're in a position in which the peace and prosperity of the world depend on whether China and the US can find a method to work together, not always in agreement, but to handle our disagreements... This is the key problem of our time." Note: all images in this article are taken from Frederick Kempe's presentation.

We must change how we think about AI, urge AI founding fathers
Does AI deserve to be so Overhyped?
Alarming ways governments are using surveillance tech to watch you

10 Machine Learning Tools to watch in 2018

Amey Varangaonkar
26 Dec 2017
7 min read
2017 has been a wonderful year for machine learning. Developing smart, intelligent models has become easier than ever, thanks to extensive research into, and development of, newer and more efficient tools and frameworks. While the likes of TensorFlow, Keras, and PyTorch have ruled the roost in 2017 as the top machine learning and deep learning libraries, 2018 promises to be even more exciting, with a strong line-up of open source and enterprise tools ready to take over, or at least compete with, the current lot. In this article, we look at 10 such tools and frameworks that are expected to make it big in 2018.

Amazon SageMaker

One of the major announcements at AWS re:Invent 2017 was the general availability of Amazon SageMaker, a new framework that eases the building and deployment of machine learning models on the cloud. This service will be of great use to developers who don't have deep exposure to machine learning, giving them a variety of pre-built development environments based on the popular Jupyter notebook format. Data scientists who want to build effective machine learning systems on AWS and fine-tune their performance without spending a lot of time will also find this service useful.

DSSTNE

Yet another offering from Amazon, DSSTNE (popularly called 'Destiny') is an open source library for developing machine learning models. Its primary strength is that it can train and deploy recommendation models that work with sparse inputs. Models developed using DSSTNE can be trained on multiple GPUs, are scalable, and are optimized for fast performance. Boasting close to 4,000 stars on GitHub, this library is another tool to look out for in 2018.

Azure Machine Learning Workbench

Back in 2014, Microsoft put machine learning and AI capabilities on the cloud by releasing Azure Machine Learning. However, that was strictly a cloud-only service. During the Ignite 2017 conference in September, Microsoft announced the next generation of machine learning on Azure, bringing machine learning capabilities to organizations through the Azure Machine Learning Workbench. Azure ML Workbench is a cross-platform client that runs on both Windows and Apple machines. It is tailor-made for data scientists and machine learning developers who need to perform data manipulation and wrangling tasks. Built for scalability, users can get intuitive insights from a broad range of data sources and use them for their data modeling tasks.

Neon

Back in 2016, Intel announced its intention to become a major player in the AI market with the $350 million acquisition of Nervana, an AI startup that had been developing both hardware and software for effective machine learning. With Neon, Intel now has a fast, high-performance deep learning framework designed specifically to run on top of the recently announced Nervana Neural Network Processor. Designed for ease of use and integration with the IPython notebook, Neon supports training of common deep learning models such as CNNs, RNNs, LSTMs, and others. The framework shows signs of continuous improvement and, with over 3,000 stars on GitHub, looks set to challenge the major league of deep learning libraries in the years to come.

Microsoft DMLT

One of the major challenges of machine learning for enterprises is the need to scale models out quickly, without compromising performance or consuming significant resources.
Microsoft's Distributed Machine Learning Toolkit (DMLT) is designed to do just that. Open sourced by Microsoft so that it can receive much wider support from the community, DMLT allows machine learning developers and data scientists to take their single-machine algorithms and scale them out to build high-performance distributed models. DMLT focuses mostly on distributed machine learning algorithms and lets you perform tasks such as word embedding, sampling, and gradient boosting with ease. The framework does not yet support training deep learning models, but we can expect this capability to be added soon.

Google Cloud Machine Learning Engine

Considered to be Google's premium machine learning offering, the Cloud Machine Learning Engine lets you build machine learning models on all kinds of data with relative ease. Leveraging the popular TensorFlow framework, the platform can be used to perform predictive analytics at scale. It also lets you fine-tune and optimize the performance of your models using the HyperTune feature. With a serverless architecture supporting automated monitoring, provisioning, and scaling, the Machine Learning Engine ensures you only have to worry about the kind of models you want to train. This is especially useful for developers looking to build large-scale models on the go.

Apple Core ML

Developed by Apple to help iOS developers build smarter applications, the Core ML framework is what helps make Siri smarter. It takes advantage of both CPU and GPU capabilities, allowing developers to build different kinds of machine learning and deep learning models that can then be integrated seamlessly into iOS applications. Core ML supports popular machine learning algorithms such as decision trees, support vector machines, linear models, and more. Targeting real-world use cases such as natural language processing and computer vision, Core ML makes it possible to analyze data on Apple devices on the go, without having to send the data elsewhere for processing.

Apple Turi Create

In many cases, iOS developers want to customize the machine learning models they integrate into their apps. For this, Apple has come up with Turi Create. The library lets you focus on the task at hand rather than on deciding which algorithm to use, and it is flexible about the dataset, the scale at which the model needs to operate, and the platform the models need to be deployed to. Turi Create comes in very handy for building custom models for recommendations, image processing, text classification, and many other tasks. All you need is some knowledge of Python to get started!
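As a flavour of what that looks like, here is a minimal sketch - our own illustration, not Apple's sample code - of training a simple recommender with Turi Create. The file name and column names are hypothetical.

import turicreate as tc

# Hypothetical interaction log with 'user_id' and 'item_id' columns
actions = tc.SFrame.read_csv('user_item_interactions.csv')

# Turi Create picks a sensible recommender model for the data automatically
model = tc.recommender.create(actions, user_id='user_id', item_id='item_id')

# Recommend 5 items for every user seen in the data
recommendations = model.recommend(k=5)
print(recommendations.head())

# model.export_coreml('Recommender.mlmodel')  # export for iOS, where supported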
ConvNetJS

Move over supercomputers and clusters of machines - deep learning is well and truly here, in your web browser! You can now train advanced machine learning and deep learning models directly in the browser, without needing specialized hardware, using the popular JavaScript-based ConvNetJS library. Originally written by Andrej Karpathy, the current director of AI at Tesla, the library has since been open sourced and extended by community contributions. You can train deep neural networks and even reinforcement learning models directly in the browser, which makes the library well suited to those who do not wish to purchase serious hardware for training computationally intensive models. With close to 9,000 stars on GitHub, ConvNetJS has been one of the rising stars of 2017 and is quickly becoming the go-to library for deep learning in the browser.

BigML

BigML is a popular machine learning company that provides an easy-to-use platform for developing machine learning models. Using BigML's REST API, you can seamlessly train your models on its platform and perform tasks such as anomaly detection and time series forecasting, or build apps that perform real-time predictive analytics. With BigML, you can deploy your models on-premise or in the cloud, giving you the flexibility to select the kind of environment your models run in. True to its promise, BigML really does make 'machine learning beautifully simple for everyone'.

So there you have it! With Microsoft, Amazon, and Google all fighting for supremacy in the AI space, 2018 could prove to be a breakthrough year for developments in artificial intelligence. Add to this mix the various open source libraries that aim to simplify machine learning for users, and you get a very interesting list of tools and frameworks to keep a tab on. The exciting thing about all this is that any of them has the potential to become the next TensorFlow and cause the next AI disruption.

NIPS 2017 Special: How machine learning for genomics is bridging the gap between research and clinical trial success by Brendan Frey

Sugandha Lahoti
14 Dec 2017
10 min read
Brendan Frey is the founder and CEO of Deep Genomics and a professor of engineering and medicine at the University of Toronto. His work focuses on using machine learning to model genome biology and understand genetic disorders. This article attempts to bring our readers Brendan's keynote speech at NIPS 2017. It highlights how the human genome can be reprogrammed using machine learning and gives a glimpse into some of the significant work going on in this field. After reading this article, head over to the NIPS Facebook page for the complete keynote. All images in this article come from Brendan's presentation slides and do not belong to us.

65% of people are at risk of acquiring a disease with a genetic basis in their lifetime, and 8 million births per year are estimated to have a serious genetic defect. According to the US healthcare system, the average lifetime cost of such a baby is $5M per child. These are just some statistics. If we also add the emotional component to this data, it gives us an alarming picture of the state of the healthcare industry today. According to a recent study, investing in pharma is no longer as lucrative as it used to be in the 90s. Funding for this sector is dwindling, which acts as a barrier to drug discovery, trials, and deployment. All of these, in turn, add to the rising cost of healthcare. Better to stuff your money in a mattress than put it in a pharmaceutical company!

Genomics as a field is rich in data. Experts in genomics strive to determine complete DNA sequences and perform genetic mapping to help understand a disease. However, the main problem confronting genome biology and genomics is the inability to decipher information from the human genome, i.e. how to convert the genome into actionable information.

What genes are made of and why sequencing matters

Essentially, each gene consists of a promoter region, which activates the gene. Following the promoter region, there are alternating exons and introns. Introns are almost 10,000 nucleotides long; exons are relatively short, around 100 nucleotides. In software terms, you can think of exons as print statements: exons are the parts that end up in proteins, while introns get cut out. However, introns contain crucial control logic - there are words embedded in introns that tell the cell how to cut and paste the exons together and make the gene product. A DNA sequence is transcribed into RNA, and the RNA is then processed in various ways and translated into proteins. The picture is even more complicated than this: proteins go back and interact with the DNA, proteins interact with RNA, and RNA interacts with proteins. All these entities are interrelated, and these technicalities and interrelationships make biology too complex for a researcher - or even a group of researchers - to fully understand and make sense of the data. Another way to look at this is that in recent years our ability to measure biology (Fitbits, genome sequencing) and to alter biology (DNA editing) has far surpassed our ability to understand biology. In short, in this field we have become very good at collecting data but not as good at interpreting it.

Machine learning brought to genomes

Deep Genomics is a genetic medicine company that uses an AI-driven platform to support geneticists, molecular biologists, and chemists in the development of genetic therapies.
In 2010, Frey's research group used machine learning to understand how the words embedded in introns control splicing - the 'print statements' that put exons into proteins - and to reverse engineer, or infer, those code words from datasets. Another piece of Deep Genomics research deals with protein-DNA binding data: there are datasets that let you measure interactions between proteins and DNA and understand how binding works. In this research, they took a dataset from Ray et al. 2013, which consisted of 240,000 designed sequences, and evaluated which proteins each sequence likes to stick to, generating a big data matrix of proteins and designed sequences. The machine learning task was to learn to take a sequence and predict whether a protein will bind to that sequence.

How was this done? They took batches of the designed sequences and fed them into a convolutional neural network. The CNN swept across those sequences to generate an intermediate representation, which was then fed into further convolutional, pooling, and fully connected layers that produced the output. The output was compared to the measurement (the data matrix of proteins and designed sequences described earlier), and backpropagation was used to update the parameters. One of the challenges was figuring out the right metric: they compared the measured binding affinity (how much protein sticks to the sequence) to the output of the neural network and determined the right cost function for producing a neural network that is useful in practice.
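To make the pipeline described above concrete, here is a minimal, illustrative sketch - our own simplification, not Deep Genomics' actual model - of a 1D convolutional network that maps one-hot encoded DNA sequences to a predicted binding affinity. The sequence length, layer sizes, and random stand-in data are assumptions for illustration only.

import numpy as np
import tensorflow as tf

SEQ_LEN = 40   # length of each designed sequence (assumed)
ALPHABET = 4   # one-hot channels for A, C, G, T

model = tf.keras.Sequential([
    # The convolution sweeps across the sequence like a learned motif scanner
    tf.keras.layers.Conv1D(64, kernel_size=8, activation='relu',
                           input_shape=(SEQ_LEN, ALPHABET)),
    tf.keras.layers.MaxPooling1D(pool_size=4),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(1)   # predicted binding affinity (a single score)
])

# Mean squared error against the measured affinities plays the role of the
# cost function tuned by backpropagation in the description above.
model.compile(optimizer='adam', loss='mse')

# Random arrays standing in for the designed-sequence / affinity matrix
x = np.random.rand(256, SEQ_LEN, ALPHABET).astype('float32')
y = np.random.rand(256, 1).astype('float32')
model.fit(x, y, epochs=2, batch_size=32, verbose=0)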
Use case

One use case of this neural network is to identify pathological mutations and fix them. The illustration in the talk showed the promoter sequence of a cholesterol gene. The researchers looked, in silico, at every possible mutation in the promoter: for each nucleotide - say one with a value of A - they switched it to G, C, and T, and for each of those possibilities they ran the entire promoter through the neural network and looked at its output. The network then predicted which mutations would disrupt protein binding. The heights of the letters showed the binding affinity, i.e. the output of the neural network, and the white box displayed how much each mutation changed the output - pink or bright red for a positive change, blue for a negative change, and white for no change. This map was then compared with known results to check accuracy and to make predictions never before seen in a clinical trial. As shown in the image, the blues - the potential or known harmful mutations - correctly fell in the white spaces, but there were some unknown mutations as well. Machine learning output such as this can help researchers narrow their focus when studying new diseases, as well as when diagnosing and treating existing ones.

Another group of researchers used a neural network to figure out the 3D structure, or chromatin interaction structure, of DNA. The data was in matrix form and showed how strongly two parts of a DNA molecule are likely to interact. The researchers trained a multilayer convolutional network that takes as input the raw DNA sequence along with a signal called chromatin accessibility (which indicates how available the DNA is). The output of the system predicted the probability of contact, which is crucial for gene expression.

Deep Genomics: Using AI to build a new universe of digital medicines

The founding belief at Deep Genomics is that the future of medicine will rely on artificial intelligence, because biology is too complex for humans to understand. The company's goal is to build an AI platform for detecting and treating genetic disease.

Genome tools: genome processing tools help in the identification of mutations, e.g. DeepVariant. At Deep Genomics, the tool used is called Genome Kit, which is 20 to 800 times faster than other existing tools.

Disease mechanism prediction: this is about figuring out whether a mutation drives a pathological disease mechanism or simply changes, say, hair color.

Therapeutic development: helping patients by providing them with better medicines.

These are the basics of any drug development procedure. We start with patient genetic data and clinical mutations. Then we find the disease mechanism and figure out the mechanism of action (the steps to remediate the problem); however, the disease mechanism and the mechanism of action of a potential drug may not simply be the inverse of one another. The next step is to design a drug. With digital medicines, if we know the mechanism of action we are trying to achieve, and we have ML systems like the ones described earlier, we can simulate the effects of modifying DNA or RNA and so design, in silico, the compound we want to test. Next, we do the experimental work in the wet lab to see whether the compound actually behaves the way the ML systems predicted. Then comes toxicity, or off-target effects: evaluating whether the compound is going to change some other part of the genome or have unintended consequences. Next come clinical trials, where one of the biggest problems facing pharmaceutical companies is patient stratification. Finally, there are the marketing and distribution of the drug, which are highly costly - marketing strategies to convince people to buy the drug, insurance companies to pay for it, and legal teams to deal with litigation.

Consider how long it took Ionis and Biogen to develop Spinraza, a drug for treating Spinal Muscular Atrophy (SMA). It is the most effective drug for SMA and has already saved hundreds of lives; however, it costs $750,000 per child per year. Why does it cost so much? If we look at the timeline of Spinraza's development, the initial period of testing was very long. The goal of Deep Genomics is to use ML to shrink the research period of drugs such as Spinraza from 8 years down to a couple of years. They also aim to use AI to accelerate clinical trials, toxicity studies, and other aspects of drug development - the whole idea is to reduce the time needed to develop a drug. Deep Genomics uses AI to automate and accelerate each of these steps and make them fast and accurate. Apart from AI, they also test compounds in their wet lab, in human cells, to see if they work, and they use a cloud laboratory: they upload a Python script that specifies the experimental protocols, and robots then conduct the experiments. Such labs rapidly scale up the ability to do experiments, test compounds, and solve other problems.

Earning the trust of stakeholders

One of the major issues ML systems face in the genomics industry is earning the trust of stakeholders.
These stakeholders include the patients, the physicians treating them, the insurance companies paying for treatment, different technology providers, and the hospitals. Machine learning practitioners are also often criticized for producing black boxes that are not open to interpretation. The way to gain this trust is to figure out exactly what these stakeholders need. For this, machine learning systems need to explain the intermediate steps of a prediction. For instance, instead of directly recommending a double mastectomy, the system says: you have a mutation, the mutation is going to cause splicing to go wrong, leading to a malfunctioning protein, which is likely to lead to breast cancer, and the likelihood is x%.

The road ahead

Researchers at Deep Genomics are currently working primarily on Project Saturn. The idea is to use a machine learning system to scan a vast space of 69 billion molecules, all in silico, and identify about a thousand active compounds. Active compounds allow us to manipulate cell biology - think of them as 1,000 control switches that we can turn and twist to adjust what is going on inside a cell, a toolkit for therapeutic development. Deep Genomics plans to have 3 compounds in clinical trials within the next 3 years.

2018 new year resolutions to thrive in the Algorithmic World - Part 2 of 3

Savia Lobo
04 Jan 2018
7 min read
In our first resolution, we talked about learning the building blocks of data science, i.e. developing your technical skills. In this second resolution, we walk you through steps to stay relevant in your field and to dodge jobs that have a high possibility of getting automated in the near future.

2nd Resolution: Stay relevant in your field even as job automation is on the rise (Time investment: half an hour every day, 2 hours on weekends)

Once you have got your fundamentals right, it is important to stay relevant through continuous learning and reskilling. In addition to honing your technical skills, you must also deepen your domain expertise and keep adding to your portfolio of soft skills, not just to stay ahead of the human competition but also to thrive in an automated job market. We list below some simple ways to do all this in a systematic manner. All it requires is a commitment of half an hour to one hour of your time daily for your professional development.

1. Commit to and execute a daily learning-practice-participation ritual

Here are some ways to stay relevant.

Follow data science blogs and podcasts relevant to your area of interest. Here are some of our favorites:
Data Science 101, the journey of a data scientist
The Data Skeptic, for a healthy dose of scientific skepticism
Data Stories, for data visualization
This Week in Machine Learning & AI, for informative discussions with prominent people in the data science/machine learning community
Linear Digressions, a podcast co-hosted by a data scientist and a software engineer attempting to make data science accessible
You could also follow individual bloggers/vloggers in this space, like Siraj Raval, Sebastian Raschka, Denny Britz, Rodney Brooks, Corinna Cortes, and Erin LeDell.

Newsletters are a great way to stay up-to-date and to get a macro-level perspective without spending an awful lot of time researching many different subtopics yourself. So, subscribe to useful newsletters on data science. You can subscribe to our newsletter here. It is a good idea to subscribe to multiple newsletters on your topic of interest to get a balanced and comprehensive view of it. Try to choose newsletters that have distinct perspectives, are published regularly, and are written by people passionate about the topic.

Twitter gives a whole new meaning to 'breaking news'. It is also a great place to follow contemporary discussions on topics of interest, where participation is open to all. When done right, it can be a gold mine for insights and learning, though it can often feel overwhelming because it is widely used as a broadcast marketing tool. Follow your role models in data science on Twitter, or follow us at @PacktDataHub for curated content from key data science influencers and our own updates about the world of data science. You could also click here to keep track of the 737 Twitter accounts most followed by members of the NIPS 2017 community.

Quora, Reddit, Medium, and Stack Overflow are great places to learn about topics in depth when you have a specific question in mind or a narrow focus area, and they help you get multiple informed opinions. In other words, when you choose a topic worth learning, these are great places to start. Follow them up by reading books on the topic and the seminal papers to gain a robust technical appreciation.

Create a GitHub account and participate in Kaggle competitions. Nothing sticks as well as learning by doing.
You can also browse Data Helpers, a site voluntarily set up by Angela Bassa where data science practitioners offer to help newcomers with their questions about entering the field and anything else.

2. Identify your strengths and interests to realign your career trajectory

OK, now that you have your daily learning routine in place, it is time to think a little more strategically about your career trajectory, your goals, and eventually the kind of work you want to be doing. This means:
Getting out of jobs that can be automated
Developing skills that augment or complement AI-driven tasks
Finding your niche and developing deep domain expertise that AI will find hard to automate in the near future

Here are some ideas to start thinking about the points above. The first step is to assess your current job role and understand how likely it is to be automated. If you are in a job with well-defined routines and rules to follow, it is quite likely to go the AI-job-apocalypse route: for example, data entry, customer support that follows scripts, invoice processing, or template-based software testing or development. Even 'creative' jobs such as content summarization, news aggregation, and template-based photo or video editing fall in this category. In the world of data professionals, jobs like data cleaning, database optimization, feature generation, and even model building (gasp!) could head the same way, given the right incentives. Choose today to transition out of jobs that may not exist in the next 10 years. Then, instead of hitting the panic button, invest in redefining your skills in a way that will be helpful in the long run. If you are a data professional, skills such as data interpretation, data-driven storytelling, data pipeline architecture and engineering, and feature engineering - skills that require a high level of human judgment - are the least likely to be replicated by machines anytime soon. By mastering skills that complement AI-driven tasks and jobs, you should be able to present yourself as an attractive option to potential employers in a highly competitive job market.

In addition to reskilling, try to find your niche and dive deep. By niche we mean, if you are a data scientist, a specific technical aspect of your field that interests you. It could be anything from computer vision to NLP, a class of algorithms like neural nets, a type of problem that machine learning solves such as recommender or classification systems, or even a specific phase of a data science project such as data visualization or data pipeline engineering. Master your niche while keeping up with what's happening in related areas. Next, understand where your strengths lie - in other words, what your expertise is and which industry or domain you understand well or have amassed experience in. For instance, NLP, a subset of machine learning, can be applied to customer reviews to mine useful insights, perform sentiment analysis, and build recommendation systems in conjunction with predictive modeling, among other things. But in order to build an NLP model that mines insights from customer feedback, we must have some idea of what we are looking for - and your domain expertise can be of great value here.
If you are in the publishing business, you would know which keywords matter most in reviews, and more importantly why they matter and how to convert the findings into actionable insights - aspects that your model, or even a machine learning engineer outside your industry, may not understand or appreciate. Take the case of Brendan Frey and the team of researchers at Deep Genomics as a real-world example. They applied AI and machine learning (their niche expertise) to build a neural network that identifies pathological mutations in genes (their domain expertise). Their knowledge of how genes are formed, how they work, and what a mutation looks like helped them choose the features and hyperparameters for their model. Similarly, you can pick up any of your niche skills and apply them in whichever field you find interesting and worthwhile. Depending on your domain knowledge and area of expertise, that could range from sorting a person into a Hogwarts house because you are a Harry Potter fan, to sorting them into potential patients with a high likelihood of developing diabetes because you have a background in biotechnology. This brings us to the next resolution, where we cover how your work will come to define you and why it matters that you choose your projects well.

Setting up Microsoft Bot Framework Dev Environment

Packt
30 Dec 2016
8 min read
In this article by Kishore Gaddam, author of the book Building Bots with Microsoft Bot Framework, we introduce what the Microsoft Bot Framework is and how it helps in the development of bots. (For more resources related to this topic, see here.)

Over the past several decades, the corporate, government, and business world has experienced several waves of IT architecture foundations, moving from mainframes, to minicomputers, to distributed PCs, to the Internet, to social/mobile, and now to the Cloud/Internet of Things (IoT) stack. We call this the sixth wave of corporate IT, and like its predecessors, cloud and IoT technologies are causing significant disruption and displacement even while they drive new levels of productivity. Each architecture focused on key business processes and supported killer applications to drive new levels of value. Very soon we will be looking at an enormous networked interconnection of everyday machines to one another, as well as to humans. Machine-to-machine-to-human connectivity will have a profound impact on the consumer and corporate IT experience. As these machines become social and talk to us, we have an enormous opportunity to greatly enhance their value proposition through improved product quality, customer experience, and lowered cost of operations. Heightened consumer expectations for more personal and real-time interactions are driving businesses to holistically embrace the next wave of technology innovation - cloud, IoT, and bots - to boost business performance.

In this age of billions of connected devices, there is a need for a technology through which our apps can talk back to us: bots. Bots that have specific purposes and talk to any device, any app, or anyone; bots that live in the cloud; bots that we can talk to via any communication channel, such as email, text, voice, chat, and others. Bots can go where no apps have gone before when it comes to machine-to-machine-to-human connectivity, and to make this happen we will need a whole new platform: a platform for conversations.

Conversation as a Service (CaaS)

Messaging apps in general are becoming a second home screen for many people, acting as their entry point into the internet. And where the youngins are, the brands will follow. Companies are coming to messaging apps as bots and apps, offering everything from customer service to online shopping and banking. Conversations are shaping up to be the next major human-computer interface; thanks to advances in natural language processing and machine learning, the tech is finally getting fast and accurate enough to be viable. Imagine a platform where language is the new UI layer. When we talk about conversation as a platform, there are three parts:
There are people talking to people - Skype Translator, for example, lets people communicate across languages
There is presence - being able to enhance a conversation through the ability to be present and interact remotely
Then there is personal assistance and the bots

Think of bots as the new mechanism that you can converse with. Instead of looking through multiple mobile apps or pages and pages of websites, you can call on any application as a bot within the conversational canvas. Bots are the new apps, and digital assistants are the meta apps; this way, intelligence is infused into all our interactions. This leads us to the Microsoft Bot Framework, a comprehensive offering from Microsoft for building and deploying high-quality bots that your users can interact with using Conversation as a Platform (CaaP).
This is a framework that lets you build and connect intelligent bots. The idea is that they interact naturally wherever your users are talking - Skype, Slack, Facebook Messenger, text/SMS, and others. Basically, any channel that you use today as a human being to talk to other people, you will be able to use to talk to bots, all using natural language. The Microsoft Bot Framework consists of a Microsoft-operated CaaP service and an open source SDK. The Bot Framework is one of many tools Microsoft offers for building a complete bot; others include the Language Understanding Intelligent Service (LUIS), the Speech APIs, Microsoft Azure, the Cortana Intelligence Suite, and more.

Your Bot

The Microsoft Bot Builder SDK is one of the three main components of the Microsoft Bot Framework. First you have to build your bot. Your bot lives in the cloud and you host it yourself. You write it just like a web service component, using Node.js or C#, for example as an ASP.NET Web API component. The Bot Builder SDK is open source, so more languages and web stacks will be supported over time. Your bot will have its own logic, but you also need conversation logic, using dialogs to model a conversation. The Bot Builder SDK gives you facilities for this, and many types of dialogs are included, from simple yes/no questions to full natural language understanding with LUIS, one of the APIs provided in Microsoft Cognitive Services.

Bot Connector

The Bot Connector is hosted and operated by Microsoft. Think of it as a central router between your bots and the many channels used to communicate with them. Apart from routing messages, it manages state within the conversation. The Bot Connector is an easy way to create a single back-end and then publish it to a bunch of different platforms called channels.

Bot Directory

The Bot Directory is where users will be able to find bots - it's like an app store for mobile apps. The Bot Directory is a public directory of all reviewed bots registered through the developer portal. Users will be able to discover, try, and add bots to their favorite conversation experiences from the Bot Directory. Anyone can access it, and anyone can submit bots to it.

As you begin your development with the Microsoft Bot Framework, you might be wondering how best to get started. Bots can be built in C#; however, the Bot Framework can also be used to build bots using Node.js. Before developing any bot, we first need to set up the development environment and install the right tools for successfully developing and deploying it. Let's see how we can set up a development environment using Visual Studio.

Setting up the development environment

Let's first look at the prerequisites required to set up the development environment.

Prerequisites

To use the Microsoft Bot Framework Connector, you must have:
A Microsoft account (Hotmail, Live, or Outlook) to log into the Bot Framework developer portal, which you will use to register your bot.
An Azure subscription (free trial: https://azure.microsoft.com/en-us/). This subscription is essential for having an Azure-accessible REST endpoint exposing a callback for the Connector service.
Developer accounts on one or more communication services (such as Skype, Slack, or Facebook) where your bot will communicate.
In addition, you may wish to have an Azure Application Insights account so you can capture telemetry from your bot.
There are several different ways to go about building a bot: from scratch, coded directly against the Bot Connector REST API; with the Bot Builder SDKs for Node.js and .NET; or with the Bot Connector .NET template, which is what this quick start guide demonstrates.

Setting up the Bot Framework Connector SDK .NET

This is a step-by-step guide to setting up a dev environment for developing a bot in C# using the Bot Framework Connector SDK .NET template:
Install the prerequisite software: Visual Studio 2015 (latest update) - you can download the Community edition for free from www.visualstudio.com. Important: please update all Visual Studio extensions to their latest versions; to do so, navigate to Tools | Extensions and Updates | Updates.
Download and install the Bot Application template: download the file from the direct download link at http://aka.ms/bf-bc-vstemplate and save the zip file to your Visual Studio 2015 templates directory, which is traditionally %USERPROFILE%\Documents\Visual Studio 2015\Templates\ProjectTemplates\Visual C#.
Open Visual Studio and create a new C# project using the new Bot Application template.

The template is a fully functional echo bot that takes the user's text utterance as input and returns it as output. In order to run, however:
The bot has to be registered with the Bot Connector
The AppId and AppPassword from the Bot Framework registration page have to be recorded in the project's web.config
The project needs to be published to the web

Emulator

Use the Bot Framework Emulator to test your bot application. The Bot Framework provides a channel emulator that lets you test calls to your bot as if it were being called by the Bot Framework cloud service. To install the Bot Framework Emulator, download it from https://download.botframework.com/bf-v3/tools/emulator/publish.html. Once installed, you're ready to test. First, start your bot in Visual Studio using a browser as the application host. The following screenshot uses Microsoft Edge.

Summary

In this article, we introduced the Microsoft Bot Framework and how it helps in the development of bots. We also saw how to set up the development environment, the Emulator, and the tools needed for programming. This article is based on the thought that programming knowledge and experience grow best when they grow together.

Resources for Article:
Further resources on this subject:
Talking to Bot using Browser [article]
Webhooks in Slack [article]
Creating our first bot, WebBot [article]

Accountability and algorithmic bias: Why diversity and inclusion matters [NeurIPS Invited Talk]

Sugandha Lahoti
08 Dec 2018
4 min read
One of the most awaited machine learning conferences, NeurIPS 2018, is happening throughout this week in Montreal, Canada. It features a series of tutorials, invited talks, product releases, demonstrations, presentations, and announcements related to machine learning research. For the first time, NeurIPS invited a diversity and inclusion (D&I) speaker, Laura Gomez, to talk about the lack of diversity in the tech industry, which leads to biased algorithms, faulty products, and unethical tech.

Laura Gomez is the CEO of Atipica, which helps tech companies find and hire diverse candidates. Being a Latina woman herself, she faced oppression when seeking capital and funding for her startup while trying to establish herself in Silicon Valley. This experience led to her realization that there is a strong need to talk about why diversity and inclusion matter. Her efforts were not in vain, and she recently raised $2M in seed funding led by True Ventures. "At Atipica, we think of Inclusive AI in terms of data science, algorithms, and their ethical implications. This way you can rest assured our models are not replicating the biases of humans that hinder diversity while getting patent-pending aggregate demographic insights of your talent pool," reads the website.

She talked about her journey as a Latina woman in the tech industry. She reminisced about being the only person like her to get an internship with Hewlett-Packard, and about the fact that she hated it. Nevertheless, she decided to stay, determined not to let the industry turn her into a victim. She believes she made the right choice going forward with tech; now, years later, diversity is dominating the conversation in the industry. After HP, she worked at Twitter and YouTube, helping them translate and localize their applications for a global audience. She is also a founding advisor of Project Include, a non-profit organization run by women that uses data and advocacy to accelerate diversity and inclusion solutions in the tech industry.

She opened her talk by agreeing with a quote from Safiya Noble, who wrote Algorithms of Oppression: "Artificial Intelligence will become a major human rights issue in the twenty-first century." She believes we need to talk about difficult questions, such as where AI is heading and where we should hold ourselves and each other accountable. She urged people to evaluate their role in AI, bias, and inclusion, to find the empathy and value in difficult conversations, and to go beyond their immediate surroundings to consider the broader consequences. It is important to build accountable AI in a way that allows humanity to triumph.

She touched upon discriminatory moves by tech giants like Amazon and Google. Amazon recently killed off its AI recruitment tool because it couldn't stop discriminating against women. She also criticized Facebook's Myanmar operation, where Facebook data scientists were building algorithms to detect hate speech without understanding the importance of localization and language, or internationalizing their own algorithms to be inclusive of all countries. She talked about algorithmic bias in library discovery systems, as well as how even 'black robots' are being impacted by racism, and she condemned the work of Palmer Luckey, who is helping U.S. immigration agents at the border wall identify Latin American refugees.
Finally, she urged people to take three major steps to progress towards being inclusive:
Be an ally
Think of inclusion as an approach, not a feature
Work towards ethical AI

Head over to the NeurIPS Facebook page for the entire talk and other sessions happening at the conference this week.

NeurIPS 2018: Deep learning experts discuss how to build adversarially robust machine learning models
NeurIPS 2018 paper: DeepMind researchers explore autoregressive discrete autoencoders (ADAs) to model music in raw audio at scale
NeurIPS 2018: A quick look at data visualization for Machine learning by Google PAIR researchers [Tutorial]

2017 Generative Adversarial Networks (GANs) Research Milestones

Savia Lobo
30 Dec 2017
9 min read
Generative Adversarial Networks (GANs), introduced by Ian Goodfellow, are the next big revolution in the field of deep learning. Why? Because of their ability to perform semi-supervised learning, where the vast majority of the data is unlabelled. GANs can efficiently carry out image generation and related tasks such as converting sketches to images or satellite images to maps. GANs are capable of generating realistic images in many circumstances - for instance, given some text written in a particular handwriting as input, the generative model can generate more text in similar handwriting. The speciality of GANs is that, compared to discriminative models, these generative models make use of a joint probability distribution to generate more likely samples. In short, generative models such as GANs can be seen as an advance beyond purely discriminative models. Let's explore some of the research papers that are contributing to further advancements in GANs.

CycleGAN: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

This paper talks about CycleGANs, a class of generative adversarial networks that carry out image-to-image translation. This means capturing special characteristics of one image collection and figuring out how these characteristics could be translated to another image collection, all in the absence of any paired training examples. The CycleGAN method can also be applied to a variety of applications such as collection style transfer, object transfiguration, season transfer, and photo enhancement.

CycleGAN architecture (Source: GitHub)

CycleGANs build upon the advantages of the pix2pix architecture. The key advantage of the CycleGAN model is that it lets you point the model at two discrete, unpaired collections of images. For example, one image collection, say Group A, would consist of photos of landscapes in summer, whereas Group B would include photos of landscapes in winter. The CycleGAN model can learn to translate images between these two aesthetics without the need to merge tightly correlated matches into single X/Y training images.

(Source: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks)

The way CycleGANs are able to learn such good translations without explicit X/Y training pairs is by introducing the idea of a full translation cycle to determine how good the entire translation system is, thus improving both generators at the same time. A minimal sketch of this cycle-consistency idea appears at the end of this section.

(Source: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks)

Currently, the applications of CycleGANs can be seen in image-to-image and video translation - for example, animal transfiguration or turning portrait faces into doll faces. Further ahead, we could potentially see implementations in audio, text, and other modalities, which would help us generate new data for training. Although this method has compelling results, it also has some limitations:
Geometric changes within an image are not fully successful (for instance, the cat-to-dog transformation showed only minor success). This could be caused by the generator architecture choices, which are tailored for good performance on appearance changes. Handling more varied and extreme transformations, especially geometric changes, therefore remains an important problem.
Failures caused by the distribution characteristics of the training datasets. For instance, in the horse-to-zebra transfiguration, the model got confused because it was trained on the wild horse and zebra synsets of ImageNet, which do not contain images of a person riding a horse or zebra.

These and some other limitations are described in the research paper. To read more about CycleGANs in detail, visit the link here.
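Here is a minimal sketch of that cycle-consistency idea, written by us for illustration in PyTorch. The tiny single-layer 'generators' and random images are placeholders; a real CycleGAN combines this term with adversarial (and often identity) losses and much deeper networks.

import torch
import torch.nn as nn

# Toy stand-ins for the two generators; in practice these are deep encoder-decoders
G_ab = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))
G_ba = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))

l1 = nn.L1Loss()
real_a = torch.rand(1, 3, 64, 64)   # an unpaired image from collection A
real_b = torch.rand(1, 3, 64, 64)   # an unpaired image from collection B

fake_b = G_ab(real_a)               # translate A -> B
rec_a = G_ba(fake_b)                # translate back B -> A
fake_a = G_ba(real_b)               # translate B -> A
rec_b = G_ab(fake_a)                # translate back A -> B

# Full translation cycle: reconstructions should match the originals
cycle_loss = l1(rec_a, real_a) + l1(rec_b, real_b)
cycle_loss.backward()               # improves both generators at the same time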
Wasserstein GAN

This paper gives us an exposure to Wasserstein GANs (WGANs) and how they overcome drawbacks of the original GAN. Although GANs have shown dramatic success in realistic image generation, training them is not easy: the process is slow and unstable. In the WGAN paper, it is empirically shown that WGANs cure this training problem. The Wasserstein distance, also known as the Earth Mover's (EM) distance, is a measure of the distance between two probability distributions. The basic idea in WGAN is to replace the loss function so that there always exists a non-zero gradient, which can be done by using the Wasserstein distance between the generator distribution and the data distribution. Training WGANs does not require carefully balancing the training of the discriminator and the generator, nor does it require a careful design of the network architecture. One of the most fascinating practical benefits of WGANs is the ability to continuously estimate the EM distance by training the discriminator (the critic) to optimality. The resulting learning curves are useful for debugging and hyperparameter searches, and they correlate well with the observed sample quality and the improved stability of the optimization process. Thus, Wasserstein GANs are an alternative to traditional GAN training with features such as:
Improved stability of learning
Elimination of problems like mode collapse
Meaningful learning curves useful for debugging and hyperparameter searches

Furthermore, the paper shows that the corresponding optimization problem is sound, and provides extensive theoretical work highlighting deep connections to other distances between distributions. The Wasserstein GAN has been used to train a language translation system under the condition that there is no parallel data between the word embeddings of the two languages; WGANs have been used to perform English-Russian and English-Chinese language mappings.

Limitations of WGANs:
WGANs sometimes suffer from unstable training when a momentum-based optimizer or a high learning rate is used.
They show slow convergence after weight clipping, especially when the clipping window is too large.
They also suffer from the vanishing gradient problem when the clipping window is too small.

To have a detailed understanding of WGANs, have a look at the research paper here.
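Below is a minimal sketch - our own toy example, not the paper's code - of a single WGAN critic update: the critic maximises the gap between its mean scores on real and generated samples, which estimates the EM distance, and its weights are then clipped to keep it approximately Lipschitz. Network sizes and data are placeholders.

import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))
generator = nn.Sequential(nn.Linear(64, 784), nn.Tanh())
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

real = torch.rand(32, 784)                  # stand-in for a batch of real data
fake = generator(torch.randn(32, 64)).detach()

# No log or sigmoid: the loss is a difference of raw critic scores
loss_critic = -(critic(real).mean() - critic(fake).mean())
opt_c.zero_grad()
loss_critic.backward()
opt_c.step()

# Weight clipping keeps the critic approximately 1-Lipschitz
for p in critic.parameters():
    p.data.clamp_(-0.01, 0.01)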
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets

This paper describes InfoGAN, an information-theoretic extension of the generative adversarial network that can learn disentangled representations in a completely unsupervised manner. In traditional GANs, the learned representation is entangled, i.e. encoded in a complex manner within the data space; a disentangled representation is much easier to interpret and to apply to downstream tasks. InfoGAN addresses this entanglement problem.

Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, extracts the poses of objects correctly irrespective of lighting conditions in 3D rendered images, and separates background digits from the central digit on the SVHN dataset. It also discovers visual concepts including hairstyles, the presence or absence of eyeglasses, and emotions on the CelebA face dataset. InfoGAN does not require any kind of supervision. The only other unsupervised method that learns disentangled representations is hossRBM, a higher-order extension of the spike-and-slab restricted Boltzmann machine that disentangles emotion from identity on the Toronto Face Dataset. However, hossRBM can only disentangle discrete latent factors, and its computation cost grows exponentially in the number of factors, whereas InfoGAN can disentangle both discrete and continuous latent factors, scale to complicated datasets, and typically requires no more training time than a regular GAN. In the experiments in the paper, InfoGAN is first compared with prior approaches on relatively clean datasets; another experiment shows that InfoGAN can learn interpretable representations on complex datasets where no previous unsupervised approach is known to learn representations of comparable quality. Thus, InfoGAN is completely unsupervised and learns interpretable, disentangled representations on challenging datasets, while adding only negligible computation cost on top of a GAN and remaining easy to train. The core idea of using mutual information to induce representations could be applied to other methods like VAEs (variational autoencoders) in the future. Other future possibilities for InfoGAN include learning hierarchical latent representations, improving semi-supervised learning with better codes, and using InfoGAN as a high-dimensional data discovery tool. To know more about this research paper in detail, visit the link given here.

Progressive growing of GANs for improved Quality, Stability, and Variation

This paper describes a brand new method for training generative adversarial networks. The basic idea is to train both the generator and the discriminator progressively: starting from a low resolution and adding new layers so that the model produces images with finer and finer detail as training progresses. This method speeds up training and also stabilizes it to a great extent, which in turn produces images of unprecedented quality - for instance, a higher-quality version of the CelebA dataset with output resolutions up to 1024x1024 pixels.

Source: https://arxiv.org/pdf/1710.10196.pdf

When new layers are added to the networks, they fade in smoothly, which helps avoid sudden shocks to the already well-trained, smaller-resolution layers. Progressive training has various other benefits:
The generation of smaller images is substantially more stable because there is less class information and fewer modes.
By increasing the resolution little by little, we are continuously asking a much simpler question than the end goal of discovering a mapping from latent vectors to, e.g., 1024x1024 images.
Progressive growing of GANs also reduces training time: most of the iterations are done at lower resolutions, and comparable result quality is obtained 2-6 times faster, depending on the resolution of the final output.
Thus, progressively training GANs results in better quality, stability, and variation in the generated images, and may even lead to true photorealism in the near future. The paper concludes that there are still certain limitations to this training method, including semantic sensibility and understanding dataset-dependent constraints (such as certain objects being straight rather than curved). This leaves a lot to be desired from GANs, and there is also room for improvement in the micro-structure of the images. To gain a thorough understanding of this research, read the paper here.

François Chollet, creator of Keras on TensorFlow 2.0 and Keras integration, tricky design decisions in Deep Learning, and more

Sugandha Lahoti
10 Dec 2019
6 min read
TensorFlow 2.0 was made available in October. One of the major highlights of this release was the integration of Keras into TensorFlow. Keras is an open-source deep learning library designed to enable fast, user-friendly experimentation with deep neural networks. It serves as an interface to several deep learning libraries, the most popular of which is TensorFlow, and it was integrated into the main TensorFlow codebase in TensorFlow 2.0. In September, Lex Fridman, a research scientist at MIT popularly known for his podcasts, spoke to François Chollet, the creator of Keras, about Keras, deep learning, and the progress of AI. In this post, we have tried to highlight François' views on the Keras and TensorFlow 2.0 integration, the early days of Keras, and the importance of design decisions for building deep learning models. We recommend the full podcast, which is available on Fridman's YouTube channel.

Want to build neural networks? If you want to build multiple neural network architectures such as CNNs, RNNs, and LSTMs in Keras, we recommend you read Neural Networks with Keras Cookbook by V Kishore Ayyadevara. This book features over 70 recipes covering object detection and classification, building self-driving car applications, understanding data encoding for image, text, and recommender systems, and more.

Early days of Keras and how it was integrated into TensorFlow

"I started working on Keras in 2015," says Chollet. At that time Caffe, a C++-based library popular for computer vision projects, was the dominant deep learning library. Chollet was interested in recurrent neural networks (RNNs), which were a niche topic at the time. Back then, there was no good, reusable open-source implementation of RNNs and LSTMs, so he decided to build his own - and that's how Keras started. "It was going to be mostly around RNNs and LSTMs and the models would be defined by Python code, which was going against the mainstream," he adds. Later, he joined Google's research team working on image classification. There, he was exposed to the early internal version of TensorFlow, which felt like an improved version of Theano. When TensorFlow was released in 2015, he refactored Keras to run on it, essentially abstracting away all the backend functionality into one module so that the same codebase could run on top of multiple backends. A year later, the TensorFlow team asked him to integrate the Keras API into TensorFlow more tightly. They built a temporary TensorFlow-only version of Keras that lived in tf.contrib for a while, and it finally moved into TensorFlow core in 2017.

TensorFlow 2.0 gives both usability and flexibility to Keras

Keras has been a very easy-to-use, high-level interface for deep learning. However, it lacked flexibility: the Keras framework was not always the optimal way to do things compared to writing everything from scratch. TensorFlow 2.0 offers Keras both usability and flexibility. You have the usability of the high-level interface, but you also have the flexibility of the lower-level interface - a spectrum of workflows where you can trade usability against flexibility depending on your needs. It is very flexible, easy to debug, and powerful, yet it also integrates seamlessly with higher-level features all the way up to classic Keras workflows. "You have the same framework offering the same set of APIs that enable a spectrum of workflows that are more or less high level and are suitable for profiles ranging from researchers to data scientists and everything in between," says Chollet.
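As a small illustration of that spectrum - our own sketch, not code from the podcast - the same tiny model can be written with the high-level Sequential API or as a subclassed model when more control is needed, and both plug into the same training machinery.

import tensorflow as tf

# High-usability end of the spectrum: the declarative Sequential API
sequential_model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(16,)),
    tf.keras.layers.Dense(1)
])

# Flexible end of the spectrum: subclassing gives full control of call()
class SmallModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(32, activation='relu')
        self.out = tf.keras.layers.Dense(1)

    def call(self, inputs):
        return self.out(self.hidden(inputs))

subclassed_model = SmallModel()

# Both models use the same compile/fit workflow
for model in (sequential_model, subclassed_model):
    model.compile(optimizer='adam', loss='mse')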
Design decisions are especially important while integrating Keras with TensorFlow

"Making design decisions is as important as writing code," claims Chollet. A lot of thought and care goes into these decisions, taking into account the diverse user base of TensorFlow: small-scale production users, large-scale production users, startups, and researchers. Chollet says, "A lot of the time I spend at Google is actually discussing design. This includes writing design docs, participating in design review meetings, etc." Making a design decision is about satisfying a set of constraints, but also trying to do so in the simplest way possible, because that is what can be maintained and extended in the future. You want to design APIs that are modular and hierarchical so that they have as small an API surface as possible, and you want this modular, hierarchical architecture to reflect the way domain experts think about the problem.

On the future of Keras and TensorFlow: what's going to happen in TensorFlow 3.0?

Chollet says that he's really excited about developing even higher-level APIs with Keras. He's also excited about hyperparameter tuning and automated machine learning. He adds, "The future is not just, you know, defining a model, it's more like an automatic model."

Limits of deep learning as function approximators that try to generalize from data

Chollet emphasizes that "Neural networks don't generalize well, humans do." Deep learning models are huge parametric, differentiable models that map an input space to an output space and are trained with gradient descent - they learn a continuous geometric morphing from an input vector space to an output space. Because this is done point by point, a deep neural network can only make sense of points in space that are very close to things it has already seen in the training data; at best it can interpolate between points. That means that in order to train your network you need a dense, almost point-by-point sampling of the input space, which can be very expensive if you're dealing with complex real-world problems like autonomous driving or robotics. In contrast, consider very simple rule-based algorithms: a symbolic rule can apply to a very large set of inputs because it is abstract - it is not obtained by doing a point-by-point mapping. Deep learning is really point-by-point geometric morphing, while abstract rules can generalize much better; the future, Chollet thinks, lies in approaches that combine the two.

Chollet also talks about self-improving artificial general intelligence, concerns about short-term and long-term threats in AI, program synthesis, good tests for intelligence, and more. The full podcast is available on Lex's YouTube channel. If you want to implement neural network architectures in Keras for varied real-world applications, you may go through our book Neural Networks with Keras Cookbook.

TensorFlow.js contributor Kai Sasaki on how TensorFlow.js eases web-based machine learning application development
10 key announcements from Microsoft Ignite 2019 you should know about
What does a data science team look like?
F8 PyTorch announcements: PyTorch 1.1 releases with new AI tools, open sourcing BoTorch and Ax, and more

Bhagyashree R
03 May 2019
4 min read
Despite Facebook's frequent appearances in the news for all the wrong reasons, we cannot deny that its open source contributions to AI have been one of its redeeming qualities. At its F8 annual developer conference, showcasing its AI prowess, Facebook shared how the production-ready PyTorch 1.0 is being adopted by the community and also announced the release of PyTorch 1.1.
Facebook introduced PyTorch in 2017, and since then it has been well received by developers. Facebook partnered with the AI community for further development and released the stable version last December. Along with optimizing and fixing other parts of PyTorch, the team introduced a just-in-time (JIT) compiler for production support, allowing seamless transitions between eager mode and graph mode.
PyTorch 1.0 in leading businesses, communities, and universities
Facebook is leveraging end-to-end workflows of PyTorch 1.0 for building and deploying translation and NLP at large scale. These NLP systems deliver a staggering 6 billion translations for applications such as Messenger. PyTorch has also enabled Facebook to iterate quickly on its ML systems and has helped accelerate its research-to-production cycle.
Other leading organizations and businesses are also using PyTorch to speed up the development of AI features. Airbnb's Smart Reply feature is backed by PyTorch libraries and APIs for conversational AI. ATOM (Accelerating Therapeutics for Opportunities in Medicine) has come up with a variational autoencoder that represents diverse chemical structures and designs new drug candidates. Microsoft has built large-scale distributed language models that are now in production in offerings such as Cognitive Services.
PyTorch 1.1 releases with new model understanding and visualization tools
Along with showcasing how the production-ready version is being adopted by the community, the PyTorch team announced the release of PyTorch 1.1. This release focuses on improved performance and brings new model understanding and visualization tools for better usability. Some of the key features in PyTorch 1.1 are:
Support for TensorBoard: TensorBoard, a suite of visualization tools, is now natively supported in PyTorch via the "from torch.utils.tensorboard import SummaryWriter" import (a minimal usage sketch follows this section).
Improved JIT compiler: Along with bug fixes, the team has expanded TorchScript capabilities, such as support for dictionaries, user classes, and attributes.
New APIs: New APIs are introduced to support Boolean tensors and custom recurrent neural networks.
Distributed training: This release comes with improved performance for common models such as CNNs, adds support for multi-device modules, and allows splitting models across GPUs while still using DistributedDataParallel.
Ax, BoTorch, and more: open source tools for machine learning engineers
Facebook announced that it is open sourcing two new tools, Ax and BoTorch, aimed at solving large-scale exploration problems in both research and production environments. Built on top of PyTorch, BoTorch leverages features such as auto-differentiation, massive parallelism, and deep learning to support research in Bayesian optimization. Ax is a general-purpose ML platform for managing adaptive experiments. Both Ax and BoTorch use probabilistic models that efficiently use data and meaningfully quantify the costs and benefits of exploring new regions of the problem space.
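Of the PyTorch 1.1 features listed above, native TensorBoard support is the easiest one to try. The following is a minimal, hedged sketch of logging a scalar loss with SummaryWriter; the model, data, and tag names are placeholders rather than anything from the announcement.

import torch
import torch.nn as nn
from torch.utils.tensorboard import SummaryWriter  # new in PyTorch 1.1

# Placeholder model and data, just to have something to log.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
writer = SummaryWriter(log_dir="runs/pytorch11_demo")

for step in range(100):
    x = torch.randn(32, 10)
    y = torch.randn(32, 1)
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    writer.add_scalar("train/loss", loss.item(), step)  # appears in TensorBoard

writer.close()
# Then launch TensorBoard with: tensorboard --logdir runs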
Facebook has also open sourced PyTorch-BigGraph (PBG), a tool that makes it easier and faster to produce graph embeddings for extremely large graphs with billions of entities and trillions of edges. PBG comes with support for sharding and negative sampling and also offers sample use cases based on Wikidata embeddings. As a result of Facebook's collaboration with Google, AI Platform Notebooks, a new hosted JupyterLab service from Google Cloud Platform, now comes with PyTorch preinstalled. It also integrates with other GCP services such as BigQuery, Cloud Dataproc, Cloud Dataflow, and AI Factory.
The broader PyTorch community has also come up with some impressive open source tools. BigGAN-PyTorch is a full PyTorch reimplementation of BigGAN that uses gradient accumulation to provide the benefits of big batches while using only a few GPUs. GeomLoss is a Python API that defines PyTorch layers for geometric loss functions between sampled measures, images, and volumes; it provides efficient GPU implementations of kernel norms, Hausdorff divergences, and unbiased Sinkhorn divergences. PyTorch Geometric is a geometric deep learning extension library for PyTorch consisting of various methods for deep learning on graphs and other irregular structures.
Read the official announcement on Facebook's AI blog.
Facebook open-sources F14 algorithm for faster and memory-efficient hash tables
"Is it actually possible to have a free and fair election ever again?," Pulitzer finalist, Carole Cadwalladr on Facebook's role in Brexit
F8 Developer Conference Highlights: Redesigned FB5 app, Messenger update, new Oculus Quest and Rift S, Instagram shops, and more

Teaching GANs a few tricks: a bird is a bird is a bird, robots holding on to things and bots imitating human behavior

Savia Lobo
11 Dec 2019
7 min read
Generative adversarial networks (GANs) have been at the forefront of research on generative models over the last couple of years. GANs have been used for image generation, image processing, image synthesis from captions, image editing, visual domain adaptation, data generation for visual recognition, and many other applications, often leading to state-of-the-art results. One of the tutorials, titled 'Generative Adversarial Networks' and conducted at CVPR 2018 (the Conference on Computer Vision and Pattern Recognition, held in Salt Lake City, USA), provides a broad overview of generative adversarial networks and how GANs can be trained for different purposes. The tutorial featured various speakers sharing basic concepts and best practices for current state-of-the-art GANs, including network architectures, objective functions, training tricks, and much more. Let us look at how GANs are trained for different use cases.
There's more to GANs…
If you want to explore further examples of modern GAN implementations, including CycleGAN, simGAN, DCGAN, and 2D image to 3D model generation, you can explore the book Generative Adversarial Networks Cookbook written by Josh Kalin. The recipes in this cookbook will help you build on a common architecture in Python, TensorFlow and Keras to explore increasingly difficult GAN architectures in an easy-to-read format.
Training GANs for object detection using adversarial learning
Xiaolong Wang from Carnegie Mellon University talked about object detection in computer vision, as well as in the context of robots taking actions, and explained how to use adversarial learning for tasks beyond image generation. The key idea is to find adversarial tasks for your target task, so that the target model improves by fighting against them.
In computer vision, if your target task is to recognize a bird with an object detector, one adversarial task is adding occlusions, for example generating a mask that occludes the bird's head and legs, which makes the bird difficult for the detector to recognize. The detector then tries to overcome these harder examples and, in the process, becomes robust to occlusions. Another adversarial task for object detection is deformation: the image can be slightly rotated to make detection harder. For training robots to grasp objects, one adversary is a shaking test: if the robot arm's grasp is stable enough, the object should not fall even under a vigorous shake. Another example is snatching: if another arm can snatch the object easily, the grasp is not yet robust to snatching or stealing.
Wang said the CMU research team first tried generating images using a DCGAN on the COCO dataset. However, the generated images did not help train the detector, as the detector could easily recognize them as fake. Next, the team generated images using conditional GANs on COCO, but these did not help either. Hence, the team instead generated hard positive examples by adding real-world occlusions and deformations to challenge the detectors.
He then talked about a standard Fast R-CNN detector, which takes an image as input to a convolutional neural network. The detector extracts features for the whole image, and these features are then cropped according to each proposal bounding box. The cropped features are resized to C x 6 x 6, where 6 x 6 are the spatial dimensions (a simplified sketch of this cropping step follows).
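Here is a minimal, hedged PyTorch sketch (not the speaker's code) of that cropping step: slicing the backbone feature map with a proposal box and pooling the crop to a fixed C x 6 x 6 grid. The tensor sizes and box coordinates are placeholders, and real detectors use RoI Align with proper interpolation rather than a plain slice.

import torch
import torch.nn.functional as F

C, H, W = 256, 38, 50                    # placeholder backbone feature map size
features = torch.randn(1, C, H, W)       # features extracted for the whole image

# A proposal box already mapped into feature-map coordinates: x1, y1, x2, y2.
x1, y1, x2, y2 = 10, 5, 30, 20
roi = features[:, :, y1:y2, x1:x2]       # crop the features for this proposal

# Resize the crop to fixed 6x6 spatial dimensions (simplified RoI pooling).
roi_feats = F.adaptive_max_pool2d(roi, output_size=(6, 6))
print(roi_feats.shape)                   # torch.Size([1, 256, 6, 6])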
These cropped features are the object features you focus on, and they are also used to perform classification and regression for detection. The team added a small network in the middle that takes the extracted features as input and generates a mask. The mask indicates which spatial locations to drop, removing parts of the features in a way that makes it hard for the detector to recognize the object. Wang also shared benchmark results using different networks such as AlexNet, VGG16, and FRCN; the ASTN and ASDN models showed improved results over the baselines.
Understanding Generative Adversarial Imitation Learning (GAIL) for training a machine to imitate human behaviours
Stefano Ermon from Stanford University explained how to use generative modeling ideas and GAN training to imitate human behaviours in complex environments. A lot of progress has been made in reinforcement learning, with successes in board games such as chess, video games, and so on. However, reinforcement learning has one limitation: to solve a new task, you have to specify a cost or reward signal that provides supervision to the learning algorithm, and you need to specify which behaviours are desirable and which are not. In a game, the cost signal is simply whether you win or lose. In more complex tasks such as driving an autonomous vehicle, specifying a cost signal becomes difficult because there are many competing objectives: staying on the road, respecting the speed limit, avoiding crashes, and much more.
The simplest approach is behavioural cloning, where you use expert trajectories and demonstrations to construct a training set of states paired with the action the expert took in each state. You can then apply your favourite supervised learning method: classification for discrete actions, or regression if the actions are continuous (a minimal sketch of this setup appears after this section). However, behavioural cloning has limitations. Small errors compound over time: the learned policy makes some mistakes early on, and those mistakes lead it into states it has never seen before. It is also a black-box approach in which each decision is made reactively, without explicit planning. Ermon suggests that an alternative is an inverse RL (IRL) approach, and he also demonstrates the similarities between RL and IRL. For the complete demonstration, you can check out the video.
The main difference between GAIL and GANs is that in a GAN the generator takes random noise as input and maps it through a neural network to produce samples for the discriminator. In GAIL, the generator is more complex: it includes two components, a policy that you can train and an environment (a black-box simulator) that cannot be controlled. What matters is the distribution over states and actions that you encounter when you navigate the environment using the tunable policy. Because the environment is difficult to control, training a GAIL model is harder than training a simple GAN: it is challenging to push the policy in the direction of fooling the discriminator. In another sense, GAIL is the easier generative modelling task, because you do not have to learn the whole thing end to end or build a large neural network that maps noise into behaviours; part of the work is done by the environment. But it is harder to train because you do not really know how the black box works.
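As a companion to the behavioural cloning baseline described above, here is a minimal, hedged PyTorch sketch (not Ermon's code) of that supervised setup: a small policy network regressing expert actions from states. The state and action dimensions and the demonstration data are placeholders.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

STATE_DIM, ACTION_DIM = 8, 2           # placeholder dimensions

# Placeholder expert demonstrations: (state, action) pairs.
states = torch.randn(5000, STATE_DIM)
expert_actions = torch.randn(5000, ACTION_DIM)
loader = DataLoader(TensorDataset(states, expert_actions), batch_size=64, shuffle=True)

# A small policy network mapping states to continuous actions.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                 # regression, since the actions are continuous

for epoch in range(10):
    for s, a in loader:
        loss = loss_fn(policy(s), a)   # imitate the expert action in each state
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

At test time, small prediction errors made by such a policy compound and push it into unfamiliar states, which is exactly the limitation that motivates the IRL and GAIL approaches discussed in the talk.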
Ermon further explains how, using Generative Adversarial Imitation Learning, one can not only imitate complex behaviours but also learn interpretable and meaningful representations of complex behavioural data, including visual demonstrations, with a method named InfoGAIL, which is built on top of GAIL. He also explained a new framework for multi-agent imitation learning in general Markov games, which integrates multi-agent RL with a suitable extension of multi-agent inverse RL. This framework generalizes Generative Adversarial Imitation Learning (GAIL), which covers the single-agent case, and successfully imitates complex behaviours in high-dimensional environments with multiple cooperative or competing agents. To see further demonstrations of GAIL, InfoGAIL, and multi-agent GAIL, watch the complete video on YouTube.
Knowing the basics isn't enough; putting them into practice is necessary. If you want to use GANs practically and experiment with them, Generative Adversarial Networks Cookbook by Josh Kalin is your go-to guide. With this cookbook, you will work with use cases involving DCGAN, Pix2Pix, and so on. To understand these complex applications, you will take different real-world datasets and put them to use.
Prof. Rowel Atienza discusses the intuition behind deep learning, advances in GANs & techniques to create cutting edge AI models
Now there is a Deepfake that can animate your face with just your voice and a picture using temporal GANs
Now there's a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?