How-To Tutorials - Data

1204 Articles

ICLR 2019 Highlights: Algorithmic fairness, AI for social good, climate change, protein structures, GAN magic, adversarial ML and much more

Amrata Joshi
09 May 2019
7 min read
The ongoing ICLR 2019 (International Conference on Learning Representations) has brought a pack full of surprises and key innovations. The conference started on Monday this week, and today is already the last day! This article covers the highlights of ICLR 2019 and introduces you to the ongoing research carried out by experts in the fields of deep learning, data science, computational biology, machine vision, speech recognition, text understanding, robotics and much more.

The team behind ICLR 2019 invited papers on topics such as unsupervised objectives for agents, curiosity and intrinsic motivation, few-shot reinforcement learning, model-based planning and exploration, representation learning for planning, learning unsupervised goal spaces, unsupervised skill discovery, and evaluation of unsupervised agents.

https://twitter.com/alfcnz/status/1125399067490684928

ICLR 2019, sponsored by Google, brings together 200 researchers contributing to and learning from the academic research community by presenting papers and posters.

ICLR 2019 Day 1 highlights: Neural networks, algorithmic fairness, AI for social good and much more

Algorithmic fairness

https://twitter.com/HanieSedghi/status/1125401294880083968

The first day of the conference started with a talk on Highlights of Recent Developments in Algorithmic Fairness by Cynthia Dwork, an American computer scientist at Harvard University. She focused on "group fairness" notions that address the relative treatment of different demographic groups, and spoke about research in the ML community that explores fairness via representations. Dwork also discussed the investigation of scoring, classifying, ranking, and auditing for fairness.

Generating high fidelity images with Subscale Pixel Networks and Multidimensional Upscaling

https://twitter.com/NalKalchbrenner/status/1125455415553208321

Jacob Menick, a senior research engineer at Google DeepMind, and Nal Kalchbrenner, staff research scientist and co-creator of the Google Brain Amsterdam research lab, talked about Generating High Fidelity Images with Subscale Pixel Networks and Multidimensional Upscaling. They discussed the challenges involved in generating large images and how they address them with the Subscale Pixel Network (SPN), a conditional decoder architecture that generates an image as a sequence of image slices of equal size. They also explained how Multidimensional Upscaling is used to grow an image in both size and depth via intermediate stages corresponding to distinct SPNs.

In all, 10 workshops on AI and deep learning were conducted on the same day, covering topics such as The 2nd Learning from Limited Labeled Data (LLD) Workshop: Representation Learning for Weak Supervision and Beyond, Deep Reinforcement Learning Meets Structured Prediction, AI for Social Good, and Debugging Machine Learning Models. The first day also featured a few interesting talks on neural networks, covering topics such as The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks and How Powerful are Graph Neural Networks?. Overall, the first day was quite enriching and informative.

ICLR 2019 Day 2 highlights: AI in climate change, protein structures, adversarial machine learning, CNN models and much more

AI's role in climate change

https://twitter.com/natanielruizg/status/1125763990158807040

Tuesday, the second day of the conference, started with an interesting talk on Can Machine Learning Help to Conduct a Planetary Healthcheck? by Emily Shuckburgh, a climate scientist and deputy head of the Polar Oceans team at the British Antarctic Survey. She talked about the sophisticated numerical models of the Earth's systems which have been developed so far based on physics, chemistry and biology. She then highlighted a set of "grand challenge" problems and discussed various ways in which machine learning is helping to advance our capacity to address these.

Protein structure with a differentiable simulator

On the second day of ICLR 2019, computational biologist Chris Sander, along with John Ingraham, Adam J Riesselman, and Debora Marks from Harvard University, talked about Learning Protein Structure with a Differentiable Simulator. They discussed the protein folding problem and their aim to bridge the gap between the expressive capacity of energy functions and the practical capabilities of their simulators by using an unrolled Monte Carlo simulation as a model for data. They also composed a neural energy function with a novel and efficient simulator based on Langevin dynamics to build an end-to-end differentiable model of atomic protein structure given amino acid sequence information. They further discussed techniques for stabilizing backpropagation and demonstrated the model's capacity to make multimodal predictions.

Adversarial Machine Learning

https://twitter.com/natanielruizg/status/1125859734744117249

Day 2 was long and featured Ian Goodfellow, a machine learning researcher and inventor of GANs, talking about adversarial machine learning. He talked about how supervised learning works, making machine learning private, getting machine learning to work for new tasks, and reducing the dependency on large amounts of labeled data. He then discussed how adversarial techniques in machine learning are involved in the latest research frontiers. Day 2 also covered poster presentations and a few talks on Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset, Learning to Remember More with Less Memorization, and more.

ICLR 2019 Day 3 highlights: GANs, autonomous learning and much more

Developmental autonomous learning: AI, Cognitive Sciences and Educational Technology

https://twitter.com/drew_jaegle/status/1125522499150721025

Day 3 of ICLR 2019 started with a talk by Pierre-Yves Oudeyer, research director at Inria, on Developmental Autonomous Learning: AI, Cognitive Sciences and Educational Technology. He presented a research program that focuses on computational modeling of child development and learning mechanisms. He then discussed the several developmental forces that guide exploration in large real-world spaces. He also talked about models of curiosity-driven autonomous learning that enable machines to sample and explore their own goals and learning strategies. He then explained how these models and techniques can be successfully applied in the domain of educational technologies.

Generating knockoffs for feature selection using Generative Adversarial Networks (GAN)

Another interesting topic on the third day of ICLR 2019 was Generating Knockoffs for Feature Selection using Generative Adversarial Networks (GAN) by James Jordon from Oxford University, Jinsung Yoon from the University of California, and Mihaela van der Schaar, professor at UCLA. The experts talked about the Generative Adversarial Networks framework that helps in generating knockoffs with no assumptions on the feature distribution. They also talked about the model they created, which consists of four networks: a generator, a discriminator, a stability network and a power network. They further demonstrated the capability of their model to perform feature selection. This was followed by a few more interesting talks, such as Deterministic Variational Inference for Robust Bayesian Neural Networks, and a series of poster presentations.

ICLR 2019 Day 4 highlights: Neural networks, RNNs, neuro-symbolic concepts and much more

Learning natural language interfaces with neural models

Today's focus was more on neural models and neuro-symbolic concepts. The day started with a talk on Learning Natural Language Interfaces with Neural Models by Mirella Lapata, a computer scientist. She gave an overview of recent progress on learning natural language interfaces which allow users to interact with various devices and services using everyday language. She also addressed the structured prediction problem of mapping natural language utterances onto machine-interpretable representations, outlined the various challenges it poses, and described a general modeling framework based on neural networks which tackles these challenges.

Ordered neurons: Integrating tree structures into Recurrent Neural Networks

https://twitter.com/mmjb86/status/1126272417444311041

The next interesting talk was on Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks by Yikang Shen, Aaron Courville and Shawn Tan from the University of Montreal, and Alessandro Sordoni, a researcher at Microsoft. In this talk, the experts presented a new RNN unit, ON-LSTM, which achieves good performance on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference.

The last day of ICLR 2019 gave researchers a chance to present their innovations and attendees a chance to interact with the experts. For a complete overview of each of these sessions, you can head over to ICLR's Facebook page.

Paper in Two minutes: A novel method for resource efficient image classification
Google I/O 2019 D1 highlights: smarter display, search feature with AR capabilities, Android Q, linguistically advanced Google lens and more
Google I/O 2019: Flutter UI framework now extended for Web, Embedded, and Desktop


Why DeepMind AlphaGo Zero is a game changer for AI research

Guest Contributor
09 May 2019
10 min read
DeepMind, a London-based artificial intelligence (AI) company currently owned by Alphabet, recently made great strides in AI with its AlphaGo program. It all began in October 2015, when the program beat the European Go champion Fan Hui 5-0 in the game of Go. This was the very first time an AI defeated a professional Go player; earlier, computers were only known to have played Go at the "amateur" level. Then, the company made headlines again in 2016 after its AlphaGo program beat Lee Sedol, a professional Go player and world champion, with a score of 4-1 in a five-game match.

Furthermore, in late 2017, an improved version of the program called AlphaGo Zero defeated AlphaGo 100 games to 0. The best part? AlphaGo Zero's strategies were self-taught, i.e. it was trained without any data from human games. AlphaGo Zero was able to defeat its predecessor after only three days of training, with less processing power than AlphaGo; the original AlphaGo, by contrast, required months to learn how to play. All these facts raise the questions: what makes AlphaGo Zero so exceptional? Why is it such a big deal? How does it even work? So, without further ado, let's dive into the what, why, and how of DeepMind's AlphaGo Zero.

What is DeepMind AlphaGo Zero?

Simply put, AlphaGo Zero is the strongest Go program in the world (with the exception of AlphaZero). As mentioned before, it monumentally outperforms all previous versions of AlphaGo. Just check out the graph below, which compares the Elo ratings of the different versions of AlphaGo.

Source: DeepMind

The Elo rating system is a method for calculating the relative skill levels of players in zero-sum games such as chess and Go. It is named after its creator Arpad Elo, a Hungarian-American physics professor.

Now, all previous versions of AlphaGo were trained using human data: they learned and improved upon the moves played by human experts and professional Go players. But AlphaGo Zero didn't use any human data whatsoever. Instead, it had to learn completely from playing against itself. According to DeepMind's Professor David Silver, the reason that playing against itself enables it to do so much better than using strong human data is that AlphaGo always has an opponent of just the right level. It starts off extremely naive, with perfectly random play, and yet at every step of the learning process it has an opponent (a "sparring partner") that's exactly calibrated to its current level of performance. To begin with, these players are terribly weak, but over time they become progressively stronger and stronger.

Why is reinforcement learning such a big deal?

People tend to assume that machine learning is all about big data and massive amounts of computation. But actually, with AlphaGo Zero, AI scientists at DeepMind realized that algorithms matter much more than computing power or data availability. AlphaGo Zero required less computation than previous versions and yet was able to perform at a much higher level due to using much more principled algorithms than before. It is a system trained completely from scratch, starting from random behavior and progressing from first principles to discover the game of Go tabula rasa. It is, therefore, no longer constrained by the limits of human knowledge. Note that AlphaGo Zero did not use zero-shot learning, which is essentially the ability of a machine to solve a task despite not having received any training for that task.

How does it work?
AlphaGo Zero is able to achieve all this by employing a novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher. As explained previously, the system starts off with a single neural network that knows absolutely nothing about the game of Go. By combining this neural network with a powerful search algorithm, it then plays games against itself. As it plays more and more games, the neural network is updated and tuned to predict moves, and even the eventual winner of the games.

This revised neural network is then recombined with the search algorithm to generate a new, stronger version of AlphaGo Zero, and the process repeats. With each iteration, the performance of the system improves and the quality of the self-play games advances, leading to increasingly accurate neural networks and ever more powerful versions of AlphaGo Zero.

Now, let's dive into some of the technical details that make this version of AlphaGo so much better than all its forerunners. AlphaGo Zero's neural network was trained using TensorFlow, with 64 GPU workers and 19 CPU parameter servers. Only four Tensor Processing Units (TPUs) were used for inference. And of course, the neural network initially knew nothing about Go beyond the rules.

Both AlphaGo and AlphaGo Zero took a general approach to playing Go. Both evaluated the Go board and chose moves using a combination of two methods:

Conducting a "lookahead" search: looking ahead several moves by simulating games, and hence seeing which current move is most likely to lead to a "good" position in the future.
Assessing positions based on an "intuition" of whether a position is "good" or "bad" and likely to result in a win or a loss.

Go is a truly intricate game, which means computers can't merely search all possible moves using a brute-force approach to discover the best one.

Method 1: Lookahead

Before AlphaGo, all the finest Go programs tackled this issue by using "Monte Carlo Tree Search", or MCTS. This process involves initially exploring numerous possible moves on the board and then focusing this search over time as certain moves are found to be more likely to result in wins than others.

Source: LOC

Both AlphaGo and AlphaGo Zero apply a fairly elementary version of MCTS for their "lookahead", to correctly maintain the tradeoff between exploring new sequences of moves and more deeply exploring already-explored sequences. Although MCTS has been at the heart of all effective Go programs preceding AlphaGo, it was DeepMind's smart coalescence of this method with a neural network-based "intuition" that enabled it to attain superhuman performance.

Method 2: Intuition

DeepMind's pivotal innovation with AlphaGo was to utilize deep neural networks to identify the state of the game and then use this knowledge to effectively guide the search of the MCTS. In particular, they trained networks that could take in:

The current board position
Which player was playing
The sequence of recent moves (in order to rule out certain moves as "illegal")

With this data, the neural networks could propose:

Which move should be played
Whether the current player is likely to win or not

So how did DeepMind train neural networks to do this? Well, AlphaGo and AlphaGo Zero used rather different approaches in this case. AlphaGo had two separately trained neural networks: a policy network and a value network.
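To make these ideas a little more concrete, here is a heavily simplified Python sketch of a "two-headed" policy/value network and a self-play training loop of the kind described above, assuming PyTorch as the framework. It is an illustration only, not DeepMind's actual code: the tiny convolutional trunk stands in for the residual blocks discussed further below, the three-plane board encoding is an arbitrary simplification, and play_game is a hypothetical stand-in for the full MCTS self-play machinery.

import torch
import torch.nn as nn

class PolicyValueNet(nn.Module):
    """A tiny 'two-headed' network: a shared trunk feeding a policy head and a value head."""
    def __init__(self, board_size=19, channels=32):
        super().__init__()
        self.trunk = nn.Sequential(  # stand-in for the deep residual trunk described below
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Head 1: scores over board positions plus a "pass" move
        self.policy_head = nn.Sequential(
            nn.Flatten(), nn.Linear(channels * board_size ** 2, board_size ** 2 + 1),
        )
        # Head 2: a single scalar in [-1, 1] estimating who wins from this position
        self.value_head = nn.Sequential(
            nn.Flatten(), nn.Linear(channels * board_size ** 2, 1), nn.Tanh(),
        )

    def forward(self, board):
        features = self.trunk(board)
        return self.policy_head(features), self.value_head(features)

def play_game(net, board_size=19, moves=10):
    """Hypothetical stand-in for an MCTS self-play game; returns random
    (board, search_policy, outcome) examples just so the sketch runs end to end."""
    examples = []
    for _ in range(moves):
        board = torch.randn(3, board_size, board_size)
        search_policy = torch.softmax(torch.randn(board_size ** 2 + 1), dim=0)
        outcome = torch.tensor(1.0)  # +1 win / -1 loss from the current player's view
        examples.append((board, search_policy, outcome))
    return examples

def train_by_self_play(net, optimizer, iterations=3, games_per_iteration=2):
    """Self-play loop: generate games with the current net, then nudge the policy head
    toward the search results and the value head toward the actual game outcomes."""
    for _ in range(iterations):
        examples = []
        for _ in range(games_per_iteration):
            examples += play_game(net)
        for board, search_policy, outcome in examples:
            policy_logits, value = net(board.unsqueeze(0))
            policy_loss = -(search_policy * torch.log_softmax(policy_logits, dim=1)).sum()
            value_loss = (value.squeeze() - outcome) ** 2
            loss = policy_loss + value_loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return net

net = PolicyValueNet()
train_by_self_play(net, torch.optim.SGD(net.parameters(), lr=0.01))

The real system differs in many ways (a much deeper residual trunk, richer board features, a full MCTS inside the loop, and training at massive scale), but the shape of the loop, self-play data in, policy and value targets out, is the part the article describes.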
Source: AlphaGo's Nature Paper

DeepMind then fused these two neural networks with MCTS, that is, the program's "intuition" with its brute-force "lookahead" search, in an ingenious way. It used the networks that had been trained to predict:

Moves, to guide which branches of the game tree to search
Whether a position was "winning", to assess the positions it encountered during its search

This let AlphaGo intelligently search imminent moves and eventually beat the world champion Lee Sedol.

AlphaGo Zero, however, took this principle to the next level. Its neural network's "intuition" was trained entirely differently from that of AlphaGo. More specifically:

The neural network was trained to play moves that reflected the improved evaluations from performing the "lookahead" search
The neural network was tweaked so that it was more likely to play moves like those that led to wins, and less likely to play moves similar to those that led to losses, during the self-play games

Much was made of the fact that no games between humans were used to train AlphaGo Zero. Thus, for a given state of a Go agent, it can constantly be made smarter by performing MCTS-based lookahead and using the results of that lookahead to upgrade the agent. This is how AlphaGo Zero was able to perpetually improve, from when it was an "amateur" all the way up to when it was better than the best human players.

Moreover, AlphaGo Zero's neural network architecture can be referred to as a "two-headed" architecture.

Source: Hacker Noon

Its first 20 layers were "blocks" of the kind typically seen in modern neural net architectures. These layers were followed by two "heads":

One head that took the output of the first 20 layers and produced probabilities of the Go agent making certain moves
Another head that took the output of the first 20 layers and produced a probability of the current player winning

What's more, AlphaGo Zero used a more "state of the art" neural network architecture than AlphaGo. In particular, it used a "residual" neural network architecture rather than a plainly "convolutional" architecture. Deep residual learning was pioneered by Microsoft Research in late 2015, right around the time work on the first version of AlphaGo would have been concluded, so it is quite reasonable that DeepMind did not use it in the initial AlphaGo program. Notably, each of these two neural network-related changes (switching from the plain convolutional to the more advanced residual architecture, and using the "two-headed" architecture instead of separate neural networks) would individually have yielded nearly half of the increase in playing strength that was realized when both were combined.

Source: AlphaGo's Nature Paper

Wrapping it up

According to DeepMind: "After just three days of self-play training, AlphaGo Zero emphatically defeated the previously published version of AlphaGo - which had itself defeated 18-time world champion Lee Sedol - by 100 games to 0. After 40 days of self-training, AlphaGo Zero became even stronger, outperforming the version of AlphaGo known as "Master", which has defeated the world's best players and world number one Ke Jie. Over the course of millions of AlphaGo vs AlphaGo games, the system progressively learned the game of Go from scratch, accumulating thousands of years of human knowledge during a period of just a few days. AlphaGo Zero also discovered new knowledge, developing unconventional strategies and creative new moves that echoed and surpassed the novel techniques it played in the games against Lee Sedol and Ke Jie."

Further, the founder and CEO of DeepMind, Dr. Demis Hassabis, believes AlphaGo's algorithms are likely to be most beneficial in areas that require an intelligent search through an immense space of possibilities.

Author Bio

Gaurav is a Senior SEO and Content Marketing Analyst at The 20 Media, a content marketing agency that specializes in data-driven SEO. He has more than seven years of experience in digital marketing and loves to read and write about AI, machine learning, data science and other emerging technologies. In his spare time, he enjoys watching movies and listening to music. Connect with him on Twitter and LinkedIn.

DeepMind researchers provide theoretical analysis on recommender system, 'echo chamber' and 'filter bubble effect'
What if AIs could collaborate using human-like values? DeepMind researchers propose a Hanabi platform.
Google DeepMind's AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers


Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity

Sugandha Lahoti
07 May 2019
10 min read
At the ongoing Microsoft Build 2019 conference, Microsoft has announced a ton of new features and tool releases with a focus on innovation using AI and mixed reality with the intelligent cloud and the intelligent edge. In his opening keynote, Microsoft CEO Satya Nadella outlined the company's vision and developer opportunity across Microsoft Azure, Microsoft Dynamics 365 and IoT Platform, Microsoft 365, and Microsoft Gaming.

"As computing becomes embedded in every aspect of our lives, the choices developers make will define the world we live in," said Satya Nadella, CEO, Microsoft. "Microsoft is committed to providing developers with trusted tools and platforms spanning every layer of the modern technology stack to build magical experiences that create new opportunity for everyone."

https://youtu.be/rIJRFHDr1QE

Increasing developer productivity in the Microsoft 365 platform

Microsoft Graph data connect

Microsoft Graph now gains data connect, a service that combines analytics data from the Microsoft Graph with customers' business data. Microsoft Graph data connect will provide Office 365 data and Microsoft Azure resources to users via a toolset. The migration pipelines are deployed and managed through Azure Data Factory. Microsoft Graph data connect can be used to create new apps shared within enterprises or externally in the Microsoft Azure Marketplace. It is generally available as a feature in Workplace Analytics and also as a standalone SKU for ISVs. More information here.

Microsoft Search

Microsoft Search works as a unified search experience across all Microsoft apps: Office, Outlook, SharePoint, OneDrive, Bing and Windows. It applies AI technology from Bing and deep personalized insights surfaced by the Microsoft Graph to personalize searches. Other features included in Microsoft Search are:

Search box displacement
Zero query typing and a key-phrase suggestion feature
A query history feature, and personal search query history
Administrator access to the history of popular searches for their organizations, but not to search history for individual users
Files/people/site/bookmark suggestions

Microsoft Search will begin publicly rolling out to all Microsoft 365 and Office 365 commercial subscriptions worldwide at the end of May. Read more on MS Search here.

Fluid Framework

As the name suggests, Microsoft's newly launched Fluid Framework allows seamless editing and collaboration between different applications. Essentially, it is a web-based platform and componentized document model that allows users to, for example, edit a document in an application like Word and then share a table from that document in Microsoft Teams (or even a third-party application) with real-time syncing. Microsoft says Fluid can translate text, fetch content, suggest edits, perform compliance checks, and more. The company will launch the software developer kit and the first experiences powered by the Fluid Framework later this year on Microsoft Word, Teams, and Outlook. Read more about the Fluid Framework here.

Microsoft Edge new features

Microsoft Build 2019 paved the way for a bundle of new features in Microsoft's flagship web browser, Microsoft Edge. New features include:

Internet Explorer mode: This mode integrates Internet Explorer directly into the new Microsoft Edge via a new tab. This allows businesses to run legacy Internet Explorer-based apps in a modern browser.
Privacy Tools: Additional privacy controls which allow customers to choose from three levels of privacy in Microsoft Edge: Unrestricted, Balanced, and Strict. These options limit how third parties can track users across the web. "Unrestricted" allows all third-party trackers to work in the browser, "Balanced" prevents third-party trackers from sites the user has not visited before, and "Strict" blocks all third-party trackers.
Collections: Collections allows users to collect, organize, share and export content more efficiently and with Office integration.

Microsoft is also migrating Edge as a whole over to Chromium. This will make Edge easier for third parties to develop for. For more details, visit Microsoft's developer blog.

New toolkit enhancements in the Microsoft 365 Platform

Windows Terminal

Windows Terminal is Microsoft's new application for Windows command-line users. Top features include:

A user interface with emoji-rich fonts and graphics-processing-unit-accelerated text rendering
Multiple tab support plus theming and customization features
A powerful command-line user experience for users of PowerShell, Cmd, Windows Subsystem for Linux (WSL) and all forms of command-line application

Windows Terminal will arrive in mid-June and will be delivered via the Microsoft Store in Windows 10. Read more here.

React Native for Windows

Microsoft announced a new open-source project for React Native developers at Microsoft Build 2019. Developers who prefer to use the React/web ecosystem to write user-experience components can now leverage those skills and components on Windows by using the "React Native for Windows" implementation. React Native for Windows is under the MIT License and will allow developers to target any Windows 10 device, including PCs, tablets, Xbox, mixed reality devices and more. The project is being developed on GitHub and is available for developers to test. More mature releases will follow soon.

Windows Subsystem for Linux 2

Microsoft rolled out a new architecture for the Windows Subsystem for Linux, WSL 2, at Microsoft Build 2019. Microsoft will also be shipping a fully open-source Linux kernel with Windows, specially tuned for WSL 2. New features include massive file system performance increases (twice as much speed for file-system-heavy operations, such as Node Package Manager installs). WSL 2 also supports running Linux Docker containers. The next generation of WSL arrives for Insiders in mid-June. More information here.

New releases in multiple developer tools

.NET 5 arrives in 2020

.NET 5 is the next major version of the .NET platform, which will be available in 2020. .NET 5 will have all .NET Core features as well as more additions:

One Base Class Library containing APIs for building any type of application
More choice on runtime experiences
Java interoperability available on all platforms; Objective-C and Swift interoperability supported on multiple operating systems
Both Just-in-Time (JIT) and Ahead-of-Time (AOT) compilation models to support multiple compute and device scenarios
One unified toolchain supported by new SDK project types, as well as a flexible deployment model (side-by-side and self-contained EXEs)

Detailed information here.

ML.NET 1.0

ML.NET is Microsoft's open-source and cross-platform framework that runs on Windows, Linux, and macOS and makes machine learning accessible to .NET developers. Its new version, ML.NET 1.0, was released at the Microsoft Build Conference 2019 yesterday. Some new features in this release are:

Automated Machine Learning Preview: Transforms input data by selecting the best performing ML algorithm with the right settings. AutoML support in ML.NET is in preview and currently supports Regression and Classification ML tasks.
ML.NET Model Builder Preview: Model Builder is a simple UI tool for developers which uses AutoML to build ML models. It also generates model training and model consumption code for the best performing model.
ML.NET CLI Preview: ML.NET CLI is a dotnet tool which generates ML.NET models using AutoML and ML.NET. The ML.NET CLI quickly iterates through a dataset for a specific ML task and produces the best model.

Visual Studio IntelliCode, Microsoft's tool for AI-assisted coding

Visual Studio IntelliCode, Microsoft's AI-assisted coding tool, is now generally available. It is essentially an enhanced IntelliSense, Microsoft's extremely popular code completion tool. IntelliCode is trained using the code of thousands of open-source projects from GitHub that have at least 100 stars. It is available for C# and XAML in Visual Studio, and for Java, JavaScript, TypeScript, and Python in Visual Studio Code. IntelliCode is also included by default in Visual Studio 2019, starting in version 16.1 Preview 2. Additional capabilities, such as custom models, remain in public preview.

Visual Studio 2019 version 16.1 Preview 2

The Visual Studio 2019 version 16.1 Preview 2 release includes IntelliCode and the GitHub extensions by default. It also brings the Time Travel Debugging feature introduced with version 16.0 out of preview, and includes multiple performance and productivity improvements for .NET and C++ developers.

Gaming and Mixed Reality

Minecraft AR game for mobile devices

At the end of Microsoft's Build 2019 keynote yesterday, Microsoft teased a new Minecraft game in augmented reality, running on a phone. The teaser notes that more information will be coming on May 17th, the 10-year anniversary of Minecraft.

https://www.youtube.com/watch?v=UiX0dVXiGa8

HoloLens 2 Development Edition and Unreal Engine support

The HoloLens 2 Development Edition includes a HoloLens 2 device, $500 in Azure credits and three-month free trials of Unity Pro and the Unity PiXYZ Plugin for CAD data, starting at $3,500 or as low as $99 per month. The HoloLens 2 Development Edition will be available for preorder soon and will ship later this year. Unreal Engine support for streaming and native platform integration will be available for HoloLens 2 by the end of May.

Intelligent Edge and IoT

Azure IoT Central new features

Microsoft Build 2019 also featured new additions to Azure IoT Central, an IoT software-as-a-service solution:

Better rules processing and custom rules with services like Azure Functions or Azure Stream Analytics
Multiple dashboards and data visualization options for different types of users
Inbound and outbound data connectors, so that operators can integrate with other systems
The ability to add custom branding and operator resources to an IoT Central application with new white labeling options

New Azure IoT Central features are available for customer trials.

IoT Plug and Play

IoT Plug and Play is a new, open modeling language to connect IoT devices to the cloud seamlessly without developers having to write a single line of embedded code. IoT Plug and Play also enables device manufacturers to build smarter IoT devices that just work with the cloud. Cloud developers will be able to find IoT Plug and Play-enabled devices in Microsoft's Azure IoT Device Catalog. The first device partners include Compal, Kyocera, and STMicroelectronics, among others.

Azure Maps Mobility Service

Azure Maps Mobility Service is a new API which provides real-time public transit information, including nearby stops, routes and trip intelligence. This API will also provide transit services to help with city planning, logistics, and transportation. Azure Maps Mobility Service will be in public preview in June. Read more about Azure Maps Mobility Service here.

KEDA: Kubernetes-based event-driven autoscaling

Microsoft and Red Hat collaborated to create KEDA, an open-sourced project that supports the deployment of serverless, event-driven containers on Kubernetes. It can be used in any Kubernetes environment, in any public/private cloud or on-premises, including Azure Kubernetes Service (AKS) and Red Hat OpenShift. KEDA has support for built-in triggers to respond to events happening in other services or components. This allows the container to consume events directly from the source, instead of routing through HTTP. KEDA also presents a new hosting option for Azure Functions that can be deployed as a container in Kubernetes clusters.

Securing elections and political campaigns

ElectionGuard SDK and Microsoft 365 for Campaigns

ElectionGuard is a free, open-source software development kit (SDK) released as an extension of Microsoft's Defending Democracy Program to enable end-to-end verifiability and improved risk-limiting audit capabilities for elections in voting systems. Microsoft 365 for Campaigns provides the security capabilities of Microsoft 365 Business to political parties and individual candidates. More details here.

Microsoft Build is in its 6th year and will continue till 8th May. The conference hosts over 6,000 attendees, with nearly 500 student-age developers and over 2,600 customers and partners in attendance. Watch it live here!

Microsoft introduces Remote Development extensions to make remote development easier on VS Code
Docker announces a collaboration with Microsoft's .NET at DockerCon 2019
How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsored by Microsoft]


JupyterHub 1.0 releases with named servers, support for TLS encryption and more

Sugandha Lahoti
06 May 2019
4 min read
JupyterHub 1.0 was released last week as the first major update since 2015. JupyterHub allows multiple users to use Jupyter notebooks. JupyterHub 1.0 comes with UI support for managing named servers, TLS encryption and authentication support, and more.

What's new in JupyterHub 1.0?

UI for named servers

JupyterHub 1.0 comes with full UI support for managing named servers. Named servers allow each JupyterHub user to have access to more than one named server, and JupyterHub 1.0 introduces a new UI for managing them. Users can now create/start/stop/delete their servers from the hub home page.

Source: Jupyter blog

TLS encryption and authentication

JupyterHub 1.0 supports TLS encryption and authentication of all internal communication. Spawners must implement the .move_certs method to make certificates available to the notebook server if it is not local to the Hub. Currently, local spawners and DockerSpawner support internal SSL.

Checking and refreshing authentication

JupyterHub 1.0 introduces three new configuration options to refresh or expire authentication information. c.Authenticator.auth_refresh_age allows authentication to expire after a number of seconds. c.Authenticator.refresh_pre_spawn forces a refresh of authentication prior to spawning a server, effectively requiring a user to have up-to-date authentication when they start their server. Authenticator.refresh_auth defines what it means to refresh authentication and can be customized by Authenticator implementations.

Other changes

A new API is added in JupyterHub 1.0 for registering user activity. Activity is now tracked by pushing it to the Hub from user servers instead of polling the proxy API.
Dynamic options_form callables may now return an empty string, which will result in no options form being rendered.
Spawner.user_options is persisted to the database to be re-used, so that a server spawned once via the form can be re-spawned via the API with the same options.
The c.PAMAuthenticator.pam_normalize_username option is added for round-tripping usernames through PAM to retrieve the normalized form.
The c.JupyterHub.named_server_limit_per_user configuration is added to limit the number of named servers each user can have. The default is 0, for no limit.
API requests to HubAuthenticated services (e.g. single-user servers) may pass a token in the Authorization header, matching authentication with the Hub API itself.
The Authenticator.is_admin(handler, authentication) method and Authenticator.admin_groups configuration are added for automatically determining that a member of a group should be considered an admin.

These are just a select few updates. For the full list of new features and improvements in JupyterHub 1.0, visit the changelog. You can upgrade jupyterhub with conda or pip:

conda install -c conda-forge jupyterhub==1.0.*

pip install --upgrade jupyterhub==1.0.*

Users were quite excited about the release. Here are some comments from a Hacker News thread.

"This is really cool and I'm impressed by the jupyter team. My favorite part is that it's such a good product that beats the commercial products because it's hard to figure out, I think, commercial models that support this wide range of collaborators (people who view once a month to people who author every day)."

"Congratulations! JupyterHub is a great project with high-quality code and docs. Looking forward to trying the named servers feature as I run a JupyterHub instance that spawns servers inside containers based on a single image which inevitably tends to grow as I add libraries. Being able to manage multiple servers should allow me to split the image into smaller specialized images."

Introducing Jupytext: Jupyter notebooks as Markdown documents, Julia, Python or R scripts
How everyone at Netflix uses Jupyter notebooks, from data scientists and machine learning engineers to data analysts
10 reasons why data scientists love Jupyter notebooks
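To show how the authentication-refresh and named-server options mentioned above fit together, here is a minimal jupyterhub_config.py sketch. It only uses option names cited in this article; the specific values are illustrative assumptions, not recommendations.

# jupyterhub_config.py (illustrative values only)
c = get_config()  # provided by JupyterHub when it loads this file

# Expire cached authentication info after one hour (value in seconds)
c.Authenticator.auth_refresh_age = 3600

# Force a refresh of authentication right before a server is spawned
c.Authenticator.refresh_pre_spawn = True

# Allow each user up to three named servers (default is 0, meaning no limit)
c.JupyterHub.named_server_limit_per_user = 3

# Round-trip usernames through PAM to retrieve the normalized form
c.PAMAuthenticator.pam_normalize_username = True

Restart the Hub after editing the file for the new settings to take effect.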


F8 PyTorch announcements: PyTorch 1.1 releases with new AI tools, open sourcing BoTorch and Ax, and more

Bhagyashree R
03 May 2019
4 min read
Despite Facebook's frequent appearances in the news for all the wrong reasons, we cannot deny that its open source contributions to AI have been its one redeeming quality. At its F8 annual developer conference, showcasing its exceptional AI prowess, Facebook shared how the production-ready PyTorch 1.0 is being adopted by the community and also announced the release of PyTorch 1.1.

Facebook introduced PyTorch in 2017, and since then it has been well received by developers. Facebook partnered with the AI community for further development of PyTorch and released the stable version last year in December. Along with optimizing and fixing other parts of PyTorch, the team introduced just-in-time compilation for production support, which allows seamless transitions between eager mode and graph mode.

PyTorch 1.0 in leading businesses, communities, and universities

Facebook is leveraging end-to-end workflows of PyTorch 1.0 for building and deploying translation and NLP at large scale. These NLP systems are delivering a staggering 6 billion translations for applications such as Messenger. PyTorch has also enabled Facebook to quickly iterate on their ML systems and has helped them accelerate their research-to-production cycle.

Other leading organizations and businesses are also now using PyTorch to speed up the development of AI features. Airbnb's Smart Reply feature is backed by PyTorch libraries and APIs for conversational AI. ATOM (Accelerating Therapeutics for Opportunities in Medicine) has come up with a variational autoencoder that represents diverse chemical structures and designs new drug candidates. Microsoft has built large-scale distributed language models that are now in production in offerings such as Cognitive Services.

PyTorch 1.1 releases with new model understanding and visualization tools

Along with showcasing how the production-ready version is being accepted by the community, the PyTorch team further announced the release of PyTorch 1.1. This release focuses on improved performance and brings new model understanding and visualization tools for improved usability, among other things. Following are some of the key features PyTorch 1.1 comes with:

Support for TensorBoard: TensorBoard, a suite of visualization tools, is now natively supported in PyTorch. You can use it through the "from torch.utils.tensorboard import SummaryWriter" command (see the short logging sketch below).
Improved JIT compiler: Along with some bug fixes, the team has expanded capabilities in TorchScript, such as support for dictionaries, user classes, and attributes.
New APIs: New APIs are introduced to support Boolean tensors and custom recurrent neural networks.
Distributed training: This release comes with improved performance for common models such as CNNs. Support for multi-device modules and the ability to split models across GPUs while still using Distributed Data Parallel have been added.

Ax, BoTorch, and more: open source tools for machine learning engineers

Facebook announced that it is open sourcing two new tools, Ax and BoTorch, that are aimed at solving large-scale exploration problems in both research and production environments. Built on top of PyTorch, BoTorch leverages its features, such as auto-differentiation, massive parallelism, and deep learning, to help with research related to Bayesian optimization. Ax is a general-purpose ML platform for managing adaptive experiments. Both Ax and BoTorch use probabilistic models that efficiently use data and meaningfully quantify the costs and benefits of exploring new regions of problem space.
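As a quick illustration of the TensorBoard support called out in the feature list above, the following minimal sketch logs a dummy metric with SummaryWriter, assuming PyTorch 1.1 or later with TensorBoard installed; the loss values here are placeholders, not a real training run.

from torch.utils.tensorboard import SummaryWriter

# By default, event files are written under ./runs/, which TensorBoard reads
writer = SummaryWriter()

for step in range(100):
    # In a real training loop this would be the computed loss for the step
    dummy_loss = 1.0 / (step + 1)
    writer.add_scalar("train/loss", dummy_loss, step)

writer.close()

# Then, from a shell: tensorboard --logdir=runs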
Facebook has also open sourced PyTorch-BigGraph (PBG), a tool that makes it easier and faster to produce graph embeddings for extremely large graphs with billions of entities and trillions of edges. PBG comes with support for sharding and negative sampling and also offers sample use cases based on Wikidata embeddings.

As a result of its collaboration with Google, AI Platform Notebooks, a new hosted JupyterLab service from Google Cloud Platform, now comes preinstalled with PyTorch. It also comes integrated with other GCP services such as BigQuery, Cloud Dataproc, Cloud Dataflow, and AI Factory.

The broader PyTorch community has also come up with some impressive open source tools. BigGAN-Torch is a full reimplementation of BigGAN in PyTorch that uses gradient accumulation to provide the benefits of big batches while only using a few GPUs. GeomLoss is an API written in Python that defines PyTorch layers for geometric loss functions between sampled measures, images, and volumes. It provides efficient GPU implementations for kernel norms, Hausdorff divergences, and unbiased Sinkhorn divergences. PyTorch Geometric is a geometric deep learning extension library for PyTorch consisting of various methods for deep learning on graphs and other irregular structures.

Read the official announcement on Facebook's AI blog.

Facebook open-sources F14 algorithm for faster and memory-efficient hash tables
"Is it actually possible to have a free and fair election ever again?," Pulitzer finalist Carole Cadwalladr on Facebook's role in Brexit
F8 Developer Conference Highlights: Redesigned FB5 app, Messenger update, new Oculus Quest and Rift S, Instagram shops, and more


New York AG opens investigation against Facebook as Canada decides to take Facebook to Federal Court for repeated user privacy violations

Savia Lobo
26 Apr 2019
6 min read
Despite Facebook's long line of scandals and multiple parliamentary hearings, the company and its leadership have remained unscathed, with no consequences or impact on their performance. Once again, Facebook is under fresh investigation, this time from New York's Attorney General, Letitia James. The Canadian and British Columbia privacy commissioners have also decided to take Facebook to Federal Court to seek an order to force the company to correct its deficient privacy practices. It remains to be seen whether Facebook's lucky streak will continue in light of these charges.

NY Attorney General's investigation over FB's email harvesting scandal

Yesterday, New York's Attorney General, Letitia James, opened an investigation into Facebook Inc.'s collection of 1.5 million users' email contacts without their permission. This incident, first reported by Business Insider, happened last month, when Facebook's email password verification process for new users asked them to hand over the password to their personal email account. According to the Business Insider report, "a pseudononymous security researcher e-sushi noticed that Facebook was asking some users to enter their email passwords when they signed up for new accounts to verify their identities, a move widely condemned by security experts."

https://twitter.com/originalesushi/status/1112496649891430401

Read Also: Facebook confessed another data breach; says it "unintentionally uploaded" 1.5 million email contacts without consent

On March 21st, Facebook opened up about a major blunder of exposing millions of user passwords in plain text, soon after security journalist Brian Krebs first reported the issue. "We estimate that we will notify hundreds of millions of Facebook Lite users, tens of millions of other Facebook users, and tens of thousands of Instagram users", the company said in its press release. Recently, on April 18, Facebook updated the same post stating that not tens of thousands, but "millions" of Instagram passwords were exposed.

"Reports indicate that Facebook proceeded to access those user's contacts and upload all of those contacts to Facebook to be used for targeted advertising", the Attorney General mentioned in the statement.

https://twitter.com/NewYorkStateAG/status/1121512404272189440

She further mentions that "It is time Facebook is held accountable for how it handles consumers' personal information."

"Facebook has repeatedly demonstrated a lack of respect for consumers' information while at the same time profiting from mining that data. Facebook's announcement that it harvested 1.5 million users' email address books, potentially gaining access to contact information for hundreds of millions of individual consumers without their knowledge, is the latest demonstration that Facebook does not take seriously its role in protecting our personal information", James adds.

"Facebook said last week that it did not realize this collection was happening until earlier this month when it stopped offering email password verification as an option for people signing up to Facebook for the first time", CNN Business reports.

One of the users on Hacker News wrote, "I'm glad the attorney general is getting involved. We need to start charging Facebook execs for these flagrant privacy violations. They're being fined 3 billion dollars for legal expenses relating to an FTC inquiry… and their stock price went up by 8%. The market just does not care; it's time regulators and law enforcement started to."

To know more about this news in detail, read Attorney General James' official press release.

Canadian and British Columbia privacy commissioners to take Facebook to Federal Court

The Canadian and British Columbia privacy commissioners, Daniel Therrien and Michael McEvoy, uncovered major shortcomings in Facebook's procedures in their investigation, published yesterday. The investigation was initiated after media reported that "Facebook had allowed an organization to use an app to access users' personal information and that some of the data was then shared with other organizations, including Cambridge Analytica, which was involved in U.S. political campaigns", the report mentions.

The app, at one point called "This is Your Digital Life," encouraged users to complete a personality quiz. It collected information about users who installed the app as well as their Facebook "friends." Some 300,000 Facebook users worldwide added the app, leading to the potential disclosure of the personal information of approximately 87 million others, including more than 600,000 Canadians.

The investigation also revealed that Facebook violated federal and B.C. privacy laws in a number of respects. According to the investigation, "Facebook committed serious contraventions of Canadian privacy laws and failed to take responsibility for protecting the personal information of Canadians."

According to the press release, Facebook has disputed the findings and refused to implement the watchdogs' recommendations. It has also refused to voluntarily submit to audits of its privacy policies and practices over the next five years. Following this, the Office of the Privacy Commissioner of Canada (OPC) said it therefore plans to take Facebook to Federal Court to seek an order to force the company to correct its deficient privacy practices.

Daniel Therrien, the privacy commissioner of Canada, said, "Facebook's refusal to act responsibly is deeply troubling given the vast amount of sensitive personal information users have entrusted to this company. Their privacy framework was empty, and their vague terms were so elastic that they were not meaningful for privacy protection."

He further added, "The stark contradiction between Facebook's public promises to mend its ways on privacy and its refusal to address the serious problems we've identified – or even acknowledge that it broke the law – is extremely concerning. It is untenable that organizations are allowed to reject my office's legal findings as mere opinions."

British Columbia Information and Privacy Commissioner Michael McEvoy said, "Facebook has spent more than a decade expressing contrition for its actions and avowing its commitment to people's privacy. But when it comes to taking concrete actions needed to fix transgressions they demonstrate disregard."

The press release also mentions that "giving the federal Commissioner order-making powers would also ensure that his findings and remedial measures are binding on organizations that refuse to comply with the law".

To know more about the federal and B.C. privacy laws that Facebook violated, head over to the investigation report.

Facebook AI introduces Aroma, a new code recommendation tool for developers
Ahead of Indian elections, Facebook removes hundreds of assets spreading fake news and hate speech, but are they too late?
Ahead of EU 2019 elections, Facebook expands its Ad Library to provide advertising transparency in all active ads

DataCamp reckons with its #MeToo movement; CEO steps down from his role indefinitely

Fatema Patrawala
25 Apr 2019
7 min read
The data science community is reeling after data science learning startup DataCamp penned a blog post acknowledging that an unnamed company executive made "uninvited physical contact" with one of its employees. DataCamp, which operates an e-platform where aspiring data scientists can take courses in coding and data analysis is a startup valued at $184 million. It has additionally raised over $30 million in funding. The company disclosed in a blog post published on 4th April that this incident occurred at an "informal employee gathering" at a bar in October 2017. The unnamed DataCamp executive had "danced inappropriately and made uninvited physical contact" with the employee on the dance floor, the post read. The company didn't name the executive involved in the incident in its post. But called the executive's behavior on the dance floor "entirely inappropriate" and "inconsistent" with employee expectations and policies. When Buisness Insider reached out to one of the course instructors OS Keyes familiar with this matter, Keyes said that the executive in question is DataCamp's co-founder and CEO Jonathan Cornelissen. Yesterday Motherboard also reported that the company did not adequately address sexual misconduct by a senior executive there and instructors at DataCamp have begun boycotting the service and asking the company to delete their courses following allegations. What actually happened and how did DataCamp respond? On April 4, DataCamp shared a statement on its blog titled “a note to our community.” In it, the startup addressed the accusations against one of the company’s executives: “In October 2017, at an informal employee gathering at a bar after a week-long company offsite, one of DataCamp’s executives danced inappropriately and made uninvited physical contact with another employee while on the dance floor.” DataCamp got the complaint reviewed by a “third party not involved in DataCamp’s day-to-day business,” and said it took several “corrective actions,” including “extensive sensitivity training, personal coaching, and a strong warning that the company will not tolerate any such behavior in the future.” DataCamp only posted its blog a day after more than 100 DataCamp instructors signed a letter and sent it to DataCamp executives. “We are unable to cooperate with continued silence and lack of transparency on this issue,” the letter said. “The situation has not been acknowledged adequately to the data science community, leading to harmful rumors and uncertainty.” But as instructors read the statement from DataCamp following the letter, many found the actions taken to be insufficient. https://twitter.com/hugobowne/status/1120733436346605568 https://twitter.com/NickSolomon10/status/1120837738004140038 Motherboard reported this case in detail taking notes from Julia Silge, a data scientist who co-authored the letter to DataCamp. Julia says that going public with our demands for accountability was the last resort. Julia spoke about the incident in detail and says she remembered seeing the victim of the assault start working at DataCamp and then leave abruptly. This raised “red flags” but she did not reach out to her. Then Silge heard about the incident from a mutual friend and she began to raise the issue with internal people at DataCamp. “There were various responses from the rank and file. It seemed like after a few months of that there was not a lot of change, so I escalated a little bit,” she said. 
DataCamp finally responded to Silge by saying “I think you have misconceptions about what happened,” and they also mentioned that “there was alcohol involved” to explain the behavior of the executive. DataCamp further explained that “We also heard over and over again, ‘This has been thoroughly handled.’” But according to Silge and other instructors who have spoken out, say that DataCamp hasn’t properly handled the situation and has tried to sweep it under the rug. Silge also created a private Slack group to communicate and coordinate their efforts to confront this issue. She along with the group got into a group video conference with DataCamp, which was put into “listen-only” mode for all the other participants except DataCamp, meaning they could not speak in the meeting, and were effectively silenced. “It felt like 30 minutes of the DataCamp leadership saying what they wanted to say to us,” Silge said. “The content of it was largely them saying how much they valued diversity and inclusion, which is hard to find credible given the particular ways DataCamp has acted over the past.” Following that meeting, instructors began to boycott DataCamp more blatantly, with one instructor refusing to make necessary upgrades to her course until DataCamp addressed the situation. Silge and two other instructors eventually drafted and sent the letter, at first to the small group involved in accountability efforts, then to almost every DataCamp instructor. All told, the letter received more than 100 signatures (of about 200 total instructors). A DataCamp spokesperson said in response to this, “When we became aware of this matter, we conducted a thorough investigation and took actions we believe were necessary and appropriate. However, recent inquiries have made us aware of mischaracterizations of what occurred and we felt it necessary to make a public statement. As a matter of policy, we do not disclose details on matters like this, to protect the privacy of the individuals involved.” “We do not retaliate against employees, contractors or instructors or other members of our community, under any circumstances, for reporting concerns about behavior or conduct,” the company added. The response received from DataCamp was not only inadequate, but technologically faulty, as per one of the contractors Noam Ross who pointed out in his blog post that DataCamp had published the blog with a “no-index” tag, meaning it would not show up in aggregated searches like Google results. Thus adding this tag knowingly represents DataCamp’s continued lack of public accountability. OS Keyes said to Business Insider that at this point, the best course of action for DataCamp is a blatant change in leadership. “The investors need to get together and fire the [executive], and follow that by publicly explaining why, apologising, compensating the victim and instituting a much more rigorous set of work expectations,” Keyes said. #Rstats and other data science communities and DataCamp instructors take action One of the contractors Ines Montani expressed this by saying, “I was pretty disappointed, appalled and frustrated by DataCamp's reaction and non-action, especially as more and more details came out about how they essentially tried to sweep this under the rug for almost two years,” Due to their contracts, many instructors cannot take down their DataCamp courses. 
Instead of removing the courses, many contractors for DataCamp, including Montani, took to Twitter after DataCamp published the blog, urging students to boycott the very courses they designed.

https://twitter.com/noamross/status/1116667602741485571

https://twitter.com/daniellequinn88/status/1117860833499832321

https://twitter.com/_tetration_/status/1118987968293875714

Instructors put financial pressure on the company by boycotting their own courses. They also want the executive responsible for the misbehaviour to account for his actions, the victim to be compensated, and those who were fired for complaining to be compensated as well—this may ultimately undercut DataCamp's bottom line. Influential open-source communities, including RStudio, SatRdays, and R-Ladies, have cut all ties with DataCamp to show their disappointment with the lack of serious accountability.

CEO steps down "indefinitely" from his role and accepts his mistakes

Today, Jonathan Cornelissen accepted his mistake and wrote a public apology for his inappropriate behaviour. He writes, "I want to apologize to a former employee, our employees, and our community. I have failed you twice. First in my behavior and second in my failure to speak clearly and unequivocally to you in a timely manner. I am sorry." He has also stepped down from his position as the company's CEO indefinitely until there is a complete review of the company's environment and culture. While this is a step in the right direction, the apology unfortunately comes to the community very late and is seen by many as a PR move to appease the backlash from the data science community and the instructors.

https://twitter.com/mrsnoms/status/1121235830381645824

9 Data Science Myths Debunked
30 common data science terms explained
Why is data science important?

MongoDB is going to acquire Realm, the mobile database management system, for $39 million

Richard Gall
25 Apr 2019
3 min read
MongoDB, the open source NoSQL database, is going to acquire mobile database platform Realm. The purchase is certainly one with clear technological and strategic benefits for both companies - and with MongoDB paying $39 million for a company that has up to now raised $40 million since its launch in 2011, it's clear that this is a move that isn't about short term commercial gains.

It's important to note that the acquisition is not yet complete. It's expected to close at the end of the second quarter of MongoDB's fiscal year, which runs to January 2020. Further details about the acquisition, and what it means for both products, will be revealed at MongoDB World in June.

Why is MongoDB acquiring Realm?

In the materials announcing the acquisition there's a lot of talk about the alignment between the two projects. "The best thing in the world is when someone just gets you, and you get them," MongoDB CTO Eliot Horowitz wrote in a blog post accompanying the release, "because when you share a vision of the world like that, you can do incredible things together. That's exactly the case with MongoDB and Realm."

At a more fundamental level, the acquisition allows MongoDB to do a number of things. It can reach a new community of developers working primarily in mobile development (according to the press release, Realm has 100,000 active users), but it also allows MongoDB to strengthen its capabilities as cloud evolves to become the dominant way that applications are built and hosted. According to Dev Ittycheria, MongoDB President and CEO, Realm "is a natural fit for our global cloud database, MongoDB Atlas, as well as a complement to Stitch, our serverless platform." Serverless might well be a nascent trend at the moment, but the level of conversation and interest around it indicates that it's going to play a big part in application developers' lives in the months and years to come.

What's in it for Realm?

For Realm, the acquisition will give the project access to a new pool of users. With backing from MongoDB, it also provides robust foundations for the project to extend its roadmap and even move faster than it previously would have been able to. Realm CEO David Ratner wrote yesterday (April 24) that: "The combination of MongoDB and Realm will establish the modern standard for mobile application development and data synchronization for a new generation of connected applications and services. MongoDB and Realm are fully committed to investing in the Realm Database and the future of data synchronization, and taking both to the next phase of their evolution. We believe that MongoDB will help accelerate Realm's product roadmap, go-to-market execution, and support our customers' use cases at a whole new level of global operational scale."

A new chapter for MongoDB?

2019 hasn't been the best year for MongoDB so far. The project withdrew its submission for its controversial Server Side Public License last month following news that Red Hat was dropping it from Enterprise Linux and Fedora. This brought to a dramatic halt an initiative that the leadership viewed as strategically important in defending MongoDB's interests. However, the Realm acquisition sets up a new chapter and could go some way in helping MongoDB bolster itself for a future that it has felt uncertain about.

Tesla Autonomy Day takeaways: Full Self-Driving computer, Robotaxis launching next year, and more

Bhagyashree R
24 Apr 2019
6 min read
This Monday, Tesla's "Autonomy Investor Day" kickstarted at its headquarters in Palo Alto. At this invitation-only event, Elon Musk, the CEO of Tesla, with his fellow executives, talked about its new microchip, robotaxis hitting the road by next year, and more. Here are some of the key takeaways from the event:

The Full Self-Driving (FSD) computer

Tesla shared details of its new custom chip, the Full Self-Driving (FSD) computer, previously known as Autopilot Hardware 3.0. Elon Musk believes that the FSD computer is "the best chip in the world…objectively." Tesla replaced Nvidia's Autopilot 2.5 computer with its own custom chip for Model S and Model X about a month ago. For Model 3 vehicles this change happened about 10 days ago. Musk said, "All cars being produced all have the hardware necessary — computer and otherwise — for full self-driving. All you need to do is improve the software."

FSD is a high-performance, special-purpose chip built by Samsung with a main focus on autonomy and safety. It delivers a factor of 21 improvement in frames-per-second processing compared to the previous-generation Tesla Autopilot hardware, which was powered by Nvidia hardware. The company further shared that retrofits will be offered in the next few months to current Tesla owners who bought the 'Full Self-Driving' package. Here's the new Tesla FSD computer (Credits: Tesla).

Musk shared that the company has already started working on a next-generation chip. The design of the FSD computer was completed within two years, and Tesla is now about halfway through the design of its successor.

Musk's claims of building the best chip can be taken with a pinch of salt, as they could surely upset some engineers from Nvidia, Mobileye, and other companies who have been in the chip manufacturing market for a long time. Nvidia, in a blog post, along with applauding Tesla for its FSD computer, highlighted a "few inaccuracies" in the comparison made by Musk during the event:

"It's not useful to compare the performance of Tesla's two-chip Full Self Driving computer against NVIDIA's single-chip driver assistance system. Tesla's two-chip FSD computer at 144 TOPs would compare against the NVIDIA DRIVE AGX Pegasus computer which runs at 320 TOPS for AI perception, localization and path planning."

While pointing out the "inaccuracies", Nvidia did miss the key point here: power consumption. "Having a system that can do 160 TOPS means little if it uses 500 watts while tesla's 144 TOPS system uses 72 watts," a Redditor said.

Robotaxis will hit the roads in 2020

Musk shared that within the next year or so we will see Tesla's robotaxis coming into the ride-hailing market, competing with Uber and Lyft. Musk made a bold claim: though, similar to other ride-hailing services, the robotaxis will allow users to hail a Tesla for a ride, they will not have drivers. Musk announced, "I feel very confident predicting that there will be autonomous robotaxis from Tesla next year — not in all jurisdictions because we won't have regulatory approval everywhere." He did not share many details on what regulations he was talking about.

The service will allow Tesla owners to add their properly equipped vehicles to Tesla's own ride-sharing app, following a similar business model to Uber or Airbnb. The company will provide a dedicated number of robotaxis in areas where there are not enough loanable cars. Musk predicted that the average robotaxi will be able to yield $30,000 in gross profit per car, annually.
Of this profit, about 25% to 30% will go to Tesla, so an owner would be able to make around $21,000 a year.

Musk's plans for launching robotaxis next year look ambitious, and experts and the media are quite skeptical about them. The Partners for Automated Vehicle Education (PAVE) industry group tweeted:

https://twitter.com/PAVECampaign/status/1120436981220237312

https://twitter.com/RobMcCargow/status/1120961462678245376

Musk says "Anyone relying on lidar is doomed"

Musk has been pretty vocal about his dislike of LIDAR. He calls this technology "a crutch for self-driving cars". When the topic came up at the event, Musk said: "Lidar is a fool's errand. Anyone relying on lidar is doomed. Doomed! [They are] expensive sensors that are unnecessary. It's like having a whole bunch of expensive appendices. Like, one appendix is bad, well now you have a whole bunch of them, it's ridiculous, you'll see."

LIDAR, which stands for Light Detection and Ranging, is used by Uber, Waymo, Cruise, and many other companies building self-driving vehicles. LIDAR projects low-intensity, harmless, and invisible laser beams at a target, or in the case of self-driving cars, all around. The reflected pulses are then measured for return time and wavelength to calculate the distance of an object from the sender (a short worked example of this time-of-flight calculation appears at the end of this piece). LIDAR is capable of producing pretty detailed visualizations of the environment around a self-driving car.

However, Tesla believes that this same functionality can be provided by cameras. According to Musk, cameras can provide much better resolution and, when combined with the neural net, can predict depth very well. Andrej Karpathy, Tesla's Senior Director of AI, took to the stage to explain the limitations of LIDAR. He said, "In that sense, lidar is really a shortcut. It sidesteps the fundamental problems, the important problem of visual recognition, that is necessary for autonomy. It gives a false sense of progress and is ultimately a crutch. It does give, like, really fast demos!" Karpathy further added, "You were not shooting lasers out of your eyes to get here."

While true, many felt that the reasoning was completely flawed. A Redditor in a discussion thread said, "Musk's argument that "you drove here using your own two eyes with no lasers coming out of them" is reductive and flawed. It should be obvious to anyone that our eyes are more complex than simple stereo cameras. If the Tesla FSD system can reliably perceive depth at or above the level of the human eye in all conditions, then they have done something truly remarkable. Judging by how Andrej Karpathy deflected the question about how well the system works in snowy conditions, I would assume they have not reached that level."

Check out the live stream of the Autonomy Day on Tesla's official website.

Tesla v9 to incorporate neural networks for autopilot
Tesla is building its own AI hardware for self-driving cars
Nvidia Tesla V100 GPUs publicly available in beta on Google Compute Engine and Kubernetes Engine
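As a side note on the lidar ranging principle described above: the distance comes from a simple time-of-flight relation, distance = (speed of light × round-trip time) / 2. The snippet below is only a minimal illustration of that relation, not code from any real lidar stack, and the pulse time used is a made-up example value:

SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def lidar_range_m(round_trip_time_s):
    # The pulse travels out and back, hence the division by 2.
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2

# A pulse that returns after 200 nanoseconds corresponds to a target roughly 30 m away.
print(lidar_range_m(200e-9))  # ≈ 29.98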

OpenAI Five bots destroyed human Dota 2 players this weekend

Richard Gall
23 Apr 2019
3 min read
Last week, the team at OpenAI made it possible for humans to play the OpenAI Five bot at Dota 2 online. The results were staggering - over a period of just a few days, from April 18 to April 21, OpenAI Five had a win rate of 99.4%, winning 7,215 games (that includes humans giving up and abandoning their games 3,140 times) and losing only 42. But perhaps we shouldn't be that surprised. The artificial intelligence bot did, after all, defeat OG, one of the best e-sports teams on the planet, earlier this month.

https://twitter.com/OpenAI/status/1120421259274334209

What does OpenAI Five's Dota 2 dominance tell us about artificial intelligence?

The dominance of OpenAI Five over the weekend is important because it indicates that it is possible to build artificial intelligence that can deal with complex strategic decision-making consistently. Indeed, that's what sets this experiment apart from other artificial intelligence gaming challenges - from the showdown with OG to DeepMind's AlphaZero defeating professional Go and chess players, bots typically play individuals or small teams of players. By taking on the world, OpenAI appears to have developed an artificial intelligence system that a large group of intelligent humans with specific domain experience has found consistently difficult to out-think.

Learning how to win

The key issue when it comes to artificial intelligence and games - Dota 2 or otherwise - is the ability of the bot to learn. One Dota 2 gamer, quoted on a Reddit thread, said "the bots are locked, they are not learning, but we humans are. We will win." This is true - up to a point. The reality is that they aren't locked - they are, in fact, continually learning, processing the consequences of every decision that is made and feeding it back into the system (a toy sketch of this kind of feedback loop appears at the end of this piece). And although adaptability will remain an issue for any artificial intelligence system, the more games it plays and the more strategies it 'learns', the more it builds adaptability into its system.

This is something OpenAI CTO Greg Brockman noted when responding to suggestions that OpenAI Five's tiny proportion of defeats indicates a lack of adaptability. "When we lost at The International (100% vs pro teams), they said it was because Five can't do strategy. So we trained for longer. When we lose (0.7% vs the entire Internet), they say it's because Five can't adapt."

https://twitter.com/gdb/status/1119963994754670594

It's important to remember that this doesn't necessarily signal that much about the possibility of Artificial General Intelligence. OpenAI Five's decision-making power is centered around a very specific domain - even if it is one that is relatively complex. However, it does highlight that the relationship between video games and artificial intelligence is particularly important. On the one hand, video games are a space that can help us develop AI further and explore the boundaries of what's possible. But equally, AI will likely evolve the way we think about gaming - and esports - too.

Read next: How Artificial Intelligence and Machine Learning can turbocharge a Game Developer's career
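For readers who want a concrete picture of what "processing the consequences of every decision" means, here is a toy sketch of learning from rewards. OpenAI Five itself is trained with large-scale self-play and the PPO algorithm; the tabular Q-learning loop below is only a minimal, hypothetical illustration of the general feedback idea, not OpenAI's actual setup:

import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]   # value estimates for each (state, action)
alpha, gamma = 0.1, 0.9                            # learning rate and discount factor

def step(state, action):
    # Toy environment: action 1 moves right; reaching the last state pays a reward of 1.
    next_state = min(state + action, n_states - 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for _ in range(1000):
    s = 0
    while s != n_states - 1:
        a = random.randrange(n_actions)            # explore by acting randomly
        s_next, r = step(s, a)
        # Feed the consequence of the decision back into the value estimates.
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print(Q)   # in the non-terminal states, action 1 (move right) ends up with the higher value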

AI Now Institute publishes a report on the diversity crisis in AI and offers 12 solutions to fix it

Bhagyashree R
22 Apr 2019
7 min read
Earlier this month, the AI Now Institute published a report, authored by Sarah Myers West, Meredith Whittaker, and Kate Crawford, highlighting the link between the diversity issue in the current AI industry and the discriminatory behavior of AI systems. The report further recommends some solutions to these problems that companies and the researchers behind these systems need to adopt. Sarah Myers West is a postdoc researcher at the AI Now Institute and an affiliate researcher at the Berkman-Klein Center for Internet and Society. Meredith Whittaker is the co-founder of the AI Now Institute and leads Google's Open Research Group and the Google Measurement Lab. Kate Crawford is a Principal Researcher at Microsoft Research and the co-founder and Director of Research at the AI Now Institute. Kate Crawford tweeted about this study.

https://twitter.com/katecrawford/status/1118509988392112128

The AI industry lacks diversity, gender neutrality, and bias-free systems

In recent years, we have come across several cases of "discriminating systems". Facial recognition systems miscategorize black people and sometimes fail to work for trans drivers. When trained on online discourse, chatbots easily learn racist and misogynistic language. This type of behavior by machines is actually a reflection of society. "In most cases, such bias mirrors and replicates existing structures of inequality in the society," says the report.

The study also sheds light on gender bias in the current workforce. According to the report, only 18% of authors at some of the biggest AI conferences are women, while men account for 80%. The tech giants Facebook and Google have a meager 15% and 10% women, respectively, on their AI research staff. The situation for black workers in the AI industry looks even worse: while Facebook and Microsoft have 4% of their current workforce as black workers, Google stands at just 2.5%. Also, the vast majority of AI studies assume gender is binary, and commonly assign people as 'male' or 'female' based on physical appearance and stereotypical assumptions, erasing all other forms of gender identity.

The report further reveals that, though there have been various "pipeline studies" to check the flow of diverse job candidates, they have failed to show substantial progress in bringing diversity to the AI industry. "The focus on the pipeline has not addressed deeper issues with workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization that are causing people to leave or avoid working in the AI sector altogether," the report reads.

What steps can industries take to address bias and discrimination in AI systems?

The report lists 12 recommendations that AI researchers and companies should employ to improve workplace diversity and address bias and discrimination in AI systems:

1. Publish compensation levels, including bonuses and equity, across all roles and job categories, broken down by race and gender.
2. End pay and opportunity inequality, and set pay and benefit equity goals that include contract workers, temps, and vendors.
3. Publish harassment and discrimination transparency reports, including the number of claims over time, the types of claims submitted, and actions taken.
4. Change hiring practices to maximize diversity: include targeted recruitment beyond elite universities, ensure a more equitable focus on under-represented groups, and create more pathways for contractors, temps, and vendors to become full-time employees.
5. Commit to transparency around hiring practices, especially regarding how candidates are leveled, compensated, and promoted.
6. Increase the number of people of color, women and other under-represented groups at senior leadership levels of AI companies across all departments.
7. Ensure executive incentive structures are tied to increases in hiring and retention of under-represented groups.
8. For academic workplaces, ensure greater diversity in all spaces where AI research is conducted, including AI-related departments and conference committees.
9. Remedying bias in AI systems is almost impossible when these systems are opaque. Transparency is essential, and begins with tracking and publicizing where AI systems are used, and for what purpose.
10. Rigorous testing should be required across the lifecycle of AI systems in sensitive domains. Pre-release trials, independent auditing, and ongoing monitoring are necessary to test for bias, discrimination, and other harms.
11. The field of research on bias and fairness needs to go beyond technical debiasing to include a wider social analysis of how AI is used in context. This necessitates including a wider range of disciplinary expertise.
12. The methods for addressing bias and discrimination in AI need to expand to include assessments of whether certain systems should be designed at all, based on a thorough risk assessment.

Credits: AI Now Institute

Bringing diversity to the AI workforce

In order to address the diversity issue in the AI industry, companies need to change their current hiring practices. They should have a more equitable focus on under-represented groups. People of color, women, and other under-represented groups should get a fair chance to reach the senior leadership levels of AI companies across all departments. Further opportunities should be created for contractors, temps, and vendors to become full-time employees. To bridge the gender pay gap in the AI industry, it is important that companies maintain transparency regarding compensation levels, including bonuses and equity, regardless of gender or race.

In the past few years, several cases of sexual misconduct involving some of the biggest companies, like Google and Microsoft, have come to light because of movements like #MeToo, the Google Walkout, and more. These movements gave the victims and other supporting employees the courage to speak out against employees in higher positions who were taking undue advantage of their power. There are cases where sexual harassment complaints were not taken seriously by HR and victims were told to just "get over it". This is why companies should publish harassment and discrimination transparency reports that include information like the number and types of claims made and the actions taken by the company.

Academic workplaces should ensure diversity in all AI-related departments and conference committees. In the past, some of the biggest AI conferences, like the Neural Information Processing Systems conference, have failed to provide a welcoming and safe environment for women. In a survey conducted last year, many respondents shared that they had experienced sexual harassment, and women reported persistent advances from men at the conference.
The organizers of such conferences should ensure an inclusive and welcoming environment for everyone.

Addressing bias and discrimination in AI systems

To address bias and discrimination in AI systems, the report recommends rigorous testing across the lifecycle of these systems. They should go through pre-release trials, independent auditing, and ongoing monitoring to check for bias, discrimination, and other harms. Looking at the social implications of AI systems, just addressing algorithmic bias is not enough. "The field of research on bias and fairness needs to go beyond technical debiasing to include a wider social analysis of how AI is used in context. This necessitates including a wider range of disciplinary expertise," says the report.

While assessing an AI system, researchers and developers should also check whether designing a certain system is required at all, considering the risks it poses. The study calls for re-evaluating the current AI systems used for classifying, detecting, and predicting race and gender. The idea of identifying a race or gender just by appearance is flawed and can be easily abused. Systems that use physical appearance to infer interior states, for instance those that claim to detect sexuality from headshots, are especially in urgent need of scrutiny.

To know more in detail, read the full report: Discriminating Systems.

Microsoft's #MeToo reckoning: female employees speak out against workplace harassment and discrimination
Desmond U. Patton, Director of SAFElab shares why AI systems should be a product of interdisciplinary research and diverse teams
Google's Chief Diversity Officer, Danielle Brown resigns to join HR tech firm Gusto

The hands-on guide to Machine Learning with R by Brett Lantz

Packt Editorial Staff
22 Apr 2019
3 min read
If science fiction stories are to be believed, the invention of Artificial Intelligence inevitably leads to apocalyptic wars between machines and their makers. Thankfully, at the time of this writing, machines still require user input. Though your impressions of Machine Learning may be colored by these mass-media depictions, today's algorithms are too application-specific to pose any danger of becoming self-aware. The goal of today's Machine Learning is not to create an artificial brain, but rather to assist us with making sense of the world's massive data stores.

Conceptually, the learning process involves the abstraction of data into a structured representation, and the generalization of the structure into action that can be evaluated for utility. In practical terms, a machine learner uses data containing examples and features of the concept to be learned, then summarizes this data in the form of a model, which is used for predictive or descriptive purposes. The field of machine learning provides a set of algorithms that transform data into actionable knowledge. Among the many possible methods, machine learning algorithms are chosen on the basis of the input data and the learning task. This fact makes machine learning well-suited to the present-day era of big data.

Machine Learning with R, Third Edition introduces you to the fundamental concepts that define and differentiate the most commonly used machine learning approaches and how easy it is to use R to start applying machine learning to real-world problems. Many of the algorithms needed for machine learning are not included as part of the base R installation. Instead, the algorithms are available via a large community of experts who have shared their work freely. These powerful tools are available to download at no cost, but must be installed on top of base R manually. This book covers a small portion of all of R's machine learning packages and will get you up to speed with the learning landscape of machine learning with R.

Machine Learning with R, Third Edition updates the classic R data science book with newer and better libraries, advice on ethical and bias issues in machine learning, and an introduction to deep learning. Whether you are an experienced R user or new to the language, Brett Lantz teaches you everything you need to uncover key insights, make new predictions, and visualize your findings.

Introduction to Machine Learning with R
Machine Learning with R
How to make machine learning based recommendations using Julia [Tutorial]

EU approves labour protection laws for ‘Whistleblowers’ and ‘Gig economy’ workers with implications for tech companies

Savia Lobo
17 Apr 2019
5 min read
The European Union approved two new labour protection laws recently, this time for two groups that get less attention: whistleblowers and those earning their income via the 'gig economy'. For whistleblowers, the new law is landmark legislation that gives them increased protection and aims to encourage reports of wrongdoing. For those working 'on-demand' jobs, which make up the gig economy, the law sets minimum rights and demands increased transparency for such workers. Let's have a brief look at each of the newly approved laws.

Whistleblowers' shield against retaliation

On Tuesday, the EU parliament approved a new law for whistleblowers, safeguarding them from retaliation within an organization. The law protects whistleblowers against dismissal, demotion and other forms of punishment. "The law now needs to be approved by EU ministers. Member states will then have two years to comply with the rules", the EU proposal states. Transparency International calls this "pathbreaking legislation", which will also give employees a "greater legal certainty around their rights and obligations".

The new law creates a safe channel which allows whistleblowers to report a breach of EU law both within an organization and to public authorities. "It is the first time whistleblowers have been given EU-wide protection. The law was approved by 591 votes, with 29 votes against and 33 abstentions", the BBC reports. In cases where no appropriate action is taken by the organization's authorities even after reporting, whistleblowers are allowed to make a public disclosure of the wrongdoing by communicating with the media.

European Commission Vice President Frans Timmermans says, "potential whistleblowers are often discouraged from reporting their concerns or suspicions for fear of retaliation. We should protect whistleblowers from being punished, sacked, demoted or sued in court for doing the right thing for society." He further added, "This will help tackle fraud, corruption, corporate tax avoidance and damage to people's health and the environment."

"The European Commission says just 10 members - France, Hungary, Ireland, Italy, Lithuania, Malta, the Netherlands, Slovakia, Sweden, and the UK - had a "comprehensive law" protecting whistleblowers", the BBC reports. "Attempts by some states to water down the reform earlier this year were blocked at an early stage of the talks with Luxembourg, Ireland, and Hungary seeking to have tax matters excluded. However, a coalition of EU states, including Germany, France, and Italy, eventually prevailed in keeping tax revelations within the proposal", Reuters reports. "If member states fail to properly implement the law, the European Commission can take formal disciplinary steps against the country and could ultimately refer the case to the European Court of Justice", the BBC reports.

To know more about this new law for whistleblowers, read the official proposal.

EU grants protection to workers in the gig economy (casual or short-term employment)

In a vote on Tuesday, the Members of the European Parliament (MEPs) announced minimum rights for workers with on-demand, voucher-based or platform jobs, such as Uber or Deliveroo. However, genuinely self-employed workers would be excluded from the new rules. "The law states that every person who has an employment contract or employment relationship as defined by law, collective agreements or practice in force in each member state should be covered by these new rights," the BBC reports.
"This would mean that workers in casual or short-term employment, on-demand workers, intermittent workers, voucher-based workers, platform workers, as well as paid trainees and apprentices, deserve a set of minimum rights, as long as they meet these criteria and pass the threshold of working 3 hours per week and 12 hours per 4 weeks on average", according to the EU's official website. For this, all workers need to be informed of their conditions from day one as a general principle, but no later than seven days where justified.

The specific set of rights to cover new forms of employment includes:

- Workers with on-demand contracts or similar forms of employment should benefit from a minimum level of predictability, such as predetermined reference hours and reference days.
- They should also be able to refuse, without consequences, an assignment outside predetermined hours, or be compensated if the assignment was not cancelled in time.
- Member states shall adopt measures to prevent abusive practices, such as limits to the use and duration of the contract.
- The employer should not prohibit, penalize or hinder workers from taking jobs with other companies if this falls outside the work schedule established with that employer.

Enrique Calvet Chambon, the MEP responsible for seeing the law through, said, "This directive is the first big step towards the implementation of the European Pillar of Social Rights, affecting all EU workers. All workers who have been in limbo will now be granted minimum rights thanks to this directive, and the European Court of Justice rulings, from now on no employer will be able to abuse the flexibility in the labour market."

To know more about this new law on the gig economy, visit the EU's official website.

19 nations including The UK and Germany give thumbs-up to EU's Copyright Directive
Facebook discussions with the EU resulted in changes of its terms and services for users
The EU commission introduces guidelines for achieving a 'Trustworthy AI'

Wikileaks founder, Julian Assange, arrested for “conspiracy to commit computer intrusion”

Savia Lobo
12 Apr 2019
6 min read
Julian Assange, the Wikileaks founder, was arrested yesterday in London, in accordance with the U.S./UK Extradition Treaty. He was charged with assisting Chelsea Manning, a former intelligence analyst in the U.S. Army, to crack a password on a classified U.S. government computer.

The indictment states that in March 2010, Assange assisted Manning by cracking a password stored on U.S. Department of Defense computers connected to the Secret Internet Protocol Network (SIPRNet), a U.S. government network used for classified documents and communications. Being an intelligence analyst, Manning had access to certain computers and used these to download classified records to transmit to WikiLeaks. "Cracking the password would have allowed Manning to log on to the computers under a username that did not belong to her. Such a deceptive measure would have made it more difficult for investigators to determine the source of the illegal disclosures", the indictment report states.

"Manning confessed to leaking more than 725,000 classified documents to WikiLeaks following her deployment to Iraq in 2009—including battlefield reports and five Guantanamo Bay detainee profiles", Gizmodo reports. In 2013, Manning was convicted of leaking the classified U.S. government documents to WikiLeaks. She was jailed in early March this year as a recalcitrant witness after she refused to answer the grand jury's questions. According to court filings, after Manning's arrest, she was held in solitary confinement in a Virginia jail for nearly a month.

Following Assange's arrest, Ola Bini, a Swedish software developer and digital privacy activist who is allegedly close to Assange, has also been detained. "The official said they are looking into whether he was part of a possible effort by Assange and Wikileaks to blackmail Ecuador's President, Lenin Moreno", the Washington Post reports. Bini was detained at Quito's airport as he was preparing to board a flight for Japan. Martin Fowler, a British software developer and renowned author and speaker, tweeted on Bini's arrest. He said that Bini is a strong advocate and developer supporting privacy, and has not been able to speak to any lawyers.

https://twitter.com/martinfowler/status/1116520916383621121

Following Assange's arrest, Hillary Clinton, who was the Democratic nominee in the 2016 presidential election, said, "The bottom line is that he has to answer for what he has done". "WikiLeaks' publication of Democratic emails stolen by Russian intelligence officers during the 2016 election season hurt Clinton's presidential campaign", the Washington Post reports.

Assange, who is an Australian citizen, was dragged out of Ecuador's embassy in London after his seven-year asylum was revoked. He was granted asylum by former Ecuadorian President Rafael Correa in 2012 after publishing sensitive information about U.S. national security interests. Australian PM Scott Morrison told the Australian Broadcasting Corp. the charge is a "matter for the United States" and has nothing to do with Australia. He was granted asylum just after "he was released on bail while facing extradition to Sweden on sexual assault allegations. The accusations have since been dropped but he was still wanted for jumping bail", the Washington Post states. A Swedish woman alleged that she was raped by Julian Assange during a visit to Stockholm in 2010.
Following Assange's arrest on Thursday, Elisabeth Massi Fritz, the lawyer for the unnamed woman, said in a text message sent to The Associated Press that "we are going to do everything" to have the Swedish case reopened "so Assange can be extradited to Sweden and prosecuted for rape." She further added, "no rape victim should have to wait nine years to see justice be served." "In 2017, Sweden's top prosecutor dropped a long-running inquiry into a rape claim against Assange, saying there was no way to have Assange detained or charged within a foreseeable future because of his protected status inside the embassy", the Washington Post reports.

In a tweet, Wikileaks posted a photo of Assange with the words: "This man is a son, a father, a brother. He has won dozens of journalism awards. He's been nominated for the Nobel Peace Prize every year since 2010. Powerful actors, including CIA, are engaged in a sophisticated effort to dehumanize, delegitimize and imprison him. #ProtectJulian."

https://twitter.com/wikileaks/status/1116283186860953600

Duncan Ross, a data philanthropist, tweeted, "Random thoughts on Assange: 1) journalists don't have to be nice people but 2) being a journalist (if he is) doesn't put you above the law."

https://twitter.com/duncan3ross/status/1116610139023237121

Edward Snowden, a former security contractor who leaked classified information about U.S. surveillance programs, says the arrest of WikiLeaks founder Julian Assange is a blow to media freedom. "Assange's critics may cheer, but this is a dark moment for press freedom", he tweets.

According to the Washington Post, in an interview with The Associated Press, Rafael Correa, Ecuador's former president, was harshly critical of his successor's decision to expel the Wikileaks founder from Ecuador's embassy in London. He said that "although Julian Assange denounced war crimes, he's only the person supplying the information." Correa said, "It's the New York Times, the Guardian and El Pais publishing it. Why aren't those journalists and media owners thrown in jail?"

Yanis Varoufakis, economics professor and former Greek finance minister, tweeted, "It was never about Sweden, Putin, Trump or Hillary. Assange was persecuted for exposing war crimes. Will those duped so far now stand with us in opposing his disappearance after a fake trial where his lawyers will not even now the charges?"

https://twitter.com/yanisvaroufakis/status/1116308671645061120

The Democracy in Europe Movement 2025 (@DiEM_25) tweeted that Assange's arrest is "a chilling demonstration of the current disregard for human rights and freedom of speech by establishment powers and the rising far-right." The movement has also put up a petition against Assange's extradition.

https://twitter.com/DiEM_25/status/1116379013461815296

Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council
A security researcher reveals his discovery on 800+ Million leaked Emails available online
Leaked memo reveals that Facebook has threatened to pull investment projects from Canada and Europe if their data demands are not met

Katie Bouman unveils the first ever black hole image with her brilliant algorithm

Amrata Joshi
11 Apr 2019
11 min read
Remember how we got to see the supermassive black hole in the movie Interstellar? Well, that wasn't for real. We know that black holes end up sucking in everything that gets too close to them, even light for that matter. A black hole's event horizon casts a shadow, and that shadow is enough to answer a lot of questions attached to black hole theory. Scientists and researchers have been working for years to get that one image that would give an angle to their research. And now comes the biggest news: a team of astronomers, engineers, researchers and scientists has managed to capture the first ever image of a black hole, located in a distant galaxy. It is three million times the size of the Earth and measures 40 billion km across. The team describes it as "a monster", and it was photographed by a network of eight telescopes across the world.

In this article, we give you a glimpse of how the image of the black hole was captured. Katie Bouman, a PhD student at MIT, appeared at TED Talks and discussed the efforts taken by the team of researchers, engineers, astronomers and scientists to capture the first ever image of the black hole. Katie is part of an international team of astronomers who worked on creating the world's largest telescope, the Event Horizon Telescope, to take the first ever picture of a black hole. She led the development of a computer programme that made this impossible, possible! She started working on the algorithm three years ago while she was a graduate student.

https://twitter.com/jenzhuscott/status/1115987618464968705

Katie wrote in the caption to one of her Facebook posts, "Watching in disbelief as the first image I ever made of a black hole was in the process of being reconstructed."

https://twitter.com/MIT_CSAIL/status/1116035007406116864

Further, she explains how the stars we see in the sky basically orbit an invisible object. According to the astronomers, the only thing that can cause this motion of the stars is a supermassive black hole.

Zooming in at radio wavelengths to see a ring of light

"Well, it turns out that if we were to zoom in at radio wavelengths, we'd expect to see a ring of light caused by the gravitational lensing of hot plasma zipping around the black hole. Is it possible to see something that, by definition, is impossible to see?" -Katie Bouman

If we look at it closely, we can see that the black hole casts a shadow on the backdrop of bright material that carves out a sphere of darkness. It is a bright ring that reveals the black hole's event horizon, where the gravitational pull becomes so powerful that even light can't escape. Einstein's equations have predicted the size and shape of this ring, and taking a picture of it would help to verify that these equations hold in the extreme conditions around the black hole.

Capturing a black hole needs a telescope the size of the Earth

"So how big of a telescope do we need in order to see an orange on the surface of the moon and, by extension, our black hole? Well, it turns out that by crunching the numbers, you can easily calculate that we would need a telescope the size of the entire Earth." -Katie Bouman

Bouman further explains that the black hole is so far away from Earth that this ring appears incredibly small, as small as an orange on the surface of the moon. And this makes it difficult to capture a photo of the black hole. There are fundamental limits to the smallest objects that we can see because of diffraction.
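To get a rough feel for the numbers, the diffraction limit ties the smallest resolvable angle θ to the observing wavelength λ and the telescope diameter D via θ ≈ λ/D, so D ≈ λ/θ. The short sketch below plugs in round figures chosen for illustration (a 1.3 mm radio wavelength, as used by the Event Horizon Telescope, and a target resolution of roughly 20 microarcseconds); they are assumptions, not values quoted in the article:

import math

ARCSEC_TO_RAD = math.pi / (180 * 3600)      # radians per arcsecond

wavelength_m = 1.3e-3                       # ~1.3 mm radio observing wavelength
resolution_rad = 20e-6 * ARCSEC_TO_RAD      # ~20 microarcseconds, the scale of the ring

# Diffraction limit: theta ~ lambda / D, so the required aperture is D ~ lambda / theta.
required_diameter_km = wavelength_m / resolution_rad / 1000
print(f"Required aperture: ~{required_diameter_km:,.0f} km")   # ~13,000 km, roughly the diameter of the Earth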
So the astronomers realized that they needed to make their telescope bigger and bigger. Even the most powerful optical telescopes couldn't get close to the resolution necessary to image something that small on the surface of the moon. She showed the audience one of the highest-resolution images of the moon ever taken from Earth, which contained around 13,000 pixels, with each pixel covering over 1.5 million oranges.

Capturing the black hole turned into reality by connecting telescopes

"And so, my role in helping to take the first image of a black hole is to design algorithms that find the most reasonable image that also fits the telescope measurements." -Katie Bouman

According to Bouman, we would require a telescope as big as the Earth to see an orange on the surface of the moon. Capturing a black hole seemed impossible back then, as no single telescope could be made that powerful. Bouman highlighted the famous words of Mick Jagger, "You can't always get what you want, but if you try sometimes, you just might find you get what you need."

Capturing the black hole turned into a reality by connecting telescopes from around the world. The Event Horizon Telescope, an international collaboration, created a computational telescope the size of the Earth that was capable of resolving structure on the scale of a black hole's event horizon. The setup was such that all the telescopes in the worldwide network worked together. The researcher teams at each of the sites collected thousands of terabytes of data, and this data was then processed in a lab in Massachusetts.

Let's understand this in depth by assuming that we can build an Earth-sized telescope. Imagine that Earth is a spinning disco ball, and each mirror of the ball can collect light that can be combined to form a picture. If most of those mirrors are removed, a few will remain. In this case, it is still possible to combine the information, but now there will be a lot of holes. The remaining mirrors represent the locations where the telescopes have been set up. Though this seems like a small number of measurements to make a picture from, it is effective. The light gets collected at only a few telescope locations, but as the Earth rotates, new measurements get added. So, as the disco ball spins, the mirrors change locations and the astronomers get to observe different parts of the image. The imaging algorithms developed by the experts, scientists and researchers fill in the missing gaps of the disco ball in order to reconstruct the underlying black hole image.

Katie Bouman said, "If we had telescopes located everywhere on the globe -- in other words, the entire disco ball -- this would be trivial. However, we only see a few samples, and for that reason, there are an infinite number of possible images that are perfectly consistent with our telescope measurements."

According to Bouman, not all images are created equal: some of those images look more like what the astronomers, scientists and researchers think of as images compared to others. Bouman's role in helping to take the first image of the black hole was to design the algorithms that find the most reasonable image that fits the telescope measurements. The imaging algorithms developed by Katie used the limited telescope data to guide the astronomers to a picture. With the help of these algorithms, it was possible to bring together the pieces of the picture from the sparse and noisy data.
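A rough way to see why such algorithms are needed: with far fewer measurements than pixels, the system relating the image to the data is underdetermined, so the reconstruction has to prefer one "reasonable" image among the infinitely many that fit. The toy sketch below is not the Event Horizon Telescope's actual pipeline (or the CHIRP algorithm); it just solves a tiny regularized least-squares problem in NumPy, where the matrix A stands in for the sparse sampling pattern and is an assumption made purely for illustration:

import numpy as np

rng = np.random.default_rng(0)

n_pixels = 64                 # a tiny 8x8 "image", flattened into a vector
n_measurements = 20           # far fewer measurements than unknown pixels

true_image = rng.random(n_pixels)
A = rng.normal(size=(n_measurements, n_pixels))              # stand-in for the sparse sampling operator
y = A @ true_image + 0.01 * rng.normal(size=n_measurements)  # noisy "telescope" data

# Infinitely many images fit y; a regularizer picks a single "reasonable" one.
lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_pixels), A.T @ y)

print("data misfit:", np.linalg.norm(A @ x_hat - y))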
How was the algorithm used in the creation of the black hole image?

"I'd like to encourage all of you to go out and help push the boundaries of science, even if it may at first seem as mysterious to you as a black hole." -Katie Bouman

There is an infinite number of possible images that perfectly explain the telescope measurements, and the astronomers and researchers have to choose between them. This is done by ranking the images based upon how likely they are to be the black hole image and then selecting the one that's most likely. Bouman explained it with the help of an example: "Let's say we were trying to make a model that told us how likely an image were to appear on Facebook. We'd probably want the model to say it's pretty unlikely that someone would post this noise image on the left, and pretty likely that someone would post a selfie like this one on the right. The image in the middle is blurry, so even though it's more likely we'd see it on Facebook compared to the noise image, it's probably less likely we'd see it compared to the selfie."

When it comes to images of the black hole, Katie explains, things get confusing for the astronomers and researchers because they have never seen a black hole before. It is difficult to rely on any previous theories about what these images should look like, and it is even difficult to rely completely on images from simulations for comparison. She said, "What is a likely black hole image, and what should we assume about the structure of black holes? We could try to use images from simulations we've done, like the image of the black hole from "Interstellar," but if we did this, it could cause some serious problems. What would happen if Einstein's theories didn't hold? We'd still want to reconstruct an accurate picture of what was going on. If we bake Einstein's equations too much into our algorithms, we'll just end up seeing what we expect to see. In other words, we want to leave the option open for there being a giant elephant at the center of our galaxy."

According to Bouman, different types of images have distinct features, so it is quite possible to tell the difference between black hole simulation images and the images captured by the team. The researchers need to let the algorithms know what images look like without imposing one type of image's features. This can be done by imposing the features of different kinds of images and then looking at how the assumed image type affects the reconstruction of the final image. The researchers and astronomers become more confident about their image assumptions if the different image types all produce a very similar-looking image. She said, "This is a little bit like giving the same description to three different sketch artists from all around the world. If they all produce a very similar-looking face, then we can start to become confident that they're not imposing their own cultural biases on the drawings."

It is possible to impose different image features by using pieces of existing images. The astronomers and researchers took a large collection of images and broke them down into little image patches. They then treated each image patch like a piece of a puzzle and used commonly seen puzzle pieces to piece together an image that also fits their telescope measurements. She said, "Let's first start with black hole image simulation puzzle pieces. OK, this looks reasonable. This looks like what we expect a black hole to look like.
But did we just get it because we just fed it little pieces of black hole simulation images?" If we take a set of puzzle pieces from everyday images, like the ones we take with our own personal camera, then we get the same image from all the different sets of puzzle pieces, and we become more confident that the image assumptions we made aren't biasing the final image. According to Bouman, another thing that can be done is to take the same set of puzzle pieces, like the ones derived from everyday images, and use them to reconstruct different kinds of source images. Bouman said, "So in our simulations, we pretend a black hole looks like astronomical non-black hole objects, as well as everyday images like the elephant in the center of our galaxy." And when the results of the algorithms look very similar to the simulated image, the researchers and astronomers become more confident about their algorithms.

She emphasized that all of these pictures were created by piecing together little pieces of everyday photographs, like the ones we take with our own personal cameras. So an image of a black hole, which we have never seen before, can be created by piecing together pictures we see regularly, like images of people, buildings, trees, cats and dogs.

She concluded by appreciating the efforts taken by her team: "But of course, getting imaging ideas like this working would never have been possible without the amazing team of researchers that I have the privilege to work with. It still amazes me that although I began this project with no background in astrophysics. But big projects like the Event Horizon Telescope are successful due to all the interdisciplinary expertise different people bring to the table."

This project will surely encourage many researchers, engineers, astronomers and students who are in the dark and not confident of themselves but have the potential to make the impossible possible.

https://twitter.com/fchollet/status/1116294486856851459

Is the YouTube algorithm's promoting of #AlternativeFacts like Flat Earth having a real-world impact?
YouTube disables all comments on videos featuring children in an attempt to curb predatory behavior and appease advertisers
Using Genetic Algorithms for optimizing your models [Tutorial]
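The "puzzle pieces" idea can be made concrete with a few lines of code that chop an ordinary photo into small patches; real reconstruction pipelines learn statistics over such patches and use them as a prior while fitting the telescope data. The snippet below only shows the patch-extraction step, and the random array standing in for an everyday photo is an assumption made for illustration:

import numpy as np

def extract_patches(image, patch_size=8, stride=8):
    # Cut a 2D image into patch_size x patch_size pieces, stepping by `stride`.
    h, w = image.shape
    patches = []
    for i in range(0, h - patch_size + 1, stride):
        for j in range(0, w - patch_size + 1, stride):
            patches.append(image[i:i + patch_size, j:j + patch_size])
    return np.stack(patches)

everyday_photo = np.random.default_rng(1).random((64, 64))   # stand-in for a real photograph
pieces = extract_patches(everyday_photo)
print(pieces.shape)   # (64, 8, 8): 64 little "puzzle pieces" of 8x8 pixels each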